A functional alternative to returning null

Last week I wrote about alternatives to returning null. There was however one alternative that I left out, the Maybe functor.

The reason I left it out last week was that I hadn’t had the time to read up on it properly. Now that I have read up on it, and had some time to implement it and play around with it a bit, it is time to write a bit about it.

The Maybe functor, or Maybe monad, is a concept that is used in many functional languages such as Haskell, Scala, and F# (where it is called option), and also in Rust (see docs).

In C# there is no support for the Maybe functor in the language itself; you have to implement it yourself. What you want to create is a generic class, Maybe<T>, that may, or may not, have an Item of type T associated with it. A method that maybe returns an int can look like this:

public Maybe<int> Parse(string s)
{
  if (int.TryParse(s, out var i))
    return new Maybe<int>(i);

  return new Maybe<int>();
}

As can be seen above, the method signature makes it very clear that the parsing might fail, which makes it really hard for the caller to forget to cover the error case:

var parsed = Parse("42");
if (parsed.HasItem)
  Console.WriteLine($"The value was successfully parsed as {parsed.Item}");
else
  Console.WriteLine("Parsing failed");
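For completeness, a minimal Maybe<T> that supports the usage above could be sketched like this. The member names (HasItem, Item) follow the calling code in this post; a production version would likely add mapping helpers such as Select or Match:

```csharp
using System;

// A minimal Maybe<T> sketch, assuming only the HasItem/Item usage shown above.
public class Maybe<T>
{
    private readonly T _item;

    public bool HasItem { get; }

    // An empty Maybe: no item present
    public Maybe()
    {
        HasItem = false;
        _item = default(T);
    }

    // A populated Maybe wrapping a value
    public Maybe(T item)
    {
        HasItem = true;
        _item = item;
    }

    // Reading Item from an empty Maybe is a programming error
    public T Item => HasItem
        ? _item
        : throw new InvalidOperationException("The Maybe is empty.");
}
```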

Personally I like this alternative, but I am unsure how well it will fly with other C# developers.

If you would like to read more about it, and see how it can be implemented, I strongly recommend you visit Mark Seemann’s excellent blog. He writes about the Maybe functor in http://blog.ploeh.dk/2018/03/26/the-maybe-functor/

C# alternatives to returning null

What is problematic with returning null?

A common pattern, both in the code I usually work with and in parts of the .NET Framework, is to return null from methods when, for some reason, a valid return value is not available.

One example of this is the Find method of List<T>:

var persons = new List<Person>();
var bandit = persons.Find(p => p.Name == "Billy the Kid"); // Returns default(Person), which is null (assuming Person is a reference type)
if (bandit == null)
{
  ...
}
else
{
  ...
}

So, why would you consider handling this, and similar cases, differently?

My first argument is, returning null makes code hard to use. Let me show you by example.

Assume that you write code that will call the following public method:

public Person GetPersonByName(string name)
{
  ...
}

Is there any way for the caller to tell, just by looking at the method signature, whether they need to guard against the return value being null? No, there is not. They will have to check the documentation, or the code (if available). Would it not be better if they could tell directly? You could achieve that by naming the method GetPersonByNameOrNullIfNotFound, but that is not very desirable.

My second argument is that returning null forces the caller to pollute their code with multiple checks and if/else forks:

var dude = persons.GetPersonByName("Jesse");
if (dude == null)
{
  log.Error("Could not find Jesse");
}
else
{
  var car = cars.FindByOwner(dude);
  if (car == null)
  {
    log.Error("Dude, Where's My Car?");
  }
  else
  {
    ...
  }
}

This makes the code much harder to read.

So what alternatives are there?

Alternative 1: The null object pattern

The Null Object Pattern (wikipedia link) says that instead of returning null you should return a valid object, but with empty methods and fields. That is, an instance of the class that just doesn’t do anything. For example:

public Person GetPersonByName(string name)
{
  var id = _db.Find(name);
  if (id == 0)
  {
    return Person.Nobody;
  }
  return new Person(id);
}

Here the Person class implements a static property, Nobody, that returns the null object version of Person.
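For illustration, here is a sketch of how Person could expose such a null object. The constructor and the Greet method are my assumptions, not from the original post; the point is that Nobody is a valid instance that intentionally does nothing:

```csharp
using System;

public class Person
{
    // The null object: a valid instance that intentionally does nothing
    public static readonly Person Nobody = new Person(0, string.Empty);

    public long Id { get; }
    public string Name { get; }

    public Person(long id, string name)
    {
        Id = id;
        Name = name;
    }

    // Illustrative behavior: the null object silently does nothing
    public void Greet()
    {
        if (ReferenceEquals(this, Nobody))
            return;
        Console.WriteLine($"Hello, {Name}!");
    }
}
```

Callers can invoke Greet on whatever GetPersonByName returns without any null check.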

Advantages

There are a couple of advantages of using this pattern over returning null.

  • Users do not need to add null checks in the calling code, making it simpler.
  • The risk of NullReferenceException being thrown is eliminated.

Disadvantages

All alternatives have some disadvantages; using the null object pattern may:

  • Hide errors/bugs, since the program might appear to be running as expected
  • Force the introduction of just a different type of error checking

The last point here is interesting. If, when implementing this pattern, you realize that you need to check the returned value anyway, then this pattern is not suitable for your situation and you should consider a different solution.

Alternative 2: Fail fast

If you analyze your code and come to the conclusion that the case where null is returned is an exceptional case and really indicates an error condition, you can choose to throw an exception instead of returning null. One example of this is the File class in the .NET framework. Calling File.Open with an invalid path throws an exception (different exceptions depending on the type of error, for example FileNotFoundException if the file does not exist). A system that fails directly when an error condition is detected is called a Fail-fast system (wikipedia link).
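As a sketch, the earlier person lookup could fail fast instead of returning null. The dictionary-backed “database” and the choice of exception are my assumptions for illustration:

```csharp
using System;
using System.Collections.Generic;

public static class PersonLookup
{
    // Illustrative stand-in for a real database
    private static readonly Dictionary<string, long> Db =
        new Dictionary<string, long> { ["Jesse"] = 1 };

    public static long GetPersonIdByName(string name)
    {
        if (!Db.TryGetValue(name, out var id))
        {
            // Fail fast: surface the error immediately instead of returning null/0
            throw new KeyNotFoundException($"No person named '{name}' exists.");
        }
        return id;
    }
}
```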

I have worked in a large project where this philosophy was applied. Actually we didn’t throw exceptions, we directly halted the entire system, dumped all memory, stack and logs and reported the error. The result was that once the system went live it was really robust (having multiple levels of testing, some that ran for days or weeks simulating real load, also helped a lot).

Advantages

  • Makes errors visible
  • Forces you to fix any errors early, leading to a more robust system once in production
  • Reduces cost of fixing failures and bugs since it is cheaper to fix them early in the development process

Disadvantages

Failing fast might not be suitable in all situations. Assume for example that you are dependent on data from an external system, or user. If that system provides invalid data you do not want your system to fail. However, in situations where you are in control I recommend failing fast.

Alternative 3: Tester-Doer pattern

If you are depending on external systems and need to consider cases like your system being provided with corrupt data, shaky networks, missing files, database servers being overloaded, etc, throwing exceptions and halting the system won’t work for you. You could still throw exceptions and let the caller add a try-catch clause to handle them, but if some scenarios are really error prone, throwing exceptions a lot, it might impact performance to an extent that is unacceptable (microsoft link). One way to approach this situation is to split the operation in two parts: one that checks whether the resource is available, and a second that gets the data. For example, if you want to read a file but don’t know in advance whether it is available, you can do this:

if (File.Exists(path)) // Test if file exist
{
  var content = File.ReadAllText(path); // And if it does, read it
}

This idea can be expanded to test a lot of different preconditions and if they are fulfilled, do the operations.
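For example, a sketch that combines several precondition tests before performing the operation (the specific checks and the empty-string fallback are illustrative choices, not from the original post):

```csharp
using System;
using System.IO;

public static class SafeReader
{
    // Tester-Doer expanded: verify several preconditions before acting
    public static string ReadIfPossible(string path)
    {
        if (string.IsNullOrWhiteSpace(path))   // test: a usable path was given
            return string.Empty;
        if (!File.Exists(path))                // test: the file exists
            return string.Empty;

        return File.ReadAllText(path);         // do: perform the operation
    }
}
```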

Advantages

  • Allows you to verify that the operation will probably succeed
  • Removes the overhead of exception handling (exceptions are really bad for performance)
  • The calling code can be made quite clear

Disadvantages

  • Even though the test passes, the accessing method might still fail. For example, in a multi-threaded system the resource may have been deleted by another thread between the test and the access.
  • Requires the caller to remember to do both calls and not just call the accessing method.

Alternative 4: Try-Parse pattern

A variation of the Tester-Doer pattern is the Try-Parse pattern. One example where this is used in the .NET Framework is the int.TryParse method, which tries to parse a string to an integer. It returns a boolean value that indicates whether the parsing succeeded or failed. The actual integer value is supplied via an out parameter in the method call:

if (int.TryParse(aString, out var i))
{
  Console.WriteLine($"The value is {i}");
}

Advantages

  • Same as tester-doer, with the addition that you only need one call, hence the thread safety issue is taken care of.

Disadvantages

  • Obscure method signature where the return value is not the data you requested, instead an out variable is needed.
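The same pattern can be applied to your own APIs. A hypothetical Try-variant of the earlier person lookup might look like this (the class and its dictionary backing are assumptions for illustration):

```csharp
using System.Collections.Generic;

public static class PersonRegistry
{
    // Illustrative stand-in for a real data source
    private static readonly Dictionary<string, long> Db =
        new Dictionary<string, long> { ["Jesse"] = 1 };

    // Try-Parse style: the bool reports success, the out parameter carries the result
    public static bool TryGetPersonIdByName(string name, out long id)
        => Db.TryGetValue(name, out id);
}
```

The caller can then write `if (PersonRegistry.TryGetPersonIdByName("Jesse", out var id)) { ... }` and never sees a null.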

Summary

This post has hopefully provided you with some alternatives to returning null, and some ideas on why and when they can be useful. As always, most important is that you try to make the code as clear and simple as possible. Now, code!

How to debug a Blazor project

Why is debugging Blazor applications different from Angular or React?

When you run a JavaScript framework, like Angular or React, the JavaScript code is available on the client-side. This makes it possible to use the built-in developer tools in the browser to inspect and step through the code. When running Blazor applications you execute a .NET runtime that runs your compiled C# code, a totally different story.

The first thing to realize is that you will not be able to use the regular developer tools, since they only support JavaScript at this time. A second ”aha” moment comes when you realize that you need to have your application compiled in debug mode in order to have debugging symbols available.

But if the built-in developer tools do not support Blazor, what should you use?

Current support

At this time there is only very early support for debugging client-side Blazor applications, and the only browser that supports it is Chrome.

The reason only Chrome is supported at this time is because Blazor provides a debugging proxy that implements the Chrome DevTools Protocol. This allows the DevTools in Chrome to talk to the proxy, which in turn connects to your running Blazor application. In order for the proxy to be able to connect to the running application, remote debugging must be enabled in Chrome. It’s a bit cumbersome, but I will go through the steps required in detail below.

Hopefully the Blazor team will focus on improving the debugging support since it is a very important ingredient if Blazor is to be popular among developers.

Debugging – step-by-step guide

Follow these steps and you will have a debugging session up and running in no time. I will assume you have Chrome installed and a working Blazor application that you wish to debug.

  1. Open up the application in Visual Studio (I was unable to start the debugging session when starting it from the command line)
  2. Ensure that Visual Studio is set up for building the application in Debug mode and the target browser is set to Chrome (see image below)
  3. Press F5 to start the application and it should open up Chrome and load it
  4. Press Shift+Alt+D (I use Dvorak keyboard layout where the QWERTY layout D is mapped to the letter E, so I had to press Shift+Alt+E)
  5. Chrome will open a new tab, showing an error message that it wasn’t started with remote debugging enabled
  6. Follow the instructions in the error message (close Chrome, then restart it using Win+R and a command similar to "%programfiles(x86)%\Google\Chrome\Application\chrome.exe" --remote-debugging-port=9222 http://localhost:54308/)
  7. In the new Chrome window, press Shift+Alt+D again
  8. A new tab should open in Chrome showing the remote debug utils
  9. You should be able to find a heading named ”Sources” (see image below) where you find the Blazor DLL and under that the source files
  10. Add breakpoints and switch back to the other tab to interact with the application. Once a breakpoint is hit Chrome will display a message saying that the application is paused since it is stopped at a breakpoint

Figure 1. Visual Studio set up for debugging the Blazor application

Figure 2. Debugging the Blazor application in Chrome

Limitations

Remember that I wrote that debugging support is still at a very early stage. This means that there are a lot of things the debugger does not support yet. Limitations include, but are not limited to:

  • No support for stepping into child methods
  • Values of locals of types other than int, string, and bool cannot be inspected
  • Values of class properties and fields cannot be inspected
  • It is not possible to see values of variables by hovering over them
  • Expressions cannot be evaluated in the console

Ending words

As you understand by now, there is still a lot of work to do before Blazor has full debugging support, but the fact that some debugging support is in place is promising. Starting a debug session is a bit cumbersome, but it is not hard. I have worked in really large projects with custom build tools and no working debugger, and that is not a good spot to be in. With Blazor, however, I have good hopes that the development team understands the importance of a good debugger.

How to consume a REST API using Blazor

The RESTful service

A couple of blog posts ago I wrote about how to design a REST API with ASP.NET Core, and now I intend to show how to create a front end that can interface with this service in order to create and remove products from the repository.

To recap, the RESTful service was designed to manage a repository containing Products where a Product is defined like this:

public class Product
{
  public long Id { get; set; }
  public string Name { get; set; }
  public string Description { get; set; }
}

And the API to manage the Products looks like this:

URI             Operation  Description
/products       GET        Lists all the products
/products/{id}  GET        Returns information about a specific product
/products       POST       Creates a new product
/products/{id}  PUT        Replaces an existing product
/products/{id}  DELETE     Deletes a product

I will create an application, using Blazor, that can be used to list all products, create a new product, and delete an existing product.

Creating the Blazor application

Start by creating a new Blazor application. If you’ve never done this before there are some components that you need to install, follow the instruction on https://blazor.net/docs/get-started.html for details on how to do this.

Once you have all dependencies in place run the following command:

dotnet new blazor -o ProductsManager

which will create a new Blazor project named ProductsManager.

You can test that it works by changing directory to the new ProductsManager directory and running the application:

cd ProductsManager

dotnet run

You should now be able to open your web browser and see your Blazor application in action.

 

Notice the Fetch data link. Clicking this loads the FetchData component, which we will now modify to fetch data via our REST API.

Fetching our Data

Open the ProductsManager project file in Visual Studio. A solution will be created automatically for you.

In the Pages folder you will find the FetchData component. If you open the source file (FetchData.cshtml) and scroll to the bottom you will find a call to HttpClient.GetJsonAsync that currently just reads a sample file located in wwwroot/sample-data.

The first thing we do is replace the sample weather data with product data. In the wwwroot/sample-data/ folder, rename the weather.json file to products.json and replace the content with the following:

[
  {
    "Id": 1,
    "Name": "Unsteady chair",
    "Description":  "Chess player's secret weapon"
  },
  {
    "Id": 2,
    "Name": "Weather controller",
    "Description": "Control the weather with a simple click of a button. Requires two AA batteries (not included)"
  }
]

Now you will also need to update the FetchData component. Change the WeatherForecast class to look like the Product class above and update all related code to match these changes. The updated file is listed below:

@page "/fetchdata"
@inject HttpClient Http

<h1>Products listing</h1>

@if (products == null)
{
    <p><em>Loading...</em></p>
}
else
{
    <table class="table">
        <thead>
            <tr>
                <th>Id</th>
                <th>Name</th>
                <th>Description</th>
            </tr>
        </thead>
        <tbody>
            @foreach (var product in products)
            {
                <tr>
                    <td>@product.Id</td>
                    <td>@product.Name</td>
                    <td>@product.Description</td>
                </tr>
            }
        </tbody>
    </table>
}

@functions {
    Product[] products;

    protected override async Task OnInitAsync()
    {
        products = await Http.GetJsonAsync<Product[]>("sample-data/products.json");
    }

    class Product
    {
        public long Id { get; set; }
        public string Name { get; set; }
        public string Description { get; set; }
    }
}

Now, if everything works as expected you should be able to see the product listing from the sample file instead of the Weather Forecast when you click the Fetch Data link in your browser.

To get the data from an external web service via a REST API you simply need to change the "sample-data/products.json" string to the URL of the web service. In my test I ran the server on localhost on port 44359, so when testing I added a const string:

private const string APIServer = "https://localhost:44359/api/products";

and then just changed the OnInitAsync method to look like this:

protected override async Task OnInitAsync()
{
  products = await Http.GetJsonAsync<Product[]>(APIServer);
}

Important: If you get access denied errors you need to ensure that you have set up Cross-Origin Resource Sharing (CORS) correctly in the RESTful service. You can read about how to do that here: https://docs.microsoft.com/en-us/aspnet/core/security/cors?view=aspnetcore-2.1
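For reference, enabling CORS in an ASP.NET Core 2.1 service involves two steps in Startup.cs. This is a sketch; the policy name and the client origin below are assumptions you would adapt to your own setup:

```csharp
// In Startup.ConfigureServices: register a named CORS policy
services.AddCors(options =>
{
    options.AddPolicy("AllowBlazorClient", policy =>
        policy.WithOrigins("http://localhost:5000") // hypothetical Blazor client origin
              .AllowAnyHeader()
              .AllowAnyMethod());
});

// In Startup.Configure: apply the policy (before UseMvc)
app.UseCors("AllowBlazorClient");
```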

Once you have that working, it should be quite straightforward to extend the FetchData component to add and remove products using the REST API. Below is a listing of my working component:

@page "/fetchdata"
@inject HttpClient Http

<h1>Interfacing the Products API</h1>

@if (products == null)
{
  <em>Loading...</em>
}
else
{
  <table class="table">
    <thead>
      <tr>
        <th>Id</th>
        <th>Name</th>
        <th>Description</th>
        <th></th>
      </tr>
    </thead>
    <tbody>
      @foreach (var product in products)
      {
        <tr>
          <td>@product.Id</td>
          <td>@product.Name</td>
          <td>@product.Description</td>
          <td><input type="button" value="Delete" onclick="@(async () => await Delete(product.Id))" /></td>
        </tr>
      }
    </tbody>
  </table>

  <h2>Add a new product</h2>
  <form>
    <table>
      <tr>
        <td><label>Name</label></td>
        <td><input type="text" bind="@Name" /></td>
      </tr>
      <tr>
        <td><label>Description</label></td>
        <td><input type="text" bind="@Description" /></td>
      </tr>
      <tr>
        <td></td>
        <td><input type="button" value="Add" onclick="@(async () => await Add())" /></td>
      </tr>
    </table>
  </form>
}

@functions {
    private const string APIServer = "https://localhost:44359/api/products";

    private Product[] products;

    private string Name { get; set; } = "";
    private string Description { get; set; } = "";

    protected override async Task OnInitAsync()
    {
        products = await Http.GetJsonAsync<Product[]>(APIServer);
    }

    private async Task Add()
    {
        var newProduct = new Product { Name = Name, Description = Description };
        Name = string.Empty;
        Description = string.Empty;
        await Http.SendJsonAsync(HttpMethod.Post, APIServer, newProduct);
        products = await Http.GetJsonAsync<Product[]>(APIServer);
    }

    private async Task Delete(long productId)
    {
        await Http.DeleteAsync(APIServer + "/" + productId);
        products = await Http.GetJsonAsync<Product[]>(APIServer);
    }

    private class Product
    {
        public long Id { get; set; }
        public string Name { get; set; }
        public string Description { get; set; }
    }
}

I hope you found this at least a bit helpful. If anything is unclear, just add a comment. And if you want me to publish the code on GitHub, let me know and I will do that as well.

C# In-Memory File System

Unit testing code that reads or writes to the filesystem can be difficult. One thing that can help is an in-memory file system that lets you emulate the actual file system just for testing purposes.

This morning I stumbled across a blog post by Jason Roberts, a five-time Microsoft MVP, freelance developer, writer, and Pluralsight course author, where he describes how to use System.IO.Abstractions and System.IO.Abstractions.TestingHelpers (available through NuGet) to set up an in-memory file system that you can use when you want to unit test your file-system-dependent classes.

Check out his blog at http://dontcodetired.com/blog/post/Unit-Testing-C-File-Access-Code-with-SystemIOAbstractions

Blazor – first impressions

What is Blazor?

To quote the official Blazor homepage

”Blazor is a single-page web app framework built on .NET that runs in the browser with WebAssembly.”

The natural follow-up question is then ”What is WebAssembly?”. From the official WebAssembly homepage we can read that

”WebAssembly (abbreviated Wasm) is a binary instruction format for a stack-based virtual machine. Wasm is designed as a portable target for compilation of high-level languages like C/C++/Rust, enabling deployment on the web for client and server applications.”

That is a very technical description, most likely formulated by a software engineer. I will make an attempt to describe Blazor and WebAssembly in my own words.

Blazor is a framework for building applications for the web, similar to AngularJS and React. However, Blazor makes it possible to write your applications in a .NET language, such as C#, instead of JavaScript. Since the JavaScript engines in the browsers are limited to executing JavaScript another solution is required for running compiled programs in binary format. This new format is called WebAssembly, Wasm, and is supported by Chrome, Edge, Firefox, and Safari.

Part of the Blazor project is to create a .NET Runtime in the Wasm format that runs in the browser and executes .NET bytecode.

Why Blazor over AngularJS or React?

Currently Blazor is still experimental, so you should not use it in live products yet. However, when it reaches a more stable state it should make a pretty awesome alternative to the JavaScript-based frameworks. If you already write your middleware and backend code using .NET or .NET Core, then it should be appealing to use C# for the front end as well, being able to go full stack with C#.

Another big thing is performance: compiled binary code in WebAssembly is often claimed to execute 5-10 times faster than JavaScript. In a time where JavaScript performance is becoming a bottleneck for web applications, this is a major deal.

Then there are a bunch of other things that make it appealing to run .NET in the browser, like great development tools, well-known APIs, and stable build tools.

How is it to develop with Blazor right now?

I set up Blazor and went through a couple of tutorials and I must say that it feels really stable and performant already, even though it is still at an experimental stage. Being a .NET developer spending most of my time writing code in C# it felt really nice to be able to use that instead of JavaScript. Even though JavaScript is perfectly fine and I have nothing against developing in JavaScript if the job requires it, I feel a lot more comfortable with C# so it’s nice to be able to write both the backend code and the frontend code using C#.

When you create your web application with Blazor you create components which are HTML and C# code that you can either choose to display separately or as part of other components. The concept is easy to grasp and if you are comfortable working with HTML and C# you should be able to understand what’s going on in the code right away.

If you are a C# developer interested in web development I highly recommend that you give Blazor a try. My guess is that it will become one of the major web frameworks for creating Single Page Applications (SPA).

How can I get started?

Visit the official Blazor web site at https://blazor.net where you will find instructions on how to get started as well as a tutorial that will guide you through the basic concepts.

You may also want to visit the official WebAssembly homepage at https://webassembly.org to learn more about Wasm.

How to design a REST API with C# in ASP.NET Core 2

What is REST?

REST is short for Representational State Transfer, and is a way to design web services, so-called RESTful web services.

Usually the RESTful service is accessible via a URI, and HTTP operations such as GET, POST, PUT, and DELETE are used to get or modify the data, which is often stored in a database behind the web service.

Data being retrieved or sent to the web service is often formatted as JavaScript Object Notation, JSON.

Defining the API

Let’s assume we have a database containing Products that we wish to be able to Create, Read, Update, and Delete (CRUD) using the API. The service that handles the requests will be accessed using HTTP operations as listed below:

URI             Operation  Description
/products       GET        Lists all the products
/products/{id}  GET        Returns information about a specific product
/products       POST       Creates a new product
/products/{id}  PUT        Replaces an existing product
/products/{id}  DELETE     Deletes a product

Note that it does not matter whether the client that will be using the API is a mobile app, a desktop app, a web app, or something else.

Creating the RESTful Service

I am using Visual Studio 2017, Version 15.8.5 and .NET Core 2.1. If you use different versions, things may work a little different.

Start by selecting File -> New -> Project and in the ”New Project” dialog box choose Visual C# -> Web -> ASP.NET Core Web Application. Name the project ”ProductsAPI” and click OK.

 

In the next dialog that pops up, select ‘API’ and click OK.

When you use this template, code will be generated that sets up some basic configuration and adds a dummy controller so that you can build and run the application. If you press ‘Ctrl + F5’ the application should start and open up a new browser tab displaying ”value1” and ”value2” formatted as JSON.

Implementing the API

Creating a Product model

Let’s start by adding a Product model. Looking at the API definition above we can see that we want the Product to have an Id. It should also have a Name and a Description, so let’s add that as well:

public class Product
{
  public long Id { get; set; }
  public string Name { get; set; }
  public string Description { get; set; }
}

I created a Models folder on project level and added the Product class there.

Creating a ProductContext and a DbSet

Since we are using ASP.NET I want to take advantage of Entity Framework when working with the database. Entity Framework is an Object-Relational Mapper (O/RM) which lets you work with domain-specific objects without having to worry so much about how to interface with the database. The Entity Framework types that we will be using are the DbContext and the DbSet. The DbContext represents the connection to the database and one or more tables; the tables are represented by DbSets.

Confusing? Let’s look at some code:

public class ProductContext : DbContext
{
  public DbSet<Product> Products { get; set; }

  public ProductContext(DbContextOptions<ProductContext> options) : base(options)
  {
  }
}

The code above informs us that we expect to have a database containing a table of Products. The constructor argument is of type DbContextOptions and will contain configuration options for the database connection.

Registering the ProductContext with the Dependency Injection container

In order to make it easy to query and update the database I will register the ProductContext with the Dependency Injection container, which is a built-in component of ASP.NET Core. Doing that makes it possible to automatically instantiate objects that take a ProductContext as a constructor parameter. I will use that later on when designing the ProductsController.

When the project was generated from the API template, one of the files that was automatically added and populated was Startup.cs. This class contains a couple of methods that get called by the runtime during application startup. We will register the ProductContext in the ConfigureServices method by adding this line of code:

services.AddDbContext<ProductContext>(options => options.UseInMemoryDatabase("ProductList"));

This single line registers the ProductContext as a service in the Dependency Injection service collection and specifies that an in-memory database named ProductList should be used as its backing store. Not bad for a single line of code, don’t you think?

Creating a controller to handle the HTTP requests

So far we have only created scaffolding, but no implementation to actually handle any HTTP requests. Now it’s time to change that.

Looking back at the API definition, we expect that a user should be able to list all the products in the database via a GET request to the URI /products. The most straightforward way to accomplish this is to add a ProductsController that takes a ProductContext in its constructor, and then add a Get action that returns the list of products.

There should be a folder in the project named Controllers. Add a new API Controller to it and name it ProductsController. The code should look like this:

[Route("api/[controller]")]
[ApiController]
public class ProductsController : ControllerBase
{
  private readonly ProductContext _context;
  public ProductsController(ProductContext context)
  {
    _context = context;
    if (_context.Products.Count() == 0)
    {
      _context.Add(new Product
      {
        Name = "Ford Mustang",
        Description = "Classic American car."
      });
      _context.SaveChanges();
    }
  }

  [HttpGet]
  public ActionResult<IEnumerable<Product>> GetAll()
  {
    return _context.Products.ToList();
  }
}

As you can see, I add a new Product, a Ford Mustang, in the case that the database is empty. This is just so that we can see that it works as expected.

Pressing Ctrl+F5 and browsing to http://localhost:<some_port>/api/products should now result in something similar to this:

Implementing the rest of the API is pretty similar. I will just throw the code right at you:

[Route("api/[controller]")]
[ApiController]
public class ProductsController : ControllerBase
{
  private readonly ProductContext _context;

  public ProductsController(ProductContext context)
  {
    _context = context;
    if (_context.Products.Count() == 0)
    {
      _context.Add(new Product
      {
        Name = "Ford Mustang",
        Description = "Classic American car."
      });
      _context.SaveChanges();
    }
  }

  [HttpGet]
  public ActionResult<IEnumerable<Product>> GetAll()
  {
    return _context.Products.ToList();
  }

  [HttpGet("{id}", Name = "GetProduct")]
  public ActionResult<Product> GetById(long id)
  {
    var p = _context.Products.Find(id);
    if (p == null)
    {
      return NotFound();
    }
    return p;
  }

  [HttpPost]
  public IActionResult Create(Product product)
  {
    _context.Products.Add(product);
    _context.SaveChanges();

    return CreatedAtRoute("GetProduct", new { id = product.Id }, product);
  }

  [HttpPut("{id}")]
  public IActionResult Update(long id, Product product)
  {
    var p = _context.Products.Find(id);
    if (p == null)
    {
      return NotFound();
    }
    p.Name = product.Name;
    p.Description = product.Description;

    _context.Products.Update(p);
    _context.SaveChanges();

    return NoContent();
  }

  [HttpDelete("{id}")]
  public IActionResult Delete(long id)
  {
    var p = _context.Products.Find(id);
    if (p == null)
    {
      return NotFound();
    }

    _context.Products.Remove(p);
    _context.SaveChanges();

    return NoContent();
  }
}

Take some time to look through the code; it should hopefully be quite straightforward to figure out how it works. If you want me to add a working example on GitHub, let me know in the comments.

If you want to test the API during development, I recommend a tool like Postman. It is a good tool for sending HTTP requests and inspecting the responses.

How to design a Priority Queue in C#

What is a Priority Queue?

To be a bit formal, a priority queue is an abstract data type (ADT), similar to a regular queue, but where each element has a priority associated with it. An element with high priority is served before an element with low priority. Elements that have the same priority are usually served in the order they were added, but that is not a formal requirement.

In other words, it’s like the queue to the fancy club downtown, where all of us regular people have to wait in line outside and the VIPs just walk right in.

Choosing the internal collection type

Since we are designing a queue, we want a method for enqueuing elements and a method for dequeuing elements. Somehow we need to keep track of the elements that are in the queue, and the straightforward way to do this is to use a collection type. One commonly used collection type that supports both adding and removing elements is the generic List<T> from the System.Collections.Generic namespace. Sounds like a good fit? Yes! Let’s write some code:

public class PriorityQueue<T>
{
  private readonly List<T> _pq = new List<T>();
}

Making sure enqueued elements can be prioritized

In order to know which element has the highest priority, we need to be able to compare elements against each other in some way. But how can we put such restrictions on the elements from within the PriorityQueue class? Fortunately, C# lets us put constraints on generic types (in our case, the generic type T). We can inform the compiler that objects of type T must be comparable to each other so they can be sorted; in other words, T must implement IComparable<T> (part of the .NET Framework). Let’s add that constraint to our PriorityQueue:

public class PriorityQueue<T> where T : IComparable<T>
{
  private readonly List<T> _pq = new List<T>();
}
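To make the constraint concrete, here is a sketch of a hypothetical Job type that satisfies it and could therefore be used as the T of our PriorityQueue (Job, Priority, and Name are made up for illustration):

```csharp
public class Job : IComparable<Job>
{
  public int Priority { get; set; } // Lower value = higher priority
  public string Name { get; set; }

  // Jobs are ordered by their Priority value, which is what the
  // priority queue will use when comparing elements.
  public int CompareTo(Job other) => Priority.CompareTo(other.Priority);
}
```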

Enqueuing and Dequeuing elements

A queue is useless if you can’t enqueue and dequeue items from it. It will get a bit tricky later when we need to ensure elements end up in the right position with regard to priority, so let’s start with something simple. Let’s pretend that we don’t care about priority and just want a regular queue. This can be achieved by adding elements at the end of the List and removing them from the beginning:

public class PriorityQueue<T> where T : IComparable<T>
{
  private readonly List<T> _pq = new List<T>();
  
  public void Enqueue(T item)
  {
    _pq.Add(item);
  }

  public T Dequeue()
  {
    var item = _pq[0];
    _pq.RemoveAt(0);
  
    return item;
  }
}

Cool, now we have a regular queue, but how do we ensure we always Dequeue the top prioritized item? One way we could do that is to sort the items after each time we add a new item:

public class PriorityQueue<T> where T : IComparable<T>
{
  private readonly List<T> _pq = new List<T>();
  
  public void Enqueue(T item)
  {
    _pq.Add(item);
    _pq.Sort();
  }

  public T Dequeue()
  {
    var item = _pq[0];
    _pq.RemoveAt(0);
    
    return item;
  }
}

This should work, and it’s a decent solution. However, sorting the List<T> like this every time an element is enqueued is not optimal. We can do better.

Making it scale

Now we have to dive into the realms of Computer Science. If concepts like Big O notation and binary heaps are just strange words to you, I recommend reading up on those first and then returning here. You can find an introduction to Big O notation here and a good explanation of binary min and max heaps here.

All ready to go? Great! With the solution above, each Enqueue costs O(n log n) due to the sort that occurs after every addition. However, if we order the data in the List<T> as a binary min-heap, both the Enqueue and Dequeue operations can be done in O(log n), which scales much better.
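The code that follows uses the standard flat-list representation of a binary heap, where parent/child relationships are encoded purely in the indices. A small self-contained sketch of that index math:

```csharp
// For a binary heap stored in a flat list, navigation is pure index math:
int Parent(int i) => (i - 1) / 2;
int LeftChild(int i) => 2 * i + 1;
int RightChild(int i) => 2 * i + 2;

// The list { 1, 3, 2, 5 } is a valid min-heap: index 0 is the root,
// indices 1 and 2 are its children, and index 3 is the left child of index 1.
var heap = new[] { 1, 3, 2, 5 };
Console.WriteLine(heap[Parent(3)] <= heap[3]); // True
```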

I will not explain in detail how the Insert and Delete operations of a binary min-heap work. You can find a good explanation, with fancy animations, by following the link above. Instead, let’s look at the resulting code:

public class PriorityQueue<T> where T : IComparable<T>
{
  private readonly List<T> _pq = new List<T>();

  public void Enqueue(T item)
  {
    _pq.Add(item);
    BubbleUp();
  }
  
  public T Dequeue()
  {
    var item = _pq[0];
    MoveLastItemToTheTop();
    SinkDown();
    return item;
  }

  private void BubbleUp() // Implementation of the Min Heap Bubble Up operation
  {
    var childIndex = _pq.Count - 1;
    while (childIndex > 0)
    {
      var parentIndex = (childIndex - 1) / 2;
      if (_pq[childIndex].CompareTo(_pq[parentIndex]) >= 0)
        break;
      Swap(childIndex, parentIndex);
      childIndex = parentIndex;
    }
  }

  private void MoveLastItemToTheTop()
  {
    var lastIndex = _pq.Count - 1;
    _pq[0] = _pq[lastIndex];
    _pq.RemoveAt(lastIndex);
  }

  private void SinkDown() // Implementation of the Min Heap Sink Down operation
  {
    var lastIndex = _pq.Count - 1;
    var parentIndex = 0;
    
    while (true)
    {
      var firstChildIndex = parentIndex * 2 + 1;
      if (firstChildIndex > lastIndex)
      {
        break;
      }
      var secondChildIndex = firstChildIndex + 1;
      if (secondChildIndex <= lastIndex && _pq[secondChildIndex].CompareTo(_pq[firstChildIndex]) < 0)
      {
        firstChildIndex = secondChildIndex;
      }
      if (_pq[parentIndex].CompareTo(_pq[firstChildIndex]) < 0)
      {
        break;
      }
      Swap(parentIndex, firstChildIndex);
      parentIndex = firstChildIndex;
    }
  }

  private void Swap(int index1, int index2)
  {
    var tmp = _pq[index1];
    _pq[index1] = _pq[index2];
    _pq[index2] = tmp;
  }
}

There you have it! A fully working Priority Queue implementation in C# that scales.
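A quick usage sketch of the finished queue: since the internal structure is a min-heap, Dequeue always returns the smallest element according to CompareTo:

```csharp
var queue = new PriorityQueue<int>();
queue.Enqueue(5);
queue.Enqueue(1);
queue.Enqueue(3);

// Elements come out in priority (ascending) order,
// regardless of the order they were enqueued in.
Console.WriteLine(queue.Dequeue()); // 1
Console.WriteLine(queue.Dequeue()); // 3
Console.WriteLine(queue.Dequeue()); // 5
```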

You can find a, very similar but not quite identical, implementation on my GitHub page: https://github.com/Backhage/PriorityQueue

C# Threading Gotchas

Introduction

Threading and concurrency is a big topic, and there are plenty of resources out there that cover the hows and whats of starting new threads, avoiding locking up your UI, and so on. I will not go into those details, but rather try to focus on things that are good to know but aren’t covered in the usual threading how-tos you find online.

When is it worth starting a thread?

The best thread is the one you don’t need. However, a rule of thumb is that operations that might take longer than 50 ms to complete are candidates for running on a separate thread. The reason is that there is overhead involved in creating threads and switching between them. Also, remember that for I/O-bound operations there are often asynchronous methods you can use instead of spawning a thread of your own.
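For example, reading a file can use the asynchronous API that already exists instead of a dedicated thread. A minimal sketch (CountCharsAsync is a made-up helper):

```csharp
// I/O-bound work: use the existing asynchronous API instead of
// blocking a dedicated thread while waiting for the disk.
async Task<int> CountCharsAsync(string path)
{
  var text = await File.ReadAllTextAsync(path);
  return text.Length;
}

var tmp = Path.GetTempFileName();
File.WriteAllText(tmp, "hello");
Console.WriteLine(await CountCharsAsync(tmp)); // 5
```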

What is the difference between a background and a foreground thread?

The main thread, and any thread you create using System.Threading.Thread, is by default a foreground thread. Any task you put on the System.Threading.ThreadPool runs on a background thread.

var tf = new Thread(MyMethod);
tf.Start(); // Starts a new thread that runs in the foreground
...
ThreadPool.QueueUserWorkItem(MyMethod); // Runs on a background thread

There is only one difference between a background and a foreground thread: foreground threads block the application from exiting until they have completed, while background threads are abruptly aborted when the application exits. Note: this means that any clean-up actions you have defined, such as removing temporary files, will not run if they are supposed to happen on a background thread that gets interrupted by the application shutting down. You can however use the Thread.Join method to wait for the thread and avoid this problem.

It is also possible to set a thread to run as a background thread if you want to ensure it cannot block application exit. This is a good idea for long-running threads that could otherwise lock up the application. Most of us have probably experienced applications becoming unresponsive, where the only way to shut them down is via the task manager. This is often caused by hung foreground threads.

var t = new Thread(MyMethod) { IsBackground = true };
t.Start(); // Runs as a background thread

Also note that Task-based operations, such as Task.Run, as well as the continuations of awaited methods (unless a synchronization context dictates otherwise), run on thread-pool threads, and hence on background threads.
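You can verify the Task.Run part by inspecting Thread.CurrentThread from inside the task. A quick sketch:

```csharp
// Thread-pool threads are background threads, so this prints True.
var isBackground = Task.Run(() => Thread.CurrentThread.IsBackground).Result;
Console.WriteLine(isBackground); // True
```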

How to catch exceptions on threads?

Take a look at this code sample:

public static void Main()
{
  try
  {
    var t = new Thread(MyMethod);
    t.Start();
  }
  catch (Exception ex)
  {
    ...
  }
}

private static void MyMethod() { throw null; } // Throws a NullReferenceException

Will the NullReferenceException thrown from MyMethod be caught? Unfortunately, no. Instead, the program will terminate due to an unhandled exception.

The reason the exception cannot be caught this way is simply that each thread has its own independent execution path; threads progress independently of each other (until they hit a lock or some signaling construct, like a ManualResetEvent). To be able to handle the exception you have to move the try-catch block into MyMethod:

public static void Main()
{
  new Thread(MyMethod).Start();
}

private static void MyMethod()
{
  try
  {
    throw null;
  }
  catch (Exception ex) // Here the exception will be caught
  {
    // Exception handling code. Typically including error logging.
    ... 
  }
}

Note that Tasks, unlike Threads, propagate exceptions. So in the case of using Task.Run you can do this:

public static void Main()
{
  var t = Task.Run(() => { throw null; });
  try
  {
    t.Wait();
  }
  catch (AggregateException ex)
  {
    // Exception handling code.
    // The NullReferenceException is found at ex.InnerException
    ...
  }
}
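If you await the task instead of calling Wait, the original exception is rethrown directly, so there is no AggregateException to unwrap. A sketch using top-level statements:

```csharp
var t = Task.Run(() => { throw null; });
try
{
  await t; // await rethrows the original exception, not an AggregateException
}
catch (NullReferenceException)
{
  Console.WriteLine("Caught the NullReferenceException directly");
}
```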

Tricky captured variables

Consider the following code:

for (var i = 0; i < 10; i++)
{
  var t = new Thread(() => Console.Write(i));
  t.Start();
}

When I ran this I got the following output:

2
3
5
4
5
1
6
9
7
10

Notice how the value 5 is written twice, 0 and 8 are missing, and we actually got the number 10 written. Why does this happen? The answer is that the variable i refers to the same memory location during the entire lifetime of the loop. Inside the loop we start 10 different threads that all read that same memory location when they are about to write the value of i. Meanwhile, i is updated on the main thread, which runs independently of the other 10.

How do you think the code will behave if we make this small change:

for (var i = 0; i < 10; i++)
{
  var temp = i;
  var t = new Thread(() => Console.Write(temp));
  t.Start();
}

This time the numbers 0 to 9 will be written without duplicates or missing numbers, although the order is still not deterministic. This is because the line var temp = i creates a new variable for each iteration and copies the current value of i into it. Each thread therefore refers to a separate memory location. The threads are, however, not guaranteed to run in the order they are started.
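A related gotcha worth knowing: since C# 5, the loop variable of a foreach statement is scoped per iteration, so the temp-variable trick is only needed for for loops. A small deterministic sketch, using delegates instead of threads to keep the output ordered:

```csharp
// Since C# 5, foreach scopes its loop variable per iteration, so each
// closure captures its own copy and no temp variable is needed.
var captured = new List<Func<int>>();
foreach (var i in Enumerable.Range(0, 3))
{
  captured.Add(() => i);
}
Console.WriteLine(string.Join(",", captured.Select(f => f()))); // 0,1,2
```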

Ending words

There are lots of things to keep in mind when working with threads. I have touched on some things in this post that I think can be tricky. As usual I recommend having a good book near by that you can use to look up things when they don’t work as you expect.

Book Review – The imposter’s handbook

Introduction

The heading of this blog post alone won’t tell you much about what kind of book I am reviewing. The full title is actually The Imposter’s Handbook – A CS Primer For Self-Taught Programmers, where CS is short for Computer Science.

The book is written by Rob Conery, a self-taught programmer without a CS degree. In 2014 Rob spent a year studying Computer Science fundamentals and wrote about the things he learned, which resulted in this book. When I heard the book being discussed on the Coding Blocks podcast I got interested and decided to order a copy for myself. Just like Rob, I am (mostly) self-taught when it comes to Computer Science subjects. I do have a master’s degree, but in electrical engineering, so none of the courses I took at university covered the subjects that Rob writes about.

CS subjects covered in the book

The book touches on many areas and does not dive deep into any of them, so it is probably wrong to say that any of the subjects are ”covered”. However, the author introduces each subject and gives you enough understanding to cover the basics. And if you want to dive deeper into any of them, there are plenty of books out there that do cover the details.

Subjects discussed are:

  • Computation
  • Complexity
  • Lambda Calculus
  • Machinery
  • Big O
  • Data Structures
  • Algorithms
  • Compilation
  • Software Design Patterns
  • Software Design Principles
  • Functional Programming
  • Databases
  • Testing
  • Essential Unix

As you can understand, with this many subjects, you cannot dive into details and still fit everything in a single book.

Is this book for me?

I would say that it depends. Personally, I enjoyed reading the first chapters, but from Big O and onward I pretty much already knew the things the book brings up. However, I recognize that I am not the typical self-taught programmer. I read a lot of books on programming, I have taken Coursera courses on algorithms, I do programming challenges on Codewars, Hackerrank, and Codility just for the fun of it, I listen to several programming podcasts, and I subscribe to several programming newsletters. But if you look at the subject listing above and feel that you don’t have a basic understanding of these subjects, this book is most certainly quite useful.

Rating

I would really like to give this book a high rating, since I know that the author has put a lot of effort into learning the things himself, as well as writing about them in a way that is useful for others. There are, however, some things that lower the score. These are:

  • Not correctly formatted for print
    I have the printed version of the book, and it is obvious that the source needs to be looked over to avoid pages where the last line is a heading, and similar formatting errors.
  • Questionable code quality
    I found many of the code samples questionable in regard to code structure, naming of variables, etc. I expected the book to contain code samples that clearly show the intended functionality.
  • Questionable text quality
    When I read a technical book I expect it to have been proofread and reworked a couple of times. This book often feels more like an early draft than a finished book.

Taking the above into account, I give this book a score of 3/5. It is definitely good for getting a brief understanding of some important CS subjects, but if you want to learn any of the subjects really well, I recommend complementing it with books from a well-known publisher.

Links

The homepage for the Imposter’s handbook