SOLID – The Open/Closed Principle

In this third post on the SOLID principles we will look at the Open/Closed Principle (OCP).


Bertrand Meyer coined the Open/Closed Principle in 1988, and it states that ”A software entity (class, module, function) should be open for extension but closed for modification”.

A software entity complies with OCP if its behavior can be extended, and extending the behavior does not require changes to its existing code.

Code smells

The most obvious code smell that appears in entities that do not follow OCP is Rigidity, i.e. the software is hard to change. A change to one entity results in a cascade of changes in other entities.

This also leads to Fragility, since being forced to make changes in many different entities makes it easy to break something. Your code base may also show signs of Immobility.

How to apply

NOTE: As I wrote in the previous posts, patterns and abstractions should be applied only when smells appear. I recommend that you use the ”Fool me once” approach. That is, start by writing the code as if it is never going to change; then, the first time you do need to change it, apply the appropriate patterns and abstractions.

Your goal when applying OCP should be to make it easy to introduce changes by adding new code (open for extension), but not by modifying existing code that already works (closed for modification).

But how can that be done? The two requirements sound like a contradiction. The answer is abstraction. There are two patterns that can be used to achieve this goal, and both take the approach of splitting a concrete class into abstract and concrete parts. The patterns are the Template Pattern and the Strategy Pattern.

The Template Pattern

The Template Pattern uses inheritance to separate a generic algorithm from a detailed context. You apply it by splitting your current concrete class into two:

  1. An abstract base class containing the generic code/algorithm; this is your template
  2. The concrete context, which inherits from the abstract base class and implements the abstract methods

Applying this pattern opens up your code for extension by making it possible to add new behavior through new concrete classes that inherit from the abstract base class. You can do this without modifying the existing base class or any existing concrete classes.
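As a sketch of how this can look in C# (the class and member names here are my own and purely illustrative), the split into template and context could be:

```csharp
// Abstract base class: the template, holding the generic algorithm
public abstract class ReportGenerator
{
    // The template method defines the invariant steps of the algorithm
    public string Generate()
    {
        var header = "REPORT\n";
        var body = BuildBody(); // step supplied by the concrete context
        return header + body;
    }

    // The detail that concrete subclasses must provide
    protected abstract string BuildBody();
}

// A concrete context: extends behavior without touching the base class
public class SalesReportGenerator : ReportGenerator
{
    protected override string BuildBody() => "Sales are up.";
}
```

Adding, say, an inventory report is now a matter of adding one new subclass; ReportGenerator itself is never modified.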

The Strategy Pattern

The Strategy Pattern is also used to separate a generic algorithm from a detailed context. Unlike the Template Pattern, it does so by delegation rather than inheritance.

To apply the Strategy Pattern you split up your concrete implementation into three parts:

  1. A concrete class containing the generic algorithm
  2. An interface that abstracts the detailed context
  3. A concrete strategy class that implements the interface
The components of the Strategy Pattern

To use these classes you first instantiate the strategy class (3) and hand it to the class containing the generic algorithm (1), preferably by constructor injection.

This opens up your code for extension by allowing you to add new strategies (3) without modifying any of the existing code. The generic algorithm can also be re-used by giving it different strategies.
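The three parts can be sketched like this in C# (the names are invented for illustration), with the strategy handed over through constructor injection:

```csharp
// 2. The interface that abstracts the detailed context
public interface IFormatStrategy
{
    string Format(string text);
}

// 3. A concrete strategy implementing the interface
public class UpperCaseStrategy : IFormatStrategy
{
    public string Format(string text) => text.ToUpperInvariant();
}

// 1. The class containing the generic algorithm, receiving its
//    strategy via constructor injection
public class Printer
{
    private readonly IFormatStrategy _strategy;

    public Printer(IFormatStrategy strategy) => _strategy = strategy;

    public string Print(string text) => _strategy.Format(text);
}
```

A new formatting behavior only requires a new IFormatStrategy implementation; Printer stays untouched.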


The Open/Closed Principle is the second of the five SOLID principles. It can be a bit hard to understand what it means for something to be ”open for extension but closed for modification”; hopefully this post has made it a bit clearer. I advise you to read up a bit more on both the Template and the Strategy patterns.

Next, we will dive into the third principle of SOLID, The Liskov Substitution Principle.

SOLID – The Single-Responsibility Principle

You are now reading the second part of my blog series on the SOLID principles. In the first post I introduced the concept and gave some reasons why you should learn about SOLID. In this part I will write about the first of the five SOLID principles, the Single-Responsibility Principle (SRP).


The Single-Responsibility Principle states that ”A class should have only one responsibility” or ”A class should have only one reason to change”.

This means that, if you need to change a class for more than one reason, it has multiple responsibilities, and therefore violates SRP.

Code smells

There are a couple of different code smells that might appear due to violation of SRP. These are:

  • Fragility – If a class has more than one responsibility, classes that should not be related become coupled. Changes related to one of the responsibilities may break the application in unexpected ways.
  • Rigidity – Changes related to one responsibility may force classes depending on other responsibilities to be re-compiled / re-deployed, making the application harder to change.

How to apply

NOTE: It is not wise to apply SRP, or any other principle for that matter, if there are no symptoms! Only once you have identified that changes are being made and that they are troublesome should the design be fixed.

The most obvious way to apply SRP is to separate the implementations of the identified responsibilities into different classes. This might, however, not be possible, for example due to details of the hardware or the operating system. There are a few patterns that can help. Not all of them may apply to your specific implementation, but I will list them here so that you can decide whether one is a good fit for you.

The Facade Pattern

The Facade Pattern is used when you want to provide a simple and specific interface onto a group of objects that have a complex and general interface.

Assume that it is not possible to split up the implementation, so you are stuck with a complex, multi-responsibility object. You can then create simple interfaces that each cover a single responsibility of the implementation. Users then only have to care about the methods of that specific interface.
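A small sketch of the idea (the classes and members here are invented for illustration): the unsplittable object keeps its general interface, while a facade exposes only one responsibility of it.

```csharp
// A simple, single-responsibility interface for callers to depend on
public interface IPriceReader
{
    decimal GetPrice(int productId);
}

// Imagined complex, multi-responsibility class that we cannot split
public class ProductSystem
{
    public decimal LookUpPrice(int productId) => 9.95m;
    public void UpdateStock(int productId, int count) { /* ... */ }
    // ...many more unrelated members...
}

// The facade exposes only the pricing responsibility
public class PriceFacade : IPriceReader
{
    private readonly ProductSystem _system;

    public PriceFacade(ProductSystem system) => _system = system;

    public decimal GetPrice(int productId) => _system.LookUpPrice(productId);
}
```

Callers that only read prices now depend on IPriceReader and never see the rest of ProductSystem.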

Data Access Object

In the case where business logic and persistence functionality have been mixed into a single class (for example if you have a Product class with its data fields that also knows how to read and write to the database), the responsibility of database access can be moved to a Data Access Object (DAO).

To implement this you create an interface with the CRUD operations that the business logic needs. Then you create a separate implementation of that interface, removing the persistence logic/responsibility from the business logic class.
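As a sketch, using the Product example from above (the interface and the in-memory stand-in for real database code are my own illustration):

```csharp
using System.Collections.Generic;

// Business object: data and business rules only, no persistence
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// The DAO interface: CRUD operations for Product
public interface IProductDao
{
    void Create(Product product);
    Product Read(int id);
    void Update(Product product);
    void Delete(int id);
}

// An in-memory implementation standing in for real database code
public class InMemoryProductDao : IProductDao
{
    private readonly Dictionary<int, Product> _store = new Dictionary<int, Product>();

    public void Create(Product product) => _store[product.Id] = product;
    public Product Read(int id) => _store.TryGetValue(id, out var p) ? p : null;
    public void Update(Product product) => _store[product.Id] = product;
    public void Delete(int id) => _store.Remove(id);
}
```

Product now has no idea how, or even whether, it is persisted; that responsibility lives entirely in the DAO.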

The Proxy Pattern

The Proxy Pattern is another way to separate, for example, persistence logic from business logic. Proxies are not trivial to implement but provide good separation. I recommend that you do read up on the proxy pattern, and the other patterns as well, from different sources and look at a few examples before implementing it in your own code.

Assume again that you have business logic and database logic mixed in the same class. In short, the proxy pattern is implemented by breaking up the current implementation into three parts:

  1. An interface
  2. Business logic
  3. Database aware logic

Both the business logic and the database-aware logic will implement the interface. The database-aware logic will know about the business logic class and have the responsibility of accessing the database. The business logic, however, will have no dependency on the database.

The database-aware logic will work as a proxy for the business logic. Users can be handed the proxy, and since its interface is the same as that of the business logic, they can treat it as a normal business logic object, happily unaware that it accesses the database under the surface.
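A minimal sketch of the three parts (the names and the lazy-loading detail are my own illustration, not a full proxy implementation):

```csharp
// 1. The interface
public interface IProduct
{
    string GetName();
}

// 2. Business logic, with no database dependency
public class Product : IProduct
{
    private readonly string _name;

    public Product(string name) => _name = name;

    public string GetName() => _name;
}

// 3. Database-aware proxy, implementing the same interface
public class ProductProxy : IProduct
{
    private readonly int _id;
    private Product _product; // loaded lazily on first use

    public ProductProxy(int id) => _id = id;

    public string GetName()
    {
        if (_product == null)
        {
            // Stand-in for database access; a real proxy would query here
            _product = new Product("Name for id " + _id);
        }
        return _product.GetName();
    }
}
```

Code that receives an IProduct cannot tell whether it holds a plain Product or a proxy that hits the database.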


The Single-Responsibility Principle is easy to understand, but hard to get right. The reason is that we naturally group responsibilities together.

After reading this post you hopefully have some ideas what to look out for and how to tackle the issues. I am also quite certain that you will need to read a bit more, see some code examples, and work a bit with your code before you start feeling comfortable with applying SRP.

In the next post we will be looking into the next principle of SOLID, the Open/Closed Principle (OCP).

SOLID – Introduction

If you have been working with object oriented programming (OOP) for some time you have probably heard of the SOLID principles. There is a lot of information available online on SOLID, of various quality. In this blog series, where you are currently reading the first part, I will cover the principles in detail and explain them in a way that hopefully makes sense to most developers.


The first question we should answer is the why. Why should you put time and effort into learning about SOLID?

To answer this question we must first recognize that code that is worked on without any clear structure or rules will grow into an unmaintainable nightmare over time; it will rot. Adding or changing features becomes harder and harder and requires an increasing amount of time and effort. Finally, a decision is usually made to start over and re-write large portions of the application. Then the whole process starts over.

In more detail, the symptoms of poor design are:

  • Rigidity – It is difficult to make changes to the source code
  • Fragility – It is easy to break the application when making changes
  • Immobility – It is hard to re-use portions of the code
  • Viscosity – It is easier to make ”hacky” changes than following the intended design
  • Needless complexity – The code is littered with unnecessary constructs that make it complex and difficult to understand
  • Needless repetition – The code contains many blocks that are just copy/paste, maybe with some smaller modifications
  • Opacity – The code is difficult to understand

If you can accept that, with changing requirements and the code changes that address them, the code will over time start to show one or more of the symptoms listed above, then you can also accept that some sort of strategy or set of rules needs to be applied to address these symptoms. This is where SOLID comes in. The goal of the SOLID principles is to make your code easier to develop, understand, maintain, and expand, and to keep it that way in the long run. Sounds like a nice goal, doesn’t it?

Taking action

It is important to understand that principles are applied in order to solve symptoms of poor design. They are not applied all over the system just because someone told us it would be a good thing to do. Over-conformance to the principles leads to needless complexity. Understand why you do things a certain way and when it is a good time to do it!

When introducing new functionality or making changes to existing code a good approach is to try to introduce the simplest thing that could possibly work. Avoid trying to think ahead too much and adding complexity in an attempt to future proof the code for requirement changes that might come sometime in the future.

When you realize that you have, or are about to introduce, a code smell, that is the time to take action. Do not let code smells accumulate, believing you can fix them later. When you have identified a smell, take action immediately:

  1. Identify the problem
  2. Diagnose the problem by applying design principles
  3. Solve the problem by applying an appropriate design pattern

Brief history

The theory behind the SOLID principles was introduced by Robert C. Martin in his 2000 paper Design Principles and Design Patterns, but the acronym was introduced later by Michael Feathers. Martin did not come up with all the principles himself; rather, he collected them and introduced them as a set of principles that have great benefits when combined.

The five principles that make up SOLID are:

  • Single Responsibility Principle (SRP)
  • Open Closed Principle (OCP)
  • Liskov Substitution Principle (LSP)
  • Interface Segregation Principle (ISP)
  • Dependency Inversion Principle (DIP)


Code that is developed without guiding principles and strategies will start to rot, introducing code smells that grow bigger and more severe over time. The SOLID principles help in diagnosing these code smells, and adhering to them will help keep the code in a maintainable state over time.

Next post will be on the first principle of SOLID, the Single Responsibility Principle, SRP.

Visual Studio 2019, .NET Core 3.0, and C# 8.0

This week it has been really fun being a .NET developer. Previews of Visual Studio 2019 and .NET Core 3.0 have been made available to the community, making it possible to try out the new upcoming features.

I will not just repeat information that you can find on other sites, instead I will provide some good links, some tips on how to set up VS 2019 in order to test .NET Core 3 and C# 8, and some personal reflections.

Visual Studio 2019 Preview

You can find a summary on all the new features in Visual Studio 2019 in the official release notes and download it from the official site.

One feature that I find really interesting, and that I didn’t know about before testing the 2019 preview, is Visual Studio Live Share, which comes enabled by default in the preview. I had the opportunity to try it out with one of my colleagues, running a TDD session where he wrote the tests and I wrote the implementation. It was a lot of fun! Live Share is also available for Visual Studio Code, so you are not limited to .NET development or even Windows.

I was also able to install extensions, such as VsVim, without any issues. So hopefully your favorite extensions from Visual Studio 2017 work in 2019 as well.

So far I haven’t had any major issues with the preview. I had one crash where it stopped responding and automatically restarted after a short while, but nothing else. I will however keep testing it and try out more complex use cases, such as profiling and visual editing of XAML files.

If you are interested in trying out new things I really recommend testing the preview. It can be installed alongside your current Visual Studio version without impacting it, so you will be able to continue to work as normal when you wish.

.NET Core 3.0 preview and C# 8.0

.NET Core 3.0 will support C# 8.0. The preview supports some of the new features, such as Nullable Reference Types, Ranges and Indices, and Asynchronous Streams. You can read more about upcoming C# 8.0 features here.

Visual Studio 2019 Preview has support for .NET Core 3.0, but it requires some configuration. First you need to install Visual Studio 2019 and select the .NET Core workload (this can be done later through the Visual Studio Installer if you forgot to add it initially). Then you need to download and install the .NET Core 3.0 Preview SDK, create a new .NET Core project, and configure it for .NET Core 3.0 and C# 8.0.

Also, Nullable Reference Types are not enabled by default, so once you have configured your project to use .NET Core 3.0 and C# 8.0 you still have to edit your project file (.csproj) to enable the feature. Note that this has to be done for each new project you add to the solution.
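As a sketch, the project file ends up with properties along these lines. Note that the exact name of the nullable setting has varied between previews; in released .NET Core 3.0 SDKs it is the Nullable property shown here:

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.0</TargetFramework>
    <LangVersion>8.0</LangVersion>
    <!-- Turn on Nullable Reference Types for this project -->
    <Nullable>enable</Nullable>
  </PropertyGroup>
</Project>
```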

A tip is to also configure your projects to ”Treat warnings as errors”. By doing that, your code will not compile if you don’t handle possible null references properly.

New null-related errors


It feels like next year will be a good one for .NET developers. Visual Studio 2019 has the potential to make a great IDE even better, and .NET Core 3.0 and C# 8.0 will add improvements that make it easier to write cleaner and safer .NET code.

The different implementations of .NET

Even if you have been working with .NET for some time it can be hard to know the differences between the .NET Framework, .NET Core, and Mono. And what is this .NET Standard thing?

In this post I will describe the different architectural components of .NET and hopefully make all of this a bit clearer.

Implementations of .NET

Try to imagine that you are just starting out with .NET development. You open up Visual Studio for the first time ever to create a new C# project, and you are faced with this:

How on earth are you supposed to know what to choose? Even if you might figure out that it probably is easiest to start with a Console App, you still need to know whether you should choose the one ending with (.NET Core) or the one ending with (.NET Framework).

Let’s start by explaining the different implementations of .NET.

.NET Framework

This is the original .NET implementation that has existed since 2002 (announced 2000). It supports all the standard framework functionality, APIs, and a number of Windows-specific APIs. It is optimized for building Windows desktop applications, and is pretty much the only option if you are planning on building a graphical .NET application for the Windows desktop.

.NET Core

This is a newer implementation of .NET, version 1.0 was released in 2016. It is cross platform, i.e. it runs on Windows, macOS, and Linux. It supports the standard framework functionality but contains no support for building graphical user interfaces (this might change, for applications targeting Windows, in version 3.0, planned to be released sometime during 2019).

It is recommended to choose .NET Core for new desktop and server projects that do not require the Windows-specific features that are part of the original .NET Framework.


Mono

The Mono implementation of .NET mainly targets systems where a small runtime is needed, such as Android and iOS. Games built with Unity and the new web framework Blazor also use Mono. It supports the standard framework functionality as well as everything in .NET Framework 4.7 except Windows Presentation Foundation (WPF) and Windows Workflow Foundation (WWF), with limited support for Windows Communication Foundation (WCF) and the ASP.NET async stack.

If you are targeting mobile, Unity, or Blazor, then you should read up on Mono.

Universal Windows Platform (UWP)

This implementation of .NET is intended for touch-enabled Windows devices, such as tablets and phones, and also the Xbox One. It supports the standard framework functionality as well as many services, such as a centralized app store, an execution environment, and alternative Windows APIs.

Common APIs

While explaining the different implementations of .NET above I wrote that they ”support the standard framework functionality”. But what is that?

.NET Standard

There is some confusion around .NET Standard and how it fits into the .NET ecosystem. Let’s clear it up. .NET Standard is a set of APIs that is common to all implementations of .NET.

In other words, .NET Framework implements .NET Standard, and so do .NET Core and Mono. This means that if you create a library that targets .NET Standard, it will be compatible with all implementations of .NET.
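To illustrate, a class library that should run on all implementations declares the standard, rather than a specific implementation, as its target in the project file (a minimal sketch):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- Target the standard, not a specific .NET implementation -->
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>
```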

I cannot list all the APIs in .NET Standard here; the current version, 2.0, includes more than 32 000 APIs. You can however find the complete listing here.

Components of a .NET implementation

Now that we know how different implementations of .NET have different use cases, and that they implement a set of standard APIs, we can take a look at what makes up an implementation of .NET. As part of the implementation you can expect to find:

  • One or more runtimes, such as the Common Language Runtime (CLR) for the .NET Framework, and CoreCLR for .NET Core
  • A library that implements the .NET Standard APIs, such as the Base Class Library (BCL) for the .NET Framework and .NET Core

There are also some optional components such as:

  • WPF and WCF for the .NET Framework
  • Different compilers
  • Tools for organizing your code into projects and projects into solutions
  • Tools for handling and organizing external libraries, for example NuGet


As we have seen there are several different implementations of .NET, and they all have slightly different use cases, even though there is some overlap. Having finished reading this post, you hopefully have a good enough understanding of the differences to know which implementation is suitable for your use case.

Cleaner Code with Command Query Separation

What is Command Query Separation?

The Command Query Separation (CQS) concept is a way to structure your code and methods. The idea is that a method may either be a Command or a Query, but not both. It was coined by Bertrand Meyer in his book ”Object-Oriented Software Construction” from 1988.

What identifies a Command method?

A Command is a method that typically alters the state of the program or system, but does not return any data. One example is adding a new item to an inventory:

public class Inventory
{
    private readonly List<Item> _items = new List<Item>();

    // Command: adds an Item to the inventory, returns nothing
    public void AddItem(Item item) => _items.Add(item);
}

A Command method may call Query methods to request information about the current state if needed.

What identifies a Query method?

A Query is a method that returns information about the current state of (a part of) the system. It may in no way alter that state. Hence, a Query method may not call any Command methods. One example of a Query method is requesting information about an item in an inventory:

public class Inventory
{
    public Item Find(Query q)
    {
        // Look in the inventory for an Item matching the query,
        // without altering any state
        var item = _items.Find(i => q.Matches(i));
        return item;
    }
}

What are the benefits of CQS?

The really valuable benefit is that it makes it very easy to see which methods modify the state of the system and which don’t. The methods that don’t modify the system can be re-arranged and called whenever needed, while those that do modify the state must be handled more carefully. This makes it easier to work with the code and to understand what happens in the system in different scenarios.

One of the worst practices I have seen is when the get section of a property modifies the state of the system:

// Never ever write code like this!
private int _x;
public int X { get => ++_x; set => _x = value; }

When would you not use CQS?

There are some scenarios when CQS doesn’t really fit. Martin Fowler mentions the Pop method on a Stack as an example. Pop usually removes the top element of the stack and returns it, hence both modifying the state and returning a value. Most of the time it is however a good idea to apply CQS.

The Wikipedia article on CQS states that CQS can make it harder to implement multi-threaded software. As an example it shows a method that increments a value and returns it:

private readonly object _lock = new object();
private int _x;

public int IncrementAndGetValue()
{
    lock (_lock)
    {
        return ++_x; // increments and returns atomically
    }
}

which, when adhering to CQS, would need to be separated into two different methods, one to increment _x and one to return its value:

private int _x;

public void Increment()
{
    _x++;
}

public int GetValue()
{
    return _x;
}

With this, the locking has to be done everywhere Increment and GetValue are called, instead of locally in IncrementAndGetValue. However, I do not see this as a real issue, since it is often better to avoid locks in low-level methods in order to avoid the overhead in all the cases where locking isn’t needed.
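To make the caller-side locking concrete, here is a sketch with the CQS-compliant split, where the caller takes the lock around both calls (Counter and Worker are illustrative names of my own):

```csharp
// CQS-compliant counter: one Command, one Query, no locking inside
public class Counter
{
    private int _x;

    public void Increment() => _x++; // Command
    public int GetValue() => _x;     // Query
}

// The caller decides where atomicity is needed and locks there
public class Worker
{
    private readonly object _lock = new object();
    private readonly Counter _counter = new Counter();

    public int IncrementAndRead()
    {
        lock (_lock)
        {
            _counter.Increment();
            return _counter.GetValue();
        }
    }
}
```

Callers that don’t need thread safety can use Counter directly, without paying for a lock.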

My suggestion is that you use CQS as often as possible. But when a case presents itself where breaking CQS is the most intuitive option, like Pop on a Stack, or when enforcing CQS clearly makes the code more complex, it is okay to break the rule.

C# 8 and The Future of Null

The nullable types

If you have been coding C# for a while you have probably come across Nullable types (Microsoft link). A Nullable type is an instance of System.Nullable<T>, where T can be any non-nullable value type (such as int, float, or bool). An instance of System.Nullable<T> can represent any value of the underlying type T, and also null. When declaring Nullable types the ’?’ shorthand is usually used. For example:

int? a = null; // a is an instance of System.Nullable<int>

Naturally this does not apply to Reference Types (instances of classes) since they can always be null, without wrapping them in a System.Nullable. However, in C# 8, the upcoming major version of C#, this is about to change.

Nullable Reference Types

The heading above might sound strange: aren’t all reference types nullable? The answer is yes, they are. However, starting from C# 8, references that are welcome to be null should be clearly marked as such. The way to mark a reference as nullable will be to use the ’?’ shorthand; these new reference types are what will be called Nullable Reference Types. An example:

// NOTE: This code is only valid from C# 8
public string? FindJohnMayReturnNull()
{
    IEnumerable<string> names = GetNames();
    // Returns null if there is no match
    return names.Where(name => name.StartsWith("John")).FirstOrDefault();
}

public string FindJohnShouldNotReturnNull()
{
    IEnumerable<string> names = GetNames();
    // Returns an empty string if there is no match
    return names.Where(name => name.StartsWith("John")).FirstOrDefault() ?? string.Empty;
}

Note that I use the word should. It will be possible to compile your code even without using ’?’ to indicate nullable references, but it will give you a warning.


The introduction of Nullable Reference Types will most certainly make parts of our code better by making developers clearly express their intent (should this reference be nullable or not?), and it will hopefully increase awareness of when you need to handle possible null values.

I would love to see code where we can avoid situations like this:

public class MyClass
{
    public MyClass(ObjectOne o1, ObjectTwo o2, ObjectThree o3)
    {
        if (o1 == null) throw new ArgumentNullException(nameof(o1));
        if (o2 == null) throw new ArgumentNullException(nameof(o2));
        if (o3 == null) throw new ArgumentNullException(nameof(o3));
    }

    public void DoSomething(Param1 p1, Param2 p2)
    {
        if (p1 == null) throw new ArgumentNullException(nameof(p1));
        if (p2 == null) throw new ArgumentNullException(nameof(p2));

        var values = p1.GetValues();
        if (values == null) throw new ArgumentException(...);
    }
}

However, I don’t think the introduction of Nullable Reference Types will completely remove the need for null checks for all references that aren’t declared as nullable. There will most likely be cases where this is still needed, but hopefully we can reduce it, and over time the code will hopefully be both easier to read and more robust.

For more information see this article from Microsoft on the topic.

A functional alternative to returning null

Last week I wrote about alternatives to returning null. There was however one alternative that I left out, the Maybe functor.

The reason I left it out last week was that I hadn’t had the time to read up on it properly. Now that I have read up on it, and have had some time to implement it and play around with it a bit, it is time to write about it.

The Maybe functor, or Maybe monad, is a concept that is used in many functional languages such as Haskell, Scala, and F# (although it is called option in F#), and also in Rust (see docs).

In C# there is no support for the Maybe functor in the language itself; you have to implement it yourself. What you want to create is a generic class, Maybe<T>, that may, or may not, have an Item of type T associated with it. A method that maybe returns an int can look like this:

public Maybe<int> Parse(string s)
{
    if (int.TryParse(s, out var i))
        return new Maybe<int>(i);

    return new Maybe<int>();
}
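A minimal Maybe<T> to go with the method above could look like the following sketch. This is my own bare-bones illustration; Seemann’s implementation is considerably richer:

```csharp
using System;

// A minimal Maybe<T>: either holds exactly one item, or nothing
public sealed class Maybe<T>
{
    private readonly T _item;

    public bool HasItem { get; }

    // The "nothing" case
    public Maybe()
    {
        HasItem = false;
    }

    // The "something" case
    public Maybe(T item)
    {
        HasItem = true;
        _item = item;
    }

    // Accessing Item when there is none is a programming error
    public T Item => HasItem
        ? _item
        : throw new InvalidOperationException("No item present");
}
```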

As can be seen above, the method signature makes it very clear that the parsing might fail, which makes it really hard for the caller to forget to cover the error case:

var parsed = Parse("42");
if (parsed.HasItem)
    Console.WriteLine($"The value was successfully parsed as {parsed.Item}");
else
    Console.WriteLine("Parsing failed");

Personally I like this alternative, but I am unsure how well it will fly with other C# developers.

If you would like to read more about it, and see how it can be implemented, I strongly recommend visiting Mark Seemann’s excellent blog, where he writes about the Maybe functor.

C# alternatives to returning null

What is problematic with returning null?

A common pattern, both in the code I am used to working with and in parts of the .NET Framework, is to return null from methods when, for some reason, a valid return value is not available.

One example of this is the Find method of List<T>:

var persons = new List<Person>();
// Find returns default(Person), which is null since Person is a reference type
var bandit = persons.Find(p => p.Name == "Billy the Kid");
if (bandit == null)
{
    // Handle the not-found case
}

So, why would you consider handling this, and similar cases, differently?

My first argument is, returning null makes code hard to use. Let me show you by example.

Assume that you write code that will call the following public method:

public Person GetPersonByName(string name)

Is there any way for the user to tell, by looking at the method signature, whether they need to guard against the return value being null? No, there is not. They will have to check the documentation, or the code (if available). Would it not be better if they could tell directly? You could achieve that by naming the method GetPersonByNameOrNullIfNotFound, but that is not very desirable.

My second argument is that returning null forces the caller to pollute their code with multiple checks and if/else forks:

var dude = persons.GetPersonByName("Jesse");
if (dude == null)
{
    log.Error("Could not find Jesse");
}
else
{
    var car = cars.FindByOwner(dude);
    if (car == null)
    {
        log.Error("Dude, Where's My Car?");
    }
}

This makes the code much harder to read.

So what alternatives are there?

Alternative 1: The null object pattern

The Null Object Pattern (wikipedia link) says that instead of returning null you should return a valid object, but with empty methods and fields. That is, an instance of the class that just doesn’t do anything. For example:

public Person GetPersonByName(string name)
{
    var id = _db.Find(name);
    if (id == 0)
        return Person.Nobody;

    return new Person(id);
}

Here the Person class implements a static property, Nobody, that returns the null object version of Person.
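A sketch of how such a Nobody property could be implemented (the details are my own invention; the key point is that Nobody is a valid Person whose members do nothing useful):

```csharp
public class Person
{
    public int Id { get; }

    public virtual string Name => "Person " + Id;

    public Person(int id) => Id = id;

    // The null object: a valid, shareable Person that does nothing
    public static Person Nobody { get; } = new NobodyPerson();

    private class NobodyPerson : Person
    {
        public NobodyPerson() : base(0) { }

        // Empty behavior instead of null
        public override string Name => string.Empty;
    }
}
```

Callers can print or pass Person.Nobody around without ever risking a NullReferenceException.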


There are a couple of advantages of using this pattern over returning null.

  • Users do not need to add null checks in the calling code, making it simpler.
  • The risk of NullReferenceException being thrown is eliminated.


All alternative ways have some disadvantages, using the null object pattern may:

  • Hide errors/bugs, since the program might appear to be running as expected
  • Force the introduction of just a different type of error checking

The last point here is interesting. If you, when implementing this pattern, realize that you need to check the returned value anyway, then the pattern is not suitable for your situation and you should consider a different solution.

Alternative 2: Fail fast

If you analyze your code and come to the conclusion that the case where null is returned is an exceptional case and really indicates an error condition, you can choose to throw an exception instead of returning null. One example of this is the File class in the .NET framework. Calling File.Open with an invalid path throws an exception (different exceptions depending on the type of error, for example FileNotFoundException if the file does not exist). A system that fails directly when an error condition is detected is called a Fail-fast system (wikipedia link).

I have worked on a large project where this philosophy was applied. Actually, we didn’t throw exceptions; we halted the entire system directly, dumped all memory, stacks, and logs, and reported the error. The result was that once the system went live it was really robust (having multiple levels of testing, some of which ran for days or weeks simulating real load, also helped a lot).


The advantages of failing fast are that it:

  • Makes errors visible
  • Forces you to fix any errors early, leading to a more robust system once in production
  • Reduces the cost of fixing failures and bugs, since it is cheaper to fix them early in the development process


Failing fast might not be suitable in all situations. Assume for example that you are dependent on data from an external system, or user. If that system provides invalid data you do not want your system to fail. However, in situations where you are in control I recommend failing fast.

Alternative 3: Tester-Doer pattern

If you depend on external systems and need to handle cases like corrupt data, shaky networks, missing files, or overloaded database servers, throwing exceptions and halting the system won’t work for you. You could still throw exceptions and let the caller handle them with a try-catch clause, but if some scenarios fail often, the cost of frequently thrown exceptions may impact performance to an extent that is unacceptable (microsoft link). One way to approach this situation is to split the operation in two parts: one that checks whether the resource is available, and a second that gets the data. For example, if you want to read a file but don’t know in advance whether it exists, you can do this:

if (File.Exists(path)) // Test if the file exists
{
  var content = File.ReadAllText(path); // And if it does, read it
}

This idea can be expanded to test any number of preconditions and, if they are fulfilled, perform the operations.


Advantages of the Tester-Doer pattern:

  • Allows you to verify that the operation will probably succeed
  • Removes the overhead of exception handling (exceptions are expensive when thrown frequently)
  • The calling code can be made quite clear


Disadvantages:

  • Even though the test passes, the accessing method might still fail. For example, in a multi-threaded system the resource may have been deleted by another thread between the test and the access.
  • Requires the caller to remember to make both calls, and not just call the accessing method.

Alternative 4: Try-Parse pattern

A different version of the Tester-Doer pattern is the Try-Parse pattern. One example of this in the .NET framework is the int.TryParse method, which tries to parse a string to an integer. It returns a boolean value that indicates whether the parsing succeeded or failed. The actual integer value is supplied through an out parameter in the method call:

if (int.TryParse(aString, out var i))
  Console.WriteLine($"The value is {i}");


Advantages:

  • Same as for Tester-Doer, with the addition that only one call is needed, so the thread-safety issue is taken care of.


Disadvantages:

  • A less obvious method signature, where the return value is not the data you requested; the data is delivered through an out parameter instead.
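The same pattern can be applied to your own code. A minimal sketch, using a hypothetical lookup class (the names here are illustrative, not from the post):

```csharp
using System.Collections.Generic;

public class CustomerLookup
{
    private readonly Dictionary<int, string> _customers =
        new Dictionary<int, string> { { 1, "Ada" } };

    // Try-Parse style: the boolean return value tells whether the
    // lookup succeeded; the result comes back through an out parameter.
    public bool TryGetName(int id, out string name) =>
        _customers.TryGetValue(id, out name);
}
```

A caller then writes `if (lookup.TryGetName(1, out var name)) ...`, mirroring the int.TryParse call above.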


This post has hopefully provided you with some alternatives to returning null, and some ideas on why and when they can be useful. As always, the most important thing is to make the code as clear and simple as possible. Now, code!

How to debug a Blazor project

Why is debugging Blazor applications different from Angular or React?

When you run a JavaScript framework like Angular or React, the JavaScript code is available on the client side. This makes it possible to use the built-in developer tools in the browser to inspect and step through the code. When running Blazor applications you execute a .NET runtime that runs your compiled C# code, which is a totally different story.

The first thing to realize is that you will not be able to use the regular developer tools, since they only support JavaScript at this time. A second ”aha” moment comes when you realize that you need to have your application compiled in debug mode in order to have debugging symbols available.

But if the built-in developer tools do not support Blazor, what should we use?

Current support

At this time there is only very early support for debugging client-side Blazor applications, and the only browser where debugging works is Chrome.

The reason only Chrome is supported at this time is because Blazor provides a debugging proxy that implements the Chrome DevTools Protocol. This allows the DevTools in Chrome to talk to the proxy, which in turn connects to your running Blazor application. In order for the proxy to be able to connect to the running application, remote debugging must be enabled in Chrome. It’s a bit cumbersome, but I will go through the steps required in detail below.

Hopefully the Blazor team will focus on improving the debugging support since it is a very important ingredient if Blazor is to be popular among developers.

Debugging – step-by-step guide

Follow these steps and you will have a debugging session up and running in no time. I will assume you have Chrome installed and a working Blazor application that you wish to debug.

  1. Open up the application in Visual Studio (I was unable to start the debugging session when the application was started from the command line)
  2. Ensure that Visual Studio is set up for building the application in Debug mode and the target browser is set to Chrome (see image below)
  3. Press F5 to start the application and it should open up Chrome and load it
  4. Press Shift+Alt+D (I use Dvorak keyboard layout where the QWERTY layout D is mapped to the letter E, so I had to press Shift+Alt+E)
  5. Chrome will open a new tab, showing an error message that it wasn’t started with remote debugging enabled
  6. Follow the instructions in the error message (close Chrome, then restart it using Win+R and paste a command similar to "%programfiles(x86)%\Google\Chrome\Application\chrome.exe" --remote-debugging-port=9222 http://localhost:54308/)
  7. In the new Chrome window, press Shift+Alt+D again
  8. A new tab should open in Chrome showing the remote debug utils
  9. You should be able to find a heading named ”Sources” (see image below) where you find the Blazor DLL and under that the source files
  10. Add breakpoints and switch back to the other tab to interact with the application. Once a breakpoint is hit Chrome will display a message saying that the application is paused since it is stopped at a breakpoint

Figure 1. Visual Studio set up for debugging the Blazor application

Figure 2. Debugging the Blazor application in Chrome


Remember that I wrote that debugging support is still at a very early stage. This means that there are a lot of things the debugger does not support yet. The limitations include, but are not limited to:

  • No support for stepping into child methods
  • Values of locals of other types than int, string, and bool cannot be inspected
  • Values of class properties and fields cannot be inspected
  • It is not possible to see values of variables by hovering over them
  • Expressions cannot be evaluated in the console

Ending words

As you understand by now, there is still a lot of work to do before Blazor has full debugging support, but the fact that some debugging support is already in place is promising. Starting a debug session is a bit cumbersome, but it is not hard. I have worked in really large projects with custom build tools and no working debugger, and that is not a good spot to be in. With Blazor, however, I have good hopes that the development team understands the importance of a good debugger.