SOLID – The Dependency-Inversion Principle

We have now reached the grand finale of the blog series on SOLID, the big D, The Dependency-Inversion Principle. Let’s jump right into it.

Definition

The Dependency Inversion Principle (DIP) states:

A: High-level modules should not depend on low-level modules. Both should depend on abstractions.

B: Abstractions should not depend upon details. Details should depend upon abstractions.

This needs some explaining. More traditional software development methods tend to create software structures in which high-level modules depend on low-level modules. DIP says these direct dependencies should be exchanged for abstractions. I’ll show you an example of how this can be done in the ”How to implement” section.

We must also define ”high-level” and ”low-level” in order to understand this properly. I like to define high-level modules as those close to the domain/business logic and low-level modules as those close to the input and output of the program. A typical high-level module contains the domain models while a typical low-level module contains code that deals with I/O (for example reading user input commands or persisting data to a database).

Code smells

Code that does not adhere to DIP can be hard to change and re-use. In other words, it smells of Rigidity and Immobility. The smells come from the direct dependencies, which make it hard to change lower-level modules without making large changes to the high-level modules.

Assume as an example that you have a high-level class, Product, that has a direct dependency on a low-level class, SqlDb. Now, changes to the low-level class will force the high-level class to be changed. It may also be hard to re-use the high-level class due to this direct dependency.
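
A minimal sketch of the smelly version (the method names are made up for illustration):

public class SqlDb
{
  // Low-level class dealing with I/O
  public void Insert(Product product) { ... }
}

public class Product
{
  // Direct dependency from the high-level class to the low-level class
  private readonly SqlDb _db = new SqlDb();

  public void Save()
  {
    // Any change to SqlDb may force changes here
    _db.Insert(this);
  }
}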

How to implement

Traditional layering

In order to invert the dependencies a set of service interfaces can be added, at the same level as the current module. These interfaces should then be implemented by the lower level module. See the diagrams.

In the traditional layering style there are direct dependencies from higher-level classes to lower-level classes. This means that the top-level classes will be dependent on changes at the lowest level. A change to the lowest level may propagate all the way up to the top, forcing a complete recompile, re-test, and re-deploy of all modules in the system.

Dependency Inverted Layering

If we add an abstract service interface at all layers that have dependencies to lower layers we effectively break this dependency chain. The high-level, mid-level, and low-level modules can be put into different assemblies (projects) so that a change in one layer does not affect any other layers as long as the interface isn’t changed.

Another aspect that differs from the traditional way of putting code into layers is that the abstract interfaces are grouped with their clients, not with their implementations. The interfaces should be designed from the clients’ needs and not the other way around. Changes to the interfaces should be driven by the clients, i.e. inverted compared to traditional layering.
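
Applied to the Product/SqlDb example from above, a sketch of the inverted version could look like this (IProductStore is a made-up name; note that the abstraction is owned by the high-level module):

// High-level module: the abstraction is designed from the client’s needs
public interface IProductStore
{
  void Insert(Product product);
}

public class Product
{
  // Product only knows the abstraction, not the concrete database class
  private readonly IProductStore _store;

  public Product(IProductStore store)
  {
    _store = store;
  }

  public void Save()
  {
    _store.Insert(this);
  }
}

// Low-level module: now depends upwards, on the abstraction
public class SqlDb : IProductStore
{
  public void Insert(Product product) { ... }
}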

Summary

This post ends the series on the SOLID principles. I started with an introduction to SOLID where I explained why you should care to learn what SOLID is and also introduced some code smells that are common in large software projects. I then went through the five SOLID principles, The Single-Responsibility Principle, The Open/Closed Principle, The Liskov Substitution Principle, The Interface Segregation Principle, and The Dependency-Inversion Principle, and explained what they say, which code smells may appear if they are not adhered to, and how to implement them.

I must remind you to take it easy and not add abstractions, apply patterns, and introduce these principles until you detect the smells. Doing that will add unnecessary complexity to your code, making it harder to work with and understand.

However, when you do introduce the principles, use all of them on the smelly part of the code. Now stop reading and write some code!

SOLID – The Interface Segregation Principle

We are closing in on the final principle of SOLID. In the last post I wrote about the Liskov Substitution Principle, and now it is time to take a look at The Interface Segregation Principle (ISP).

Definition

The Interface Segregation Principle was coined by Robert C. Martin. It is explained in his book ”Agile Software Development Principles, Patterns, and Practices” from 2002 but has probably been around longer than that. It states that:

Clients should not be forced to depend on methods they do not use.

Its goal is to reduce the side effects and frequency of required changes by splitting the software into multiple, independent parts.

Code Smells

As the code evolves and more features are added it is common that classes grow and more and more functionality is added to them. This has a tendency to produce ”fat” interfaces, which can be identified by the fact that their methods can be broken up into groups, each group serving a different set of clients. The code will start to smell of:

  • Needless complexity and needless redundancy – when interfaces grow you risk ”interface pollution”, i.e. interfaces that contain methods only needed by one or a few clients. Some implementations may not need all the methods of the interface but are forced to implement degenerate versions of them (methods that are empty or throw an exception when called). It may also be tempting to push empty implementations and exception throwing down to the base class, which leads to violations of LSP.
  • Rigidity and Viscosity – A change to an interface, enforced by a single client, will force all other derivatives to be updated, even though they do not require the new functionality.

Fortunately there are some straightforward ways to fix these violations.

How to implement

In order to realize how to implement ISP you need to understand that clients of an object do not need to access it directly; they can instead access it through an abstract base class or a thin interface. In other words, split your fat interfaces into several thinner ones and serve each client the specific interface(s) they need. Let’s see an example, assume you have a class that looks like this:

public class ComplexClass
{
  // Methods called by clients in group A
  public void MethodA1() { ... }
  public void MethodA2() { ... }

  // Methods called by clients in group B
  public void MethodB1() { ... }
  public void MethodB2() { ... }
  public void MethodB3() { ... }

  // Methods called by clients in group C
  public void MethodC() { ... }
}

public class ClientInGroupA
{
  // Only calls MethodA1 and MethodA2, but is dependent on the full interface
  private readonly ComplexClass _cc;
  public ClientInGroupA(ComplexClass cc)
  {
     _cc = cc;
  }
  ...
}

public class ClientInGroupB
{
  // Only calls MethodB1, B2, and B3, but is dependent on the full interface
  private readonly ComplexClass _cc;
  public ClientInGroupB(ComplexClass cc)
  {
    _cc = cc;
  }
  ...
}

public class ClientInGroupC
{
  // Only calls MethodC, but is dependent on the full interface
  private readonly ComplexClass _cc;
  public ClientInGroupC(ComplexClass cc)
  {
    _cc = cc;
  }
  ...
}

Instead of handing all the clients the same interface (all public methods of an object of type ComplexClass), you can segregate the interface into three separate new interfaces, one for each group of clients:

public interface IGroupA
{
  void MethodA1();
  void MethodA2();
}

public interface IGroupB
{
  void MethodB1();
  void MethodB2();
  void MethodB3();
}

public interface IGroupC
{
  void MethodC();
}

public class ComplexClass : IGroupA, IGroupB, IGroupC
{
  ...
}

public class ClientInGroupA
{
  // Only dependent on methods it uses
  private readonly IGroupA _oa;
  public ClientInGroupA(IGroupA oa)
  {
    _oa = oa;
  }
  ...
}

public class ClientInGroupB
{
  // Only dependent on methods it uses
  private readonly IGroupB _ob;
  public ClientInGroupB(IGroupB ob)
  {
    _ob = ob;
  }
  ...
}

public class ClientInGroupC
{
  // Only dependent on methods it uses
  private readonly IGroupC _oc;
  public ClientInGroupC(IGroupC oc)
  {
    _oc = oc;
  }
  ...
}

Summary

Applying the Interface Segregation Principle can help against code smells such as Needless Complexity, Needless Redundancy, Rigidity, and Viscosity. Even though it might not be possible to split up the implementation, it is possible to define thinner interfaces, streamlined towards the needs of the clients.

Next up is the last part of SOLID, the Dependency Inversion Principle, which will help you design your interfaces in a way that is suitable for Object Oriented Design.

SOLID – The Liskov Substitution Principle

In the previous post I wrote about the O in SOLID, The Open/Closed Principle. Now it is time for the L, The Liskov Substitution Principle (LSP).

Definition

In 1988, the American computer scientist Barbara Liskov wrote:

What is wanted here is something like the following substitution property: If for each object o1 of type S there is an object o2 of type T such that for all programs P defined in terms of T, the behavior of P is unchanged when o1 is substituted for o2 then S is a subtype of T.

In other words, this means that subtypes must be substitutable for their base types. An example of a violation of LSP would be: given a method M(B), where B is an object of type BaseClass, if M behaves badly when given an object D of type DerivedFromBaseClass, then D violates LSP.
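
As a small sketch of such a violation (the classes here are made up for illustration), consider a derived class that breaks the expectations set by its base class:

public class Bird
{
  public virtual void Fly() { ... }
}

public class Penguin : Bird
{
  // Violates LSP: callers of Bird.Fly() do not expect an exception
  public override void Fly() => throw new NotSupportedException();
}

public void MakeItFly(Bird b)
{
  // Behaves badly when handed a Penguin, even though Penguin is a Bird
  b.Fly();
}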

Code Smells

Violation of LSP leads to Fragility. Parts of your code might break in subtle ways when adding new derived classes.

Attempts to fix these issues might lead to violations of OCP, which in turn introduces Rigidity and possibly also Immobility.

How to apply

Unlike the two previous principles, SRP and OCP, there are no named patterns to apply in order to fix violations of LSP. Manual inspection and unit testing go a long way when it comes to detecting violations, and then you will need to determine case by case how to fix each violation.

It can be tempting to put a test in the code to determine the type of the object passed to the method:

public void Method(Base b)
{
  // Do NOT do this. It violates OCP
  if (b is Derived d) { ... }
  else { ... }
}

These types of tests, however, violate OCP since it is no longer possible to add new derived types without changing existing code.

What can be done instead of adding type tests is to separate the violating methods from the existing interface. This will leave you with a common interface, that all classes can implement, and a specialized interface that contains the methods that one or more classes should not implement. For example, let’s assume that we have an interface, IF, that is implemented by the classes CA, CB, and CC. Now, assume CC conforms to LSP for methods MA, MB, and MC but violates LSP for methods MD and ME. You can then break out MA, MB, and MC from IF and move them to a new interface, IFCommon. You then let IF derive from IFCommon, let CA and CB still implement IF, but let CC implement only IFCommon and its own versions of MD and ME. See the diagrams below:

CC violates LSP since its implementations of MD and ME behave badly
The code now conforms to LSP. It is no longer possible to use CC where IF is expected.
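
In code, the interface split could look like this sketch (using the names from the example above):

// Methods for which all classes conform to LSP
public interface IFCommon
{
  void MA();
  void MB();
  void MC();
}

// The full interface, for classes that behave well for all methods
public interface IF : IFCommon
{
  void MD();
  void ME();
}

public class CA : IF { ... }
public class CB : IF { ... }

// CC only implements the common part, plus its own versions of MD and ME
public class CC : IFCommon
{
  public void MA() { ... }
  public void MB() { ... }
  public void MC() { ... }

  // Not part of any interface, so CC can no longer be used where IF is expected
  public void MD() { ... }
  public void ME() { ... }
}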

Summary

The Liskov Substitution Principle puts tougher demands on derived classes than just the fact that they have a common base class. It demands that it should be possible to use the derived classes in place of their base classes, without breaking anything. Conformance to LSP is an enabler for OCP: for it to be possible to extend the current behavior by adding new derived classes, these classes must work as expected when used through their base class.

We are now more than half way through SOLID, next up is The Interface Segregation Principle.

SOLID – The Open/Closed Principle

In this third post on the SOLID principles we will look at the Open/Closed Principle (OCP).

Definition

Bertrand Meyer coined the Open/Closed Principle in 1988, and it states that ”A software entity (class, module, function) should be open for extension but closed for modification”.

A software entity complies with OCP if its behavior can be extended, and extending the behavior does not result in changes to its existing code.

Code smells

The most obvious code smell that appears for entities that do not follow OCP is Rigidity, i.e. that the software is hard to change. A change to one entity results in a cascade of changes in other entities.

This also leads to Fragility, since being forced to make changes in many different entities makes it easy to break something. Your code base may also show signs of Immobility.

How to apply

NOTE: As I wrote in the previous posts, applying patterns and abstractions should be done only when smells are appearing. I recommend that you use the ”Fool me once” approach. That is, start with writing the code as if it is never going to change, then the first time you do need to change it, you apply the appropriate patterns and abstractions.

Your goal when applying OCP should be to make it easy to introduce changes by adding new code (open for extension), but not by modifying existing code that already works (closed for modification).

But how can that be done? The two statements sound contradictory. The answer is abstraction. There are two patterns that can be used to achieve this goal, and they both take the approach of splitting up concrete classes into abstract and concrete parts. The patterns are The Template Pattern and The Strategy Pattern.

The Template Pattern

The Template Pattern uses inheritance to separate a generic algorithm from a detailed context. You apply it by splitting your current concrete class into two:

  1. An abstract base class containing the generic code/algorithm, this is your template
  2. The concrete context, that inherits from the abstract base class and implements the abstract methods

Applying this pattern opens up your code for extension by making it possible to add new behavior through new concrete classes that inherit from the abstract base class. You can do this without modifying the existing base class or any existing concrete classes.
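
A minimal sketch of the pattern (ReportGenerator and the method names are made up for illustration):

// The template: an abstract base class containing the generic algorithm
public abstract class ReportGenerator
{
  // The generic algorithm is fixed here...
  public void Generate()
  {
    var data = ReadData();
    var formatted = Format(data);
    Write(formatted);
  }

  // ...while the details are supplied by the concrete context
  protected abstract string ReadData();
  protected abstract string Format(string data);
  protected abstract void Write(string report);
}

// A concrete context: new behavior is added without touching existing code
public class CsvReportGenerator : ReportGenerator
{
  protected override string ReadData() { ... }
  protected override string Format(string data) { ... }
  protected override void Write(string report) { ... }
}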

The Strategy Pattern

The Strategy Pattern is also used to separate a generic algorithm from a detailed context. Unlike the Template Pattern it does so by delegation and not inheritance.

To apply the Strategy Pattern you split up your concrete implementation into three parts:

  1. A concrete class containing the generic algorithm
  2. An interface that abstracts the detailed context
  3. A concrete strategy class that implements the interface
The components of the Strategy Pattern

To use these classes you first instantiate the strategy class (3) and hand it to the class containing the generic algorithm (1), preferably by constructor injection.

This opens up your code for extension by allowing you to add new strategies (3) without having to modify any of the existing code. The generic algorithm can also be re-used by giving it different strategies.
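
A minimal sketch, with made-up names for illustration:

// (2) The interface that abstracts the detailed context
public interface ICompressionStrategy
{
  byte[] Compress(byte[] data);
}

// (3) A concrete strategy class implementing the interface
public class ZipCompression : ICompressionStrategy
{
  public byte[] Compress(byte[] data) { ... }
}

// (1) The concrete class containing the generic algorithm
public class FileArchiver
{
  private readonly ICompressionStrategy _compression;

  // The strategy is handed over by constructor injection
  public FileArchiver(ICompressionStrategy compression)
  {
    _compression = compression;
  }

  public void Archive(byte[] data)
  {
    var compressed = _compression.Compress(data);
    ...
  }
}

// Usage: var archiver = new FileArchiver(new ZipCompression());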

Summary

The Open/Closed Principle is the second of the five principles of SOLID. It can be a bit hard to understand what it means that something is ”open for extension but closed for modification”; hopefully this post has made it a bit clearer. I advise you to read up a bit more on both the Template and the Strategy pattern.

Next, we will dive into the third principle of SOLID, The Liskov Substitution Principle.

SOLID – The Single-Responsibility Principle

You are now reading the second part of my blog series on the SOLID principles. In the first post I introduced the concept and gave some reasons to why you should learn about SOLID. In this part I will write about the first of the five SOLID principles, the Single-Responsibility Principle (SRP).

Definition

The Single-Responsibility Principle states that ”A class should have only one responsibility” or ”A class should have only one reason to change”.

This means that, if you need to change a class for more than one reason, it has multiple responsibilities, and therefore violates SRP.

Code smells

There are a couple of different code smells that might appear due to violation of SRP. These are:

  • Fragility – If a class has more than one responsibility, classes that should not be related become coupled. Changes related to one of the responsibilities may break the application in unexpected ways.
  • Rigidity – Changes related to one responsibility may force classes depending on other responsibilities to be re-compiled / re-deployed, making the application harder to change.

How to apply

NOTE: It is not wise to apply SRP, or any other principle for that matter, if there is no symptom! Only once you have identified that changes are being made and that they are troublesome should the problem be fixed.

The most obvious way to apply SRP is to separate the implementation of the identified responsibilities into different classes. This might however not be possible due to details of the hardware or the operating system. There are however a few patterns that can help. Though not all of them may apply to your specific implementation, I will list them here and you should be able to decide which is a good fit for you.

The Facade Pattern

The Facade Pattern is used when you want to provide a simple and specific interface onto a group of objects that have a complex and general interface.

Assume that it is not possible to split up the implementation, so you are stuck with a complex, multi-responsibility object. You can then create simple interfaces, each covering a single responsibility of the implementation. Users then only have to care about the methods of that specific interface.
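
As a sketch (the class, interface, and type names such as Order and Report are made up), assuming a complex class mixing ordering and reporting:

// Simple, responsibility-specific interfaces
public interface IOrdering
{
  void PlaceOrder(Order order);
  void CancelOrder(int orderId);
}

public interface IReporting
{
  Report GenerateSalesReport();
}

// The implementation cannot be split, but each client
// only sees the facet that matches its responsibility
public class OrderSystem : IOrdering, IReporting
{
  public void PlaceOrder(Order order) { ... }
  public void CancelOrder(int orderId) { ... }
  public Report GenerateSalesReport() { ... }
}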

Data Access Object

In the case where business logic and persistence functionality have been mixed into a single class (for example if you have a Product class with its data fields that also knows how to read and write to the database) the responsibility of database access can be moved to a Data Access Object (DAO).

To implement this you create an interface for CRUD operations towards the database, for the specific business logic. Then you create a separate implementation, removing the persistence logic/responsibility from the business logic class.
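
For the Product example, a sketch of the DAO could look like this (the interface and its members are made up for illustration):

// CRUD interface for Product persistence
public interface IProductDao
{
  void Create(Product product);
  Product Read(int id);
  void Update(Product product);
  void Delete(int id);
}

// The database responsibility lives in a separate implementation...
public class SqlProductDao : IProductDao
{
  ...
}

// ...so the business class keeps only its domain data and logic
public class Product
{
  public int Id { get; set; }
  public string Name { get; set; }
  ...
}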

The Proxy Pattern

The Proxy Pattern is another way to separate, for example, persistence logic from business logic. Proxies are not trivial to implement but provide good separation. I recommend that you do read up on the proxy pattern, and the other patterns as well, from different sources and look at a few examples before implementing it in your own code.

Assume again that you have business logic and database logic mixed in the same class. In short, the proxy pattern is implemented by breaking up the current implementation into three parts:

  1. An interface
  2. Business logic
  3. Database aware logic

Both the business logic and the database-aware logic will implement the interface. The database logic will be aware of the business logic class and have the responsibility of accessing the database. The business logic, however, will have no dependency on the database.

The database-aware logic will work as a proxy for the business logic. Users can be handed and use the proxy, and since the interface is the same as for the business logic, they can treat it as a normal business logic object, happily unaware that it accesses the database under the surface.
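
A minimal sketch of the three parts (the names are made up for illustration):

// 1. The interface, shared by the business logic and the proxy
public interface IProduct
{
  decimal CalculatePrice();
}

// 2. The business logic, with no dependency on the database
public class Product : IProduct
{
  public decimal CalculatePrice() { ... }
}

// 3. The database-aware proxy: implements the same interface,
//    loads state from the database, and delegates to the real object
public class ProductProxy : IProduct
{
  public decimal CalculatePrice()
  {
    var product = LoadFromDatabase();
    return product.CalculatePrice();
  }

  private Product LoadFromDatabase() { ... }
}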

Summary

The Single-Responsibility Principle is easy to understand, but hard to get right. The reason is that we naturally group responsibilities together.

After reading this post you hopefully have some ideas what to look out for and how to tackle the issues. I am also quite certain that you will need to read a bit more, see some code examples, and work a bit with your code before you start feeling comfortable with applying SRP.

In the next post we will be looking into the next principle of SOLID, the Open/Closed Principle (OCP).

SOLID – Introduction

If you have been working with object oriented programming (OOP) for some time you have probably heard of the SOLID principles. There is a lot of information available online on SOLID, of various quality. In this blog series, where you are currently reading the first part, I will cover the principles in detail and explain them in a way that hopefully makes sense to most developers.

Benefits

The first question we should answer is the why. Why should you put time and effort into learning about SOLID?

To answer this question we must first recognize that code that is worked on without any clear structure or rules will grow into an unmaintainable nightmare over time, it will rot. Adding or changing features becomes harder and harder and requires an increasing amount of time and effort. Finally a decision is usually made to start over and re-write large portions of the application. Then the whole process starts over.

In more detail, the symptoms of poor design are:

  • Rigidity – It is difficult to make changes to the source code
  • Fragility – It is easy to break the application when making changes
  • Immobility – It is hard to re-use portions of the code
  • Viscosity – It is easier to make ”hacky” changes than following the intended design
  • Needless complexity – The code is littered with unnecessary constructs that make the code complex and difficult to understand
  • Needless repetition – The code contains a lot of code blocks that are just copy/paste, maybe with some smaller modifications
  • Opacity – The code is difficult to understand

If you can accept that, with changing requirements and the code changes made to address them, the code will over time start to show one or more of the symptoms listed above, then you can also accept that some sort of strategy or set of rules needs to be applied to address these symptoms. This is where SOLID comes in. The goal of the SOLID principles is to make your code easier to develop, understand, maintain, and expand, and to keep it that way in the long run. Sounds like a nice goal, doesn’t it?

Taking action

It is important to understand that principles are applied in order to solve symptoms of poor design. They are not applied all over the system just because someone told us it would be a good thing to do. Over-conformance to the principles leads to needless complexity. Understand why you do things a certain way and when it is a good time to do it!

When introducing new functionality or making changes to existing code a good approach is to try to introduce the simplest thing that could possibly work. Avoid trying to think ahead too much and adding complexity in an attempt to future proof the code for requirement changes that might come sometime in the future.

When you realize that you have, or are about to introduce, a code smell, it is time to take action. Do not let code smells accumulate and believe you can fix them later. When you have identified a smell, take action immediately:

  1. Identify the problem
  2. Diagnose the problem by applying design principles
  3. Solve the problem by applying an appropriate design pattern

Brief history

The theory of the SOLID principles was introduced by Robert C. Martin in his 2000 paper Design Principles and Design Patterns, but the acronym was introduced later by Michael Feathers. Martin did not, however, come up with all the principles himself, but rather collected them and introduced them as a set of principles that have great benefits when combined.

The five principles that make up SOLID are:

  • Single Responsibility Principle (SRP)
  • Open Closed Principle (OCP)
  • Liskov Substitution Principle (LSP)
  • Interface Segregation Principle (ISP)
  • Dependency Inversion Principle (DIP)

Conclusion

Code that is being developed without certain principles and strategies will start to rot, introducing code smells that grow bigger and more severe over time. The SOLID principles help in diagnosing these code smells, and adhering to them will help keep the code in a maintainable state over time.

Next post will be on the first principle of SOLID, the Single Responsibility Principle, SRP.

Visual Studio 2019, .NET Core 3.0, and C# 8.0

This week it has been really fun being a .NET developer. Previews of Visual Studio 2019 and .NET Core 3.0 have been made available to the community, making it possible to try out the new upcoming features.

I will not just repeat information that you can find on other sites, instead I will provide some good links, some tips on how to set up VS 2019 in order to test .NET Core 3 and C# 8, and some personal reflections.

Visual Studio 2019 Preview

You can find a summary on all the new features in Visual Studio 2019 in the official release notes and download it from the official site.

One feature that I find really interesting that I didn’t know about before testing the 2019 preview is Visual Studio Live Share which comes enabled by default in the preview. I had the opportunity to try it out with one of my colleagues running a test TDD session where he wrote the tests and I wrote the implementation. It was a lot of fun! Live share is also available for Visual Studio Code, so you are not limited to .NET development or even Windows.

I was also able to install extensions, such as VsVim, without any issues. So hopefully your favorite extensions from Visual Studio 2017 work in 2019 as well.

So far I haven’t had any major issues with the preview, I had one crash where it stopped responding and automatically restarted after a short while, but nothing else. I will however keep testing it and try out more complex use cases, such as profiling and visual editing of XAML files.

If you are interested in trying out new things I really recommend testing the preview. It can be installed alongside your current Visual Studio version without impacting it, so you will be able to continue to work as normal when you wish.

.NET Core 3.0 preview and C# 8.0

.NET Core 3.0 will support C# 8.0. The preview supports some of the new features such as Nullable References, Ranges and Indices, and Asynchronous Streams. You can read more about upcoming C# 8.0 features here.

Visual Studio 2019 Preview has support for .NET Core 3.0 but it requires some configuration. First you need to install Visual Studio 2019 and select the .NET Core workload (it can be added later through the Visual Studio Installer if you forgot to include it initially). Then you need to download and install the .NET Core 3.0 Preview SDK, create a new .NET Core project, and configure it for .NET Core 3.0 and C# 8.0.

Also, nullable reference types are not enabled by default, so once you have configured your project to use .NET Core 3.0 and C# 8.0 you still have to edit your project file (.csproj) to enable them. Note that this has to be done for each new project you add to the solution.

A tip is to also configure your projects to ”Treat warnings as errors”. By doing that your code will not compile if you don’t handle possible null references properly.
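
As a sketch, the relevant parts of the project file could look something like this (note that the exact property name for enabling nullable references has varied between previews; in the released SDKs it is Nullable):

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.0</TargetFramework>
    <LangVersion>8.0</LangVersion>
    <!-- Enable nullable reference types (property name may differ in earlier previews) -->
    <Nullable>enable</Nullable>
    <!-- Fail the build on possible null reference warnings -->
    <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
  </PropertyGroup>
</Project>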

New null-related errors

Conclusion

It feels like next year will be a good year to be a .NET developer. The new Visual Studio 2019 has the potential to make a great IDE even better, and .NET Core 3.0 and C# 8.0 will add new improvements that will make it easier to write cleaner and safer .NET code.



The different implementations of .NET

Even if you have been working with .NET for some time it can be hard to know the differences between the .NET Framework, .NET Core, and Mono. And what is this .NET Standard thing?

In this post I will describe the different architectural components of .NET and hopefully make all of this a bit clearer.

Implementations of .NET

Try to imagine that you are just starting out with .NET development. You open up Visual Studio for the first time ever to create a new C# project, and you are faced with this:

How on earth are you supposed to know what to choose? Even if you might figure out that it probably is easiest to start with a Console App, you still need to know whether you should choose the one ending with (.NET Core) or the one ending with (.NET Framework).

Let’s start by explaining the different implementations of .NET.

.NET Framework

This is the original .NET implementation that has existed since 2002 (announced 2000). It supports all the standard framework functionality, APIs, and a number of Windows-specific APIs. It is optimized for building Windows desktop applications, and is pretty much the only option if you are planning on building a graphical .NET application for the Windows desktop.

.NET Core

This is a newer implementation of .NET, version 1.0 was released in 2016. It is cross platform, i.e. it runs on Windows, macOS, and Linux. It supports the standard framework functionality but contains no support for building graphical user interfaces (this might change, for applications targeting Windows, in version 3.0, planned to be released sometime during 2019).

It is recommended to choose .NET Core for new desktop and server projects that do not require the Windows-specific features that are part of the original .NET Framework.

Mono

The Mono implementation of the .NET framework mainly targets systems where a small runtime is needed, such as Android and iOS. Games built with Unity, and the new web framework Blazor, also use Mono. It supports the standard framework functionality as well as everything in .NET Framework 4.7 except Windows Presentation Foundation (WPF), Windows Workflow Foundation (WWF), limited Windows Communication Foundation (WCF), and limited ASP.NET async stack.

If you are targeting mobile, Unity, or Blazor, then you should read up on Mono.

Universal Windows Platform (UWP)

This implementation of .NET is intended for targeting touch-enabled Windows devices such as tablets and phones, and also the Xbox One. It supports the standard framework functionality as well as many services such as a centralized app store, an execution environment, and alternative Windows APIs.

Common APIs

While explaining the different implementations of .NET above I wrote that they ”support the standard framework functionality”. But what is that?

.NET Standard

There is some confusion around .NET Standard and how it fits into the .NET ecosystem. Let’s clear it up. .NET Standard is a set of APIs that is common to all implementations of .NET.

In other words, .NET Framework implements .NET Standard, and so does .NET Core and Mono. This means that if you create a library that targets .NET Standard, it will be compatible with all implementations of .NET.
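
For example, a class library that should work everywhere only needs to target the standard in its project file (a minimal sketch):

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- Targeting the standard, rather than a specific implementation, makes
         the library usable from .NET Framework, .NET Core, and Mono alike -->
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>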

I will not be able to list the APIs in .NET Standard. The current version, 2.0, includes more than 32 000 APIs. You can however find the complete listing here.

Components of a .NET implementation

Now that we know how different implementations of .NET have different use cases, and that they implement a set of standard APIs, we can take a look at what makes up an implementation of .NET. As part of the implementation you can expect to find:

  • One or more runtimes, such as the Common Language Runtime (CLR) for the .NET Framework, and CoreCLR for .NET Core
  • A library that implements the .NET Standard APIs, such as the Base Class Library (BCL) for the .NET Framework and .NET Core

There are also some optional components such as:

  • WPF and WCF for the .NET Framework
  • Different compilers
  • Tools for organizing your code into projects and projects into solutions
  • Tools for handling and organizing external libraries, for example NuGet

Conclusion

As we have seen there are several different implementations of .NET, and they all have slightly different use cases, even though there is some overlap. When you have finished reading through this post you hopefully have a good enough understanding of the differences to know which implementation is suitable for your use case.

Cleaner Code with Command Query Separation

What is Command Query Separation?

The Command Query Separation (CQS) concept is a way to structure your code and methods. The idea is that a method may either be a Command or a Query but not both. It was coined by Bertrand Meyer in his book ”Object Oriented Software Construction” from 1988.

What identifies a Command method?

A Command is a method that typically alters the state of the program or system, but does not return any data. One example is adding a new item to an inventory:

public class Inventory
{
  // Command to add an Item to the inventory
  public void AddItem(Item item)
  {
     ...
  }
}

A Command method may call Query methods to request information about the current state if needed.

What identifies a Query method?

A Query is a method that returns information about the current state of (a part of) the system. It may in no way alter the state of the system. Hence, a Query method may not call any Command methods. One example of a Query method is requesting information about an item in an inventory:

public class Inventory
{
  public Item Find(Query q)
  {
    // Look in the inventory for an Item matching the query
    ...
    return item;
  }
}

What are the benefits of CQS?

The really valuable benefit is that it makes it really easy to see which methods modify the state of the system and which don’t. The methods that don’t modify the system can be re-arranged and called whenever needed, while those that do modify the state of the system must be handled more carefully. This makes it easier to work with the code and understand what happens in the system for different scenarios.

One of the worst practices I have seen is when the get section of a property modifies the state of the system:

// Never ever write code like this!
private int _x;
public int X { get => ++_x; set => _x = value; }

When would you not use CQS?

There are some scenarios when CQS doesn’t really fit. Martin Fowler mentions the Pop method on a Stack as an example. Pop usually removes the top element of the stack and returns it, hence both modifying the state and returning a value. Most of the time it is however a good idea to apply CQS.
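
For illustration, a stack could be made to conform to CQS by splitting Pop into a query and a command, at the cost of a slightly less convenient API (a sketch, not an implementation you would necessarily prefer):

public class CqsStack<T>
{
  private readonly List<T> _items = new List<T>();

  // Commands: modify state, return nothing
  public void Push(T item) => _items.Add(item);
  public void RemoveTop() => _items.RemoveAt(_items.Count - 1);

  // Query: returns state, modifies nothing
  public T Peek() => _items[_items.Count - 1];
}

// Instead of: var top = stack.Pop();
// you write:  var top = stack.Peek();
//             stack.RemoveTop();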

In the Wikipedia article on CQS it is stated that CQS can make it harder to implement multi-threaded software. As an example it shows a method that increments and returns a value:

private readonly object _lock = new object();
private int _x;
public int IncrementAndGetValue()
{
  lock (_lock)
  {
    _x++;
    return _x;
  }
}

which, when adhering to CQS, would need to be separated into two different methods, one to increment x and one to return its value:

private int _x;

public void Increment()
{
  _x++;
}

public int GetValue()
{
  return _x;
}

With this, the locking has to be done everywhere Increment and GetValue are called instead of locally in IncrementAndGetValue. However, I do not see this as a real issue since it is often better to avoid locks in low-level methods in order to avoid the overhead in all cases where locking isn’t needed.

My suggestion is that you try to use CQS as often as possible. But when a case presents itself where breaking CQS is the most intuitive choice, like Pop on a Stack, or when enforcing CQS clearly makes the code more complex, it is okay to break the rule.

C# 8 and The Future of Null

The nullable types

If you have been coding C# for a while you have probably come across Nullable types (Microsoft link). A Nullable type is an instance of System.Nullable<T> where T can be any non-nullable value type (such as int, float, bool). An instance of System.Nullable<T> can represent any value of the underlying type T and also null. When declaring Nullable types the ’?’ shorthand is usually used. For example:

int? a = null; // a is an instance of System.Nullable<int>

Naturally this does not apply to Reference Types (instances of classes) since they can always be null, without wrapping them in a System.Nullable. However, in C# 8, the upcoming major version of C#, this is about to change.

Nullable Reference Types

The header above might sound strange, aren’t all reference types nullable? The answer is, yes, they are. However, starting from C# 8, references that are allowed to be null should be clearly marked as such. The way to mark a reference as nullable will be to use the ’?’ shorthand; these new reference types are what will be called Nullable Reference Types. An example:

// NOTE: This code is only valid from C# 8
public string? FindJohnMayReturnNull()
{
  IEnumerable<string> names = GetNames();
  return names.Where(name => name.StartsWith("John")).FirstOrDefault(); // Returns null if there is no match
}

public string FindJohnShouldNotReturnNull()
{
  IEnumerable<string> names = GetNames();
  return names.Where(name => name.StartsWith("John")).FirstOrDefault() ?? string.Empty; // Returns empty string if there is no match
}

Note that I use the word should. It will be possible to compile your code even without using ’?’ to indicate nullable references, but it will give you a warning.

Conclusion

The introduction of Nullable Reference Types will most certainly make parts of the code better by making developers clearly express their intent (should this reference be nullable or not), and it will hopefully increase awareness for when you need to handle possible null values.

I would love to see code where we can avoid situations like this:

public class MyClass
{
  public MyClass(ObjectOne o1, ObjectTwo o2, ObjectThree o3)
  {
    if (o1 == null) throw new ArgumentNullException(nameof(o1));
    if (o2 == null) throw new ArgumentNullException(nameof(o2));
    if (o3 == null) throw new ArgumentNullException(nameof(o3)); 
    ...
  }

  public void DoSomething(Param1 p1, Param2 p2)
  {
    if (p1 == null) throw new ArgumentNullException(nameof(p1));
    if (p2 == null) throw new ArgumentNullException(nameof(p2));

    var values = p1.GetValues();
    if (values == null) throw new ArgumentException(...);
  }
}

However, I don’t think the introduction of Nullable Reference Types will completely remove the need for null checks for all references that aren’t declared as nullable. There will most likely be cases where this is still needed, but hopefully we can reduce it, and over time the code will hopefully be both easier to read and more robust.

For more information see this article from Microsoft on the topic.