A functional approach to error handling in C#

Imagine that you want to write a simple console application that queries the user for two integers, divides the first integer by the second, and writes the result to the console window. What can go wrong in a simple program like this? The first thing that comes to mind is probably that the user might enter something other than integers, like ”rainbow unicorns” or ”l33t h4x0r”. Second, they might enter a zero as the second integer, trying to crash the program by having it perform a division by zero (which throws a DivideByZeroException).

In order to handle these cases you can use the Int32.TryParse method for parsing the input and a try-catch statement to cover a possible division by zero. Your code might look something like the following:

static void Main()
{
    var numerator = QueryUser("Numerator: ");
    var denominator = QueryUser("Denominator: ");

    if (int.TryParse(numerator, out var n) &&
        int.TryParse(denominator, out var d))
    {
        try { WriteLine($"{numerator} / {denominator} = {n / d}"); }
        catch (Exception e) { WriteLine($"An error occurred: {e.Message}"); }
    }
    else
    {
        WriteLine("Only integers are allowed as input.");
    }
}

In the code above the QueryUser method is a simple helper method that writes a message to the console and reads the user’s response. The implementation follows the standard imperative pattern using if-statements for controlling the program flow and the standard OOP error handling using try-catch statements.

Now, let me present to you a different approach for handling lack of values and possible errors. It involves using the C# Functional Programming Language Extensions, language-ext, by Paul Louth.

static void Main()
{
    var numerator = QueryUser("Numerator: ");
    var denominator = QueryUser("Denominator: ");

    var result =
        from n in Some(numerator).Bind(parseInt)
        from d in Some(denominator).Bind(parseInt)
        select Divide(n, d);

    result.Match(
        Some: s => s.Match(
            Succ: res => WriteLine($"{numerator} / {denominator} = {res}"),
            Fail: err => WriteLine($"An error occurred: {err.Message}")),
        None: () => WriteLine("Only integers are allowed as input."));
}

If you haven’t seen this approach to error handling before it might be difficult to understand what’s going on here. Where is the if-statement and the try-catch block? And what is Some, Bind, and Match?

Explaining everything in detail would require more than what is possible in a single blog post. If you are interested in the details I recommend reading Paul Louth’s presentation of language-ext and also the great book Functional Programming in C#, which explains all the details of the Option and Either monads and how monadic Bind is used to chain monad-returning functions together.

However, to understand the code above you need to know that QueryUser has not been changed; it will still return whatever the user enters, which might not be possible to parse as an integer. Some is a function that is defined in language-ext. It takes a value, in this case of type string, and lifts it into an Option type. An Option is a container that can either contain a value, Some, or be empty, None. In this case the Option container will have a value: the string that the user entered.

Next comes Bind. What Bind does is take the Option and apply its value to the function given as an argument to Bind. In this case Bind takes the string that the user entered and calls the parseInt function with the string as a parameter. Now, if the user has supplied a value that can be parsed as an int, then Bind will return an Option containing the resulting value. However, if the value could not be parsed, then Bind will return None (an empty Option). This replaces the TryParse calls in the imperative code.
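The parseInt function is not shown in the listing. language-ext’s Prelude provides a parser of this shape, but as a minimal sketch of what such a helper looks like, assuming the Option type from language-ext:

```csharp
using System;
using LanguageExt;
using static LanguageExt.Prelude;

static class Parsing
{
    // Sketch of a parseInt helper: Some(value) when parsing succeeds,
    // None when it fails. No exceptions are thrown either way.
    public static Option<int> parseInt(string s) =>
        int.TryParse(s, out var value) ? Some(value) : Option<int>.None;
}
```

Because the failure case is encoded in the return type, the caller is forced to deal with it, instead of being able to forget a TryParse check.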

Now that we have two Option variables, n and d, we use select with the Divide function to calculate the result. It is not visible in the code above, but this use of select depends on special implementations of Select and SelectMany that accept Option types as input parameters (you might be familiar with the Select and SelectMany implementations for IEnumerable that are part of LINQ; these are similar, but for Option).
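As an illustration only (language-ext ships its own implementations), the SelectMany overload that the compiler needs in order to translate a query expression with two from clauses over Option has roughly this shape:

```csharp
using System;
using LanguageExt;

public static class OptionLinqSketch
{
    // Illustrative sketch, not language-ext's actual source.
    // The compiler rewrites "from n in a from d in b select f(n, d)"
    // into a call to SelectMany with this signature.
    public static Option<TResult> SelectMany<TSource, TCollection, TResult>(
        this Option<TSource> source,
        Func<TSource, Option<TCollection>> bind,
        Func<TSource, TCollection, TResult> project) =>
        source.Bind(s => bind(s).Map(c => project(s, c)));
}
```

The point is that any type exposing a Select/SelectMany pair of this shape gets LINQ query syntax for free; it is not limited to IEnumerable.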

The Divide function is an implementation of the Try delegate of language-ext, which handles any exceptions that might occur and wraps them in a Result type. This might be a lot to wrap your head around, but if you’ve gotten this far you might be wondering about the Match functions at the end.
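The Divide function itself is not shown in the post; one way to write it, assuming language-ext’s Try delegate and its Try helper in Prelude, might be:

```csharp
using LanguageExt;
using static LanguageExt.Prelude;

static class Division
{
    // Sketch of Divide as a language-ext Try: the lambda does not run
    // until the Try is evaluated (e.g. by Match), and any exception
    // thrown inside it, such as DivideByZeroException, is captured as
    // a Fail result instead of propagating up the call stack.
    public static Try<int> Divide(int numerator, int denominator) =>
        Try(() => numerator / denominator);
}
```

This is what makes the try-catch block disappear from Main: the exception handling lives inside the Try abstraction and surfaces only as the Succ/Fail cases of Match.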

So, the type of the result variable is Option<Result<int>>, where the outer type, Option, is Some iff both the input strings were successfully parsed as int values (otherwise None), and the Result is Succ (success) iff no exception was thrown when performing the division (otherwise Fail). Match is used to check which of these four possible cases the result ended up as, and outputs different texts depending on the outcome.

There are quite a lot of new concepts to understand, and a very different way of thinking about errors, if you want to use this approach to error handling. The small toy examples that you have to use in blog posts like this do not show any real benefits either, so you might be quite skeptical about all this right now. The value of the functional approach is that, once you understand the concepts, it makes your code much easier to understand and reason about. Traditional OOP and imperative code, sprinkled with if-statements and exceptions being thrown and caught at different levels of the code, quickly becomes extremely hard to follow. The functional approach pushes the error handling code to the end (the Match in the code above).

Finally I would like to recommend Scott Wlaschin’s introduction to Railway Oriented Programming which also includes a link to his presentation at NDC on the topic.

Is it possible to automate too much?

One goal that seems to be common among most organizations is to automate as many repetitive manual tasks as possible. Automation has many advantages: it removes the risk of human mistakes, it reduces the time it takes to get things done, it frees up human resources, and it hopefully saves the business quite a lot of money.

As developers we have the ability to automate a lot of repetitive tasks, and we do, but are we taking automation too far sometimes? Is there a need to also look at the negative aspects of automation?

I believe that, when it comes to automating manual work, we tend to focus solely on the benefits and forget to think about the drawbacks and risks involved. Below I will list some of the risks I have seen that come with automation.

The process becomes hard to understand

Automating a process often makes it almost completely opaque to the majority of the people in the organization. It goes from being something that you can easily track and talk to people about to being something that only a few developers really understand (if you are lucky and the developers who automated it are still part of the organization).

This leads to the organization becoming dependent on the development team for support. Questions on why the outcome of the process looks a certain way, how to use and integrate with any tools, and so on, now need to be answered by the development team, who often have a lot of other tasks and may not prioritize support very highly.

Minor deviations become a nightmare to handle

It is often easy to make small adjustments to a manual process for those cases that don’t fit 100% into the ordinary workflow. You may attach a note with some extra information, write something in the margin, and then just continue with the next case.

In an automated process, however, it is often required that every case is handled in exactly the same way. There is no room for adjustments. So when these cases happen, and they will, the process needs to be updated by the development team, or the development team might need to carefully make manual inserts and adjustments to databases to handle that single case. This makes any small deviations very expensive, and risky, to handle.

When things go wrong, they go very wrong

In a manual process, mistakes are often minor and can be corrected when discovered. When an automated process goes wrong, the impact may be huge.

There are many examples of automated systems causing havoc because of bugs or configuration mistakes. Computers work extremely fast and will continue to produce incorrect output until the bug or configuration mistake is discovered and fixed. The manual work needed to fix all the incorrect output might be very costly. There is even an example of a company losing $1 billion in 45 minutes due to a configuration error in an automated process.

Here is a list of the 7 worst automation failures: https://www.csoonline.com/article/3188426/the-7-worst-automation-failures.html

A single equipment error causes your entire organization to halt

When a process is automated, the manual way of doing the work is often removed. This can cause large parts of your organization to become dependent on a single piece of computer equipment. I have seen this happen many times during my years as a developer, with sometimes hundreds of employees being unable to do their work.


I am all in favor of automating repetitive tasks, and I strongly believe that it is a good thing to free up human resources for more meaningful work. But I also believe that we need to be aware that automation comes with risks and drawbacks, and we should think carefully before carelessly automating everything we think can be done by a computer.

My thoughts on ”Getting Things Done”

I just finished re-reading David Allen’s book ”Getting Things Done” (GTD). I thought this would be a good time to share my thoughts on the methods described in the book.

The reason I think this time is more suitable than the last time I read the book is that I have now been using most of the methods described for several months. I first read the book about five months ago and decided to give the system a go.

Book cover

Who is this system for

If you recognize any of the following, and wish to change it, you might benefit from learning and applying GTD:

  • You keep everything you need to remember to do in your head, with the result that you often forget to do things.
  • You often remember things you need to do at times and places where you can’t do them.
  • You often come up with good ideas, but you don’t take any notes, so you often forget about them.
  • You have hundreds of e-mails in your inbox. Many are still unread, some are kept as reference for things you might want to look up again in the future, some serve as reminders of things you might need to do at some point, and others you should probably take action on as soon as possible.
  • At work and at home you have piles of stuff lying around. They might contain notes from meetings, bills that you need to pay, magazines you want to read, forms you need to fill in and hand in, and so on.
  • Your drawers are filled with random stuff and it’s hard to find the things you need when you need them.
  • You often don’t have any pre-defined plan for what to do, but rather work on the things that feel most urgent because someone throws them in your lap.
  • You have a hard time prioritizing your work because you don’t have a good overview of all the things you have to work on.

Personally, I had been trying to come up with a good system for handling all the things I needed to do and all the e-mails I received for a long time before I started reading about GTD, so I thought I had most of my stuff in pretty good order. What I have realized now is that it could be a lot better!

Changes I made after the first read-through

After I finished reading GTD the first time around I made quite a lot of changes. Here are the most important:

  • I bought a desk and office supplies and set up a workplace at home
  • I bought a physical inbox that I can put bills, forms, etc in that I need to handle somehow
  • I created lists where I started writing down all my projects
  • I started thinking about next actions needed to drive my projects forward
  • I went through all my e-mails and totally emptied my inboxes
  • I started following up all my projects and lists on a weekly basis
  • I set up a calendar where I can put reminders on things I need to follow up on in the future
  • I created an archive where I can store stuff I want to keep as reference

For the last months I have been using this system as much as possible, trying to make it a routine and something that I use without having to think about it.

Has it made any difference?

Yes! Looking back and reflecting on how things are now, compared to before I started applying GTD, I can honestly say that I will keep using the system. The most positive effect so far is that I no longer feel stressed that there might be something I have forgotten to do. Now I feel comfortable that I have a system where I can put things so that I do not need to keep everything in my head.

Having easily accessible, location-based lists has also helped me a lot when deciding what to do when I am in different places. And of course, having an empty e-mail inbox is a nice feeling. The lesson I learned here is that you should not use your inbox for anything other than a temporary holding place for incoming stuff that needs a decision on how to handle it. As soon as you handle it, it is removed from the inbox, and never put back.

So, is GTD for you? There is no way for me to know, but I do recommend giving it a try. The worst that can happen is that you don’t like it and return to doing things as before.

Book review: Designing Data-Intensive Applications

I just finished reading Martin Kleppmann’s book ”Designing Data-Intensive Applications”. Reading it cover to cover takes time, a lot of time, but it is definitely worth it.

It is not a book for everybody; it is a book for software developers and designers who work with systems that handle a lot of data and have high demands on availability and data integrity. The book is quite technical, so it is preferable if you have a good understanding of the infrastructure of your system and the different needs it is supposed to meet.

Book cover

The book is divided into three parts, each made up of several chapters. The first part is called Foundations of Data Systems and consists of four chapters. Examples of topics covered are Reliability, Scalability, and Maintainability, SQL and NoSQL, data structures used in databases, different ways of encoding data, and modes of dataflow. The details are quite complex, and you will most probably not be able to just skim over the pages and expect to be able to follow along.

The second part is called Distributed Data and has five chapters. It discusses Replication, Partitioning, Transactions, the Trouble with Distributed Systems, and Consistency and Consensus. After reading this part of the book I remember thinking that it is amazing that we have these big complex systems that actually work. Kleppmann describes so many things that can go wrong in a system that you start to wonder how in the world it is possible that things actually do work…most of the time.

The third, and last, part of the book is called Derived Data and consists of three chapters. Here Kleppmann describes different types of data processing: Batch Processing and Stream Processing. I am not at all familiar with the concepts of distributed filesystems, MapReduce, and the other details discussed in the Batch Processing chapter, so I found it a bit hard to keep focus while reading about it. However, the Stream Processing chapter was very interesting.

To sum up: I really enjoyed reading this book. It was a great read and it really helped me get a better understanding of the system I am working with (maybe most importantly, what it is NOT designed for). I recommend anyone working with larger data-intensive systems to read it; it will take time, but it is time well invested.

Finally, I would like to thank Kristoffer who told me to read this book. Thank you!

Book review: C# in depth (4th ed)

C# in depth is written by Jon Skeet, a software engineer currently working at Google. He is known for being an extremely active user on Stack Overflow, having an odd fascination with dates and times (he is one of the Noda Time library authors), and being very interested in the C# language.

As the C# language has evolved and new versions have been released, so have new editions of the book been published. The latest edition, the fourth, covers C# 1 – 7, and a little bit on the upcoming C# 8. It is pretty much a history book of the C# language with deep dives into the most important changes and features.

I really want to stress that this is a book about the C# language and not the .NET Framework. Hence, it covers how the syntax, keywords, and the language itself have evolved. It is also not a book for someone starting out with C# wishing to learn how to write Hello World; it is for the intermediate to advanced developer.

Book cover

One reflection I had when reading this book is that Jon sometimes writes as if he is explaining concepts to someone who has very little experience with C#, and in the next paragraph he writes for someone with deep knowledge of the language. Many parts of the book were about things I already knew quite well and could just skim through, and some parts I had to read really slowly to be able to follow along.

I also think that he sometimes takes it a bit too far, even though the title of the book is C# in depth. I never thought it was possible to write so much about tuples… Anyway, most sections of the book are interesting, well written, and well explained.

Summary: This is not a must read. You can be a great C# developer without having read it. But if you are interested in the history and evolution of the C# language, and wish to gain deeper understanding of the different parts that make up the language, then this book is for you.

The end of the .NET Framework?

I remember last autumn when I was out jogging. I was listening to an episode of a well known podcast on programming, and the future of .NET was discussed. The hosts and the guest on the show were discussing the role of .NET Core, and I remember that they recommended using Core for green field projects, and even for older projects with a lot of active development. However, they were all very convinced that the original .NET Framework would be actively developed and maintained for many, many years to come, no need to worry. It seems that they were wrong.

Today, the future of .NET Framework has taken a different turn. You probably know that a new version of the C# language is to be released later this year. The new version will be called C# 8 (current version is 7.3). C# 8 will introduce language support for new types that have been added to .NET Standard 2.1.

Here comes the interesting part. In order to support these new types fully, changes are required to the .NET runtime. It has however been decided that these changes will not be done to the .NET Framework. It will stay on .NET Standard 2.0. Only .NET Core 3.0 will support .NET Standard 2.1.

This also means that it will not be possible to compile C# 8 code if targeting .NET Framework. Not even with the newest .NET Framework 4.8 version.

It has also been announced that version 4.8 will be the last version of the original .NET Framework. After that, and after .NET Core 3, the plan is to release .NET 5 in 2020. .NET 5 will, however, be based on .NET Core 3 and Mono.

My view on this is that it is probably the right path forward for .NET. The current situation, with .NET Framework, .NET Core, Mono, etc. is confusing for developers, and I am sure it takes a lot of energy and resources to maintain all different tracks.

To you .NET developers out there, I would recommend starting to investigate what it would take to migrate your active projects to .NET Core.

You can find an official statement backing up the claims in this post at https://devblogs.microsoft.com/dotnet/building-c-8-0/

When to NOT use interpolated strings

I am currently reading the newest edition of Jon Skeet’s book ”C# in depth” (https://csharpindepth.com/) which is a quite interesting read so far.

There is however one thing in the book that made me feel really bad about myself. You see, I really think interpolated strings help improve readability quite a lot, and therefore I have been replacing a lot of code similar to this:

Log.Debug("Value of property A is {0} and B is {1}", variable.A, variable.B);

with this:

Log.Debug($"Value of property A is {variable.A} and B is {variable.B}");

Can you think of a reason why this is a BAD thing to do? Assume that you have your code live in a production environment. In most cases debug level logging will then be turned off.

In the first variant of the code, where the properties are given as separate parameters, the Debug method will just return and nothing is done with the parameters. But in the second variant, where the parameter to the Debug method has been changed to an interpolated string, strings will be constructed for both properties and the final formatted string will be built before Debug is even called. This means that the program does all the work needed to construct the string, and then just throws it away.

Using an interpolated string in this scenario might slow down the application quite a bit, even with Debug level logging turned off!
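If you want to keep the readability of interpolation, one common mitigation is to guard the call with a level check. The IsDebugEnabled property below is hypothetical; the exact member depends on your logging framework:

```csharp
// Hypothetical level check -- many logging frameworks expose something
// similar. The interpolated string (and the work of formatting both
// property values) now only happens when debug logging is actually on.
if (Log.IsDebugEnabled)
    Log.Debug($"Value of property A is {variable.A} and B is {variable.B}");
```

With the guard in place you pay only for a boolean check in production, instead of building a string that is immediately discarded.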

To summarize, do not use interpolated strings unless you are 100% sure that the result will actually be used!

Book Review: Agile Principles, Patterns, and Practices in C#

I have been reading Robert C. Martin’s and Micah Martin’s book, Agile Principles, Patterns, and Practices in C#, for quite a while now. The reason it has taken me so long to finish is that it is packed with so much information and covers so many aspects of software development, enough for at least 3-4 different books of their own.

Book cover

The book begins with a section on Agile Development which covers topics such as Agile Practices, Extreme Programming, Planning, Testing, and Refactoring. It continues with a section on Agile Design where the famous SOLID principles are covered, along with UML and how to work effectively with diagrams. In the third section a number of design patterns are introduced, and the practices learned so far are applied in a case study. Finally, the fourth and final section covers the Principles of Package and Component Design (REP, CRP, CCP, ADP, SDP, and SAP) and introduces several more design patterns. It ends with a lot of code examples where database (SQL) support and a user interface are added to the application introduced in section three.

Even though the book is over 10 years old it is still highly relevant. Agile software development, good practices and principles, and patterns for OOP are skills that all software developers today will benefit from educating themselves on. There are tons of online materials, classes, and other books that cover these topics, but I don’t know of any other resource that has all of it in one place.

With that said, I highly recommend this book. But to get the most out of it you need to be prepared to put a lot of time and focus into reading it and really understanding the reasoning behind the principles and patterns. Personally I had to re-read some sections and take notes while I was reading, or I felt like I didn’t get all the details. It might be helpful to buy some copies for your workplace and run a book circle so that you get to discuss the contents with other developers.

Verdict: Highly recommended!

The Stable-Abstractions Principle


This is the last of the Principles of Package and Component Design, the Stable-Abstractions Principle (SAP). It says: ”A component should be as abstract as it is stable.”

In the Stable-Dependencies Principle (SDP) post we learned that stability, or in that case instability, I, can be calculated using the formula I = Ce / (Ca + Ce), where Ca is the number of afferent couplings and Ce is the number of efferent couplings. A component that has many dependencies towards it, but depends on only a few external classes, is considered stable, and vice versa.
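As a quick worked example of the formula (the numbers are made up):

```csharp
static class Metrics
{
    // Instability I = Ce / (Ca + Ce), where Ca is the number of
    // afferent (incoming) couplings and Ce the number of efferent
    // (outgoing) couplings.
    public static double Instability(int ca, int ce) =>
        (double)ce / (ca + ce);
}

// A component that nine other components depend on (Ca = 9) and that
// itself depends on only one external class (Ce = 1) gives
// Metrics.Instability(9, 1), i.e. 1.0 / 10 = 0.1: very stable.
```

A component with the couplings reversed (Ca = 1, Ce = 9) would score 0.9, i.e. very unstable, and is correspondingly easy to change.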

Conforming to SAP leads to stable components containing many abstract classes, and unstable components containing many concrete classes.


The goal of making stable components abstract is to allow them to be easily extended without constraining the design.

Unstable components, on the other hand, can contain concrete classes, since they are easily changed.


Just as for instability we can define a metric that helps us understand how abstract a component is. The formula is very simple:

A = Na / Nc

where Na is the number of abstract classes in the component and Nc is the total number of classes in the component. A component with A = 1 is completely abstract while a component with A = 0 is completely concrete.
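The same metric expressed as code (again with made-up numbers):

```csharp
static class Metrics
{
    // Abstractness A = Na / Nc, where Na is the number of abstract
    // classes in the component and Nc the total number of classes.
    public static double Abstractness(int na, int nc) =>
        (double)na / nc;
}

// A component with 3 abstract classes out of 4 total classes gives
// Metrics.Abstractness(3, 4) = 0.75: mostly abstract, as SAP wants
// a stable component to be.
```

Comparing A with the instability I from the SDP post tells you whether a component sits where SAP says it should: stable components (low I) should have high A, and unstable components (high I) should have low A.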