Physical books in a connected world

Some might say that technical books, printed on dead trees, have lost their raison d’être in the connected world we live in. To some extent they are right; in many cases it is better to look up the latest information online than to consult a dusty old reference manual. In other cases, however, a proper book will serve you much better than an online resource.

Some of the books on my shelf

For one, the quality bar that has to be met in order to get a technical book published is much higher than for anything posted online. If you buy a technical book you can be fairly sure that the author is an expert in the field and that at least one, and in many cases several, other experts have been involved and given their input and approval.

Secondly, with a book you have all the information collected in one place, usually nicely structured and with a comprehensive index at the back. I often find it just as easy to look something up in a book as it is to search for it online, and in many cases the descriptions and examples in books are much more educational, which helps you understand why things work as they do and not just how they work.

An argument you sometimes hear against buying technical books is that ”technology changes so fast that your books will be out of date before you have finished reading them”. In some cases this is true, in many cases it is not. The foundations on which most new technology is built have not changed in decades. Take the major programming paradigms as an example. Procedural, object-oriented, and functional programming were all invented in the 50’s and 60’s and the core concepts are still the same. Hence, learning the core concepts is useful and will stay useful, regardless of the programming language you write your programs in. (This is, by the way, something that is often overlooked when recruiting software developers: recruiters often look for experience with a particular programming language rather than a deep understanding of the programming paradigm. The language is often a minor detail; a good Java programmer will pick up C# in no time, and vice versa.)
Other areas that haven’t aged much are data structures, algorithms, and relational databases. Sure, SQL databases are always evolving and NoSQL databases have gained a lot of popularity recently, but the foundations laid out by Edgar F. Codd in the 1970’s are still highly relevant.

In conclusion, traditional technical books still have a place in the modern connected world. They are great for educational purposes when the goal is to learn about concepts and ideas that have proven to stay true over the years. It is also a nice feeling to grab a cup of coffee, pick up a real physical book, and flip between actual pages at a peaceful pace.

Options in F#

In the last post I compared the new nullable reference types that were introduced in C# 8 with the Option type from language-ext, which tries to mimic the functional paradigm’s approach to representing the absence of a value. One of the major drawbacks of Option in C# is that there is no built-in language support; instead you have to rely on third-party packages, or roll your own.

In F#, however, as well as in other functional languages, Option is built in from the start and is the default way to represent a missing value. I don’t usually write code in F#, but I wanted to take a look at what the scenario from the previous post would look like in a functional-first language, so I made a test implementation in F#. Note that this is just dummy code to test the scenario; in a real-world application the functions would look very different:

open System

type UserId = UserId of uint32

type Customer =
    { Id: UserId }

type Email = Email of string

let tryParse (s: string) : UserId option =
    match UInt32.TryParse s with
    | true, num -> Some (UserId num)
    | _ -> None

let getCustomerById (id: UserId) : Customer option =
    match id with
    | UserId 1u -> Some { Id = UserId 1u }
    | UserId 2u -> Some { Id = UserId 2u }
    | _ -> None 

let getEmailForCustomer (customer: Customer) : Email option =
    match customer.Id with
    | UserId 1u -> Some (Email "")
    | _  -> None

let sendPromotionalEmail (email: Email) : unit =
    // Dummy implementation; a real one would talk to an e-mail service
    printfn "Sending promotional e-mail..."

[<EntryPoint>]
let main argv =
    if argv.Length <> 1 then
        failwith "Usage: program <user_id>"

    let result =
        tryParse argv.[0]
        |> Option.bind getCustomerById
        |> Option.bind getEmailForCustomer
        |> Option.map sendPromotionalEmail

    match result with
    | Some _ -> printfn "Email sent successfully"
    | None -> printfn "No email sent"

    0

Note that even though F# is a statically typed language, you are not required to explicitly define types for all input parameters and return values; most of the time the compiler can infer the types so you don’t have to. In the code above I have defined the types anyway, to make it clearer how to work with Option, bind and map.

As you can see there are three functions that return options: tryParse, which returns UserId option, getCustomerById, which returns Customer option, and getEmailForCustomer, which returns Email option. Note that even though none of the functions takes an option as input, we are able to pipe the returned value through the functions in the main function using Option.bind and Option.map, just like we were able to do in C# using language-ext.


In this simple scenario it was quite easy to port the C# implementation to F#, so much so that I would like to explore other scenarios as well and how to implement them in F#.

If using F# is an option it might be worth considering it, and getting the full experience, over trying to copy the behavior with language-ext. I will continue exploring F# and how it compares to C# in different scenarios. I believe it will be an interesting journey.

Nullable reference types compared to the Option monad

I wanted to investigate how nullable reference types, the big new feature introduced in C# 8, compare to the Option type from language-ext. But let’s start with some information to set the scene.

Nullable reference types

The way to indicate the absence of a value in C#, as in many other languages, is usually to use null. For example, assume that we wish to look up a user in the database by supplying a user id. The method signature might look something like this:

public User GetUserById(int userId);

Now, this signature is a bit problematic since it does not say anything about what will happen when no user with the given userId exists. It might throw an exception, it might return a default User, or it might return null. The only way to really know is to look at the implementation. Also, if null is returned and the calling method doesn’t cover this case, we might end up with the dreaded null reference exception.

In C# 8 these problems have been addressed by the introduction of nullable reference types. If you turn on this feature in your project (see the documentation for details), the GetUserById method can be declared like this instead:

public User? GetUserById(int userId);

Notice the question mark after User. It indicates that this method might return null, which is a strong indication of what will happen if no user with the given user id exists. Also, the compiler will now generate warnings if you try to dereference the return value without first checking for null. This is nice!
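To make this concrete, here is a minimal sketch of the warning in action. The types and method names are made up for illustration:

```csharp
#nullable enable
using System;

public class User
{
    public string Name = "Alice";
}

public class UserRepository
{
    // The '?' tells both callers and the compiler that null is a possible result.
    public User? GetUserById(int userId) =>
        userId == 1 ? new User() : null;

    public int NameLength(int userId)
    {
        var user = GetUserById(userId);

        // Dereferencing without a null check triggers warning CS8602
        // ("Dereference of a possibly null reference"):
        // return user.Name.Length;

        // Checking for null first (here via ?. with a fallback) satisfies the compiler:
        return user?.Name.Length ?? 0;
    }
}
```

With the feature turned off, the commented-out line would compile silently and fail at runtime for any id other than 1.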

The Option monad

Now, don’t be scared of the M-word. You don’t need to understand what a monad is for the following text to make sense. Later in this post I will show some nice things that come from this, but you still don’t need to understand all the underlying details. (In fact, if you’re a C# developer you probably use monads all the time without even knowing it; every type that implements IEnumerable is actually a monad.)

Anyway, the idea of using an Option type comes from the functional paradigm. In F#, for example, the option type is part of the language out of the box. In C# you can find a nice implementation of it in language-ext, which is available as a NuGet package from the official NuGet repository.

Using the option type the GetUserById method would get a signature like this:

public Option<User> GetUserById(int userId);

The Option type is like a container that can either be empty or contain a single value. Just like a nullable reference type, using an Option is a clear indication to the caller of what to expect in the case where no user with the given user id exists.

And, just like with nullable reference types, the calling code must handle the possibility that no user is returned. Actually, using Option is even safer, since there is no way to get the code to compile without handling both possible outcomes.

Comparing the two

So, are there any benefits to using nullable reference types over the Option monad, or the other way around? Well, the obvious benefit of nullable reference types is the built-in language support starting from C# 8. No need for extra NuGet packages; a quick change in the project file and it’s enabled. It is even possible to introduce it gradually, using more fine-grained control mechanisms than entire projects.

Also, nullable reference types are probably a lot easier for existing C# programmers to accept than the Option type, which might look and feel a bit strange and takes a while to understand and see the benefits of. It sure feels like nullable reference types are the better option here (no pun intended).

Now, with that said, you might take the blue pill, stop reading here, and remain in ignorance.
Or you may take the red pill, and I’ll show you how deep the rabbit hole goes. (Did not get the reference? Shame on you! Go see The Matrix and come back here afterwards.)

The problem with null is that it has no value; it is the absence of a value. An Option, on the other hand, is always a valid instance. In language-ext the Option type is implemented as a struct, meaning it can’t be null. Also, if the Option contains a value, that value can likewise not be null; the implementation stops any attempt to create an Option containing a null value. This means that if you use Option you eliminate null references from your code, while with nullable reference types they are still there.
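To illustrate how an Option can rule out null by construction, here is a stripped-down sketch of such a type. It is a hypothetical stand-in for illustration, not language-ext’s actual implementation:

```csharp
using System;

public readonly struct Opt<T>
{
    private readonly T value;
    public bool IsSome { get; }

    private Opt(T value) { this.value = value; IsSome = true; }

    // The only way in: null is rejected at construction time,
    // so a "Some" can never hold a null reference.
    public static Opt<T> Some(T value) =>
        value is null
            ? throw new ArgumentNullException(nameof(value))
            : new Opt<T>(value);

    // The struct's default value represents "no value".
    public static Opt<T> None => default;

    // The caller must supply both branches to get a result out,
    // so the empty case can never be forgotten.
    public TResult Match<TResult>(Func<T, TResult> some, Func<TResult> none) =>
        IsSome ? some(value) : none();
}
```

Because the type is a struct, even `default(Opt<T>)` is a valid, empty instance rather than a null reference.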

The question that should come to mind here is: what difference does it make whether I use null or not? Let me show you with an example. Assume that you have a program that takes a user id, a string to be parsed as an int, as input. It looks up the user in your database, then, if a user is found, it looks up the user’s e-mail address and finally, if an e-mail address could be found, it sends a promotional e-mail to the user. Using nullable reference types your method signatures might look like this:

public Customer? GetCustomerById(int userId);
public Email? GetEmailForCustomer(Customer customer);
public void SendPromotionalEmail(Email email);

and to cover for invalid input and possible null values, the calling code needs to look something like this:

public void SendPromoToCustomer(string id)
{
    if (int.TryParse(id, out var userId))
    {
        var customer = GetCustomerById(userId);
        if (customer == null)
            return;

        var email = GetEmailForCustomer(customer);
        if (email == null)
            return;

        SendPromotionalEmail(email);
    }
}

Now, this is a really simple example but it clearly shows the noise that the handling of null values introduces to the code. In a large codebase, null checks like these are everywhere.

Let me show you how the code would look using Option, Bind and Do from language-ext (here comes the part where the fact that Option is a monad makes all the difference). First, let’s change the null reference type returning methods to return Option instead:

public Option<Customer> GetCustomerById(int userId);
public Option<Email> GetEmailForCustomer(Customer customer);
public void SendPromotionalEmail(Email email);

Now, the calling code can be re-written to this:

public void SendPromoToCustomer(string id)
{
    parseInt(id)
        .Bind(GetCustomerById)
        .Bind(GetEmailForCustomer)
        .Do(SendPromotionalEmail);
}

This code still handles the possibility that the id string might not be parsable as an int, that GetCustomerById might not return a customer, and that an e-mail address might be missing. But since we always have a valid instance to work with, an Option, we can just push it through the workflow and let the framework (language-ext) handle the error cases.

Bind and Do are extension methods on Option. They inspect the contents of the Option, and if it contains a value, the function given as a parameter will be called with that value as its argument. The return value will then be used in the next call to Bind or Do. However, if the Option does not contain a value, the function will not be called; instead an empty Option is returned directly by Bind or Do.
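The short-circuiting behaviour of Bind can be seen in a small self-contained example. This assumes the language-ext NuGet package (LanguageExt.Core) is referenced, and the Half helper is made up for illustration:

```csharp
using LanguageExt;
using static LanguageExt.Prelude;

public static class BindDemo
{
    // Made-up helper: succeeds only for even numbers.
    public static Option<int> Half(int n) =>
        n % 2 == 0 ? Some(n / 2) : Option<int>.None;

    public static Option<int> QuarterIfPossible(int n) =>
        // 8 -> Some(4) -> Some(2): both Binds run.
        // 6 -> Some(3) -> None: the second Half returns None,
        //      and Bind just passes that None along unchanged.
        Some(n).Bind(Half).Bind(Half);
}
```

So QuarterIfPossible(8) yields Some(2), while QuarterIfPossible(6) yields None without any null check ever appearing in the chain.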


I truly believe that Option is a better alternative than null for representing the absence of a value. However, I also understand that adopting Option and language-ext is a big change that requires many developers to re-think the way they reason about handling these cases. Hence, I would not introduce it into a large existing codebase where developers are used to handling null. In that case I would recommend introducing nullable reference types, if possible (C# 8 is not supported on older frameworks).

But when starting a new, smaller project, I would opt for using language-ext and try to get away from null as much as possible.

If you are interested in learning more, I recommend that you take a look at the review I wrote of the book ”Functional programming in C#”.

Book review: Functional programming in C#

I have been interested in functional programming for quite some time, experimenting with both F# and Clojure as well as trying out some Scala. But I must admit that I had never really put full focus on it for any longer period, not until I found this book:

Book cover

Functional programming in C# – How to write better C# code is a book by Enrico Buonanno, a CS graduate from Columbia University with over 15 years of experience as a developer, architect, and trainer. It is published by Manning.

What made this book appealing was that it is written for developers who are fluent in C#, a language that I know quite well, meaning I did not have to struggle with both a new language and all the new concepts and ways of thinking that come with functional programming. In retrospect this was a really nice way to learn what functional programming is all about.

This book is not for the beginner programmer. It goes into several advanced topics in depth and I would only recommend it to an experienced C# developer who is willing to spend a lot of time and energy on learning about functional concepts such as immutability, functors, monads, partial application, currying, and the like.

With that said, if the above description fits you, I can really recommend that you give this book a try. It describes a lot of concepts and has a lot of sample code and coding exercises. The first eight chapters all end with a bunch of challenging exercises for the reader, and there is also a GitHub repository available that includes the author’s own functional library along with exercises and solutions from the book.

The only critique I can give is that I think it would have been better if the author had used the well-established language-ext library instead of developing his own. When comparing the two I get the impression that the author’s library is pretty much a subset of language-ext and does not add anything I don’t get from language-ext.

It feels nice to finally understand what a monad is. My next step on this journey will probably be to take a deep dive into a pure functional language (like Haskell).

.NET under the hood – The ’null’ keyword

I assume that you are familiar with the null keyword in C#. It represents a null reference: one that does not reference any object. Even though it has no type of its own, it can be assigned to a reference of any reference type.

Null as a concept is quite abstract. It is often used to represent the absence of a value, so in itself it has no concrete representation. This may work in theory, but once you boil it down to machine code you have to represent null somehow. In this post I will discuss some Common Intermediate Language (CIL) code related to null and also look at some implementation details of the dotnet runtime. But let’s start with some C# code:

string s = "A string";
if (s == null)
    Console.WriteLine("s is null");

In the code above we have an object, s, of type string, which we compare against null. Let’s compile this and see what the compiler does with it. Don’t worry if you’re not used to reading CIL; I will explain what the opcodes do. Using the tool ildasm you can inspect the CIL code generated by the compiler. For a Main method containing the above code the compiler generates the following CIL:

.method private hidebysig static void  Main() cil managed
{
  // Code size       27 (0x1b)
  .maxstack  2
  .locals init (string V_0,
           bool V_1)
  IL_0000:  nop
  IL_0001:  ldstr      "A string"
  IL_0006:  stloc.0
  IL_0007:  ldloc.0
  IL_0008:  ldnull
  IL_0009:  ceq
  IL_000b:  stloc.1
  IL_000c:  ldloc.1
  IL_000d:  brfalse.s  IL_001a
  IL_000f:  ldstr      "s is null"
  IL_0014:  call       void [System.Console]System.Console::WriteLine(string)
  IL_0019:  nop
  IL_001a:  ret
} // end of method Program::Main

Instructions IL_0001 to IL_0007 push the string ”A string” onto the stack. Then on line IL_0008 comes an interesting opcode, ldnull. This opcode means ”push null onto the stack”. The next opcode, ceq, means ”check if the two items on top of the stack are equal”. But wait a minute… if null doesn’t have a value, how can it be pushed onto the stack? And how can it be compared with another value? Something fishy is going on here.

To understand what happens when ldnull is executed we need to take a look at the implementation of the runtime engine. Fortunately the source code of the dotnet runtime is available on GitHub, for anyone to dive into.

After browsing the code for a while I found an interesting piece of code in a file called interpreter.cpp: a function with the signature void Interpreter::LdNull(), which is executed when CEE_LDNULL is encountered. Looking in the file opcode.def it becomes obvious that CEE_LDNULL corresponds to the opcode ldnull. The interesting part of the LdNull function is the following two lines of C++ code:

OpStackTypeSet(m_curStackHt, InterpreterType(CORINFO_TYPE_CLASS));
OpStackSet<void*>(m_curStackHt, NULL);

This code sets the type of the stack variable to class and pushes a void pointer with value NULL, which is defined as 0 (zero), onto the operand stack.

If you are familiar with C and C++ you will recognize this: it is a regular null pointer. And for those of you who aren’t familiar with null pointers: a pointer holds an address to a place in your computer’s memory where a value (or several values) is stored, and a null pointer is a pointer that has the value 0, which is not a valid memory address for your program to store any data at.

So, in summary, null in .NET is, in practice, a pointer to memory address 0 (zero).

A functional approach to error handling in C#

Imagine that you want to write a simple console application that queries the user for two integers, divides the first integer by the second, and writes the result to the console window. What can go wrong in a simple program like this? The first thing that comes to mind is probably that the user might enter something other than integers, like ”rainbow unicorns” or ”l33t h4x0r”. Secondly, they might enter a zero as the second integer, trying to crash the program by having it perform a division by zero (which throws a DivideByZeroException).

In order to handle these cases you can use the Int32.TryParse method for parsing the input and a try-catch statement to cover a possible division by zero. Your code might look something like the following:

static void Main()
{
    var numerator = QueryUser("Numerator: ");
    var denominator = QueryUser("Denominator: ");

    if (int.TryParse(numerator, out var n) &&
        int.TryParse(denominator, out var d))
    {
        try
        {
            WriteLine($"{numerator} / {denominator} = {n / d}");
        }
        catch (Exception e)
        {
            WriteLine($"An error occurred: {e.Message}");
        }
    }
    else
    {
        WriteLine("Only integers are allowed as input.");
    }
}

In the code above the QueryUser method is a simple helper method that writes a message to the console and reads the user’s response. The implementation follows the standard imperative pattern using if-statements for controlling the program flow and the standard OOP error handling using try-catch statements.

Now, let me present to you a different approach for handling lack of values and possible errors. It involves using the C# Functional Programming Language Extensions, language-ext, by Paul Louth.

static void Main()
{
    var numerator = QueryUser("Numerator: ");
    var denominator = QueryUser("Denominator: ");

    var result =
        from n in Some(numerator).Bind(parseInt)
        from d in Some(denominator).Bind(parseInt)
        select Divide(n, d);

    result.Match(
        Some: s => s.Match(
            Succ: res => WriteLine($"{numerator} / {denominator} = {res}"),
            Fail: err => WriteLine($"An error occurred: {err.Message}")),
        None: () => WriteLine("Only integers are allowed as input."));
}

If you haven’t seen this approach to error handling before it might be difficult to understand what’s going on here. Where is the if-statement and the try-catch block? And what is Some, Bind, and Match?

Explaining everything in detail would require more than a single blog post. If you are interested in the details I recommend reading Paul Louth’s presentation of language-ext, and also the great book Functional Programming in C#, which explains all the details of the Option and Either monads and how monadic Bind is used to chain monad-returning functions together.

However, to understand the code above you need to know that QueryUser has not been changed; it still returns whatever the user enters, which might not be parsable as an integer. Some is a function defined in language-ext. It takes a value, in this case of type string, and lifts it into an Option type. An Option is a container that can either contain a value, Some, or be empty, None. In this case the Option container will have a value: the string that the user entered.

Next comes Bind. What Bind does is take the Option and apply its value to the function given as an argument to Bind. In this case Bind takes the string that the user entered and calls the parseInt function with the string as parameter. If the user has supplied a value that can be parsed as an int, Bind returns an Option containing the resulting value. However, if the value could not be parsed, Bind returns an empty Option, None. This replaces the TryParse calls in the imperative code.

Now that we have two Option values, n and d, we use select with the Divide function to calculate the result. It is not visible in the code above, but this use of select depends on special implementations of Select and SelectMany that accept Option types as input parameters. (You might be familiar with the Select and SelectMany implementations for IEnumerable that are part of LINQ; these are similar, but for Option.)
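A stripped-down version of such a query shows the mechanics in isolation. This assumes language-ext is referenced; parseInt comes from its Prelude and returns Option&lt;int&gt;, while DivideStrings is a made-up wrapper for illustration:

```csharp
using LanguageExt;
using static LanguageExt.Prelude;

public static class QueryDemo
{
    public static Option<int> DivideStrings(string a, string b) =>
        // SelectMany for Option drives the 'from' clauses:
        // if either parseInt yields None, the whole query is None
        // and the division is never evaluated.
        from n in parseInt(a)
        from d in parseInt(b)
        select n / d;
}
```

Note that a parsed zero denominator would still throw here; that is exactly why the post wraps the division in a Try via the Divide function.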

The Divide function is an implementation of the Try delegate from language-ext, which handles any exceptions that might occur and wraps them in a Result type. This might be a lot to wrap your head around, but if you’ve gotten this far you are probably wondering about the Match calls at the end.

So, the type of the result variable is Option&lt;Result&lt;int&gt;&gt;, where the outer type, Option, is Some if and only if both input strings were successfully parsed as int values (otherwise None), and the Result is Succ (success) if and only if no exception was thrown when performing the division (otherwise Fail). Match is used to check which of these four possible cases the result ended up as, and outputs a different text depending on the outcome.

There are quite a lot of new concepts to understand, and a very different way of thinking about errors, if you want to use this approach to error handling. The small toy examples you have to use in blog posts like this do not show any real benefits either, so you might be very sceptical about all this right now. The value of the functional approach is that, once you understand the concepts, it makes your code much easier to understand and reason about. Traditional OOP and imperative code sprinkled with if-statements and exceptions being thrown and caught at different levels quickly becomes extremely hard to follow. The functional approach pushes the error handling to the end (the Match in the code above).

Finally I would like to recommend Scott Wlaschin’s introduction to Railway Oriented Programming which also includes a link to his presentation at NDC on the topic.

Is it possible to automate too much?

One goal that seems to be common among most organizations is to automate as many repetitive manual tasks as possible. Automation has many advantages: it removes the risk of human mistakes, it reduces the time it takes to get things done, it frees up human resources, and it hopefully saves the business quite a lot of money.

As developers we have the ability to automate a lot of repetitive tasks, and we do, but are we sometimes taking automation too far? Is there a need to also look at the negative aspects of automation?

I believe that, when it comes to automating manual work, we tend to focus solely on the benefits and forget to think about the drawbacks and risks involved. Below I list some of the risks I have seen that come with automation.

The process becomes hard to understand

Automating a process often makes it almost completely opaque to the majority of the people in the organization. It goes from being something that you can easily track and talk to people about to being something that only a few developers really understand (if you are lucky and the developers who automated it are still part of the organization).

This leads to the organization becoming dependent on the development team for support. Questions about why the outcome of the process looks a certain way, how to use and integrate with any tools, and so on, now need to be answered by the development team, which often has a lot of other tasks and may not prioritize support very highly.

Minor deviations become a nightmare to handle

It is often easy to make small adjustments to a manual process for those cases that don’t fit 100% into the ordinary workflow. You may attach a note with some extra information, write something in the margin, and then just continue with the next case.

In an automated process, however, every case usually has to be handled the exact same way; there is no room for adjustments. So when these cases happen, and they will, the process needs to be updated by the development team, or the development team might need to carefully make manual inserts and adjustments to databases to handle that single case. This makes even small deviations very expensive, and risky, to handle.

When things go wrong, they go very wrong

In a manual process, mistakes are often minor and can be adjusted when discovered. When an automated process goes wrong, the impact may be huge.

There are many examples of automated systems causing havoc because of bugs or configuration mistakes. Computers work extremely fast and will keep producing incorrect output until the bug or configuration mistake is discovered and fixed. The manual work needed to fix all the incorrect output can be very costly. There is even an example of a company losing $1 billion in 45 minutes due to a configuration error in an automated process.


A single equipment error causes your entire organization to halt

When a process is automated, the manual way of doing the work is often removed. This can cause large parts of your organization to become dependent on a single piece of computer equipment. I have seen this happen many times during my years as a developer, sometimes with hundreds of employees unable to do their work.


I am all in favor of automating repetitive tasks, and I strongly believe that it is a good thing to free up human resources for more qualified work. But I also believe that we need to be aware that automation comes with risks and drawbacks, and we should think carefully before carelessly automating everything we think can be done by a computer.

My thoughts on ”Getting Things Done”

I just finished re-reading David Allen’s book ”Getting Things Done” (GTD). I thought this would be a good time to share my thoughts on the methods described in the book.

The reason this time is more suitable than the last time I read the book is that I have now been using most of the methods described for several months. I first read the book about five months ago and decided to give the system a go.

Book cover

Who is this system for?

If you recognize any of the following, and wish to change it, you might benefit from learning and applying GTD:

  • You keep everything you need to remember to do in your head, with the result that you often forget to do things.
  • You often remember things you need to do at times and places where you can’t do them.
  • You often come up with good ideas, but you don’t take any notes, so you often forget about them.
  • You have hundreds of e-mails in your inbox. Many are still unread, some are kept for reference for things you might want to check again in the future, some work as reminders of things you might need to do some time, and others you should probably act on as soon as possible.
  • At work and at home you have piles of stuff lying around. They might contain notes from meetings, bills you need to pay, magazines you want to read, forms you need to fill in and hand in, and so on.
  • Your drawers are filled with random stuff and it’s hard to find the things you need when you need them.
  • You often don’t have a pre-defined plan for what to do, but rather work on whatever feels most urgent because someone throws it in your lap.
  • You have a hard time prioritizing your work because you don’t have a good overview of all the things you have to work on.

Personally, I had been trying to come up with a good system for handling all the things I needed to do, and all the e-mails I received, for a long time before I started reading about GTD, so I thought I had most of my stuff in pretty good order. What I have realized now is that it could be a lot better!

Changes I made after the first read-through

After I finished reading GTD the first time around I made quite a lot of changes. Here are the most important:

  • I bought a desk and office supplies and set up a workplace at home
  • I bought a physical inbox where I can put bills, forms, etc. that I need to handle somehow
  • I created lists where I started writing down all my projects
  • I started thinking about next actions needed to drive my projects forward
  • I went through all my e-mails and totally emptied my inboxes
  • I started following up all my projects and lists on a weekly basis
  • I set up a calendar where I can put reminders on things I need to follow up on in the future
  • I created an archive where I can store stuff I want to keep as reference

For the last months I have been using this system as much as possible, trying to make it a routine and something that I use without having to think about it.

Has it made any difference?

Yes! Looking back and reflecting on how things are now compared to before I started applying GTD, I can honestly say that I will keep using the system. The most positive effect so far is that I no longer feel stressed about the possibility that I have forgotten to do something. Now I feel comfortable that I have a system where I can put things, so that I do not need to keep everything in my head.

Having easily accessible, location-based lists has also helped me a lot when deciding what to do in different places. And of course, having an empty e-mail inbox is a nice feeling. The lesson I learned here is that you should not use your inbox as anything other than a temporary holding place for incoming stuff until you have decided how to handle it. As soon as something is handled, it is removed from the inbox, and never put back.

So, is GTD for you? There is no way for me to know, but I do recommend giving it a try. The worst thing that can happen is that you don’t like it and go back to doing things the way you did before.

Book review: Designing Data-Intensive Applications

I just finished reading Martin Kleppmann’s book ”Designing Data-Intensive Applications”. Reading it cover to cover takes time, a lot of time, but it is definitely worth it.

It is not a book for everyone; it is a book for software developers and designers who work with systems that handle large amounts of data and have high demands on availability and data integrity. The book is quite technical, so it helps if you have a good understanding of your system’s infrastructure and the different needs it is supposed to meet.

Book cover

The book is divided into three parts, each made up of several chapters. The first part is called Foundations of Data Systems and consists of four chapters. Topics covered include reliability, scalability, and maintainability; SQL and NoSQL; the data structures used in databases; different ways of encoding data; and modes of dataflow. The details are quite complex, and you will most probably not be able to just skim over the pages and expect to follow along.

The second part is called Distributed Data and has five chapters. It discusses Replication, Partitioning, Transactions, The Trouble with Distributed Systems, and Consistency and Consensus. After reading this part of the book, I remember thinking that it is amazing that these big, complex systems actually work. Kleppmann describes so many things that can go wrong in a system that you start to wonder how in the world it is possible that things actually do work…most of the time.

The third, and last, part of the book is called Derived Data and consists of three chapters. Here Kleppmann describes two types of data processing: Batch Processing and Stream Processing. I am not at all familiar with distributed filesystems, MapReduce, and the other details discussed in the Batch Processing chapter, so I found it a bit hard to stay focused while reading about them. The Stream Processing chapter, however, was very interesting.

To sum up: I really enjoyed reading this book. It was a great read, and it really helped me get a better understanding of the system I am working with (maybe most importantly, what it is NOT designed for). I recommend that anyone working with larger data-intensive systems read it. It will take time, but it is time well invested.

Finally, I would like to thank Kristoffer who told me to read this book. Thank you!

Book review: C# in depth (4th ed)

C# in depth is written by Jon Skeet, a software engineer currently working at Google. He is known for being an extremely active user on Stack Overflow, for having an odd fascination with dates and times (he is one of the authors of the Noda Time library), and for being very interested in the C# language.

As the C# language has evolved and new versions have been released, new editions of the book have been published. The latest edition, the fourth, covers C# 1 – 7, plus a little bit on the upcoming C# 8. It is pretty much a history book of the C# language, with deep dives into the most important changes and features.

I really want to stress that this is a book on the C# language, not the .NET Framework. Hence, it covers how the syntax, keywords, and language itself have evolved. It is also not a book for someone starting out with C# who wishes to learn how to write Hello World; it is for the intermediate to advanced developer.

Book cover

One reflection I had when reading this book is that Jon sometimes writes as if he is explaining concepts to someone who has very little experience with C#, and in the next paragraph he writes for someone with deep knowledge of the language. Many parts of the book were on things I already knew quite well and could just skim through, while some parts I had to read really slowly to be able to follow along.

I also think that he sometimes takes it a bit too far, even though the title of the book is C# in depth. I never thought it was possible to write so much about tuples… Anyway, most sections of the book are interesting, well written, and well explained.

Summary: This is not a must-read. You can be a great C# developer without having read it. But if you are interested in the history and evolution of the C# language, and wish to gain a deeper understanding of the different parts that make it up, then this book is for you.