Ways to pass Arguments in C++

In C++ there are many ways to pass an argument to a function. The following code shows some commonly used variants. It is also possible to add the const keyword at different places when passing by pointers to make either the pointer const or the value the pointer is pointing at const, or both.

You can also pass a reference to a pointer or a pointer to a pointer. I won’t cover these in detail but it is worth knowing that they do occur.

// Pass by value
void foo(Bar bar);
// Pass by reference
void foo(Bar& bar);
// Pass by const reference
void foo(const Bar& bar);
// Pass by rvalue reference
void foo(Bar&& bar);
// Pass by pointer
void foo(Bar* bar);

With all these possibilities it can be hard to know when to use which. Let’s try to find some use cases for each of the examples above.

Pass by value

When you pass an argument by value a copy of the original value will be created and supplied to the function. Creating this copy is expensive for large objects but there are at least two use cases where pass by value is the best fit.

Passing primitive data types

Examples of primitive data types are bool, char, int, long, float, and double. These are small and cheap to copy, and there is nothing to gain from creating a reference or a pointer to them instead of just copying the actual value.

Passing enum values

Enumerations, or enums for short, are usually represented by integers and can be passed by value without any extra overhead.

enum class Color { Red, Green, Blue };

void setBackground(Color color) {
  // Some implementation goes here...
}

int main() {
  setBackground(Color::Blue);
  // ...
}

Passing movable types for copying

This use case is a bit trickier to explain. From C++11, a type can be movable. This means that the ownership of its underlying data can be transferred – moved – to another object. One commonly used type that is movable is std::string. Without going into move semantics and rvalue references, we can just say that moving the ownership of the data is in many cases a lot cheaper than creating a copy and freeing up the original memory.

Now, since passing by value will create a copy it is possible to use this fact in cases where you would want to create a copy anyway and then move the ownership of the data in the copy to the new object. Sounds confusing? Let’s take a look at an example:

class Person {
  std::string m_name;
public:
  Person(std::string name)
    : m_name{ std::move(name) } {}
};

Here a std::string is passed in to the constructor by value, then the ownership of the underlying data is transferred from the argument name to the member variable m_name. If you are a seasoned C++ developer you might think that the correct way to do this would be to pass name as a const reference instead. That would, however, not take advantage of the fact that std::string is movable, which allows for some optimization. An example:

int main() {
  std::string name{ "John" };
  Person p{ name }; // Creates a copy of 'name' as argument
  Person p2{ "Danny" }; // Optimized. No copy is created.
}

In the creation of p2 in the code above the compiler will be able to generate optimized code that does not create any copy of ”Danny”. In order for the same code and optimization to be possible using references, two constructors would be needed: one taking the argument as a const reference and another taking it as an rvalue reference.

Pass by reference

This option is to be used when you need to somehow modify the argument passed in, and let the caller have access to the modified object. I try to avoid using this since it can be unclear when reading the calling code that the object passed in is actually being changed. Still there might be some cases where you want to do this. My advice would then be to name the functions so that it is clear that the object is changed:

enum class CreditScores { Low, Medium, High };

void setCreditScoreOn(Person& p) {
  p.creditScore = CreditScores::Low;
}

int main() {
  Person dylan{ "Dylan" };
  setCreditScoreOn(dylan); // Should be obvious that dylan is modified
  if (dylan.creditScore == CreditScores::Low) {
    std::cout << "Loan rejected\n";
  }
}

Pass by const reference

This option should be the default way of passing user defined types that you do not wish to modify or copy, or user defined types that you do wish to copy but that are not movable. Note that user defined types do include a lot of types in the standard library, such as std::string, std::vector, std::map, and so on. The exception to the rule is enums. These can be passed by value since they are represented by a primitive data type.

std::string getFullName(const Person& p) {
  return p.firstName() + " " + p.lastName();
}

By using the const keyword you tell the compiler that the Person object will not be modified when getFullName() is called. The compiler enforces this, and can sometimes use the guarantee to apply certain optimizations.

Pass by rvalue reference

The double ampersand, &&, identifies an rvalue reference. An rvalue is a value that does not have a name, and therefore you can’t get the address of it or put it on the left side of an assignment:

int n{3}; // 3 is an rvalue and n is an lvalue. Since n has a name
n = 4;    // it can be put on the left side of an assignment

But if something does not have a name, how do you add it as an argument to a function? Well, you define it directly in the argument list, like this:

void print(const std::string& str); // Normal (lvalue) reference
void print(std::string&& str);      // Rvalue reference

int main() {
  std::string str{ "Hello" };
  print(str); // Invokes the first print function
  print(std::string{ ", World!" }); // Invokes the second print function
}

The example above works but it is not something that you would normally do. What rvalue references really enable is move semantics. I will not go into the details of move semantics here, since it is a large topic that requires a post of its own, but I will mention that it is possible to cast a value to an rvalue reference with std::move.

Pass by pointer

In most cases passing raw pointers around should be avoided in modern C++. The reason for avoiding them is that it is easy to make mistakes – like forgetting to check if the pointer is a nullptr, accidentally de-referencing freed memory, or reading or writing outside of the allocated memory – which can lead to really nasty bugs that are next to impossible to find. With that said, there are still some use cases where pointers are needed. But when choosing between using a reference or a pointer, use references when you can and pointers when you have to.

One case where you might need to use a pointer is when interacting with a library written in C (there are no references in C), or when you need to indicate a special condition, which you can do with a nullptr. In the case of a raw pointer without any const you are free to alter both the pointer and the object that the pointer is pointing to:

void zeroUntil(int* values, int sentinel) {
  if (!values) return;
  while (*values != sentinel) {
    *values = 0;
    // 'values' is a copy of the pointer in the calling code, so
    // incrementing it has no effect on the pointer in the caller.
    values++;
  }
}

int main() {
  int values[]{ 1, 2, 3, 4, 5, 6, 7 };
  zeroUntil(values, 5);
  // values is [0, 0, 0, 0, 5, 6, 7]
}

Adding const either restricts modification of the pointer or the value it points to:

void foo(const Bar* bar); // bar is a pointer to a Bar object that is const
void foo(Bar* const bar); // bar is a const pointer to a Bar object that is not const
void foo(const Bar* const bar); // bar is a const pointer to a Bar object that is const

Ending words

Hopefully this article gave you a good overview of when to use pass by value, pass by reference, and pass by pointer. It does not cover all variants that you might encounter but enough to get a good understanding of the different possibilities.

Live long and code well!

Representing a missing value in C++

One thing that has bothered me quite a lot when coding in C# is the lack of a good way to indicate the absence of a value. There are a few different alternatives for how this can be done, the most common being using null. However, using null instead of an actual type quite often leads to crashes and bugs due to NullReferenceExceptions being thrown when a null check is forgotten somewhere in the code.

string GetName(int personId)
{
    var p = repository.Find(personId);
    return p.FirstName + " " + p.LastName; // What if p is null?
}

string GetNameWithNullCheck(int personId)
{
    var p = repository.Find(personId);
    if (p == null) return null; // The caller needs to check if null is returned
    return p.FirstName + " " + p.LastName;
}

As can be seen in the code above, it is easy to forget to check for null, and there is nothing in the last method that informs the caller that GetNameWithNullCheck might return null.

Functional languages like Haskell and (functional first) F# have a different approach to representing the absence of a value; Maybe in Haskell and Option in F#. These can both be seen as a container that can either be empty or contain a single value. As you might have guessed, an empty container represents the absence of a value.

let getName personId =
    match find(personId) with
    | Some(p) -> Some(p.firstName + " " + p.lastName)
    | None -> None

In the code above, both find and getName return an Option. By matching against Some and None it can be determined whether find returned a value, and the code can act accordingly. The big difference here, compared to the C# methods, is that the returned value must be checked for whether it has a value or the code will not even compile. Compile errors should always be preferred to runtime errors.

With this covered, let’s investigate how the case of a missing value can be handled in C++. The old way of handling the absence of a value was probably to use pointers and, in the case of no value, a nullptr. This is by no means better than null in C#. Dereferencing a nullptr can lead to all kinds of errors in an application and should be avoided at all costs. This is why I was happily surprised when I discovered that an optional type had been added to the C++ standard library as of C++17.

#include <optional>

std::optional<std::string> getName(int personId) {
    auto p = repository.find(personId);
    if (p)
        return p->getFirstName() + " " + p->getLastName();
    return std::nullopt;
}

Using optional has one major advantage over the other ways of handling a missing value; it is absolutely clear just by looking at the function’s signature that it may or may not return a value. You do not need to spend any time reading and understanding the function’s body to understand this.

The caller of the getName function will have to check whether the returned optional contains a value or not. This can be done in a couple of different ways depending on what fits best for the situation.

// n1 is assigned the name of the person or "No name" if there is no person with id 123
auto n1 = getName(123).value_or("No name");
// prints a greeting if a person with id 234 exists
auto o1 = getName(234);
if (o1)
    std::cout << "Hello " << *o1 << std::endl;
// throws an exception if no person with id 345 exists
auto o2 = getName(345);
if (!o2.has_value())
    throw std::invalid_argument("Invalid user id provided");

To conclude, whenever there is a place in the code that may or may not have a value, prefer using optional to clearly indicate that this is the case and to help avoid misunderstandings and bugs.

Typing speed and accuracy

Recently I have been practicing my typing speed and accuracy. When you do a lot of typing on a daily basis it can be a good idea to ramp up your typing speed. It is said that the average typing speed is about 40 words per minute (WPM). For this measure, a ”word” is standardized as five letters or keystrokes.

When you first start out practicing typing you learn where the keys are on the keyboard and how to type while looking at the screen instead of the keys. I assume most professional office workers know how to ”touch type” like this.

The next thing to improve is accuracy. Correcting mistakes will slow you down a lot when trying to type fast. So before attempting to improve your speed you need to practice accuracy. A good goal is to consistently reach above 97% accuracy when practicing typing. A few sites that help you with this are http://www.typingclub.com and https://www.keybr.com.

Once you can type at about 50 – 60 WPM with > 97% accuracy you can switch over to other web sites. There are three main sites I recommend; they all serve different purposes. First off, speed. To build speed you can use https://www.nitrotype.com where you can race other players with your race car. To finish first you need to type correctly and fast.

Next is accuracy. Here you want to practice typing normal texts, with capital letters, punctuation, numbers, and such. For this you can use https://www.typeracer.com. As with Nitrotype you race other players on this site. Typeracer forces you to go back and correct mistakes, which encourages you to not make any in the first place.

The third site is https://www.monkeytype.com which helps you measure and track your speed over time. Here you do not type any longer texts; instead you are presented with a list of words to type as accurately and fast as possible. Do a couple of these each day to get a good indication of your progress.

My current speed and accuracy on Monkeytype

Some final words: it is easy to go all in, like I have, and practice a lot for a couple of days and then just stop practicing. As with every skill, the key to great progress is continuous practice. So try to schedule some time, like 20-30 minutes each day, to visit the different sites I have listed in this post.

Being able to type fast and accurately will save you a lot of time if you are an office worker who spends a lot of time in front of the keyboard.

Teach Yourself Computer Science

In high school I studied electronics and computer systems, and then I continued on with software engineering for a short period before switching over to electrical engineering where I received my master’s degree in electronic system design. After graduation I wrote software for large embedded systems for a number of years before switching over to more high-level programming in .NET.

During my studies at the university I took some programming courses, but I never took any courses on data structures, algorithms, databases, computer networking, and similar topics related to computer science. However, during the years after graduation I have spent many hours studying these topics.

In Sweden, where I live, you do not have to pay a tuition fee to study at university as long as you are a citizen of a European Union country. Also, several universities offer single, stand-alone courses that you can take remotely at pretty much your own pace. I have taken a few courses using this setup; most recently I took a hands-on course on relational databases given by the University of Dalarna.

I have also studied algorithms and data structures via Coursera. The course Algorithms, Part I, offered by Princeton University with instructors Robert Sedgewick and Kevin Wayne is great. It is quite challenging so if you consider taking it you should be prepared to put in quite a lot of work.

If you are looking for courses on a specific framework or computer language there are tons out there. I worked through some courses on Pluralsight some time ago, which was nice. Just remember that they are often very specific to a particular programming language or framework.

Quite recently I found the site https://teachyourselfcs.com/ which is a superb resource for anyone looking for tips on what to study when you wish to learn Computer Science. I was happily surprised to find that I already had some of the recommended literature, namely the classic Structure and Interpretation of Computer Programs, The Algorithm Design Manual, and Designing Data-Intensive Applications. All of which are great books.

Today I also received a copy of Computer Networking: A top-down approach which is the recommended book on the topic Computer Networking.

If you haven’t studied Computer Science but are interested to learn, I highly recommend that you take a look at teachyourselfcs.com. The resources provided on that page contain information that will be useful for your whole career and not just until the next hype.

Enough writing, time for me to dive into the next chapter of Computer Networking: A top-down approach.

Visualizing software architecture using the C4 model

Recently I stumbled across a model called ”C4” that is designed to help visualize software architecture. After reading through the C4 website, https://c4model.com/, and looking at the conference talk by Simon Brown, I felt that this was something I should explore more in depth. My, very personal, opinion is that we as software designers have lost our ability to communicate software architecture in a clear and concise way. Maybe it was lost somewhere in our hunt for being ”Agile” and valuing Working Software over comprehensive documentation, as stated in the Manifesto for Agile Software Development, and we need to rediscover how it should be done.

Now, I don’t have anything against being agile, I’ve suffered through the test and integration phases of a large waterfall project and that is one experience I never wish to have again. However, valuing Working Software does not mean that our systems should not be documented, and in many cases claiming that The code is our documentation just does not cut it (if you don’t believe me, try telling your, not so tech-savvy, manager to clone the Git repo and look at the source code when you get a question about the system and see what happens).

What really spoke to me when I learned about the C4 model are the clear and distinct levels, Context, Containers, Components, and Code. These are ordered by level of abstraction. At the highest level, Context, the system is presented as a single box and the diagram shows people and external systems that in some way interact with the system. As an example I made up a used cars dealership backend system.

System context diagram for the imaginary system

It should be easy to understand the role of the system and the context of which it functions. In the diagram there are two Persons, or roles, a Car Dealer and a System Admin. The dealer has access to a locally hosted system which in turn interacts with the backend system via REST. The system admin has direct access to the backend system via some sort of graphical user interface (GUI). The backend system also interacts with the National vehicle owner registry via remote procedure calls (RPC).

On the next level, the Container level, we open up the box of the Used Cars Dealership Backend system and take a look inside. Note that containers in this context have nothing to do with Docker. Containers in this context are the different applications that make up the system. You can imagine a Web API application that handles REST requests and some sort of admin application that the system administrator has access to. There are probably also one or two databases that contain information about the cars and the connected dealers. It could look something like the diagram below.

Container diagram for the imaginary system

On the third level, Components, we zoom in on a single container and peek inside. I didn’t go that deep in this example, but you can imagine a component corresponding to a .NET project and the connections between components being project references. As you can imagine, the components level can change quite rapidly in new projects, and it can be worth looking at ways to generate a diagram at this level instead of manually keeping it updated.

For the final level, Code, no diagrams should be manually constructed. The Code level is represented by classes and interfaces, which can be constructed using UML. These diagrams should either be skipped entirely or generated by a tool.


The C4 model can be really useful for describing a software system’s architecture from a high-level perspective. The first two levels, Context and Containers, should be quite stable during the lifetime of a system and creating and keeping those diagrams up to date should not add much work. The diagrams can be used when communicating the architecture both within a development team and with other parts of the organization.

The lower two levels, Components and Code, are more volatile and also more targeted at software developers and architects actively working on developing the system. They should be generated from the existing code rather than being written manually by hand.

For documentation on Wiki pages, PDF files and the likes I would only use Context and Container diagrams. Components and Code can be generated when needed, or automatically generated as part of the build and automatically uploaded to a shared area.

There is pretty good tooling available for creating the diagrams using simple text and having the diagrams generated. For the diagrams above I used PlantUML (https://plantuml.com/) with a C4 add-on (https://github.com/RicardoNiepel/C4-PlantUML). The text files can easily be stored in the same source control repository as the system’s source code.

Book review – Get Programming with F#

It’s been a while since I finished Get Programming with F# by Isaac Abraham, but I haven’t gotten around to reviewing it, until now.

Learning functional programming has been one of my personal goals this year. I started out with Functional Programming in C# by Enrico Buonanno, which explains a lot of the concepts of functional programming and shows how they can be implemented in C# (see my review here https://ericbackhage.net/c/book-review-functional-programming-in-c/). It was nice to be able to learn the concepts without needing to pick up a new programming language at the same time. However, since C# is primarily an object oriented language, things like immutability, partial application, currying, etc. do not fit very well and do not come naturally. So what better way to get some real experience than to learn a functional-first language?

There are several languages to choose from when you wish to pick up a functional language. For the purist there is Haskell (https://www.haskell.org/) which is a purely functional programming language. If you wish to run on the JVM you can give Scala (https://scala-lang.org/) or Clojure (https://clojure.org/) a go. For me, who primarily writes code that runs on the CLR, F# felt like a natural language to start out with. Any libraries that you write in F# can be used from C# projects as well, and you can have both C# and F# projects in the same solution in Visual Studio and have them reference each other. Hence, I decided to Get Programming with F#.

Book cover

The subtitle of the book is A guide for .NET developers and that is exactly the target audience. I would say that a prerequisite for getting real value from this book is that you are familiar with writing code in C# using Visual Studio.

It is written like a hands-on tutorial where new material is presented and you are given tasks where you need to apply the knowledge. I like the idea where you follow along and put your new-found knowledge into practice as you go, but it also puts constraints on when and where you can study the material, since you need to sit by your computer pretty much the whole time.

The book was published in 2018 and most of the material is still relevant. However, there are some parts that are out of date, especially those that describe how to set up your work environment for working efficiently with F#. This is quite frustrating since this is one of the first things you encounter in the book and it does give you a bad start. Once you realize that Visual F# Power Tools is no longer supported and either just move along without it, or switch to Visual Studio Code with Ionide (https://ionide.io/), everything goes pretty well, until you need to reference libraries from F# Interactive. I struggled with this quite a bit and ended up using nuget on the command line to download the packages to a temporary folder which I then referenced. If you know of a better way, please let me know.

The material covered is all good stuff. I was very impressed with the type providers and I can see how these can be very powerful tools. Even though F# is a multi-paradigm language the author focuses on the functional style and avoids using object oriented solutions. This is a good thing since it shows how to solve common problems in a different way than you are used to, coming from C#.

I am not aware of any other book on F# that is quite like this one, and I would recommend it to anyone with a solid background in C# development who wishes to get some hands-on experience with functional programming in F#. It is a bit frustrating that some parts are out of date, but if you are aware of it you can quite easily just skip those parts. Also, the code for the book could use some updating; there is however a separate branch for .NET Core that I recommend you switch to when working through the examples (https://github.com/isaacabraham/get-programming-fsharp/tree/netcore).

If you really want to go hardcore I believe Haskell is the way to go. F# is not purely functional and will let you do things, like calling impure functions from pure functions, that won’t even compile when using Haskell. I do however recommend that you try to adhere to functional principles also when learning F#. If you want to write code using object oriented principles then C# is a much better fit.

In summary, if you are a C# developer interested in F#, this book is for you.

New C# 9 features and their F# counterparts

I recently watched Mads Torgersen’s video presentation of C# 9 from Microsoft Build and also read his presentation on the same topic titled Welcome to C# 9.0. It is obvious that the C# team is heavily inspired by functional programming and many of the new features have equivalents in F# (where they have existed for a long time). I thought it would be interesting to do a comparison of the new features and their F# counterparts. Since C# is primarily an object oriented language it might be difficult to add functional concepts in a way that fits well into the existing language. Let’s see what it looks like.

Init-only properties

Several of the new C# 9 features focus on immutability. For those of you familiar with functional programming, it is no news that immutability is one of the pillars of the functional paradigm. Mutating data is something we associate with imperative and object oriented programming.

C# 9 will introduce a feature called init-only properties which limits the possibility to set property values to only when a new instance of an object is created. Let’s look at some code.

// Prior to C# 9
public class Person
{
  public string FirstName { get; set; }
  public string LastName { get; set; }
}

var person = new Person { FirstName = "Eric", LastName = "Backhage" };
// It is possible to change the value of the properties whenever, this
// might not be something you want.
person.FirstName = "Carl";

// With C# 9
public class Person
{
  // Notice that 'set' has been changed to 'init'
  public string FirstName { get; init; }
  public string LastName { get; init; }
}

var person = new Person { FirstName = "Eric", LastName = "Backhage" };
// With 'set' changed to 'init' the following line will not compile.
person.FirstName = "Carl";
// However, it is still possible to set the 'person' variable to
// a different value.
person = new Person { FirstName = "Carl", LastName = "Backhage" };

F# – Record type assigned to mutable variable

If I wanted to have similar behavior in F# I would use a record type and assign it to a variable that is declared as mutable. The code would look like this:

type Person = { FirstName: string; LastName: string }
let mutable person = { FirstName = "Eric"; LastName = "Backhage" }
// As in the C# version the following line would not compile
person.FirstName <- "Carl"
// But it is still possible to set the 'person' variable to a
// different value
person <- { FirstName = "Carl"; LastName = "Backhage" }

The only reason I would use ’mutable’, however, is for variables that are updated thousands of times in a tight loop. In other cases I would use the default behavior of immutable values. We’ll see this next.


Records

Records in C# have been planned for a long time and it feels really nice that they finally seem to be making it into the language. For those of you unfamiliar with Records, they can be described as immutable data containers without any logic (methods) associated with them. To define a Record type in C# 9 you add the keyword data to the class definition. Like this:

public data class Person
{
  public string FirstName { get; init; }
  public string LastName { get; init; }
}

/* There is also a shorthand that removes most of the
   boilerplate. The above code can be written like
   this instead:
   public data class Person { string FirstName; string LastName; }
*/

// Creating an instance of Person is no different
// than before
var person = new Person { FirstName = "Eric", LastName = "Backhage" };
// Properties are still init-only so the following line
// will not compile
person.FirstName = "Carl";
// However, since Person is now a Record the
// following line will also not compile
person = new Person { FirstName = "Carl", LastName = "Backhage" };
// Instead you must introduce a new variable
var anotherPerson = person with { FirstName = "Carl" };
// The 'with' keyword lets you clone properties from another
// instance. anotherPerson.LastName is equal to "Backhage"

F# – Records

In F#, records are one of the core language types and have been around forever. The above C# 9 code looks like this in F#:

type Person = { FirstName: string; LastName: string }
let person = { FirstName = "Eric"; LastName = "Backhage" }
// As in C# 9, neither of the following lines compiles
person.FirstName <- "Carl"
person <- { FirstName = "Carl"; LastName = "Backhage" }
// Instead you need to introduce a new variable
let anotherPerson = { person with FirstName = "Carl" }

C# 9 Records – value based equality

Unlike regular classes, which use reference based equality unless you override the Equals method, C# 9 Records will use value based equality. That means that if two records are of the same type and their properties have the same values, they will be considered equal. Let’s look at some example code.

public data class Person { string FirstName; string LastName; }
var p1 = new Person { FirstName = "Eric", LastName = "Backhage" };
var p2 = new Person { FirstName = "Eric", LastName = "Backhage" };
var p3 = new Person { FirstName = "Carl", LastName = "Karlsson" };

p1.Equals(p2); // => true
p2.Equals(p3); // => false

In F#, Records work exactly the same way; you just use ’=’ for equality checking instead:

type Person = { FirstName: string; LastName: string }
let p1 = { FirstName = "Eric"; LastName = "Backhage" }
let p2 = { FirstName = "Eric"; LastName = "Backhage" }
let p3 = { FirstName = "Carl"; LastName = "Karlsson" }

p1 = p2 // -> true
p2 = p3 // -> false

C# 9 Records – Positional Records

Positional Records are a nice shorthand that reduces the amount of boilerplate code needed when defining record types with constructors and deconstructors. The code example below shows how it will work.

public data class Person(string FirstName, string LastName);

/* The code above is actually equivalent to all of this:
public data class Person
{
  string FirstName;
  string LastName;
  public Person(string firstName, string lastName)
    => (FirstName, LastName) = (firstName, lastName);
  public void Deconstruct(out string firstName, out string lastName)
    => (firstName, lastName) = (FirstName, LastName);
}
*/

// Instantiating
var person = new Person("Eric", "Backhage");
// Deconstructing
var (firstName, lastName) = person;

Now we are getting on par with F# in terms of being succinct. Let’s compare the same operations line by line:

// C# 9
public data class Person(string FirstName, string LastName);
// F#
type Person = { FirstName: string; LastName: string }
// C# 9
var person = new Person("Eric", "Backhage");
// F#
let person = { FirstName = "Eric"; LastName = "Backhage" }
// C# 9
var (firstName, lastName) = person;
// F#
let { FirstName = firstName; LastName = lastName } = person

Nice work by the C# team!

C# 9 Records – Inheritance

True to the object oriented paradigm, C# Records will support inheritance. You will be able to create base Records and create derivatives that inherit from them. It will look like this:

public data class Person { string FirstName; string LastName; }
public data class Student : Person { int ID; }

var student = new Student { FirstName = "Eric", LastName = "Backhage", ID = 1 };

Since inheritance is an object oriented concept and F# is a functional-first language, F# Records cannot be inherited. The solution is to use composition instead. One way of accomplishing similar behavior in F# is shown in the code below:

type PersonalInfo = { FirstName: string; LastName: string }
type Student = { ID: int; PersonalInfo: PersonalInfo }

let student = { ID = 1; PersonalInfo = { FirstName = "Eric"; LastName = "Backhage" } }

My personal preference here is composition. I believe the C# team has had a really tough time getting value-based equality, and instantiation using the ’with’ keyword, to work well with inheritance. I hope it will be worth it.
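For what it’s worth, copy-and-update still works nicely with the composition approach, although a nested record has to be rebuilt explicitly. A minimal sketch, reusing the types from the example above:

type PersonalInfo = { FirstName: string; LastName: string }
type Student = { ID: int; PersonalInfo: PersonalInfo }

let student = { ID = 1; PersonalInfo = { FirstName = "Eric"; LastName = "Backhage" } }

// Copy-and-update: produces a new Student value, the original is untouched.
// The nested PersonalInfo record is updated with its own 'with' expression.
let renamedStudent =
    { student with PersonalInfo = { student.PersonalInfo with FirstName = "Carl" } }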

Conclusion and further reading

I think the new C# 9 features will be great additions to the language, and in my opinion the C# language design team is doing a great job. Immutable records are not something you would normally expect to find in an object oriented language, and they feel a lot more natural in F#, but I am confident they will make a great addition to C# now that they are finally coming.

For more information on upcoming C# 9 features that I haven’t mentioned here, I recommend reading Mads Torgersen’s post (from which I copied most of the C# 9 examples in this post): https://devblogs.microsoft.com/dotnet/welcome-to-c-9-0/.

Now, stop reading and go write some code!

Introducing The WIB Limit

If you and your development team are using a Kanban board then you are familiar with the term WIP limit. WIP is short for Work In Progress, and the idea is that putting a limit on the amount of ongoing work helps the team focus on the most important work and get more work items completed. The limits also help with identifying bottlenecks in the workflow that need to be addressed.

I think WIP limits are great. Personally I try to work on only one thing at a time as much as possible and to minimize context switching. There is, however, one more limit that I would like to propose: the WIB limit. WIB is short for ’Work In the Backlog’, and the goal of the limit is to keep the number of items in your backlog from growing out of control.

The backlog problem

In my experience, backlogs with 100 or even several hundred items in them are not unusual. As time goes by more and more items are added, at a faster pace than the development team can finish the items on top. This usually goes on until the product owner goes on a cleaning frenzy and drops the majority of the items. The cycle then starts over and the backlog continues to grow.

So why is this a problem? For one, if you get a request for a feature from the organization and your response is ”Sure! I am putting it in the backlog and we’ll work on it as soon as possible”, while you have 100 other items in the backlog with higher priority, you are pretty much lying to the organization about the feature ever being developed. And once people start to realize that their items aren’t being worked on, they will try to find all kinds of ways around your work process.

Second, trying to keep an overview of, and prioritize between, hundreds of items is just not realistic. The work effort required to keep them up to date and in correct priority order is just too big.

Third, there will be times when you start feeling guilty about items lingering in the backlog for months and months. Because they have been there for so long, the notes written in them will be out of date or hard to interpret, which means you have to redo the work of understanding the requirements and writing a good user story in the first place.

The WIB limit

I suggest putting a hard limit on the number of items in the backlog. There is no exact number that I believe works for everyone, but somewhere in the range of 15 – 20 items should be the limit. It should be possible to keep a pretty good overview of 20 items and to keep them prioritized.

Once the WIB limit has been reached, no new items are added to the backlog until there is free space. Any requests to add new items are rejected, with the justification that there simply isn’t room for new work items at the moment.

Now, there will be times when something really urgent pops up that needs to be put high up in the backlog. When this happens, and the WIB limit has already been reached, the item with the lowest priority must be dropped and the organization has to be informed that this happened. If the item put in is really top priority, then this must be accepted.

Note that introducing a WIB limit will not in any way reduce the number of work items being completed. Instead it will help highlight the current workload, make it clearer which items are prioritized and why, and keep development teams from making promises they cannot keep.

Learn SQL Server Administration in a Month of Lunches

I just finished a month of lunches (actually breakfasts) in the company of a book called ”Learn SQL Server Administration in a Month of Lunches” by Don Jones. As the name implies, it is a book on the topic of administrating one or more database servers running Microsoft’s RDBMS software ”SQL Server”.

Book cover

This is NOT a book on database design or the SQL programming language, even though it contains a T-SQL crash course. Instead, this book teaches you how to manage a SQL Server instance. That is, how to manage security, set up and configure auditing, monitor performance, investigate deadlocks and timeouts, manage backups, and other things that are useful for keeping a database server running smoothly.

Jones did not write this book for software developers only, but for anyone who has recently become responsible for administrating a server running SQL Server and needs to get up to speed with the most important tasks that come with the role. I would recommend anyone, developer or not, who is responsible for managing one or more instances of SQL Server to read this book. It is packed with tips and hands-on exercises that will quickly help you learn the most important skills needed for administrating SQL Server.

Even though the book was written when SQL Server 2012 was the most recent version out there, I did not find anything that wasn’t applicable to SQL Server 2019. Even the pictures of dialogs in SQL Server Management Studio are still 100% accurate. I suppose Microsoft does not change these unless absolutely necessary.

Now go learn some SQL Server administration!

Results in F#

I recently wrote a blog post on the Option type in F# and how to use it to represent the absence of a value, for example when trying to look up a user in a database by an integer user id. If no user with a matching id exists, Option lets you return None. Compared to returning null, which is quite common in C#, None is a valid value and you won’t risk a NullReferenceException when returning None.

However, Option does not let you attach a reason, like an error message, that explains what went wrong when None was returned. To remedy that, you can instead use the Result type. A Result in F# is either Ok or Error, and each case can carry a value of a different type. Let’s look at the example from before, rewritten to use Result.

open System

type CustomerId = CustomerId of uint32
type Customer = { Id: CustomerId }
type Email = Email of string
type ErrorMessage = ErrorMessage of string

type StringToCustomerIdParser = string -> Result<CustomerId, ErrorMessage>
type CustomerLookup = CustomerId -> Result<Customer, ErrorMessage>
type EmailLookup = Customer -> Result<Email, ErrorMessage>
type EmailSender = Email -> unit

let toError s = ErrorMessage s |> Error
let toString (ErrorMessage msg) = msg

let userIdStringParser : StringToCustomerIdParser = fun s ->
    match UInt32.TryParse s with
    | true, num -> Ok (CustomerId num)
    | _ -> toError (sprintf "Could not parse '%s' to an integer" s)

let getCustomerById : CustomerLookup = fun id ->
    let getValue (CustomerId i) = i

    match id with
    | CustomerId 1u -> Ok { Id = CustomerId 1u }
    | CustomerId 2u -> Ok { Id = CustomerId 2u }
    | _ -> toError (sprintf "No customer with id %d exists" (getValue id))

let getEmailForCustomer : EmailLookup = fun customer ->
    match customer.Id with
    | CustomerId 1u -> Ok (Email "apan@bepan.se")
    | _  -> toError "Customer has no e-mail"

let sendPromotionalEmail : EmailSender = fun email ->
    // Actual e-mail sending would go here...
    ()

[<EntryPoint>]
let main argv =
    if argv.Length <> 1 then
        failwith "Usage: program <user_id>\n"

    let result = 
        userIdStringParser argv.[0]
        |> Result.bind getCustomerById 
        |> Result.bind getEmailForCustomer
        |> Result.map sendPromotionalEmail

    match result with
    | Ok _ -> printf "Email sent successfully\n"
    | Error err -> printf "%s" (toString err)
    0

Take a look at the code in the main function. Several of the functions return a Result, and I have used Result.bind and Result.map to create a workflow where the return value from each function call is used as an input parameter to the next function, as long as the result is Ok. If any of the functions returns an Error, the rest of the workflow is skipped (Result.bind will not call the next function; instead it will just return the Error from the previous function).
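To make the short-circuiting behavior concrete, here is a minimal sketch of how such a bind function can be implemented (the standard library’s Result.bind behaves equivalently):

// Apply f only when the result is Ok; otherwise pass the Error through untouched
let bind (f: 'a -> Result<'b, 'e>) (result: Result<'a, 'e>) : Result<'b, 'e> =
    match result with
    | Ok value -> f value
    | Error err -> Error err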

A constraint that is good to be aware of is that the error value type must be the same for all functions in the chain. If you look at the function type declarations, you can see that the error value type is always ErrorMessage, a type that just wraps a string. If different types were used, it would not be possible for Result.bind and Result.map to simply return the error from the previous function, since the types would not match. There are ways around this but I won’t go into those now.
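For the curious, one common technique is to convert each error type to a shared one with Result.mapError before binding. A hypothetical sketch, where ParseError is a type invented purely for illustration and the other names come from the code above:

type ParseError = ParseError of string

let parse (s: string) : Result<CustomerId, ParseError> =
    match System.UInt32.TryParse s with
    | true, num -> Ok (CustomerId num)
    | _ -> Error (ParseError (sprintf "Could not parse '%s'" s))

// Result.mapError converts ParseError into the shared ErrorMessage type,
// after which the existing bind chain type-checks again
let result =
    parse "test"
    |> Result.mapError (fun (ParseError msg) -> ErrorMessage msg)
    |> Result.bind getCustomerById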

Looking at the code again: what do you think will be printed to the console if the number 1 is given as input? What about 2 and 3? Below is the outcome from some different runs.

$ sendEmailToCustomer 1
Email sent successfully
$ sendEmailToCustomer 2
Customer has no e-mail
$ sendEmailToCustomer 3
No customer with id 3 exists
$ sendEmailToCustomer test
Could not parse 'test' to an integer

How do you like this way of handling errors? Quite different from the usual throwing of exceptions and returning of null, isn’t it? Personally I think that, after a bit of getting used to, this approach makes the code much easier to read and reason about. Notice that there is not a single try-catch, no null guards, and only a single if-statement in the code above.