Accelerate your software engineering career by understanding adjacent layers.

As a developer you are always between something. If you work on UI, you are between the user and the rest of your application. If you work on web services or APIs, you are between the web framework and a database, a library, or another service. Your executable code is between the compiler and the runtime (JVM/CLR) or CPU.

While it’s possible to work just in your area, ignoring what’s above and below you is not a good long-term strategy. Knowing at least a little about adjacent layers is invaluable because it:

  • makes debugging “weird” issues easier
  • can help prevent mistakes made due to incorrect assumptions
  • enables better design decisions

How can you learn about adjacent layers? Here are a few ideas:

  • If you are using a popular framework, a library, or a product like a database, there should already be a lot of documentation readily available
  • In the case of open-source projects you can simply check out the code
  • For libraries or services created in-house you can talk to the team who built them or read the code
  • For JavaScript (or TypeScript) libraries you can also read the code directly because they ship as source, though minification and uglification can make this challenging
  • If you don’t have access to code and you’re working with languages like C# or Java, you can use decompilers that do a decent job generating code from the IL/bytecode
  • Your last resort is looking at or debugging the assembly, but this is hard in general and your mileage may vary.

Once you get a good grasp of your adjacent layers you can take it further and understand layers your code doesn’t directly interact with. Keep doing this and soon you will be one of only a few people who understand the entire stack.

Accelerate your software engineering career by tracking your work

Your manager doesn’t know what you’re doing. They know about the things they care about and expect you to be doing, but you’re doing much more than that. You’re probably doing more than even you think you’re doing. So, if even you don’t know what you’re doing, why would you expect your manager to?

To know what you’re doing you need to track your work. The reason to do this is to be able to easily answer the following questions:

  • What am I doing? – helps confirm you’re working on the right things
  • What am I not doing? – helps ensure you’re not dropping the ball on important work
  • What have I accomplished this year/half/quarter? – makes all career, performance review, and promotion discussions much easier

There are many ways to track work. You’ll need to find what works for you. I do it in a very simple way. Each year I create a Google doc for tracking my work for the given year. It has two sections:

  • What I’ve completed
  • A weekly list of tasks or projects I need to work on

Each Monday morning I spend 10-15 minutes updating this doc. I go over last week’s work items and move completed ones to the ‘completed’ section. I strike through work items that are no longer needed or that I decided not to do. I copy the remaining ones as this week’s tasks. Finally, if there is any new work, I add it.

I attach artifacts to most items in the ‘completed’ section. They are a tool I or my manager can use to showcase my work and support my career-related discussions. Here are examples of what I include:

  • Links to design docs, posts, roll out plans, etc.
  • References to diffs where I influenced the design or prevented serious issues
  • Other teams’ projects I helped unblock
  • Details about why something I did was hard

There is no one correct answer as to what granularity to track the completed items at. I’ve found that including smaller work items is worthwhile. Some of them are too small to matter by themselves, but they add up. The secret is to group them into coherent themes. For instance, adding a couple of tests will not get you to the next level, but if you have done this multiple times you might have significantly increased the test coverage for your team or product, which could be an additional argument to support your promotion.

Starting to track my work was one of the best things I’ve done for my career. It takes just a few minutes per week but gives me the clarity I need and saves me a ton of time during performance reviews. If you’re not tracking your work, I strongly encourage you to start.

Accelerate your software engineering career by fixing something every week.

We, software engineers, are so used to living in pain that we have stopped noticing it. We die a death by a thousand papercuts every day only to start fresh the next day. Flaky tests, broken builds, workarounds, and outdated documentation are our daily bread. We live with all of that, forgetting how much it costs.

Imagine an annoying issue you need to spend a few minutes on every week. Now, multiply this time by the number of people on your team – likely, they all are facing this issue. I also bet this is not the only “small” issue you and your team have. If you add all this up it may turn out that your team could use another engineer whose full-time job is just to deal with all these annoyances.

With time, it only gets worse because small issues tend to grow. One flaky test leads to a flaky suite, and soon no one can tell what the quality of the product is. A convoluted function is patched multiple times without anyone really understanding it, to the point where everyone prays they don’t have to touch it. Missing test coverage results in multiple bugs reported by users.

The interesting thing is – most small issues don’t take a lot of time to fix. You yourself (i.e., not counting your team) may be able to recoup the time invested in a couple of weeks.

There are many examples of the positive impact of taking care of a small issue. Here is just a handful of examples from my experience:

  • Fixing an acknowledged bug that had been deprioritized from release to release made some users’ lives better (and discussing the bug repeatedly took more time than just fixing it)
  • Refactoring incomprehensible code made adding a new feature easier and faster
  • Updating an internal wiki saved time spent answering the same question again and again
  • A “flaky” test turned out to be the result of a product bug that could have had serious consequences
  • Tuning a noisy alert stopped it from waking up the on-call team member in the middle of the night

But wait, there is more! If you keep doing this regularly (a weekly cadence worked best for me), people will notice. Your team members may even start doing the same! And if they don’t, it will at least make it harder for them to make things worse – for example, it is much easier to ignore a new failing test when tens of tests are already failing than to ignore the first failing test in a suite. In any case, you are now helping to strengthen your team’s engineering culture, leading by example.

And what if no one notices? Oh well, you at least made life better for yourself – and that is already a win!

Accelerate your software engineering career by writing clean diffs

As a software engineer, writing diffs (also called PRs – Pull Requests – or CRs – Code Reviews) is your bread and butter. You want to code your changes, have them reviewed, merge them, and start the process again. Repeating these cycles effectively is essential for delivering new features and building products quickly. Few things slow this process down like low-quality diffs. Some signs of a low-quality diff are:

  • Compilation errors
  • Test failures
  • Lint warnings

Low-quality diffs leave both the author and the reviewer(s) frustrated. The author is frustrated because there is a lot of back and forth and they feel like they will never be able to merge their changes. The reviewers are frustrated because they feel their time is being wasted on diffs that are clearly not ready for review.

Sending a low-quality diff happens occasionally to everyone (e.g., the new file you forgot to include in your diff or the test you didn’t run due to a typo). However, when done repeatedly, it will have a detrimental effect on your career because your team members will simply try to avoid reviewing your code. This will prevent you from iterating quickly and will make it harder for you to adhere to schedules.

While spending a few minutes to double-check your diff is in good shape may seem like a waste of time – especially if timelines are tight – it is an investment that in the long run will save you even more time. And once you build this habit, it will become second nature.

std::optional? Proceed with caution!

The std::optional type is a great addition to the standard C++ library in C++17. It lets us end the practice of using a special (sentinel) value, e.g., -1, to indicate that an operation didn’t produce a meaningful result. There is one caveat though – std::optional<bool> may in some situations behave counterintuitively and can lead to subtle bugs.

Let’s take a look at the following code:

  bool isMorning = false;
  if (isMorning) {
    std::cout << "Good Morning!" << std::endl;
  } else {
    std::cout << "Good Afternoon!" << std::endl;
  }

Running this code prints:

Good Afternoon!

This is not a surprise. But let’s see what happens if we change the bool type to std::optional<bool> like this:

  std::optional<bool> isMorning = false;
  if (isMorning) {
    std::cout << "Good Morning!" << std::endl;
  } else {
    std::cout << "Good Afternoon!" << std::endl;
  }

This time the output is:

Good Morning!

Whoa? Why? What’s going on here?

While this is likely not what we expected, it’s not a bug. The std::optional type defines an explicit conversion to bool that returns true if the object contains a value and false if it doesn’t (exactly the same as the has_value() method). In some contexts – most notably if, while, and for statements, logical operators, and the conditional (ternary) operator (a complete list can be found in the Contextual conversions section on cppreference) – C++ is allowed to use it to perform an implicit conversion. In our case it led to behavior that at first sight seems incorrect. Thinking about it a bit more, though, the seemingly intuitive behavior should not be expected. An std::optional<bool> variable can have one of three possible values:

  • true
  • false
  • std::nullopt (i.e., not set)

and there is no interpretation under which an if over an optional holding std::nullopt is universally meaningful. Having said that, I have seen multiple engineers (myself included) fall into this trap.

The problem is that spotting the bug can be hard, as there are no compiler warnings or any other indications of the issue. This is especially problematic when changing an existing variable from bool to std::optional<bool> in large codebases, because it is easy to miss some usages and introduce regressions.

The problem can also easily sneak into your tests. Here is an example of a test that happily passes, but only due to a bug:

TEST(stdOptionalBoolTest, IncorrectTest) {
  ASSERT_TRUE(std::optional<bool>{false});
}

How to deal with std::optional<bool>?

Before we discuss the ways to handle the std::optional<bool> type in code, it is useful to mention a few strategies that can prevent bugs caused by std::optional<bool>:

  • raise awareness of the unintuitive behavior of std::optional<bool> in some contexts
  • when a new std::optional<bool> variable or function is introduced make sure all places where it is used are reviewed and amended if needed
  • have good unit test coverage that can detect bugs caused by introducing std::optional<bool> to your codebase
  • if feasible, create a lint rule that flags suspicious usages of std::optional<bool>

In terms of code, there are a few ways to handle the std::optional<bool> type:

Compare the optional value explicitly using the == operator

If your scenario allows treating std::nullopt as true or false you can use the == operator like this:

std::optional<bool> isMorning = std::nullopt;
if (isMorning == false) {
  std::cout << "It's not morning anymore..." << std::endl;
} else {
  std::cout << "Good Morning!" << std::endl;
}

This works because std::nullopt never compares equal to either contained value of the corresponding optional type. One big disadvantage of this approach is that someone will inevitably want to simplify the expression by removing the seemingly unnecessary == false and, as a result, introduce a regression.

Unwrap the optional value with the value() method

If you know that the value in the given codepath is always set you can unwrap the value by calling the value() method like in the example below:

    std::optional<bool> isMorning = false;
    if (isMorning.value()) {
      std::cout << "Good Morning!" << std::endl;
    } else {
      std::cout << "Good Afternoon!" << std::endl;
    }

Note that this won’t work if the value might not be set – invoking the value() method when no value is set throws a std::bad_optional_access exception.

Dereference the optional value with the * operator

This is very similar to the previous option. If you know that the value on the given code path is always set you can use the * operator to dereference it like this:

std::optional<bool> isMorning = false;
if (*isMorning) {
  std::cout << "Good Morning!" << std::endl;
} else {
  std::cout << "Good Afternoon!" << std::endl;
}

One big difference from using the value() method is that the behavior is undefined if you dereference an optional whose value was not set. Personally, I never go with this solution.

Use value_or() to provide the default value for cases when the value is not set

std::optional has the value_or() method that allows providing the default value that will be returned if the value is not set. Here is an example:

std::optional<bool> isMorning = std::nullopt;
if (!isMorning.value_or(false)) {
  std::cout << "It's not morning anymore..." << std::endl;
} else {
  std::cout << "Good Morning!" << std::endl;
}

If your scenario allows treating std::nullopt as true or false using value_or() could be a good choice.

Handle std::nullopt explicitly

There must have been a specific reason you decided to use std::optional<bool> – you wanted to enable the scenario where the value is not set. Now you need to handle this case. Here is how:

    std::optional<bool> isMorning = std::nullopt;    
    if (isMorning.has_value()) {
      if (isMorning.value()) {
        std::cout << "Good Morning!" << std::endl;
      } else {
        std::cout << "Good Afternoon!" << std::endl;
      }
    } else {
      std::cout << "I am lost in time..." << std::endl;
    }

Fixing tests

If your tests use ASSERT_TRUE or ASSERT_FALSE assertions with std::optional<bool> variables they might be passing even if they shouldn’t as they suffer from the very same issue as your code. As an example, the following assertion will happily pass:

ASSERT_TRUE(std::optional{false});

You can fix this by using ASSERT_EQ to explicitly compare with the expected value or by using some of the techniques discussed above. Here are a couple of rewritten assertions – note that both now fail, correctly exposing that the optional holds false:

ASSERT_EQ(std::optional{false}, true);
ASSERT_TRUE(std::optional{false}.value());

Other std::optional type parameters

We spent a lot of time discussing the std::optional<bool> case. How about other types? Do they suffer from the same issue? The std::optional type is a template so its behavior is the same for any type parameter. Here is an example with std::optional<int>:

    std::optional<int> n = 0;
    if (n) {
      std::cout << "n is not 0" << std::endl;
    }

which generates the following output:

n is not 0

The problem with std::optional<bool> is just more pronounced due to the typical usages of bool. For non-bool types it is fortunately no longer a common practice to rely on the implicit conversion to bool. These days it is much more common to write the condition above explicitly as if (n != 0), which compares against the contained value as long as one is present.