Don’t let “later” derail your software engineering career

One thing I learned during my career as a software engineer is that leaving unfinished work to complete it “later” never works.

Resuming paused work is simply so hard that it hardly ever happens without external motivation.

Does it matter, though?

If the work was left unfinished not because it was deprioritized but because it was boring, tedious, or difficult, then more often than not, it does matter. The remaining tasks are usually in the “important but not urgent” category.

What happens if this work is never finished?

Sometimes, there are no consequences. Occasionally, things explode. In most cases, it’s a toll the team is silently paying every day.

From my observations, the most common software engineering activities left “for later” are these:

  • adding tests
  • fixing temporary hacks
  • writing documentation

Adding tests “later”

Insufficient test coverage slows teams down. Verifying each change manually and thoroughly takes time, so it is often skipped. The result is a high number of bugs the team needs to fix instead of building new features. What’s worse, many bugs keep recurring because there is no easy way to prevent them.

I don’t think there is ever a good reason to leave writing tests for “later.” The best developers I know treat test code like they treat product code. They wouldn’t ship a half-baked feature, and they won’t ship code without tests – no exceptions.
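
To make this concrete, here is a minimal sketch of what “tests ship with the code” looks like in practice. It uses pytest and a hypothetical discount_price() function that doesn’t come from any project mentioned here; the point is only that the tests are written in the same change as the function, not “later.”

```python
import pytest


def discount_price(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


# Written alongside discount_price(), not left for "later".
def test_discount_applies_percentage():
    assert discount_price(200.0, 25) == 150.0


def test_discount_rejects_invalid_percent():
    with pytest.raises(ValueError):
        discount_price(200.0, 150)
```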

Fixing temporary hacks “later”

Software developers introduce temporary hacks to their code for many reasons. The problem is that there is nothing more permanent than temporary solutions. “The show must go on,” so new code gets added on top of the existing hacks. This new code often includes additional hacks required to work around the previous hacks. With time, adding new features or fixing bugs becomes extremely difficult, and removing the “temporary” hack is impossible without a major rewrite.

In an ideal world, software developers would never need to resort to hacks. The reality is more complex than that. Most hacks are added for good reasons, like working around an issue in someone else’s code, fixing an urgent and critical bug, or shipping a product on time. However, the decision to introduce a hack should include a commitment to the proper, long-term solution. Otherwise, the tech debt will grow quickly and impact everyone working with that codebase.
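
As an illustration of what that commitment can look like in code, here is a small, hypothetical sketch: the upstream service, the parse function, and the ticket number are all made up, but the pattern of recording why the hack exists, who owns the fix, and when it should be removed is the point.

```python
import json


def parse_payload(raw: str) -> dict:
    # HACK: the upstream service occasionally emits a trailing comma,
    # which is invalid JSON. Strip it here so the release can ship.
    # TODO(PAY-123): remove once the upstream fix is deployed;
    # owner: payments team, target: next release.
    cleaned = raw.replace(",}", "}").replace(",]", "]")
    return json.loads(cleaned)
```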

Writing documentation “later”

Internal documentation for software projects is almost always an afterthought. Yet it is another area that, if neglected, will cause the team pain. Anyone who has been on call knows how difficult it is to troubleshoot and mitigate an issue quickly without a decent runbook.

Documentation also saves a lot of time when working with other teams or onboarding new team members. It is always faster to send a link to a wiki page describing the architecture of your system than to explain it again and again.

One way to ensure that documentation won’t be forgotten is to include writing documentation as a project milestone. To make it easier for the team, this milestone could be scheduled for after coding has been completed or even after the product has shipped. If the entire team participates, the most important topics can be covered in just a few days.

How does “later” impact YOU?

Leaving unfinished work for “later” impacts you in two significant ways. First, it strains your mental capacity. The brain tends to constantly remind us about unfinished tasks, which leads to stress and anxiety (Zeigarnik effect). Second, being routinely “done, except for” can create an impression of unreliability. This perception may hurt your career, as it could result in fewer opportunities to work on critical projects.

Top 5 Jokes Software Developers Tell Themselves and Their Managers

Software developers are boring. Not only do they keep repeating the same jokes all the time, but they also take them very seriously!

Here are the most common ones:

I will add unit tests later

“I will add unit tests later” is one of the most common jokes software developers tell. In most cases, they believe it. But then reality catches up, and the tests are never added. If you were too busy to add unit tests when it was easiest, you won’t have time to make up for it later, when it becomes more difficult and you might even be questioned about the priority and value of this work.

I am 99% done

I am terrified to hear that someone “is 99% done,” as if it were positive. I have seen too many times how the last 1% took more time and effort than the first 99%. Even worse, I participated in projects that were 99% completed but never shipped.

My code has no bugs

Claiming that your code has no bugs is a very bold statement. I made this claim several times at the beginning of my career, only to be humbled by spectacular crashes or broken core functionality. I found that saying, “So far, I haven’t found bugs in my code. I tested the following scenarios: …” is a much better option. Listing what I did for validation is especially useful because it invites questions about scenarios I might have missed.

This joke is especially funny when paired with “I will add unit tests later.”

No risk – it’s just one line of code

While a one-line change can feel less risky than bigger changes, that doesn’t mean there is no risk. Many serious outages have been caused by one-line configuration changes, and millions of dollars are lost every year due to last-minute “one-liners.”

Estimates

Most of the estimates provided by software developers are jokes. The reason for this is simple: estimating how much time even a single software development task will take is more art than science. Trivial mistakes, like misplaced semicolons, can completely derail even simple development tasks. Bigger, multi-month, multi-person projects have so many unknowns that providing accurate estimates is practically impossible. From my experience, this is how the game works:

  • software developers provide estimates, often adding some buffer
  • their managers feel that these estimates are too low, so they add more buffer
  • project managers promise to deliver the project by some date that has nothing to do with the estimates they got from engineering
  • the project takes longer than the most pessimistic estimates
  • everyone keeps a straight face

Bonus

“My on-call was pretty good! I was woken up only three times this week.”

On-call Manual: Handling Incidents

If your on-call is perfect and you can’t wait for your next shift, you can stop reading now.

But, in all likelihood, this isn’t the case.

Rather, you feel exhausted by dealing with tasks, incidents, and alerts that often fire after hours, and you constantly check the clock to see how much time is remaining in your shift.

Being on call can be overwhelming. You may not know all the systems you are responsible for very well. Often, you will get tasks asking you to do things you have never done before. You might be pulled into investigating outages of infrastructure owned by other teams.

I know this very well – I have been participating in on-call rotations for almost seven years, and I decided to write a few posts to share what I have learned. Today, I would like to talk about handling incidents.

Handling Incidents ⚠️

Unexpected system outages are the most stressful part of being on call. A sudden alert, sometimes in the middle of the night, can be a source of distress. Here are the steps I recommend following when dealing with incidents.

1. Acknowledge

Acknowledging alerts is one of the most important on-call responsibilities. It tells people that the on-call is aware of the problem and is working on resolving it. In many companies, alerts will be escalated up the management chain if not acknowledged promptly.   

2. Triage

Triaging means assessing the issue’s impact and urgency and assigning it a priority. This process is easier when nothing else is going on. If there are already active alerts, it is crucial to understand whether the new alert is related to them. If it is not, the on-call needs to decide which alerts are more important.

3. Troubleshoot

Troubleshooting is, in my opinion, the most difficult task when dealing with alerts. It requires checking and correlating dashboards, logs, code, output from diagnostic tools, etc., to understand the problem. All this happens under huge pressure. Runbooks (a.k.a. playbooks) with clear troubleshooting steps and remediations make troubleshooting easier and faster. 

4. Mitigate

Quickly mitigating the outage is the top priority. While it may sound counterintuitive, understanding the root cause is not a goal at this stage and is often unnecessary for implementing an effective mitigation. Here are some common mitigations:

  • Rolling back a deployment – outages caused by deployments can be quickly mitigated by rolling back the deployment to the previous version.
  • Reverting configuration changes – problems caused by configuration changes can be fixed by reverting these changes. 
  • Restarting a service – allowing the service to start from a clean state can fix entire classes of problems. One example is leaking resources: a service sometimes fails to close a database connection, so over time it exhausts the connection pool and, as a result, can’t connect to the database (see the sketch after this list).
  • Temporarily stopping a service – if a service is misbehaving, e.g., corrupting or losing data due to a failing dependency, temporarily shutting it down can be a good way to stop the bleeding.
  • Scaling – problems resulting from surges in traffic can be fixed by scaling the fleet.
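
Here is a minimal sketch of the connection-leak scenario mentioned above. It uses Python and sqlite3 purely as a stand-in for any database driver; the functions and schema are hypothetical. Restarting the service clears the leaked connections, but the long-term fix is to guarantee that every connection is closed.

```python
import sqlite3
from contextlib import closing


def fetch_user_leaky(db_path: str, user_id: int):
    conn = sqlite3.connect(db_path)
    row = conn.execute(
        "SELECT name FROM users WHERE id = ?", (user_id,)
    ).fetchone()
    # Bug: if the query raises (or we simply forget), conn is never
    # closed. Each call leaks a connection until a restart "fixes" it.
    conn.close()
    return row


def fetch_user_fixed(db_path: str, user_id: int):
    # closing() guarantees conn.close() runs even when the query raises,
    # so no connections are leaked and no restart is needed.
    with closing(sqlite3.connect(db_path)) as conn:
        return conn.execute(
            "SELECT name FROM users WHERE id = ?", (user_id,)
        ).fetchone()
```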

5. Ask for help

Many on-call rotations cover multiple services, and it may be impossible to be an expert in all of them. Turning to a more knowledgeable team member is often the best thing to do during incidents. Getting help quickly is especially important for urgent issues that have a big negative impact. Other situations when you should ask for assistance are when you deal with multiple simultaneous issues or cannot keep up with incoming tasks.  

6. Root cause

The root cause of an outage is often found as a side effect of troubleshooting. When this is not the case, it is essential to identify it once the outage has been mitigated. Failing to do so will make preventing future outages caused by the same issue impossible.

7. Prevention

The final step is to implement mechanisms that prevent similar outages in the future. Often, this requires fixing team culture or a process. For example, if team members regularly merge code despite failing tests, an outage is bound to happen.

I use these steps for each critical alert I get as an on-call, and I find them extremely effective.

The self-inflicted pain of premature abstractions

Premature abstraction occurs when developers try to make their code overly general without a clear need. Examples of premature abstraction include:

  • Creating a base class (or interface) even though there is only one known specialization/implementation
  • Implementing a more general solution than the problem requires, e.g., coding the visitor pattern only to check whether a value exists in a binary search tree (see the sketch after this list)
  • Building a bunch of microservices for an MVP (Minimum Viable Product) application serving a handful of requests per minute
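
To illustrate the second bullet, here is a hypothetical Python sketch: a visitor hierarchy with exactly one implementation and one caller, compared with the few lines of straightforward code it replaces. None of these classes come from a real codebase.

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right


# Premature abstraction: a visitor hierarchy with a single
# implementation, written "for reuse" that never materializes.
class Visitor:
    def visit(self, node):
        raise NotImplementedError


class ContainsVisitor(Visitor):
    def __init__(self, target):
        self.target = target
        self.found = False

    def visit(self, node):
        if node is None or self.found:
            return
        if node.value == self.target:
            self.found = True
        elif self.target < node.value:
            self.visit(node.left)
        else:
            self.visit(node.right)


# The simple alternative: same behavior, nothing extra to maintain.
def contains(node, target):
    while node is not None:
        if node.value == target:
            return True
        node = node.left if target < node.value else node.right
    return False
```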

I have seen many mid-level and even senior software developers, myself included, fall into this trap. The goal is always noble: to come up with a clean, beautiful, and reusable architecture. The result? An unnecessarily complex mess that even the author cannot comprehend and which slows down the entire team.

Why is premature abstraction problematic?

Abstractions added before they are needed create needless friction because they make the code more difficult to read and understand. This, in turn, increases the time needed to review code changes and risks introducing bugs simply because the code was misunderstood. Implementing new features takes longer. Performance may degrade, thorough testing is hard to achieve, and maintenance becomes a burden.

Abstractions created when only one use case exists are almost always biased toward that use case. Fitting a second use case into the abstraction is often only possible with serious modifications. Since these changes can’t break the first use case, the new “abstraction” becomes an awkward mix of both use cases that doesn’t abstract anything.

With each commit, the abstraction becomes more rooted in the product. After a while, it can’t be removed without significantly rewriting the code, so it stays there forever and slows the team down.

I witnessed all these problems firsthand when, a few years ago, I joined a team that owned an important piece of functionality in a popular mobile app. At that time, the team was migrating their code to React Native. One of the foundations for this migration was a workflow framework, implemented by a couple of team members and inspired by Operations from Apple’s Foundation framework. When I joined the team, the workflow framework was a few weeks late but “almost ready.” It took another couple of months before it was possible to start using it to implement simple features. Only then did we find out how difficult it was to use! Even simple functionality like sending an HTTP request required writing hundreds of lines of code. Simple features took weeks to finish, especially since no one was willing to invest their time in reviewing huge diffs.

One of the framework’s features was “triggers,” which could invoke an operation automatically if certain conditions were satisfied. These triggers were a source of constant performance issues as they would often unexpectedly invoke random operations, including expensive ones like querying the database. Many team members struggled to wrap their heads around this framework and questioned why we needed it. Writing simple code would have been much easier, faster, and more enjoyable. After months of grinding, many missed deadlines, and tons of functional and performance issues, something had to be done. Unfortunately, it turned out that removing the framework was not an option. Not only did “the team invest so much time and effort in it,” but we also released a few features that would have to be rewritten. Eventually, we ended up reducing the framework’s usage to the absolute minimum for any new work.

What to do instead?

It is impossible to foresee the future, and adding code because it might be needed later rarely ends well. Instead, write simple code, follow the SOLID principles, and maintain good test coverage. This way, you can add new abstractions later, when you actually need them, without introducing regressions and breaking your app.

Prioritize bugs like a boss

Me at my first job: “A bug? Oh, no! We MUST fix it!”

Me at Microsoft: “A bug? Oh, no! We CAN’T fix it!”

Me now: “A bug? Let’s talk!”

When I started my career over twenty years ago, I found dealing with bugs easy: any reported defect had to be fixed as soon as possible. This approach could work because our company was small, and the software we built was not complex by today’s standards.

My early years at Microsoft taught me the opposite: fixing any bug is extremely risky and should not be taken lightly. At that time, I worked on the .NET Framework (the one before .NET Core), which had an extremely high backward compatibility bar. The reason for this was simple: .NET Framework was a Windows component used by thousands of applications. Updates, often included in Windows Service Packs, were installed in place. Any update that changed the .NET Framework behavior could silently break users’ applications. As a team, we spent more time weighing the risk of fixing bugs than fixing them.

Both of these situations were extremes and wouldn’t be possible today. As software complexity has skyrocketed and users can choose from many alternatives, dealing with bugs has become much more nuanced. Impaired functionality is one aspect to consider when prioritizing a bug fix, but not always the most important one. Here are the most common criteria to look at when triaging bugs.

Is it a bug?

In most situations, there is no doubt that a reported issue is a bug, but this is not always the case. When I was on the ASP.NET Core team, users sometimes reported bugs because they expected or preferred an API to behave differently. That did not mean, however, that these issues were valid. For example, if you are building a spec-compliant HTTP server, you can’t fix the typo in the [Referer HTTP header](https://en.wikipedia.org/wiki/HTTP_referer), no matter how many bug reports ask you to do so.

Security

Security bugs and vulnerabilities can lead to unauthorized access to sensitive data, financial losses, or operational disruptions. As they can also be exploited to deploy malware and infiltrate company networks, fixing security bugs is almost always the highest priority.

Regulatory and Compliance

Regulatory and Compliance bugs are another category of high-priority bugs. Even if they don’t significantly impact the functionality, they may have serious legal and financial consequences.

Privacy

Bugs that lead to the disclosure of sensitive data are treated very seriously, and fixing them is always a top priority. In the U.S., the law requires that businesses and government agencies report data breaches.

Business Impact

Business impact is an important aspect of determining a bug’s priority. Bugs that impact company revenue or other key business metrics will almost always be a higher priority than bugs that do not impact the bottom line.

I worked at Amazon during Amazon’s 2018 Prime Day. Due to extremely heavy traffic, the website experienced issues for hours, making shopping impossible. Bringing the website back to life was the top priority for the company that day, followed by months of bug fixing and reliability improvements.

Functional Impact

Impact on functionality is usually the first thing that comes to mind when hearing the word “bug.” Rightly so! Functionality limited due to bugs leaves users extremely frustrated. Even small issues can lead to increased customer support tickets, customer loss, or damage to the company’s reputation.

Timing

Timing could be an important factor in deciding whether or not to fix a bug. Hasty bug fixes merged just before releasing a new version of a product can destabilize it and block the release. Given the pressure and shortened validation time, assessing the severity of these bugs and the risk of the fixes is crucial. In many companies, bugs reported in the last days before a major release get a lot of scrutiny, and fixing them may require the approval of a Director or even a VP.

The cost and difficulty of fixing the bug

Sometimes, bugs don’t get fixed because of the high cost. Even serious bugs may be punted for years if fixing them requires rewriting the product or has significant undesirable side effects. Interestingly, users often get used to these limitations over time and learn to live with them.

The impact of fixing a bug

Every bug fix changes the software’s behavior. While the new behavior is correct, users or applications may rely on the old, incorrect behavior, and modifying it might be disruptive. In the case of the .NET Framework, many valid bugs were rejected because fixing them could break thousands of applications.

An extreme case of a bug fix that backfired spectacularly was when our team fixed a serious bug in one of the Microsoft Windows libraries. The fix broke a critical application of a big company, an important Microsoft customer. The company claimed that fixing and redeploying the application on their side was not feasible. Since we didn’t want to (and couldn’t) revert a fix that had already shipped to hundreds of millions of PCs worldwide, we had to re-introduce the bug to bring back the old behavior and gate it behind a key in the Windows registry.
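
As a rough illustration of this kind of compatibility switch, here is a hypothetical Python sketch. The flag name, the behaviors, and the use of an environment variable are all made up (the real mechanism was a Windows registry key); the point is that the old, incorrect behavior stays available behind an explicit opt-in while the correct behavior becomes the default.

```python
import os


def use_legacy_behavior() -> bool:
    # Hypothetical opt-in for customers who depend on the old behavior.
    return os.environ.get("MYLIB_LEGACY_SORT", "0") == "1"


def sort_names(names):
    if use_legacy_behavior():
        # Old, incorrect behavior preserved behind the switch:
        # a case-sensitive sort that surprises most users.
        return sorted(names)
    # New, correct behavior is the default for everyone else.
    return sorted(names, key=str.casefold)
```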

Is there a workaround?

When triaging a bug, it is worth checking if it has an acceptable workaround. Even a cumbersome workaround is better than being unable to do something because of a bug. A reasonable workaround often reduces the priority of fixing a bug.