Keeping dependencies up to date

Submitted by Robert MacLean on Wed, 03/16/2022 - 19:03

If you work with JavaScript or TypeScript today, you have a package.json with all your dependencies in it, and the same is true on the JVM with build.gradle... in fact, every framework has a package management system like this, and you can easily use it to keep your dependencies up to date.

In my role, every time I add a new feature or fix a bug, I update those dependencies to keep the system alive. This pattern originates from my belief that part of being a good programmer means following the boy scout rule.

I was recently asked whether I believe these dependency upgrades are risky, and whether we should rather batch them up and do them later, since that would keep code reviews smaller and our code would not break from a dependency change.

I disagreed, but saying "the boy scout rule" is not enough of a reason to disagree... that is a way of working. The reasons I disagreed are...

Versions & Volume

All dependency version upgrades have a chance to fail. By fail I mean they break our code in unexpected ways.

There is a standard that minor version changes should be safe to upgrade, which is why I often do them all at once with minimal checks, while major version changes I approach with more care and understanding. Major changes I normally do on their own, because the major version change is the way the dependency developer tells you and me that there are things to be aware of.
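That rule of thumb can be sketched in a few lines. This is an illustration only: real tools like npm or Gradle resolve full semver ranges, which this deliberately ignores.

```python
# Classify a "major.minor.patch" version bump, so you know how much
# care the update deserves. Illustrative sketch only; it does not
# handle pre-release tags, build metadata, or version ranges.

def classify_upgrade(current: str, target: str) -> str:
    """Classify a version bump as 'major', 'minor', 'patch' or 'none'."""
    cur = [int(part) for part in current.split(".")]
    new = [int(part) for part in target.split(".")]
    if new[0] != cur[0]:
        return "major"   # breaking changes signalled: read the changelog
    if new[1] != cur[1]:
        return "minor"   # should be safe: batch with minimal checks
    if new[2] != cur[2]:
        return "patch"   # bug fixes only
    return "none"

print(classify_upgrade("2.3.1", "3.0.0"))  # major
print(classify_upgrade("2.3.1", "2.4.0"))  # minor
```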

Major vs. minor will never be rules you can rely on perfectly; rather, they are guidance on how to approach the situation. Much like when you drive a car, a change in the speed limit is a sign that you need more or less caution in the area ahead.

As an example that neither the type of version change nor the volume of changes is a reliable predictor, let me tell you about last week. Last week I did two minor version updates on a backend system as part of normal feature work. It broke the testing tools, because one of the dependencies had a breaking change. A minor version, with a breaking change.

It was human error on the part of the dependency's developer to ship it as a minor rather than a major change; that impacted how I approached the update, and human error will always increase the chance of issues.

Software is built by humans. Humans, not versions, will always be the cause of errors.

Risk & Reward

I do like the word “risk” when discussing whether you should update, because risk never lives alone; it lives with reward.

How often have you heard people say updating is too risky, focusing on the chance of something breaking... but not mentioning the reward if they did update?

Stability is not a reward; stability is what customers expect as the minimum.

When we do update, we gain code that performs better, is cheaper and easier to maintain, and is more secure. The discussion is not “what will it break?”; it is “why do we not want faster and safer code for cheaper?”

I have inherited a piece of code from a team that did not update the versions, and it has a lot of out-of-date dependencies. There is a high chance of breakage when we start to upgrade those dependencies, because the code was left uncared for.

However, if I look at the projects my team has built, where we all update versions every time we make a change, we are only ever doing one or two small updates each time. It is easy to see when issues appear, which makes fixing the issues easy too.

Death, taxes and having to update your code.

As a developer there is only one way of escaping updating your code: hand the code to someone else to deal with and change teams. Otherwise, eventually, you will need to upgrade; doing it often and in small batches is cheaper and easier for you.

Using the backend system example from above: I only had two small dependency changes, so my issue had to be in one of them. I could quickly check both, and within 15 minutes I was in the release notes for one of them, where the docs clearly showed the change in logic. That let me fix the code to work with it, and thus we could stay on the new version. If I had had 100 changes... I would have rolled it all back and gone to lunch, and future me would hate past me for that.

Architects & Gardeners

Lastly, our job is not to build some stable monument and leave it to stand the test of time. I deeply believe in DevOps, and thus believe that software is evolutionary in nature and needs to be cared for.

We are gardeners of living software, not architects of software towers.

In our world, when things stop… they are dead. Maintenance, and fixing the things that break, is core to delivering value to customers with living software.

Tenets of stable coding

Submitted by Robert MacLean on Fri, 10/22/2021 - 21:12
  1. Build for sustainability
    We embrace proven technology and architectures. This will ensure that the system can be operated by a wide range of people and experience can be shared.
  2. Code is a liability
We use 3rd party libraries to lower the amount of code we directly need to create. This helps us go fast and focus on the aspects which deliver value to the business.
  3. Numbers are not valuable by themselves; We focus on meaningful goals and use numbers to help our understanding
    We do not believe in 100% code coverage as a valuable measure
  4. We value fast development locally and a stable pipeline
    We should be able to run everything locally, with stubs/mocks, if needed. We use extensive git push hooks to prevent pipeline issues.
  5. We value documentation, not just the "what" but also the "why"
  6. We avoid bike shedding by using tools built by experts, to ensure common understanding.
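As an illustration of the push-hook tenet above, here is a minimal sketch of what such a hook can look like. The check commands are placeholders, not our actual pipeline, and a hook can be written in any language.

```python
#!/usr/bin/env python3
# Minimal sketch of a git pre-push hook: run cheap local checks before
# anything reaches the pipeline. Save as .git/hooks/pre-push and make
# it executable. The commands in CHECKS are placeholders for your own.

import subprocess

CHECKS = [
    ["npm", "run", "lint"],  # placeholder: your linter
    ["npm", "test"],         # placeholder: your test suite
]

def run_checks(checks: list) -> int:
    """Run each check; stop and return its exit code on first failure."""
    for cmd in checks:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"pre-push blocked: {' '.join(cmd)} failed")
            return result.returncode
    return 0

# The hook script itself would end with:
#     import sys; sys.exit(run_checks(CHECKS))
```

git aborts the push whenever the hook exits non-zero, so a failing check never reaches the pipeline.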

We acknowledge that there is a physics of software which we cannot change

Submitted by Robert MacLean on Fri, 10/22/2021 - 21:06
  1. Software is not magic
  2. Software is never “done”
  3. Software is a team effort; nobody can do it all
  4. Design isn’t how something looks; it is how it works
  5. Security is everyone’s responsibility
  6. Feature size doesn’t predict developer time
  7. Greatness comes from thousands of small improvements
  8. Technical debt is bad but unavoidable
  9. Software doesn’t run itself
  10. Complex systems need DevOps to run well

From Tom Limoncelli; his post goes into great detail.


Submitted by Robert MacLean on Wed, 09/16/2020 - 18:24

Recently I have been talking a lot about the OWASP Top 10, and have created some slides and a 90-minute talk on it!

So if you want to raise your security game, this is a great place to start.

You are blocked

Submitted by Robert MacLean on Sun, 08/30/2020 - 11:54

For the last 4 months I have done what could be seen as extreme: blocking a few hundred thousand people on Twitter. This has led, in the last week, to a few people asking about, or pointing out, being blocked and wondering why I chose them.

The reality is I probably did not block them; I blocked someone else (let's call them the aggressor). Using a tool I wrote, I block the aggressor and all their followers, so while I have actively blocked a few hundred aggressors, it balloons to hundreds of thousands of blocked followers.


Twitter is, for me, a place I go to hang out with people I like, to learn from them and hear their stories. Twitter, for me, is my cocktail party; except it is getting gatecrashed by the alt-right, racism, sexism, and nationalists. People I do not want at my cocktail party. It has caused me too many shitty experiences, so I now have a bouncer.

I have taken the view that if someone follows a problem, the chance that they are problematic themselves is higher than I am willing to risk. It is not consistent; in fact, the larger the account, the less accurate it is. Donald Trump is a beautiful example: so many people follow him for so many reasons that, while I do not agree with Trump, blocking all his followers is not going to work.

Still visible

Twitter's block is lovely in that you can still view my thoughts; just open a private tab in your browser. It merely makes the cost to interact with me higher, which is ideal: increasing the cost means that either someone needs to be determined to be a jerk, which is easier to manage, or they really value the interaction. I can handle those edge cases more easily.


Why not make my account private? I tried this, but it makes engagement impossible; it is not a good long-term plan.


The "engage me" crowd like to say blocking creates echo chambers where you cannot learn anything and I agree if someone does this everywhere - but Twitter is merely one of many places I engage and learn. I choose to have Twitter be fun and a cocktail party. I choose other places to learn.

You may have different views for how you use Twitter and that is great for you and we will agree to disagree.


Want to be unblocked? Drop me an email, or get someone I know to DM me. And check who you follow, since you are influenced by the people you spend time with, so do not spend time with horrible people.

Smarter Screen

Submitted by Robert MacLean on Tue, 01/07/2020 - 10:14

I spend a lot of time in the kitchen; I love to cook, and so I am often in there with my phone, listening to a podcast or, if it is a Saturday morning, watching Show of the Week. I am not alone in this behaviour: everyone in my home does this, and often at dinner we share YouTube videos by propping a phone up on the toaster and huddling around it. It was clearly time to improve the experience with a kitchen screen. A smart TV would be perfect, but their lack of support is a show stopper for me... so I put together a smarter screen.

Build List

Powering this is a Raspberry Pi 4. I grabbed the 4GB model, just because... I don't have a smart reason for that decision. If you are in South Africa, I grabbed mine from PiShop, along with the essentials kit. Putting together the "case" was maybe the most head-scratching aspect, since it is just screws, plastic and a fan... no instructions.

Also ordered from PiShop was the remote control, since I want this to be like a TV: no keyboard or mouse. I opted for the OSMC Remote Control, which has a small USB dongle and uses radio rather than infrared, so it does not need line of sight. Since the Pi will be behind the screen, line of sight would be an issue. The remote "just worked", which was so awesome.

For the screen, I ordered an LG 24MK400H, which was the perfect size for my needs, wall-mountable and on special 😄 For mounting I grabbed a Brateck LDA18-220 Aluminum Articulating Wall Mount Caravan Bracket, which is really awesome and easy to work with. It came with instructions, but they were poor, and experimenting first helped me find a happier setup.

With all of that, I had everything I needed to get running.


The Pi kit came with a MicroSD card with NOOBS preinstalled on it, and all I needed to do was hold Shift when booting and select the LibreELEC OS to install. LibreELEC is a really basic OS which is "just enough" to run the Kodi media centre software. Going through the setup got it up and running within about 30 minutes.


I don't have a "library" of media; rather, I just stream the content I want, so installing the add-ons I needed was key to the setup, and I went with:

  • YouTube
  • TubeCast, which lets me cast from my phone to Kodi
  • Twitch
  • Amazon Prime Video (VOD), which is for streaming Amazon and not for buying movies
  • Netflix, which has a really great guide to getting started with 3rd-party add-ons that is worth your time


The only strange part of the setup was that each time Kodi booted, I got a prompt saying there is an update for LibreELEC... but the LibreELEC settings had nothing in them for the update and no way to do it. Thanks to Reddit, I was able to switch updates to manual, update the setup, and then switch it back:

Go to Settings > LibreELEC > System. Change automatic updates to 'manual' (I'm not even sure if auto update works at all, I've had it set to that before and it never auto updates). Change update channel to LibreELEC-8.0. Select available versions and select the newest one (8.2.3 at time of writing this).

If this was how you were trying to update, then I'm not sure. I would say backup your LibreELEC install and then start fresh with a new version.

DevFest 2019

Submitted by Robert MacLean on Sat, 11/30/2019 - 12:08

Today I was honoured to be part of the second DevFest in SA, with a talk about Kotlin, Micronaut, DataStore and other fun tech... but more about how we ended up where we are with our current project. It really is a tech lead doing a retrospective, with tech sprinkles to get everyone involved.

If you want the code it is on GitHub and slides are below:

When the world sees a 500... but the server promises it is a 200

Submitted by Robert MacLean on Sat, 11/16/2019 - 13:04

Here is the story of all the work I did this week, and it is so odd I feel it needs to be shared... but first, let's talk about the world the problem can be found in.

The world

It is a µservice (that is me going "ooh, look how smart I am to use a symbol for micro") written with DropWizard and deployed in a Docker container, with Traefik in front of that. To hit it, you go through an ingress controller and a load balancer.

 +---------------+
 |   Internet    |
 +---------------+
         |
 +---------------+
 | Load Balancer |
 +---------------+
         |
 +---------------+
 |    Ingress    |
 +---------------+
         |
 +---------------+
 |    Traefik    |
 +---------------+
         |
 +---------------+
 | Microservice  |
 +---------------+

The microservice acts as a BFF (backend for frontend), so it does some auth fun, makes calls to an internal API, and manipulates the responses (e.g. changes the data structure). We have a number of different REST-style calls across GET, POST, PUT and DELETE.

In terms of environments, we obviously have a production environment, and we have an integration environment which is set up the same way. We have a stubs environment where we fake out the internal API. Lastly, we can run the microservice on our laptops, but then there is just the microservice... no Traefik etc...

The Problem

When we run our load test from outside, everything works except one call, which fails 98% of the time with an HTTP 500. Other calls (even with the same method) all work. Load tests run against the stubs environment, and the same call works perfectly on our laptops, in integration and in production.

We can even run the fake internal API on the laptops, with the load tests and it works fine there. Basically, one call fails most of the time in one environment... 🤔

Grabbing logs

When we pull the logs for the microservice in stubs, things get weirder... it is returning an HTTP 200 😐 This is the same experience we get everywhere else; it works... except in stubs, where the load tests get a 500.

+---------------+   500 here
|   Internet    |
+---------------+
        |
+---------------+   200 here
| Microservice  |
+---------------+

Further pulling of logs shows 500s in the load balancer and the ingress, so somewhere between the microservice and Traefik the HTTP 200 becomes an HTTP 500... but we do not have logs on Traefik that we can pull, which makes this a bit harder to determine...

Logging onto Traefik

Next we logged onto the Traefik box and decided to curl the microservice directly, and lo and behold... we get a 500 🤯 And to make it more interesting, the microservice logs still show it returning a 200 - like, what the actual?! Could there be a network issue, or magic?

Interestingly, the 500 came with an error saying "insufficient content written".

Insufficient content written

This led me to look at the content we were sending, and I saw we were sending a Content-Length header and the body, and guess what... the length of the body did not equal the Content-Length... oh 💩

This is a client-side HTTP error: the server sends the wrong amount of body, so the client decides the server is wrong and raises a 500. I had always thought 500 errors were server errors and thus could only be raised on the server.
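This failure mode is easy to reproduce outside our stack. The sketch below (illustrative only, not our actual service) stands up a tiny TCP server that declares a bigger Content-Length than the body it writes; Python's http.client then plays the role Traefik played for us: the status line says 200, but reading the body raises a client-side error.

```python
# Demonstrate a server that claims more Content-Length than it writes.
# The client sees a 200 status line, yet the body cannot be trusted.

import http.client
import socket
import threading

def lying_server(sock: socket.socket) -> None:
    conn, _ = sock.accept()
    conn.recv(4096)  # read and ignore the request
    conn.sendall(
        b"HTTP/1.1 200 OK\r\n"
        b"Content-Length: 100\r\n"  # claims 100 bytes...
        b"\r\n"
        b"short body"               # ...but writes only 10
    )
    conn.close()

sock = socket.socket()
sock.bind(("127.0.0.1", 0))
sock.listen(1)
threading.Thread(target=lying_server, args=(sock,), daemon=True).start()

client = http.client.HTTPConnection("127.0.0.1", sock.getsockname()[1])
client.request("GET", "/")
response = client.getresponse()

caught = None
try:
    response.read()  # the server believes it returned a 200, but...
except http.client.IncompleteRead as exc:
    caught = exc

print("status:", response.status)              # 200
print("client error:", type(caught).__name__)  # IncompleteRead
```

The server logs a clean 200 the whole time; the error only exists on the reading side, which is exactly what made our bug so confusing.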

The fix

The fix is simple: in our server we were using Response.fromResponse to map the internal API response to the public API, so it was copying the Content-Length from the internal API, and we were sending that along.

This meant the fix was to delete the Content-Length header before we called fromResponse, to ensure it would rebuild the header and be correct.
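The shape of that fix can be sketched as follows, with plain dicts standing in for the real JAX-RS response objects (the actual code uses Response.fromResponse, which is not shown here):

```python
# When forwarding an upstream response whose body we may have changed,
# drop the headers that describe the body's framing so the server
# framework recomputes them for the new body. Dicts stand in for the
# real response objects in this sketch.

FRAMING_HEADERS = {"content-length", "transfer-encoding"}

def forwardable_headers(upstream_headers: dict) -> dict:
    """Copy upstream headers, minus the ones the framework must rebuild."""
    return {
        name: value
        for name, value in upstream_headers.items()
        if name.lower() not in FRAMING_HEADERS
    }

upstream = {"Content-Type": "application/json", "Content-Length": "42"}
print(forwardable_headers(upstream))  # {'Content-Type': 'application/json'}
```

The design point is general for any BFF or proxy: framing headers belong to whoever writes the bytes, never to the response being copied.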

The reason it did not fail elsewhere: the version of the mock API we use added Content-Length, but newer versions and the real APIs use chunked encoding, which never sets the header, so there is no issue there.

This was a long road to understand the issue, and one line to fix it, and a totally new experience in learning that server errors can occur client-side too.

VSCode + Catalina

Submitted by Robert MacLean on Thu, 10/10/2019 - 21:56

For the most part, the initial upgrade to macOS Catalina was uneventful; I was caught unaware by the wave of permission requests that greeted me, but it was 2 minutes of clicking accept or deny and continuing on with my day (though how normal users will cope is beyond me... it feels very un-Apple).

The two issues I did run into were the need to reconfigure Google Drive (again, a minor 2-minute activity) and trying to get VSCode to work properly. The latter was a lot more annoying. The initial issue was that git could not be found... this broke all of source control in VSCode. The fix was to run xcode-select --install from the terminal and restart VSCode.

Once that was fixed, the next issue was that I could no longer sign commits with GPG; this presented much like the initial issue of git not being found.

The correct fix for VSCode is to add it to the list of Developer Tools, which you can find under Security & Privacy. Once VSCode was restarted, everything just worked. I added Terminal to the list too, which also stopped Fish shell's autocomplete from constantly prompting.