AWS Community Day - South Africa

What an amazing day! For the first time in South African history, we had an AWS Community Day, all thanks to the hard work of Hennie Francis and the team working with him. The organisation, swag, and content were all next-level. For a first-time event, it was stellar.

I skipped all the talks before lunch because I was having such a great time networking with the sponsors and catching up with friends. Before I knew it, it was lunchtime, followed by Dale Nunns doing his famous Duck Hunt talk (you can watch it on YouTube). Dale was amazing, as always.

The next talk I saw was from a speaker new to me, Louise Schmidtgen. She did my favourite type of talk: an honest review of a real-world failure, what was learned, and how they would avoid it in the future. I will absolutely be at her talks in future.

The final talk of the day was Mark Birch’s. I had zero interest in seeing this talk from the title, “Community as Code: Engineering Human Infrastructure,” especially since, like most attendees, I hadn’t even read the description. But, as I’ve said before, conferences are a serendipity gold mine, and this is yet another example of it, though not the last one of the day. This talk felt like it was designed perfectly for me. I was wowed and definitely need to get his book, Community in a Box.

Following the day, I got an invitation to the speakers’ dinner, and again, I just ended up in the right seat (thanks serendipity) for an amazing night of discussion and inspiration. I got to hear from Jared Naude, Lindelani Mbatha, Roberto Arico, and Liza Cullis. I am still in awe of their skills and so grateful for them sharing their wisdom and stories with me.

It was not a perfect day, unfortunately. The venue had some rather short-sighted restrictions on movement and taking photos, which is why the images in my post are cartoons I “drew myself” and totally not from photos I sneakily took and an AI changed 🙄

The venue also extended lunch without asking the organisers, which led to the last time slot of the day being cancelled. I appreciate that the venue staff were trying their best, but decisions like that need to be made by the organisers, not the venue. This meant that Candice Grobler, whose talks always blow me away, didn’t get to do her talk, which was a big disappointment for me.

I hope this is a lesson for the venue to improve, because their reputation is in bad shape right now.

It was amazing to see how Hennie and the team coped with the challenges; they truly did an amazing job despite the venue. I cannot wait for the 2026 event! I will be getting tickets as soon as they come out!

The serendipity of the conference

My experience at yesterday’s DataConf, a fantastic event organised by two people I adore, Candice Mesk and Mercia Malan, perfectly illustrates this. My day began as usual: catching up with friends and desperately trying to convince the Bootlegger coffee people to open early just for me. ☕

I had my first serendipitous moment before the day even began. Quinn Grace asked me what I thought comes after AI. The conversation and the ideas that came from that discussion were awesome, and I am still thinking about it as I write this.

Another moment of serendipity happened after lunch. I was unsure which talk to attend, so I simply stayed in the main room without even really knowing what the talk was about. It turned out to be the most practical talk of the day for me: “How to craft teams of exceptional analysts” by Lisema Matsietsi. The title alone screamed that this was NOT a talk for me, and like many attendees, I never read the description, so I was flying blind. This was pure serendipity. The talk was entirely leadership-focused and helped illuminate parts of team dynamics I hadn’t deeply understood. I’m so glad it was at the event; it just goes to show how you can end up in the right room at the right time—despite yourself.


The Learning Was in the Talks

I initially thought I’d written enough on the topic, but Duncan Dudley told me my posts were too short. I also thought I’d steal a page from Dale Nunns’ wonderful post on the event (he even has photos… I am way too lazy for that), so here are some of the actual learnings that happened for me.

I had the pleasure of watching certified genius (certified by me - still counts) Michael Johnson talk about data engineering. As the resident “not data” guy, his talk was incredibly useful, giving a wonderful look at the history of the field and why we are where we are. It really helped me understand the landscape better.

This was followed by Pippa Hillebrand who has the genius and laser focus of your favourite super villain, but without the desire to take over the world. Her talk on AI privacy was so powerful, focusing not on how we build AIs, but on how we run them and the risks involved.

Pippa’s talk was a perfect lead-in for the funniest and most genuine speaker of the day, Georgina Armstrong. Her talk on recommender systems was genius. I wish DataConf had recorded these talks because hers is a must-see, if for no other reason than to spare my partner from listening to me go over every detail when I got home.

I’ve already mentioned Lisema’s talk, so I’ll move on to the talk by Marijn Hazelbag, PhD, on digital twins with cellphones and fibre networks. While it was entirely pointless to my work, it was SO interesting (also, extra points for the only live demo of the day, which helped captivate me more). It opened a door to a world I didn’t know existed. I have no idea if I’ll ever need that knowledge, but serendipity may have a plan for it.

The talks of the day concluded with Carike Blignaut-Staden, who gave a must-see talk for any team building a dashboard. I’ve been guilty of doing all the things she said you shouldn’t do, which is a great place to learn from because it’s all about improving from there.


What a wonderful day. I hope this encourages you to try a conference. And when you do, maybe skip a talk to discuss the future of work or go to a talk you wouldn’t normally have chosen. It just might lead to an even better experience.


Originally posted to my LinkedIn, but I thought I would share it here too for those who don’t follow me there: https://www.linkedin.com/pulse/serendipity-conference-robert-maclean-342pf/?trackingId=shsYleAzXj22zxAHwx6J%2BQ%3D%3D

The Git LFS Problem with SSH Profiles

Like anyone with a brain, I want mine to be as empty as possible of important things so I can fill it with memes and TV quotes. To help that process, I put my passwords into a password manager—mostly for security, but also for convenience.

1Password, like others, supports running an SSH agent, which is amazing when you have multiple accounts on GitHub and GitLab that all need SSH. It provides great portability, and for most things, it just works.

This does mean having something like this in my ssh config file so I can reference each account with a different name:

Host personalgh
  HostName github.com
  User git
  IdentityFile ~/.ssh/rmaclean_github.pub
  IdentitiesOnly yes

Host clientAgh
  HostName github.com
  User git
  IdentityFile ~/.ssh/clientA_github.pub
  IdentitiesOnly yes

Then, when I git clone, I don’t use git@github.com:rmaclean/developmentEnvironment.git; instead, I use personalgh:rmaclean/developmentEnvironment.git. This instructs Git to use the specific profile in the SSH config.
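As an aside (not part of my original workflow, just a handy check): OpenSSH can print the options it would resolve for an alias without actually connecting, which makes it easy to verify a profile is being picked up:

```shell
# Ask OpenSSH to resolve the 'personalgh' alias from ~/.ssh/config;
# -G only prints the resolved configuration, it never connects.
ssh -G personalgh | grep -iE '^(hostname|user|identityfile) '
```

If the output shows hostname github.com and user git, the alias is wired up correctly.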

This works great, except if you use Git LFS (Large File Storage). Over the last 14 months, I’ve used LFS a lot since one of my clients is a game developer, and LFS is essential for all the binary assets. Because LFS runs as a separate process, it assumes the SSH profile name is a real hostname, and you get an error like this:

Cloning into 'demo'...
remote: Enumerating objects: 4108, done.
remote: Counting objects: 100% (234/234), done.
remote: Compressing objects: 100% (127/127), done.
remote: Total 4108 (delta 140), reused 149 (delta 103), pack-reused 3874 (from 3)
Receiving objects: 100% (4108/4108), 1.62 MiB | 1.46 MiB/s, done.
Resolving deltas: 100% (2524/2524), done.
Downloading public/assets/audio/music/Arcade_LoopNew.mp3 (2.4 MB)
Error downloading object: public/assets/audio/music/Arcade_LoopNew.mp3 (425014c): Smudge error: Error downloading public/assets/audio/music/Arcade_LoopNew.mp3 (425014c3f342e099e5c041b875d440e9547ef4fb2a725d41bb81992ad9f37ddd): batch request: ssh: Could not resolve hostname clientA: nodename nor servname provided, or not known: exit status 255

Errors logged to '/private/tmp/demo/.git/lfs/logs/20250826T090610.648994.log'.
Use `git lfs logs last` to view the log.
error: external filter 'git-lfs filter-process' failed
fatal: public/assets/audio/music/Arcade_LoopNew.mp3: smudge filter lfs failed
warning: Clone succeeded, but checkout failed.
You can inspect what was checked out with 'git status'
and retry with 'git restore --source=HEAD :/'

To fix this, you must use the standard git@github.com: host structure. But since you also need to pass in the correct SSH key, you can do that with a temporary config setting: core.sshCommand.

When cloning, you can specify the command directly: git -c core.sshCommand="ssh -i ~/.ssh/clientA_github.pub" clone git@github.com:client/demo.git

After successfully cloning the repository, any subsequent pushes or pulls will still fail. This is because the repository is not yet configured to use the right key. The git clone command simply added a parameter for that single operation—it didn’t change the repository’s configuration.

To fix this, you need to set the configuration correctly after the clone. Switch into the cloned folder and run this command: git config core.sshCommand "ssh -i ~/.ssh/clientA_github.pub"

This command sets the core.sshCommand for the repository, ensuring that it always uses the correct key. Now it will just work.
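Putting the two steps together, the flow looks like this. The clone needs network access and the real key, so it is shown as a comment; the persistence step works in any repository (here a scratch repo stands in for the clone, and the key path is just my example):

```shell
# One-off clone with the right key (illustrative; needs the real repo and key):
#   git -c core.sshCommand="ssh -i ~/.ssh/clientA_github.pub" clone git@github.com:client/demo.git

# Persist the key for all future pulls and pushes inside the repo:
git init -q demo && cd demo
git config core.sshCommand "ssh -i ~/.ssh/clientA_github.pub"
git config --get core.sshCommand   # prints: ssh -i ~/.ssh/clientA_github.pub
```

Because the setting is stored in the repo’s own .git/config, it travels with that checkout only and won’t leak into your other clients’ repositories.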

The Case of the Dotty Environment Variables

This post is for that one other person who’s losing their mind over a bizarre issue: lowercase environment variables with dots in their names not showing up in one environment, when it works elsewhere.

The client’s request was simple: configure all environment variables with a naming convention like my.app.variable. Everything I knew suggested this was impossible—environment variables typically follow a UPPERCASE_SNAKE_CASE convention and don’t play well with special characters like dots. Yet, in their environment it works.

The Investigation

To get to the bottom of this, I created a minimal, reproducible example: a simple Java application that reads an environment variable, plus the k8s manifests and Dockerfile needed to run it. I hosted the code on GitHub for anyone to see and confirm: it worked. I could even sh into the container and confirm the variables were present and correctly formatted using the env command.

The puzzle deepened. The environment variables were definitely there and readable, so why couldn’t my real application see them?

The Unexpected Culprit

After a lot of debugging, I finally found the problem: the Docker base image.

The client’s working environment and my local test were (by coincidence) both running on eclipse-temurin:21-jre-alpine. The actual project code, however, was using eclipse-temurin:21-jre.

The difference seems subtle, but it was critical. The alpine version, being a minimal Linux distribution, likely handles environment variable parsing differently or has a different shell configuration (via the entry point being bash) that allows these non-standard variable names to be passed and read correctly. The standard image, based on a different underlying OS (likely Debian or similar), does not.

Trying out DenoDeploy Early Access

I have been using DenoDeploy for my experiments and toys recently, and I have been wowed by it. They recently announced that version 2 is coming, and you can try it out in early access now: https://deno.com/deploy

I wanted to try this, and felt that after a year it was a good time to rebuild the website for my sole proprietorship, https://www.goodname.co.za. Last year when I launched it, I hosted it on my main (expensive) hosting provider, where I run this blog, and used Drupal as the CMS… all because it was quick to get going.

In a year, I did no updates and spent way too long updating dependencies I didn’t need… so why not use something new with DenoDeploy EA? And it can now do something it couldn’t before: serve static HTML content! Since I can put together HTML/CSS/JS quickly, that makes it really solid. It also means I can use my normal dev tools and push updates via GitHub.

I don’t have much more to say, because DenoDeploy EA just worked; it was easy to configure (just connect to GitHub), link the domain via DNS and BOOM! It is running! It is amazing. I am very excited to see what is coming from that in the future.

If you are looking for a place to run TypeScript, JavaScript, or static content… you owe it to yourself (and your wallet) to check it out.

Bring your Google calendar into a spreadsheet

When it comes to spreadsheets, Excel kicks ass; it is massively more powerful than anything else out there. But I recently had to pull Google Calendar info into a spreadsheet, and rather than manually capturing it, I found that Google Sheets with Apps Script is really powerful, thanks to the unified Google experience.

To bring in the info, I followed these steps.

  1. Create a new spreadsheet (I used the awesome https://sheets.new url to do that)
  2. In the spreadsheet, add your start and end dates for the range you want to import. I put start in A1 and end in B1
  3. Click Extensions → Apps Script
  4. In the Code.gs file, drop the following code in
// Configuration constants
// change these as needed
const START_DATE_CELL = 'A1';
const END_DATE_CELL = 'B1';
const HEADER_ROW = 3;
const HEADER_COL = 2;

// do not change these
const DATA_START_ROW = HEADER_ROW + 1;
const NUM_COLS = 3;

function calendar_update() {
  // uses the active user's email address as the calendar ID
  var mycal = Session.getActiveUser().getEmail();
  var cal = CalendarApp.getCalendarById(mycal);
  var sheet = SpreadsheetApp.getActiveSheet();
  
  // Clear existing data rows
  var currentRow = DATA_START_ROW;
  while (true) {
    var checkRange = sheet.getRange(currentRow, HEADER_COL, 1, NUM_COLS);
    var values = checkRange.getValues()[0];
    var hasData = values.some(cell => cell !== '' && cell !== null && cell !== undefined);
    
    if (!hasData) {
      break;
    }
    
    checkRange.clearContent();
    currentRow++;
  }
  
  //read the date range from the configured cells
  var events = cal.getEvents(
    sheet.getRange(START_DATE_CELL).getValue(),
    sheet.getRange(END_DATE_CELL).getValue(),
    { search: '-project123' },
  );
  
  var header = [['Date', 'Event Title', 'Duration']];
  var range = sheet.getRange(HEADER_ROW, HEADER_COL, 1, NUM_COLS);
  range.setValues(header);
  var rowIndex = DATA_START_ROW;
  for (const event of events) {
    if (event.getTitle() === 'Busy' || event.getTitle() === 'WFH' || event.getMyStatus() === CalendarApp.GuestStatus.NO) {
      continue;
    }

    var duration = (event.getEndTime() - event.getStartTime()) / 3600000; // milliseconds → hours
    var details = [[event.getStartTime(), event.getTitle(), duration]];
    var range = sheet.getRange(rowIndex, HEADER_COL, 1, 3);
    range.setValues(details);
    rowIndex++;
  }
}
  5. Set the config at the top of the script and hit save
const START_DATE_CELL = 'A1'; // this is where you specified the inclusive start date to pull from
const END_DATE_CELL = 'B1'; // this is where you specified the exclusive end date to pull to
const HEADER_ROW = 3; // the row for where the header for the table will be
const HEADER_COL = 2; // this is the column where the first part of the header is; A = 1, B = 2 etc...
  6. Save and run… you will be asked for auth; this is a one-time approval.
  7. The content will be in the sheet now! But let’s make it easy to update.
  8. Go to Insert → Drawing, draw a button or icon, and hit insert.
  9. On the button, click the 3 dots and select Assign script.
  10. For “What script do you want to assign?”, enter calendar_update and click OK. Now you can click that button at any time and it will update.

And as a final awesome trick, you may wish to convert something like 0.25 hours into a human-readable “15 minutes”. Because Sheets time formats treat a plain number as a count of days, divide the hours value by 24 first: =TEXT(<VALUE>/24,"[h] \h\o\u\r\s m \m\i\n\u\t\e\s")

My Secret Weapon: Single-Letter Git Aliases


Today I want to share my favourite Git aliases that I’ve built up over the years. If you haven’t heard of a Git alias, it’s basically a way to add your own custom commands to Git. Think of it as a personal shortcut for frequently used Git operations (you can check out the official docs here for the technical deep dive).

You could add these to your shell directly, and they’d work pretty similarly. But for me, that requirement of still having to type git first really keeps them nicely ring-fenced. It helps keep my mental space focused just on Git commands. Plus, since I use these constantly, they’re all single-letter commands. If you’re doing everything on the shell, you might run out of good single letters (depending on your shell, you’ve only got so many!). Here, I still have a limit, but it’s focused specifically on Git commands, so I’m not going to hit it anytime soon.
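To make the mechanics concrete (with a toy alias here, not one of my real ones below), registering an alias is a single config entry:

```shell
# Register a trivial alias: `git s` will now run `git status -sb`.
git config --global alias.s 'status -sb'
git config --global --get alias.s   # prints: status -sb
```

Anything starting with ! in the alias body is run as a shell command instead of a Git subcommand, which is what makes the multi-step aliases below possible.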


My Go-To: git u (Update Branch)

First up is u. This one is for updating my branch. Most days, before I even start coding, I want to pull the latest main into my feature branch. I also like to do a bit of clean-up to keep Git performant and my local repo tidy. Man, this used to be a bunch of things I had to remember to do:

  1. Switch to main… except it might also be master, release, or staging. I bounce between multiple teams and clients, and remembering which one is which each time can be tiring.
  2. Run fetch --prune to clean up any local branches that no longer exist on the remote. Keeps things neat!
  3. Pull the latest changes into my local main.
  4. Run maintenance for performance goodness.
  5. Switch back to my feature branch and merge those fresh changes across.

That’s a lot of individual commands, right? Today, that’s just git u. Internally, the alias looks like this:

alias.u=!git switch $(git symbolic-ref refs/remotes/origin/HEAD | sed 's@^refs/remotes/origin/@@') && git fetch --prune && git pull && git maintenance run && git switch - && git merge $(git symbolic-ref refs/remotes/origin/HEAD | sed 's@^refs/remotes/origin/@@')

Now, where this still messes up for me is on one client where their main is actually production, but their workflow dictates pulling from staging… so it’s a bit weird. And then there’s the classic rebase vs. merge debate. I landed on merge for this alias since it’s often safer, but I do wonder if I could build a smarter system to try rebase, and if that has issues, then switch to merge. Food for thought!
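If I ever build that smarter version, the core of it would presumably be something like this sketch (not part of my aliases today; it assumes the target branch is main):

```shell
# Try a rebase first; if it runs into trouble, abort it cleanly
# and fall back to a merge instead:
git rebase main || { git rebase --abort; git merge main; }
```

The nice property is that when the rebase applies cleanly you get the linear history, and only the conflicted cases pay the merge-commit price.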


Branching Out with git n (New Branch)

Next up is n. This one is for creating a new branch. Take everything from the u command above, except for the last step (merging back). Instead, I want to type in a new branch name, and it creates that new branch for me. That was super easy to integrate using read, and the final alias looks like this:

alias.n=!git switch $(git symbolic-ref refs/remotes/origin/HEAD | sed 's@^refs/remotes/origin/@@') && git fetch --prune && git pull && git maintenance run && read -p 'New branch name: ' branch && git switch -c $branch

And there you have it! These two aliases have saved me countless keystrokes and mental gymnastics over the years.

TIL: Tax is an asymptote

Tax is complicated, and we don’t even have lunatic tariffs to worry about.

Often people talk about the fact that we pay up to 45% tax in South Africa, but that is seldom true; in fact, mathematically it will never be true, because the effective tax rate is an asymptote: it approaches 45% but never reaches it. Even if you round up, you would have to earn over R32.1 million a month to get to 45%, something no one (or very few people) will get close to.
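To see why, write the effective rate out. Say the top marginal rate of 45% applies above a threshold T, and B is the tax owed on the first T of income (these symbols are placeholders, not the actual SARS tables). Someone earning x > T then pays:

```latex
\text{effective rate}(x) = \frac{B + 0.45\,(x - T)}{x} = 0.45 + \frac{B - 0.45\,T}{x}
```

Because the lower brackets are taxed below 45%, B < 0.45T, so the second term is negative and shrinks toward zero as income x grows: the effective rate climbs ever closer to 45% but never touches it.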

I wanted to see and play with this, so I built taxreality.sadev.co.za this evening to play with the formula and visualise it.

screenshot of my little tool

Like with my last project, it is built with Deno, Fresh, and Deploy. In addition, I also made use of System.css for some of the styling and RealFaviconGenerator, which is an amazing tool; it let me use an SVG and have it change to work with light and dark mode!

A holiday project with Deno 🦕, Fresh 🍋 , Deploy and Formula 1 🏎️

screenshot of the website I built

The project is up at: https://f1teams.sadev.co.za/

Yesterday was a public/bank holiday in South Africa, and it gave me a chance to try to build something greenfield in a day… but I also wanted to push some skills by doing more work with Deno. I initially thought about building with Next, but I use that often… so I thought I would try Fresh (the framework from Deno), and that naturally led to using Deploy, the hosting that Deno also offers.

But what to build? How about a simple website which shows the evolution of Formula 1 teams in time for the Grand Prix this weekend, so I can see what each team was called over time.

Building with Deno was, as always, painless: native TypeScript support and the full toolchain were great. The only issue I had was that I want trailing commas in my JSONC data… but the formatter won’t allow that (so, very minor). Fresh was uneventful… honestly, if you know Next, getting up and running takes so little extra time, and the structuring makes so much sense. Deploy was an absolute highlight though: an amazingly easy deploy-from-GitHub service with a very generous free tier. I am now thinking of how many ways I can make use of it.

From start to finish, it took about half a day… and this was while using new technologies. This is an amazing stack for quick and professional development.

Two other different aspects I used:

  • When I started, I hadn’t planned on using Deploy, which has a free KV store available, and I likely would have used that had I planned on Deploy from the start… but since I didn’t, I went with a static JSONC file and parsed it with JSONC-Parser. I absolutely love JSONC more than pure JSON… and the sooner we all move to it, the better.

  • I used no component library… everything is “handcrafted” HTML and CSS. Not even something like SASS… I still think there is a use for component libraries in bigger systems, but modern HTML & CSS is so powerful that it is just wonderful to keep the size down (the whole website is 100 KB).