Weeknotes SE01E01 2021-01-17

Non work

Kept up a fair bit of exercise this week.

Spent Tuesday evening making pizza with my daughter as part of her school work. We actually started on Monday evening because we made a sourdough pizza dough and we wanted a slow bulk fermentation. The pizzas came out well.

Some boring bits of life-admin.

I closed my old web hosting account at last. I moved a few old domains and personal blogs that I maintain for friends and family over to static sites using Eleventy. I tidied up an old blog of mine and did some housekeeping on old posts. There's a bit more to do, but most of the content I want live is live. I'll fix the various pieces of broken formatting and design as I get time and inclination. None of the sites gets any traffic to speak of.

I spent far too much time deciding on a title and numbering style for these weeknotes. Ultimately it doesn't matter; it's just me trying to bring a small sense of order.


The week started with a deadline. GDS is starting to use a new framework to assess the skills of people working in "digital, data and technology" professions. We had a submission deadline, so the early part of the week was spent checking all the submissions for my line reports. There's a new tool to submit the assessments. It's not the most user-friendly: there's no proper feedback on past submissions, and line reports cannot see when their line manager has validated their assessments. That's slightly annoying, because I really want to make sure my line reports' assessments are correctly submitted; it can affect their pay.

I spent much of the rest of the week in meetings. The routine ones included senior tech and leads planning sessions, as well as GOV.UK's developer community tech fortnightly. The ad hoc meetings centred around digital identity: a good overview with tech colleagues on the digital identity team, and a longer half-day session that was more general and included different disciplines.

Tech fortnightly was interesting - Richard showed us how he had repurposed one of our publishing apps as a personal blog.

I spent some time going through some threat modelling for one of our services and did a little research on man-in-the-middle (MITM) attacks.

I worked with one of the developers on the team to explore autoscaling a little.

I also looked into our current special leave policies and shared them around with some people at work. Several people in the developer community are finding it difficult to balance work and life at the moment. Parents are, anecdotally, finding it especially hard to be both a full-time employee and a teacher.

I started reading 97 Things Every Engineering Manager Should Know because Dean has started a book club.

I started to think a bit more about how to onboard developers.

How to view the raw commit data on GitHub

Today I learned that you can view the raw commit data for any commit on GitHub by appending .patch to the commit's URL. For example, https://github.com/octocat/Spoon-Knife/commit/d0dd1f61b33d64e29d8bc1372a94ef6a2fee76a9.patch

This includes the author's email address. I know you can easily view the git log if you pull a repo down, but I didn't know it was easily visible on GitHub.
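The .patch view is the commit rendered in git's email patch format, which you can reproduce locally with git format-patch. Here's a self-contained sketch using a throwaway repo and made-up author details:

```shell
# A throwaway repo to show that GitHub's .patch view matches
# `git format-patch` output. The author details below are made up.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
git config user.email "demo@example.com"
git config user.name "Demo Author"
echo "hello" > file.txt
git add file.txt
git commit -qm "Add file"
# --stdout prints the patch rather than writing 0001-*.patch to disk
git format-patch -1 --stdout HEAD
```

The From: line in that output is exactly where the author's email address appears.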

Deleting all docker containers and images quickly

For when you really want to blat everything.

docker rm (docker ps -a -q) && docker rmi -f (docker image ls -q)

I'm using fish so the command substitution is a little different to the usual bash syntax. If you're running a more standard shell, you'll probably need some $s in there:

docker rm $(docker ps -a -q) && docker rmi -f $(docker image ls -q)

docker rm removes containers by container ID. docker ps -a -q lists all containers in quiet mode, so just the container IDs.

The logic is the same for removing images. I've thrown in the force flag for good measure, and I've used && to chain the two commands so the image clean-up only runs if the container clean-up succeeds.

This will error out if you have no containers.

It's also very destructive. A nuclear option as it were.
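If you'd rather it not error out on an empty list, one way is to check there's something to remove first. A bash sketch (also guarded so it's a no-op on a machine without Docker):

```shell
# Only call docker rm / docker rmi when the lists are non-empty,
# avoiding the "requires at least 1 argument" error.
containers=$(docker ps -a -q 2>/dev/null || true)
if [ -n "$containers" ]; then
  docker rm $containers
fi
images=$(docker image ls -q 2>/dev/null || true)
if [ -n "$images" ]; then
  docker rmi -f $images
fi
```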

How to install DrRacket on macOS with Homebrew

Homebrew is excellent and you can install a minimal Racket by running

brew install racket

Once this finishes, you'll see instructions on how to install DrRacket with raco. Follow the advice:

raco pkg install --auto drracket

This takes a while. You might struggle to actually find the DrRacket.app or launch DrRacket.

If you've never used Racket before, you probably don't know raco either, and you won't know where it has installed everything.

Run the following to find out where raco has placed the .app:

racket -e '(require setup/dirs) (displayln (path->string (find-gui-bin-dir))) (for-each displayln (directory-list (find-gui-bin-dir)))'

This lists /usr/local/Cellar/minimal-racket/7.5/bin as the directory where DrRacket.app sits on my machine. Running /usr/local/Cellar/minimal-racket/7.5/bin/DrRacket.app/Contents/MacOS/DrRacket from the command line will launch DrRacket.

I symlinked it so it was a little easier:

ln -s /usr/local/Cellar/minimal-racket/7.5/bin/DrRacket.app/Contents/MacOS/DrRacket /usr/local/bin/drracket

Weeknote 20200216 - Three things about Python

I chaired the tech fortnightly. It is an open agenda; anyone can put a topic in and speak. I always forget how much work it is to co-ordinate the events. Finding people to talk and suggesting topics takes more time than I expect.

I presented three things I have enjoyed while learning some Python. It sparked some good discussion in a community whose day-to-day work is in Ruby, and I'm glad the presentation served as a conversation starter. I want the tech fortnightly to become more of a two-way discussion rather than a series of one-sided presentations. I was a little dismissive of JavaScript in the middle of my talk, which I regret; it's insensitive to those who work primarily in JavaScript.

I started working on a prototype to explore data from a satisfaction survey. This involved putting together a database schema from a spreadsheet of denormalised, anonymised data. In the process, I inadvertently created quite a large chunk of a generic survey data design.
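For a flavour of what a generic survey design looks like, here's a hypothetical sketch using sqlite3. The table and column names are mine for illustration, not the actual schema from the prototype:

```shell
# Hypothetical generic survey schema, sketched with sqlite3.
# All names are illustrative, not the real design.
set -e
db=$(mktemp -d)/survey.db
sqlite3 "$db" <<'SQL'
CREATE TABLE surveys   (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE questions (id INTEGER PRIMARY KEY,
                        survey_id INTEGER NOT NULL REFERENCES surveys(id),
                        text TEXT NOT NULL);
CREATE TABLE responses (id INTEGER PRIMARY KEY,
                        survey_id INTEGER NOT NULL REFERENCES surveys(id),
                        submitted_at TEXT);
CREATE TABLE answers   (id INTEGER PRIMARY KEY,
                        response_id INTEGER NOT NULL REFERENCES responses(id),
                        question_id INTEGER NOT NULL REFERENCES questions(id),
                        value TEXT);
SQL
sqlite3 "$db" "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name;"
```

The point of the answers table keying on both response and question is what lets a single design hold any survey, which is roughly how the prototype ended up generic.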

I also attended a strategy workshop with other lead developers, the head of tech and senior product managers. Thinking strategically is a skill I'm not yet practised in; I still fall back to thinking about solutions and immediate implementation detail.

I spent some time tidying up my Vale linting setup. Writing to Vale's exacting standard is hard!

Weeknote 20200119 - Python, pip, dependencies

I started working with a new team focussed on using data science to make GOV.UK better for the end users. My previous team focussed on long term maintenance of the platform. Data science is a whole new world for me. I'm excited to get stuck in and learn.

The code is largely in Python, and this is the first time I have worked with a significant piece of production Python code. I immediately bumped into problems trying to install the project's requirements.

In the project, two of the top level dependencies rely on Pygments. Pip installed an up to date version of Pygments when it resolved the first dependency, but the second dependency required an earlier version of Pygments. I think this was the result of merging Dependabot PRs without running pip install and ensuring a clean install.

Pip, to my surprise, does not resolve dependency versioning issues and I am reminded of DLL Hell.

My current understanding of the workflow on a Python project is that developers occasionally use pip freeze to write the installed packages to a requirements file, and subsequent installs use that file. Apart from the versioning issues, it's also easy to commit unwanted packages to a project if developers are not careful.

I spent some time investigating how other projects manage Python dependencies. A colleague pointed me towards PEP 518, which looks interesting, but in my limited view of repositories around GDS I did not see a pyproject.toml file being used, so I don't know whether people are doing this in the wild or not.

I found pipdeptree which outputs a tree view of dependencies similar to Bundler's Gemfile.lock.

I discussed the problem with one of the other developers on my new team and went digging around some other Python projects. We found a pattern of two requirements files: one hand-crafted file for the project's top-level dependencies, and one autogenerated file with all dependencies and their sub-dependencies. The advantage is that developers are more aware of the dependencies they are adding, which should stop unwanted or unneeded packages sneaking their way into a project.

Going forward, I want to add some automated process around the projects to catch these errors earlier. The first thing to do is spin up a new virtual environment and run pip install -r requirements.txt on each branch push. After that, I want to put some linting in place. The code is still small enough for this not to be too daunting.
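A minimal sketch of how the two-requirements-files pattern and the clean-install check might fit together. The file names and the (empty) dependency list here are illustrative; a real project would list its top-level dependencies in the hand-crafted file:

```shell
# Illustrative sketch: hand-crafted top-level deps, a clean install
# into a fresh virtual environment, then a full pinned snapshot.
set -e
tmp=$(mktemp -d)
cd "$tmp"
printf '# top-level dependencies go here\n' > requirements-app.txt
# A fresh virtual environment proves the install works from scratch,
# which is what a per-branch CI job would do.
python3 -m venv .venv
./.venv/bin/pip install -q -r requirements-app.txt
# Capture the fully resolved set, sub-dependencies and all.
./.venv/bin/pip freeze > requirements.txt
```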

How to run a scheme script from the command line

Here's a small bit of scheme code:

(define (adding a b)
  (+ a b))

(display (adding 7 4))

To run this directly from the command line, you can use the following:

mit-scheme --quiet < script.scm

How to force line breaks in markdown

I always forget this.
I don't know why, because my editor shows big nasty red blotches whenever I have trailing spaces.
To force a line break in markdown, end your line with two spaces.

A limerick

A couple of devs with nice Macs
Thought up some quite clever hacks
They opened PRs
Approved them too fast
Now production is being rolled back

How to output the current RSpec example

Here's a snippet you can add to your RSpec config to print the full description of the example being run, along with some Sidekiq state:

config.before do |example|
  puts "----- state ----"
  puts example.metadata[:full_description]
  puts "inline: #{Sidekiq::Testing.inline?}"
  puts "fake: #{Sidekiq::Testing.fake?}"
  puts "----------------"
end

Why on earth would you want to do this? Sometimes you are working with something like Sidekiq, where state can bleed between tests (for example, changing from inline to fake). This little snippet helps you figure out where it's happening without having to investigate all your test files.
