Finding the photos you took but didn't realise · 6 days ago

Having just acquired a camera famed for its suitability for urban and street photography, I find we're all being asked to stay at home – my timing is great as always. Obviously you can take a lot of pictures at home – I certainly did during my 365 project, where having some white card to use as a backdrop for random household items proved invaluable – but I don't really feel like doing that right now. Instead, I've decided to go back through all my old photos and try to improve my basic editing skills.

In the last few years Laura and I were fortunate to go on a series of road trips through parts of the US, and some city trips to Nantes in France and Helsinki in Finland. During these trips I took a lot of photos but published very few of them. In part this was because I'm not a fan of Lightroom, and so dreaded processing them, but also because I found a lot of the pictures didn't match the scene as I felt it emotionally or remembered it afterwards. Now, no amount of editing will make up for bad composition, of which there is a fair amount in my library, but even where I felt I got the composition right, the photos didn't convey how the scene felt; it's these photos I've been going back to, to see if I can improve them and expose some hidden gems.

The kind of things I'm looking to adjust are exposure, contrast, colour temperature and balance – that sort of thing. In terms of pixel editing, the most I'm doing here is cropping or straightening, and ideally I'll have got that right in camera anyway. Basically, I'm just doing the things you could have done in a darkroom – if you've not read it, I can recommend this old article about how famous film prints had their exposure tweaked across the picture.

My tool for this is Lightroom, and as such I've been watching a lot of videos about how to edit photos in various styles. Whilst some of these videos do stray into "swapping the sky for a better one" territory, or even into "please buy my Lightroom presets", I've learned something from each one, despite not all of it being appropriate for me (or the presenter's bombastic style not being to my tastes :). Here's a few that I can recommend if you'd like to get started with this too:

As I continue to watch these videos and expand my knowledge of what I can do, and just get practice in so that I develop an intuition for what works and what doesn't, I've been posting a picture from my backlog to my Flickr account every day or so.

The other restriction I've given myself is that in general I will just use a limited set of colour profiles. Given these are all older pictures, I've kept it to Classic Chrome and Monochrome+Y on pictures taken with my old Fujifilm X-E2, and a similarly restrictive set of Adobe colour profiles that look similar for those taken with my Canon 7D. The reason for this is to force me to do the editing, rather than letting the colour profile change a lot of things without my understanding why.

The picture below, of a painted street box in Helsinki, is one of the first where I managed to get the photo to convey what I saw rather than what came straight out of camera:

Bright Helsinki

The day was quite overcast at that point, if I remember correctly, and so the picture came out of camera with a very flat look. For this one I tightened up the crop a little, upped the overall exposure, then gave the tone curve an S-shape to provide slightly more contrast. I also used the clarity slider, which adjusts contrast in the mid-tones of the image, to further bring out the detail in the neutral background.

One thing I didn't change, and as a rule don't, is colour saturation. It's very easy to overdo that, and I find that adjusting the light and dark to make a photo pop results in an image more to my tastes.

This photo was taken from a hotel window in Nantes, and was me just trying to capture the cityscape a little:

High up in Nantes

The sun at this point was just coming up, and out of the camera I'd lost all the detail in the street due to the high contrast of the light itself. So here I had to do the opposite of the previous picture and remove contrast rather than add it. I also tweaked the colour temperature to bring back the warmth of the scene, which had been lost as the camera made everything look more white than yellow. The result captures my emotional memory of the light of Nantes in the morning much better than the series of bits that were saved to the flash card.

This final example is one that, quite frankly, I'm amazed is mine at all – it's something I have no right to have taken:

Through the pass

This was taken in the Black Canyon of the Gunnison National Park in Colorado. At the time there was a crazy snow storm that had swept in: the previous day we were having to buy hats to keep the sun from burning us, and the following morning it was a white-out. I took this picture of the break in the rocks, but out of camera it looked very dull, as the diffused light caused everything to come out looking flat.

Interestingly, it really didn't take much to make this pop, which is the real lesson for me here – I went from a dull-looking picture to one I'm proud of very quickly. I upped the exposure at the top end of the tone curve to brighten up the whites – which is how I remember the scene in that snow storm – and to separate the foreground rocks from the background. I also again adjusted the clarity just a little to make the features of the rocks a little more distinct. And then finally, what got me to my memory was adjusting the colour temperature down, the opposite of what I did on the last photo, to capture that slightly colder look. And bam, it turned out that one of my favourite photos I've ever taken had sat in my photo library for the last two years and I didn't know it.

Thus, if you're stuck at home wishing you could be out taking pictures, perhaps instead go back and see if there are any hidden gems in your photo library that just happened to escape your attention at the time.


Recording light again · 13 days ago

My stats in Lightroom tell a sad tale: about ten years ago I was taking four to five thousand photos a year with an actual camera, but two years ago that fell below a thousand for the first time, and then last year it halved again.

In part, that's just how inspiration goes, and I've had other focuses, but it was also partly down to two practicalities that put me off: firstly, my camera gear didn't encourage me to carry it everywhere, and secondly, I disliked the tools I had for processing photos. This post is mostly about my attempt to solve the first part of that.


About ten years ago, when I first seriously got into photography as a hobby, I graduated from my first DSLR to shooting with a lovely but large Canon 7D. For a while, particularly when I did my 365 challenge, I carried it everywhere. But at some point, even though I mostly just used a fixed 35mm prime, that bulk became too much to carry everywhere, and I found myself heading out with a camera less and less frequently.

To fix this, about five years ago, I swapped over to a Fujifilm X-series compact system camera, the X-E2, which was low down in their range but had two vital features: it was both small and light. The X-E2 still had proper interchangeable lenses, so I could pander to my love of 35mm primes with their lovely narrow depth of field, but it was also of a size where I could shove it in my bag without adding much bulk.

Along the carriage

The problem was that whilst I'm addicted to prime lenses, I also do like having options. So I'd usually have my 35mm prime on the Fujifilm, but then I'd want to carry the also very lovely, but quite bulky, 12mm wide-angle prime I had. And sometimes I'd want to take my 55-200 zoom if I was going somewhere I might need it – once you've managed to get an unexpectedly good shot with a lens you're loath to leave it behind, just in case.

Great Egret Preening

And so I ended up with a camera bag full of stuff on the off-chance I might need it, which put me right back at having a bulky set of bits to carry around, and so I kinda stopped carrying them.

The other downside of the Fujifilm was that those earlier models were quite slow to focus. So on some trips, like the last time we went to Nantes, I'd still take the Canon 7D with me, armed with just the Sigma 30mm prime, and be frustrated with its bulk whilst enjoying the ability to get shots quicker (even if this example below looks static, it was taken from a moving boat :).

Pastel rope

Add to that my other interests, and this led to me just using my phone to take snaps and forgetting about artistic photography for a while.

But of late I’ve wanted to get back into photography. Partly as I wanted something that I’d do for fun alone, and partly as I was inspired by seeing great photos from my friends, such as Dave, Morag, Tim, Jason, and Karen – each has a very different style from me, but it’s just lovely to see what they’ve been making and it made me want to try again.


Thus I decided at the end of last year to stop having two cameras that I didn't want to carry around, and get one that I would. Originally I was looking at trading in both my 7D and X-E2 and getting a more recent Fujifilm body like the X-30, which had faster autofocus whilst remaining at the lighter end of their range, but that didn't fix the issue with my lack of discipline around taking all my lenses with me everywhere just in case.

So I decided to do something radical/silly and get a camera with just one fixed lens: not just a fixed focal length, but permanently fixed to the camera. If I can't change the lens, there's no need to take all those extra lenses around with me, is there? To this end I got a Fujifilm X100F, which has a 23mm f/2 lens on it. That focal length sits somewhere between my two regular primes of 35mm and 12mm, making it good for the urban style of photography I usually go for:

Kings hiding in the tree

But it is also still close enough to let you take pictures of people in situations:

The laughing drummer

The camera choice is/was a bit of a gamble – it may be that over time I find it a frustrating choice (when I explained what I was doing to Dave, the friend I mentioned above, he asked "why do you hate yourself?" :). However, one thing I love about prime lenses is how they force you to consider things differently, to look for a non-obvious take that fits within the limitations of the lens you have to hand. I'm not a hugely artistically creative person, so I find that forcing these constraints on myself pushes me to be more creative than I'd otherwise be.

Abstract aluminium

The other nice thing about the X100F is that it's still a Fujifilm camera, which means I still get to use the lovely film simulations that Fujifilm cameras come with. The above photos mostly use the Acros black and white film simulation, but I also like the Classic Chrome colours:

Morning goods

And it's with these two film simulations that I'm trying to fix a little of the other reason I didn't take many photos, which was that I became obsessed with trying to edit them just so. Instead I've set myself a soft limit: I can use just these two film modes, and then only minimal edits in Lightroom before publishing. This way I worry more about the moment of capture than about spending an age editing.


So I'm back taking pictures again: perhaps still not at a prolific rate, but I'm having more fun experimenting with this simplified camera and simplified workflow.

I've resurrected my Flickr account and am posting what I take there, if you're interested in seeing what I capture.

And if you do, feel free to take the time to comment if you see something you like or dislike; I feel like I'm restarting my photography in some ways, and I recall that one of the best things about my 365 was the constructive feedback from others that helped me improve.

Easy and not-so easy listening · 133 days ago

I've been getting back into podcast listening of late, and rather than my usual guitar or tech podcasts, I've been trying to listen to a mix of things to help increase my understanding of the current issues we find ourselves in around politics and the environment, mixed up with some podcasts that look at more light-hearted topics that are still educational in some way.

Here's a list of things I can recommend, grouped by topic; it also roughly happens to be the priority order in which I listen to them when they pop up on my podcast player.

Politics

Polarized by the RSA – this is an attempt to look at why the current political environment is as divisive as it is. It (usually) stays away from the day-to-day politics, and tries to understand the general landscape of why we seem to have ended up in a situation where we have two sides that share no common ground or hope for compromise.

I binge-listened to this from the start over the course of a couple of months, and if you do try this one I recommend listening in order.

Talking Politics – whereas Polarized tries to take a macroscopic look, Talking Politics is more a mix of current affairs analysis and longer term trend reviews, but is always more analytical and thoughtful, avoiding personality politics and focussing on the actual political/legal side of things.

I tend to prefer the longer-term episodes, but in the current run-up to a general election I do find the more microscopic view useful.

Environmental

The Beam Podcast – The Beam is a magazine publication that looks into environmental topics, and their podcast continues that theme. I do like that it takes a broader topical look, but compared to the politics podcasts I find it lacks a little actionable bite – though that's probably more a reflection on the domain. Still, worth listening to.

General Interest

Reply All – Reply All attempts to explain how the modern Internet impacts life, from a non-technical standpoint. It has a standard set of topics it cycles through, my favourite of which is super-tech-support: here they dig into things like how someone's Snapchat account got hacked, or how someone trying to listen to relaxation sounds on their Alexa got something with creepy footsteps on it – all of which exposes how interconnected everything is, and how nothing comes from where you expect.

99% Invisible – Each week 99PI picks a different niche topic and takes a detailed look at it. It usually has a slight design bent, but topics range from automating pepper farming (which made me aware of how much automation in farming ruins bio-diversity), to the design of call-holding systems, to how placing a garden-store Buddha statue can significantly change crime patterns. It's not very deep, but it's usually quite interesting, and makes a nice antidote to the more serious podcasts I listen to.

The Incomparable – The Incomparable is a film/book/game review show, commonly with a science fiction theme, but not exclusively. I tend to pick and choose which episodes to listen to on this one, as either I'm not interested in the latest Marvel/Star Wars films, or I'm trying to avoid spoilers. However, it has introduced me to some great classic films, like The Thin Man, and I enjoy their annual review of nominations for the two big sci-fi book awards as a way of finding new reading material.


Remember Grandad · 296 days ago

My Grandad passed away recently, someone who was a large part of my childhood. As I've got older and things in life have got in the way, I'd not seen him (or the rest of my family) nearly as often as I should have, but he'll always be a special person to me, and someone who had a large impact on my life. It's not just me: he and Nanna had five daughters, so I have a lot of cousins, and we all share similarly fond memories of our Grandad.

Of the countless memories I have of him, there are two that for some reason stick out right now, both I guess from when I was around 12 or thereabouts.

The first, I think, captures his fun side. One evening, whilst everyone else had their supper cup of tea in the living room, Grandad and I went into the kitchen to have a biscuit with our tea – he was always a fan of gingernuts – and I realised after nattering with him for a while that we'd eaten the entire packet between us! A slightly mischievous act, and I've no idea why it sticks with me. Perhaps because it's one of the few memories that's just me and him rather than the larger family unit. But it also shows the child-like sense of fun he had, and this moment seemed to cement part of that for some reason. To this day I still have a (hopefully) similar sense of childish fun, which I attribute in large part both to him and to my gran on the other side of the family.

The other memory that came to mind is of him being amazed at how brattish I was being about not getting to play an arcade game I really wanted to play (as ever, it was something my "cooler" friends were playing, and so I felt the need to play it just to be on par, but didn't have the money to do so). He wasn't being nasty about his observation, just bemused I think – hardly surprising given what his generation had to deal with at a similar age. But despite clearly seeing me for the brat I was a lot of the time as a child, he still treated me as someone worthy of attention and of playing with. I hope that I can be as inclusive and as generous to others as he was, and I guess this is a textbook example of unconditional love. Dear me, I must have been a major pain in the neck as a child (sorry Mum & Dad), but Grandad still gave me attention like the rest of his grandkids.

His funeral was this Monday gone, and at the get-together of family and friends afterwards it was lovely to see everyone share their happy memories. Grandad disliked dark clothing, so we all tried to wear something bright – thankfully I'm well stocked for bright floral shirts. But the lasting memory will be watching the set of grandkids playing the games he'd play with us all – we brought in the marbles and the dominoes and the other toys he'd spent hours playing with us over the years, and we had some more fun in his memory. To me that's a near-perfect way to remember his impact on us.

Photo of my Grandad with me and my sister, on a beach, possibly mid-80s

Grandad's passing was a reminder to me that our time is finite, something that's easy to forget in the day-to-day. That blue guitar I recently completed, which everyone has said nice things about, had been stuck in limbo waiting for me to finish it as I procrastinated out of fear of things not being perfect. But Grandad's passing spurred me to just get on with it – stop worrying about the maybes, just do your best and give it a go. So that guitar is there thanks to his memory, and I'll always think of him now when I think of it.

I was sharing my memories above with my Mum after the funeral, and she remembers me at a similar age complaining for the n-th time that I was bored (I really was a terrible child), and Grandad turning around and saying “life is boring – you have to make it not boring”. Words that didn’t take at the time, but speak to me now. This is definitely one of the reasons I’m very fortunate to have Laura in my life: she helps life not be boring, both by being there and by encouraging me to do things I might not otherwise try.

And that saying is also the broader point of this note: time is limited, and whilst I don't think you can treat every moment as precious (that'd be as tiring as it is impractical), it's worth being reminded that you can't put things off indefinitely. Whatever it is that's important to you, make sure you make time for it, as only you can make it happen: life is boring, you have to make it not boring.

I’ll try my best Grandad.


Better testing for golang http handlers · 770 days ago

I'm writing this up as it doesn't seem to be a common testing pattern in the Go projects I've seen, so it might prove as useful to someone somewhere as it did for me in a recent project.

One of the things that bugs me about the typical golang http server setup is that it relies on hidden globals. You typically write something like:

package main

import (
    "net/http"
    "fmt"
)

func myHandler(w http.ResponseWriter, r *http.Request) {
     fmt.Fprintf(w, "Hello, world!")
}

func main() {
     http.HandleFunc("/", myHandler)
     http.ListenAndServe(":8080", nil)
}

This is all lovely and simple, but there's some serious hidden work going on here. The bit that's always made me uncomfortable is that I'm setting up all this state without any way to track it, which makes it very hard to test, particularly as the http library in golang doesn't allow for any introspection of the handlers you've set up. This means I need to write integration tests rather than unit tests to have confidence that my URL handlers are set up correctly. The best I've normally seen done, test-wise, with this setup is to test each handler function in isolation.

But there is a very easy solution to this; it's just not presented as something you'd ever really do in the golang docs – they imply no one would ever need to. Clearly their attitude to testing is somewhat different to mine :)

The solution lies in that nil parameter on the last line, of which the golang documentation states:

“ListenAndServe starts an HTTP server with a given address and handler. The handler is usually nil, which means to use DefaultServeMux.”

That handler is a global variable, http.DefaultServeMux, which is the request multiplexer that takes the incoming requests, looks at the paths, and then works out which handler to call (including the default built-in handlers that return 404s etc. if there's no match). This is all documented extremely well in this article by Amit Saha, which I can highly recommend.

But you don't need to use the global: you can just instantiate your own multiplexer object and use that. If you do this, your code stops using side effects to set up the http server and suddenly becomes a lot easier to reason about and test.

package main

import (
    "net/http"
    "fmt"
)

func myHandler(w http.ResponseWriter, r *http.Request) {
     fmt.Fprintf(w, "Hello, world!")
}

func main() {
     mymux := http.NewServeMux()
     mymux.HandleFunc("/", myHandler)
     http.ListenAndServe(":8080", mymux)
}

The above is functionally the same as our first example, but no longer takes advantage of the hidden global state. This in itself may seem not to buy us much, but in reality you’ll have lots of handlers to set up, and so your code can be made to look something more like:

func SetupMyHandlers() *http.ServeMux {
     mux := http.NewServeMux()

     // set up dynamic handlers
     mux.HandleFunc("/", MyIndexHandler)
     mux.HandleFunc("/login", MyLoginHandler)
     // etc.

     // set up static handlers
     mux.Handle("/static/", http.StripPrefix("/static/", http.FileServer(http.Dir("/static/"))))
     // etc.

     return mux
}

func main() {
     mymux := SetupMyHandlers()
     http.ListenAndServe(":8080", mymux)
}

At this point you can start using SetupMyHandlers in your unit tests. Without this, the common pattern I'd seen was:

package main

import (
    "net/http"
    "net/http/httptest"
    "testing"
)

func TestLoginHandler(t *testing.T) {

     r, err := http.NewRequest("GET", "/login", nil)
     if err != nil {
          t.Fatal(err)
     }
     w := httptest.NewRecorder()
     handler := http.HandlerFunc(MyLoginHandler)
     handler.ServeHTTP(w, r)

     resp := w.Result()

     if resp.StatusCode != http.StatusOK {
          t.Errorf("Unexpected status code %d", resp.StatusCode)
     }
}

Here you just wrap your specific handler function directly and call that in your tests. This is very good for testing that the handler function works, but not so good for checking that someone hasn't botched the series of handler registration calls in your server. Instead, you can now change one line and get that additional coverage:

package main

import (
    "net/http"
    "net/http/httptest"
    "testing"
)

func TestLoginHandler(t *testing.T) {

     r, err := http.NewRequest("GET", "/login", nil)
     if err != nil {
          t.Fatal(err)
     }
     w := httptest.NewRecorder()
     handler := SetupMyHandlers()  // <---- this is the change :)
     handler.ServeHTTP(w, r)

     resp := w.Result()

     if resp.StatusCode != http.StatusOK {
          t.Errorf("Unexpected status code %d", resp.StatusCode)
     }
}

It's the same test as before, but now I'm also checking that the actual multiplexer used by the HTTP server works, without having to write an integration test for that. Technically, if someone forgets to pass the multiplexer to the server then that won't be picked up by my unit tests, so they're not perfect; but that's a single-line mistake that breaks every URL handler at once, so I'm less concerned about a developer missing that than about someone forgetting one handler registration in dozens. You'll also automatically be testing any new http wrapper functions people insert into the chain. This could be a mixed blessing perhaps, but I'd argue it's better to make sure the wrappers are test friendly than to have less overall coverage.

The other win of this approach is that you can also unit test that your static content is being mapped correctly, which you can't do using the common approach: you can happily test that requests to the static path set up in SetupMyHandlers return something sensible. Again, that may seem more like an integration-style test than a unit test, but if it lets me find and fix bugs earlier in the dev cycle then that beats wasting time waiting for CI to pick up my mistake.
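
As a rough sketch of what such a test could look like (the /static/css/site.css path here is purely illustrative – point it at a file you know exists relative to where your tests run):

func TestStaticHandler(t *testing.T) {
     // Hypothetical static asset; substitute a file that actually
     // exists in your static directory when the tests run.
     r, err := http.NewRequest("GET", "/static/css/site.css", nil)
     if err != nil {
          t.Fatal(err)
     }
     w := httptest.NewRecorder()
     handler := SetupMyHandlers()
     handler.ServeHTTP(w, r)

     resp := w.Result()

     if resp.StatusCode != http.StatusOK {
          t.Errorf("Unexpected status code %d", resp.StatusCode)
     }
}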

In general, if you have global state, you have a testing problem, so I’m surprised this approach isn’t more common. It’s hardly any code complexity increase to do what I suggest, but your test coverage grows a lot as a result.


Some luthier notes · 859 days ago

I've spent the week locked in Makespace working on guitars, and thought I'd write up some notes on what I've been doing, to give an insight into what goes into making them. You can see it here on the Electric Flapjack blog.


Managing GOPATH for multiple projects with direnv · 872 days ago

I'll stop with the golang tips shortly, but here's another quick time saver in case you've not seen it before: you can use direnv to manage your GOPATH settings for each of your projects.

direnv is a small utility that will set/unset environment variables as you enter/leave directories. It's dead easy to set up, and is in homebrew if you're on a Mac. This means I can set a GOPATH specifically for each Go project, without having to remember to do GOPATH=$PWD each time – direnv just sets it as I change directory into the project, and unsets it when I move away.
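
As a minimal sketch of what that looks like in practice, you drop a .envrc file in the project root containing something like:

# .envrc – direnv evaluates this when you cd into the directory
export GOPATH=$PWD

then run direnv allow once in that directory to mark it as trusted, and from then on GOPATH follows you in and out of the project automatically.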

This can be useful for other things too, like setting PYTHONPATH or other project-specific environment variables.

Hat tip to Day Barr for alerting me to that one.


Handling golang third party dependancies robustly · 880 days ago

I wrote recently about my thoughts on golang, concluding that although far from perfect, I quite like the language as it makes solving a certain class of problem much easier than traditional methods.

One of the things I was a bit dismissive of was how it manages packages. Whilst I'm not a fan of its prescriptive nature, its out-of-the-box behaviour is, to my mind, just not compatible with delivering software repeatedly and reliably for production. However, it's fairly easy to work around this; I've not seen anyone else use this particular approach, so I thought I'd document it for future people searching for a solution.

The problem is this: by default golang has a nice convenience feature whereby third party packages are referred to by their source location. For example, if I want to use GORM (a lightweight ORM for Go), which is hosted on github, I'll include it in my program by writing:

import "github.com/jinzhu/gorm"

And as a build stage I’ll need to fetch the package by running the following command:

go get -v github.com/jinzhu/gorm

What this command does is check out the package into your $GOPATH/src directory at $GOPATH/src/github.com/jinzhu/gorm, doing a git clone of whatever their latest master code is.

On one hand this is very nice: the knowledge of how to find and fetch third party dependencies is built in. However, it enforces two things that I don't want when I'm trying to build production software:

  1. I now rely on a third party service being around at the time I build my software
  2. The go get command always fetches the latest version, so I can’t control what goes into my build

Neither of these is something I'm willing to accept in my production environment, where I want to know I can successfully build at any time, and I want full control over what goes into each build.

There is a feature of the golang build system you can use to help solve this, it's just not that obvious to newcomers, and on its own it isn't enough. So here's my solution, based on the assumption that you're already using git for version control and have $GOPATH pointed at your project's root folder:

  1. Clone the project into your own code store repository. I always do this anyway, as you never know when third party projects will vanish or change significantly.
  2. Create a vendor directory in your project. The golang build system will look in $GOPATH/vendor for packages before looking in the $GOPATH/src directory.
  3. Add as a git submodule the project at the appropriate point under vendor. For GORM that’d be vendor/github.com/jinzhu/gorm, similar to how go get would have put it in the src directory.
  4. Replace your go get build step with a git submodule update command (example commands below).
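
As a rough sketch of steps 3 and 4, assuming a hypothetical mirror of GORM in your own code store (substitute your real mirror URL):

git submodule add https://git.example.com/mirrors/gorm.git vendor/github.com/jinzhu/gorm
git submodule update --init --recursive

The submodule then stays pinned at whichever commit you've checked out and tested against, until you deliberately move it.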

And voila, you're done. Using git submodules means you can control which commit of the third party project you're using, and by pointing it at your own mirror you can ensure that, as long as your own infrastructure is up, you can still deliver software regardless of external goings-on.

As a friend of mine pointed out, there are tools you can use to manage third party code into the vendor location, such as vndr, but the fewer tools I need to install to build a product the better – still, if you want to avoid creating the directories yourself then you should give it a look.


Some thoughts on Golang · 888 days ago

The Go programming language has been around for about a decade now, but in that time I've not had much call to create new networked services, so I'd never given it a go (I find I can't learn new programming languages in the abstract; I need a project, otherwise the learning doesn't stick). However, I had cause to redo some old code at work that had grown a bit unwieldy in its current Python + web framework du jour, so this seemed like a chance to try something new.

I was drawn to Go by the promise of picking up some modern programming idioms, particularly around making concurrency manageable. I'm still amazed that technologies like Grand Central Dispatch (GCD), which save programmers from worrying about low-level concurrency primitives (which, as weak-minded humans, we invariably get wrong), are not more widely adopted – modern machines rely on concurrency to be effective. In the Bromium Mac team we leaned heavily on GCD to avoid common concurrency pitfalls, and even then we created a lot of support libraries to simplify it even further.

Modern web service programming is inherently a problem of concurrency – both on the input end, where you're managing many requests to your service at once, and on the back end, where you're trying to offload long-running and periodic tasks away from the request serving path. Unfortunately the dominant language for writing web services, Python, is known to be terrible at handling concurrency, so you end up offloading concurrency to other programs (e.g., nginx on the front end, celery on the back end), which works, but means you can only deal with very coarse-grained parallelism.

Go seems to have been designed to solve this problem. It's a modern language, with some C-like syntax but free of the baggage of memory management and casting (for the most part), and it makes concurrency a first-class citizen in its design. Nothing it does is earth-shatteringly new – the goroutine concurrency primitive is very old, and the channel mechanism used to communicate between these routines is standard IPC fare – but what it pulls off is putting these things together in a way that is very easy to leverage. It lacks some of the flexibility of the aforementioned GCD to my mind, but ultimately it is sufficiently expressive that I find it very productive for writing highly concurrent code safely. It actually makes writing web services that have such demands fun again, as you end up with a single binary that does everything you need, removing the deployment tedium of the nginx/python/celery pipeline. You can just worry about your ideas, which is really all I want to do.
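
To illustrate how little ceremony is involved (a toy sketch rather than anything from a real service), fanning work out to goroutines and collecting the results over a channel looks something like this:

package main

import "fmt"

func main() {
     jobs := []string{"a", "b", "c"}
     results := make(chan string)

     // Start one goroutine per piece of work; each sends its result
     // back over the channel rather than touching shared state.
     for _, j := range jobs {
          go func(j string) {
               results <- "processed " + j
          }(j)
     }

     // Collect exactly as many results as goroutines we started.
     for range jobs {
          fmt.Println(<-results)
     }
}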

Another nice feature is the pseudo object orientation system in Go. Go has two mechanisms that lead you in the same direction as traditional OO programming – structs and interfaces. Structs let you define data structures much as you might in C, but you can use composition to get a sort of inheritance if you need it, and interfaces just define a list of function signatures. But an interface isn't tied to a struct as it might be in traditional OO languages; they're defined separately. This seems weird at first, but is really quite powerful, and makes writing tests very easy (and, again, fun), as it means you can "mock" say the backend object simply by writing an object that satisfies an interface, rather than worrying about actual inheritance. Again, it's nothing new, it's just pulled together in a way that is simple and easy to be productive with.
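
A small sketch of what I mean (all the names here are invented for illustration): the code under test only cares that something satisfies the interface, so a test can swap in a trivial fake with no inheritance gymnastics:

// The interface is defined by the consumer, not the implementation.
type UserStore interface {
     GetName(id int) (string, error)
}

// The real backend satisfies it implicitly, simply by having the method...
type dbStore struct{} // would hold a real DB handle

func (s dbStore) GetName(id int) (string, error) { return "name from db", nil }

// ...and so does a hand-rolled fake for use in tests.
type fakeStore struct{ name string }

func (s fakeStore) GetName(id int) (string, error) { return s.name, nil }

// Greet doesn't know or care which of the two it was given.
func Greet(store UserStore, id int) (string, error) {
     name, err := store.GetName(id)
     if err != nil {
          return "", err
     }
     return "Hello, " + name, nil
}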

The final nicety I'll mention is an idiom that in the Mac team at Bromium we forced on ourselves – explicit error handling, with errors returned explicitly next to the valid result. This makes writing code that handles errors feel really natural, which is important, as programmers are inherently lazy people and a common cause of bugs is that the programmer simply didn't think about error handling. Go's library design and error type make it easy.
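
The shape of it will be familiar to anyone who's glanced at Go (a trivial sketch – the config.json filename is just an example, and it assumes the os and log packages are imported):

f, err := os.Open("config.json")
if err != nil {
     // The error arrives right next to the value, so dealing with it
     // here is the path of least resistance rather than an afterthought.
     log.Fatalf("failed to open config: %v", err)
}
defer f.Close()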

For all this, Go has its flaws. Out of a necessity to allow values that may have no value, Go has a pointer type. But it also makes accessing concrete values and pointers look identical in most cases, so it's easy to confuse the two, which can occasionally lead to unexpected bugs – particularly when looping over things, where you take a pointer to the loop variable rather than to the value you actually mean. The testing framework is deliberately minimal, and the lack of class-based testing means you can't really use setup and teardown methods, which leads to a lot of boilerplate code in your tests – a shame, as otherwise Go makes writing tests really easy. And let's not get started on the package system in Go, which is opaque enough to be a pain to use.
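
The loop case is the classic trap – a quick sketch of the sort of bug I mean (newer Go releases have since changed the loop variable semantics to close this particular hole, but this is the behaviour I'm describing):

items := []int{1, 2, 3}
var ptrs []*int
for _, v := range items {
     // Bug: v is a single loop variable that gets reused each time
     // around, so every entry of ptrs ends up pointing at the same
     // int. Taking a pointer to the slice element (&items[i]), or
     // copying v to a fresh variable inside the loop, is what you
     // actually want.
     ptrs = append(ptrs, &v)
}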

It's also a little behind, say, Python in terms of full-stack framework support. The Go community seems against ORMs and Django-style stacks, which does make it hard to justify its use if you're writing a website of any complexity for humans to use. There is at least a usable minimal ORM in the form of GORM that saves you from writing SQL all the time.

But for all its flaws I really have taken to Go; I've now written a small but reasonable amount of production-quality code in it, and I still find it a joy to use because it's so productive. For writing backend web services it's great. There's not enough mature framework support yet for me to use it instead of Django/Python for a full user-facing interactive website, but for IoT backends and the like it's really neat (in both senses).

If any of this sounds interesting to you then I can recommend The Go Programming Language book. Not only is it easy to read, it gives you good practical examples that let you see the power of Go's primitives very quickly. If you think I've got it wrong with any of my criticisms of the language then do let me know – I'm still a relative newbie to golang and very happy to be corrected so I can be even more productive with it!


Practice practice · 937 days ago

About 18 months ago I wrote something here about how I was trying to get better at playing guitar, and how I was going to try to post a video to YouTube once a week with a new song snippet as a way of having some discipline. If you do recall that, you'll also know I didn't do it (I think I managed one more after that post).

But the reason for not doing it was at least reasonable: I actually started taking lessons, and my teacher makes me practice daily, so the discipline sorted itself out, and saved you, dear reader, from lots of bad cover songs.

Instead you can watch some bad bits of me doing blues style improv from my last daily practice session, warts and all:

Now, I may not be giving Joe Bonamassa cause to question his career choices, but I look at this and am somewhat amazed at how far I've managed to come in 18 months thanks to David, my guitar teacher. When I wrote that original post back in May I was just trying to copy bits of other songs, and here I am today able to throw down a 12-bar blues backing track and then ad-lib over it, even throwing in a bit of wah pedal, to my heart's content (albeit in a slightly repetitive and formulaic way :).

Partly this is the direction David and I have been working towards – rather than learning to cover old songs or work towards grades, I've just been trying to understand the building blocks for playing the blues: the grammar and vocabulary that make up a song. I may not yet be writing more than basic sentences, but despite the fact that learning a song occasionally feels like it might be more satisfying in the short term, it's when I get time to do a little bit of ad-lib like the above that it all pays off. Ask me to play a song and I'm hopeless, but give me a looper pedal and I can entertain myself for an age with things like this.

The closest I get to playing an actual song is things like this, where I'm riffing on the great Jeff Beck Group track Rock My Plimsoul (which in turn was riffing on B.B. King's Rock Me Baby):

I’ve still a long way to go, of that I’ve no delusion – the open stages of Cambridge are in no danger of seeing me any time soon. But it’s nice to occasionally reflect that one has at least made some progress, even if I can’t play a tune on demand :)

