Dr. David J. Pearce

The Semantics of Semantic Versioning?

Semantic versioning is a surprisingly interesting topic when you get into it. Recently, a few colleagues (Patrick & Jens) and I have been giving it some thought (and we even wrote an essay on it)! If you haven’t seen it already, check out the manifesto for semantic versioning. Whilst that provides a nice overview, there is a lot left unsaid. There are two different perspectives on semantic versioning:

  1. Downstream. This is perhaps the more obvious scenario. Downstream developers (clients) want access to the library features offered by (upstream) developers! They also want both stability and protection. That is, they don’t want future releases of a library to break their code but (ideally) they want to get future releases automatically (e.g. for critical security updates).

  2. Upstream. On the flip-side, upstream (library) developers want flexibility to continue improving their libraries with new features, refactorings, etc. They also want to fix bugs and security vulnerabilities as and when they arise.

In some sense, semantic versioning is just a communication mechanism between upstream and downstream developers. Now, a three-point version number is (at best) a low-fidelity communication channel. But this post is not about that. Rather, it is about figuring out how to make the most of semantic versioning as it is.
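To make that "communication channel" concrete, here is a minimal sketch (in Python, with illustrative names of my own) of the core promise a three-point version number makes: releases within the same major number should be safe to pick up automatically, while a major bump warns of possible breakage.

```python
def parse(version):
    """Split a three-point version like "1.4.2" into an (int, int, int) tuple."""
    major, minor, patch = version.split(".")
    return int(major), int(minor), int(patch)

def safe_to_upgrade(current, candidate):
    """The semver promise: a release claims compatibility only when the
    major number is unchanged and the candidate is strictly newer."""
    cur, cand = parse(current), parse(candidate)
    return cand[0] == cur[0] and cand > cur

print(safe_to_upgrade("1.4.2", "1.5.0"))  # True: new features, same major
print(safe_to_upgrade("1.4.2", "2.0.0"))  # False: major bump, breakage possible
```

Of course, the interesting problems (as the exhibits below show) start when a release that looks safe by this rule turns out not to be.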


An important aspect of semantic versioning is trust. Downstream developers must trust upstream developers not to break the protocol (e.g. by putting out minor releases with breaking changes). When trust is lost, clients become hesitant to upgrade, and the lag between a new release and the client upgrading increases. This makes sense, as clients balance the costs of upgrading against the benefits. For example, if upgrading requires only a few minor tweaks to your code base but offers important security patches, then it seems worth it. But, when upgrading requires significant changes to your code (e.g. because the library developers decided to refactor the API on a whim) and the only benefit is some features you don’t need, then it doesn’t.

We can view all this through the lens of economic theory and treat it as a market system. Then, trustworthy upstream developers should succeed where others fail, etc. It seems like that’s everything sorted, then! But the reality is different: unfortunately, mistakes are made all the time by developers we would expect to be trustworthy (see examples below). The problem is that the system is not yet efficient because:

  1. Downstream developers have real difficulties determining what the costs and benefits are.

  2. Upstream developers cannot easily tell when they inadvertently make breaking changes (more on this below).

In thinking about this, we’re interested in what techniques could be brought to bear to make the market system more efficient.

Breaking Changes

An important question here is: what are “breaking changes” anyway? Knowing this is key to a smoothly functioning system. Many kinds of change could be considered breaking in certain situations (i.e. depending on the client):

Exhibit A. Firefox (downstream developer) uses fontconfig (upstream developer). A commit to fontconfig v2.10.92 meant it now rejected empty filenames. Its documentation didn’t say whether empty filenames were allowed or not, so this change was reasonable, right? Well, it broke Firefox.
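To see why such a change bites, here is a hedged sketch of the pattern (in Python; the function names and API below are invented for illustration, and are not fontconfig’s actual interface): a patch-level release tightens undocumented behaviour, and a client that happened to rely on that behaviour breaks.

```python
def add_font_v1(config, filename):
    # Old behaviour: the documentation says nothing about empty
    # filenames, and they are silently accepted.
    config.append(filename)

def add_font_v2(config, filename):
    # New "patch" behaviour: empty filenames are now rejected.
    if filename == "":
        raise ValueError("empty filename")
    config.append(filename)

# A client that happened to pass "" worked fine on the old release...
config = []
add_font_v1(config, "")

# ...but crashes on the new one, even though nothing in the documented
# contract changed.
try:
    add_font_v2(config, "")
    outcome = "ok"
except ValueError:
    outcome = "client broken by a patch-level change"
print(outcome)
```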

Exhibit B. JSoup v1.10.1 included a performance refactoring for “reducing memory allocation and garbage collection”. Again, this seemed reasonable, but clients quickly started reporting problems.

These are just some examples and you can easily find more with a little digging. The point is that upstream developers miss (or ignore) changes affecting downstream clients all the time. So, what can we do?


RevAPI provides food for thought here. If you haven’t come across it before, this tool compares two versions of a JAR file and identifies certain kinds of breaking change. Examples include: reducing the visibility of a method; removing a public declaration; or modifying a public class so that it no longer implements some interface. This is actually awesome! People should use this stuff all the time!
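As a rough illustration of the idea (not RevAPI’s actual implementation, which inspects compiled class files), one can think of such a tool as diffing the public API surfaces of two releases:

```python
def breaking_removals(old_api, new_api):
    # Any public declaration present in the old release but missing
    # from the new one is a breaking change for some client.
    return sorted(old_api - new_api)

# Hypothetical API surfaces for two releases of a library.
v1_0 = {"Document.title()", "Document.body()", "Element.text()"}
v1_1 = {"Document.title()", "Element.text()", "Element.html()"}

print(breaking_removals(v1_0, v1_1))  # ['Document.body()']
```

Additions, by contrast, show up in `new_api - old_api` and are (usually) safe, which is why they only warrant a minor version bump.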

Our interest here is not in what the tool thinks are breaking changes, but in what it doesn’t. For example, when a method no longer accepts null for some parameter, moves from linear to quadratic time, or returns the elements of an array in a different order, etc. Ok, we have to be reasonable: one tool cannot do everything, and these are hard problems. Still, RevAPI offers a glimmer of hope that semantic versioning could be much more than it currently is. And there are others: Elm Bump, rust-semverver, and clirr to name a few.

So, there should be tools, and lots of 'em! Both upstream and downstream developers should be using them to spot inadvertent breaking changes, or to gauge the cost of upgrades. Whilst current tools are fairly shallow in their assessment of breaking changes, there is a wealth of techniques from fields like static analysis and automated testing which could be used here.


Well, that’s enough for now! If you made it this far, you should check out our essay, which goes into way more detail.

And finally, just to get you thinking, here’s a cool idea for upstream developers: know your dependencies! These days, it’s easy to find your downstream clients. Before releasing a new version, just check for breaking changes by running all your clients’ tests! That’s exactly what Crater does for Rust, and also what these folks and these folks are suggesting.

Here are a few related articles on semantic versioning which are definitely worth a read!