I mentioned in my previous post that I'm taking Programming in the Cloud, a class on distributed systems and programming, this semester, and one of the perks of this class is that we occasionally have speakers from industry come in to talk about various topics.
Last week, Doug Terry from Microsoft Research came in to speak about eventual consistency, which I found super, super interesting. I don't claim to be an expert on the topic now, but here are my notes (more or less from memory). First, the two opposite sides of the spectrum:
Strong vs. Eventual
Strong Consistency - guaranteed consistency across replicas
Strong consistency guarantees, well, strong consistency. Any read always returns up-to-date data, although this may result in lower availability since you're limiting the machines that you're able to read from to keep this guarantee.
Eventual Consistency - all replicas will "eventually" become consistent.
Although this weaker form of consistency has better availability because you can read from/write to any machine, you can only hope that the data will be fresh (for some applications, this is enough).
And everything in between
This isn't an exhaustive list of consistency models, but a couple of other models in the middle of the spectrum that Doug went over that day:
Read My Writes - guaranteed to be able to read updates you write
In applications where you're doing some sort of read-then-update operation (like x += 1), it's very important to be able to read your own writes. The example Doug gave during lecture was a baseball analogy, where a scorekeeper keeps track of the current score of the game with operations like homeScore += 1 for every run scored by the home team. It's a different story if there are multiple scorekeepers, where you might need some causal consistency guarantees, but a single scorekeeper must at minimum be able to read his own writes.
Afterwards in class, we had a discussion about the various ways that this consistency model could be implemented. There needs to be some sort of versioning system that lets a client compare its writes against the ones a machine has stored, or, as an optimization, the client could store its writes locally and always be able to "read its own writes" by taking the union of its own write set and any machine's write set.
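To make the write-set optimization concrete, here's a minimal sketch of what it might look like. All the names here (Replica, RMWClient, and so on) are my own invention, not anything from the lecture:

```python
# Hypothetical sketch: a client that guarantees Read My Writes by
# remembering its own writes and merging them with any replica's state.

class Replica:
    def __init__(self):
        self.data = {}            # key -> (version, value)

    def read(self, key):
        return self.data.get(key)

    def write(self, key, version, value):
        self.data[key] = (version, value)

class RMWClient:
    def __init__(self):
        self.my_writes = {}       # this client's own write set
        self.version = 0

    def write(self, replica, key, value):
        self.version += 1
        entry = (self.version, value)
        self.my_writes[key] = entry   # remember the write locally
        replica.write(key, *entry)

    def read(self, replica, key):
        # Union of the replica's state and our own write set:
        # take whichever copy carries the newer version.
        remote = replica.read(key)
        local = self.my_writes.get(key)
        candidates = [e for e in (remote, local) if e is not None]
        if not candidates:
            return None
        return max(candidates)[1]     # newest version wins
```

Even if the client writes to one replica and then reads from a stale one that never saw the write, its own copy still makes the write visible.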
Consistent Prefix - updates returned are some prefix of all the updates, with no gaps
In case the title didn't make sense, a "consistent prefix" means that some prefix of all of the writes (assuming the writes have a global order) are applied such that the prefix is consistent ("up to date" up until the last write in the prefix, with no gaps).
For example, if the writes are A, B, C, D, E, F, you will be able to read some consecutive set of these writes, starting with A. Note that the empty set is also trivially a consistent prefix.
Why is this useful? Following the baseball analogy from before: if writes record the score as the game progresses and you don't have a consistent prefix guarantee, you could be reading from a replica that has only (say) the third and last writes applied, giving you A: 3 and B: 7 when, in reality, that combination of scores never existed at any point during the game.
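One way to picture the guarantee: assuming the writes have a global order, a replica satisfies consistent prefix exactly when the set of writes it has applied is a gap-free prefix of that order. A tiny sketch of that check (my own illustration, not from the lecture):

```python
# Hypothetical sketch: check whether the writes a replica has applied
# form a consistent prefix of the globally ordered write log.

def is_consistent_prefix(global_log, applied):
    """True iff `applied` equals the first len(applied) entries
    of the global log, i.e. a prefix with no gaps."""
    return set(global_log[:len(applied)]) == set(applied)

log = ["A:1", "A:2", "A:3", "B:1"]
is_consistent_prefix(log, {"A:1", "A:2"})   # a gap-free prefix
is_consistent_prefix(log, {"A:1", "A:3"})   # not a prefix: A:2 is missing
is_consistent_prefix(log, set())            # the empty set trivially qualifies
```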
Monotonic Reads - for each client, guaranteed to read a more recent or at least the same set of updates
Monotonic reads guarantees that you'll never "go back in time" in terms of seeing updates - you'll always see a set of updates that is at least as new as the updates you saw the last time you read from the system. The implementation of this requires a client to keep some kind of version of when it performed its last read, and search for a machine that has at least those writes. However, like Read My Writes, we can apply the same optimization of keeping a read set on the client (and performing a union whenever reading from the system).
Since we're only concerned with reading data that is at least as fresh as the last time we read it, we can also introduce a notion of "stickiness" or "affinity", where a client stays with the last machine it read from (until it goes down, at least) instead of re-searching for another machine that is more up-to-date. Assuming a single machine never loses any updates it has applied, staying with the same machine is trivially monotonic.
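The version-tracking and stickiness ideas above could be sketched like this. Again, the structure and field names are my own, just to illustrate the idea:

```python
# Hypothetical sketch: monotonic reads via a high-water-mark version,
# with "stickiness" as a shortcut. Each replica is a plain dict:
# {"version": <writes applied>, "data": {...}}.

class MonotonicClient:
    def __init__(self):
        self.last_seen = 0     # version of the freshest replica read so far
        self.sticky = None     # the replica we read from last time

    def read(self, replicas, key):
        # Stickiness: reusing the last replica is trivially monotonic,
        # assuming a replica never loses writes it has applied.
        if self.sticky is not None and self.sticky in replicas:
            chosen = self.sticky
        else:
            # Otherwise, find a replica at least as fresh as our last read.
            chosen = next(r for r in replicas
                          if r["version"] >= self.last_seen)
        self.last_seen = max(self.last_seen, chosen["version"])
        self.sticky = chosen
        return chosen["data"].get(key)
```

Once this client has seen version 5, it will never accept a replica that has only applied 3 writes, so its reads can't "go back in time".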
Bounded Staleness - reads are guaranteed to be at most some bounded amount out of date
Last one! Hopefully you find this fascinating (like I do), and not boring. The idea of staleness allows consistency to be defined in terms of how old a replica can be compared to the most up-to-date data. In some applications, it's okay to have the data be a little stale - for example, weather reports can be a couple of hours old. Strong consistency is a special case of Bounded Staleness where the amount of allowed staleness is zero.
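If the bound is expressed in time, a read routine might look something like this sketch (field names like fresh_as_of are hypothetical, my own shorthand for "the timestamp up to which this replica has applied every write"):

```python
import time

# Hypothetical sketch: bounded staleness as a time bound. A replica is a
# dict {"fresh_as_of": <timestamp>, "data": {...}}; a read is acceptable
# only from a replica that is at most bound_seconds behind "now".

def read_with_bound(replicas, key, bound_seconds, now=None):
    now = time.time() if now is None else now
    for r in replicas:
        if now - r["fresh_as_of"] <= bound_seconds:
            return r["data"].get(key)
    raise RuntimeError("no replica within the staleness bound")
```

With bound_seconds set very large (hours), almost any replica will do, as in the weather-report example; with bound_seconds set to zero, only fully up-to-date replicas qualify, which is exactly the strong-consistency special case.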
Wow, this blog post took longer than I thought it would to write. I'm really interested in these topics (and databases/distributed systems in general), so I'm pretty excited for what the class has in store for us for the rest of the semester!
Again, I don't claim to be an expert in this field (and most of these notes are from memory), so feel free to point out any mistakes/incorrect assumptions or start a discussion in the comments. Thanks for reading! (: