In cloud computing, the edge means deploying servers in multiple physical locations to serve users who are themselves geographically distributed. A service that uses the edge will serve (say) users in the US with servers in North America, and Indonesian users with servers in Southeast Asia. The reason is simple: the smaller the distance between a server and a user, the smaller the network delays (and the better the user experience).
I come from a background of centralized server control, so the idea of the edge is quite a stretch for me. In this article, I want to challenge myself to reconsider my tendency to think of cloud architecture as a centralized group of servers.
For static content, the case for using the edge is compelling. There's no drawback to putting your content on servers all over the world (except perhaps the additional devops complexity this involves, which is probably not much if you're doing things right). When your content changes, you push updates to every edge so that users get the latest version. If you are not overly concerned with updates taking a few seconds to propagate to every edge, everything should be good. End of story.
Where things become much trickier is when we’re talking about an application. What distinguishes an application from static content is that an application holds state. State is data that belongs to a specific user and that changes over time. Because state changes and needs to remain consistent, you cannot just “put” your application on the edge. Well, you can, but first you need to take a long hard look at your consistency decisions.
Here, the CAP theorem is essential. The theorem states that a distributed system can only provide two out of three guarantees: consistency, availability, and partition tolerance. A network partition happens when two or more edges cannot communicate with each other, due to either a network failure or one of the edges being outright down.
If you use edges, your system is distributed, and therefore vulnerable to network partitions. So you really have two choices in the face of a network partition: prioritize consistency, or prioritize availability.
Prioritizing consistency means that you'd rather refuse to serve a user request (and return an error) than risk having your state be inconsistent. A good use case for this is financial transactions: if you cannot be sure that a user has X amount of money in the bank (because the source of that state is in an edge you cannot reach), you'd rather refuse the withdrawal than proceed with it anyway. This means that, during a partition, your system will return errors.
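A consistency-first handler can be sketched like this (all names here are illustrative stand-ins, not a real API; a real system would check a quorum rather than a single flag):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Edge:
    reachable: bool  # stand-in for "can we reach the authoritative state?"
    balance: int

def withdraw(edge: Edge, amount: int) -> Optional[int]:
    """Refuse the request when the authoritative state cannot be verified."""
    if not edge.reachable:
        return None  # during a partition, return an error instead of guessing
    if amount > edge.balance:
        return None  # insufficient funds
    edge.balance -= amount
    return edge.balance
```

The key property: during a partition the balance is never touched, so the state stays consistent at the cost of failed requests.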
Prioritizing availability means that you'd rather serve a user request even though you cannot be sure it reflects the latest state. A good use case for this is reading a public feed in a social network: you'd rather serve the page with its content, even if the latest messages are missing or some deleted messages still appear. This means that the freshness of the data returned by your system cannot be trusted.
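The availability-first behavior can be sketched the same way (again, made-up names; the cache stands in for an edge replica):

```python
class FeedService:
    """Serve a cached copy of a public feed when the primary is unreachable."""

    def __init__(self, primary: list):
        self.primary = primary          # stand-in for the source of truth
        self.cache: list = []           # local edge replica
        self.primary_reachable = True

    def read_feed(self) -> list:
        if self.primary_reachable:
            # Refresh the cache while we can reach the source of truth.
            self.cache = list(self.primary)
            return self.cache
        # Availability-first: during a partition, serve what we have,
        # accepting that it may be stale.
        return self.cache
```

The request never fails, but a user reading during a partition may miss posts that already exist at the primary.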
If you’re building on the edge, this is a choice you have to make, consciously and well.
Myself, I'm firmly clutching onto consistency, for the simple reason that an inconsistent system is much harder to reason about. I'll choose consistency unless the situation presents a compelling case for availability; only then will I reconsider.
An interesting distinction is between read and write consistency. Reading inconsistent data (for example, a slightly out-of-date list of Twitter posts) is generally harmless. Writing inconsistent data (for example, allowing a withdrawal of 100 from an account that holds 50) is generally harmful. If you can segregate requests into reads and writes, you may be able to make this choice separately for each.
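Segregating requests this way amounts to a simple routing rule, sketched here with hypothetical node names:

```python
def route_request(kind: str) -> str:
    """Illustrative routing rule: reads tolerate stale replicas,
    writes require the consistent primary."""
    if kind == "read":
        return "replica"   # availability-first is acceptable here
    if kind == "write":
        return "primary"   # consistency required
    raise ValueError(f"unknown request kind: {kind}")
```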
This flexibility, however, can be severely limited if your reads feed further writes. For example, if a read operation tells you how much money is in the account, and you then immediately use that amount to authorize or deny a transaction, your read operations need to be consistent.
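One common mitigation, sketched here with made-up names, is a conditional write (compare-and-set): the write succeeds only if the state still matches what the earlier read returned, so a stale read cannot silently authorize an overdraft.

```python
class Account:
    """Illustrative compare-and-set, not a real DB API."""

    def __init__(self, balance: int):
        self.balance = balance

    def withdraw_if(self, expected_balance: int, amount: int) -> bool:
        # The write proceeds only if the state still matches the read
        # it was based on, and the funds actually cover the amount.
        if self.balance != expected_balance or amount > self.balance:
            return False
        self.balance -= amount
        return True
```

If a replica served a stale balance, the conditional write fails and the caller must re-read, which pushes the consistency requirement back onto the write path where it belongs.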
This choice between availability and consistency is essentially a DB problem, since state lives in databases, rather than in files or inside your API.
It breaks my heart that my favorite database, Redis, has chosen availability over consistency. I don't use Redis Cluster mainly for that reason.
On the side of consistency, there are the following options:
- Spanner, a relational database developed by Google, on top of which run Google Ads, Gmail and Google Photos.
- CockroachDB, developed by an ex-Google team.
These DBs use a consensus algorithm (Paxos in Spanner, Raft in CockroachDB) to guarantee consistency across several nodes. If there's a partition, operations on the minority side will fail rather than proceed.
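The reason the minority side fails comes down to majority quorums, which both Paxos and Raft rely on: a write commits only when a strict majority of nodes acknowledges it, so two disjoint partitions can never both commit.

```python
def has_quorum(reachable_nodes: int, total_nodes: int) -> bool:
    """A partition side can make progress only with a strict majority."""
    return reachable_nodes >= total_nodes // 2 + 1
```

In a 5-node cluster split 3/2, the 3-node side keeps serving writes while the 2-node side stalls; this is how these systems trade availability for consistency.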
To close off, this is what I’m currently exploring:
- Have a DB that lets me partition (I'd rather say partition than shard) my keyspace across different nodes. I want to be able to tell the DB how to partition the information. Going back to where we started, I'd like a system where the data for US users lives in a partition based in a US edge.
- Have that DB be able to execute transactions across partitions while maintaining consistency.
- Rather than thinking of the API and the DB as separate systems, conceive of a system where the API operations run wholly "inside" the DB. This removes a large source of hidden inconsistency: the delay between the sequences of reads and writes that an API performs against a DB.
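The first point above (user-controlled geographic partitioning) can be sketched as a simple partition map; everything here is hypothetical, since no current DB is being named:

```python
# Illustrative partition map from user locality to the edge
# that holds that user's partition (all names are made up).
PARTITIONS = {
    "US": "us-east-edge",
    "ID": "southeast-asia-edge",
}

def partition_for(user_country: str) -> str:
    """Route a user's keys to the edge that holds their partition."""
    return PARTITIONS.get(user_country, "default-edge")
```

The hard part, of course, is the second point: executing a transaction that touches both `us-east-edge` and `southeast-asia-edge` while keeping consistency, which is exactly where the consensus machinery from the previous section comes in.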
If you made it this far and have thoughts to share, I'll be happy to hear them. Thanks for reading.