Proof of Elapsed Time, Intel’s SGX and the Internet of Things

How does Proof of Elapsed Time work, what does Intel’s SGX have to do with it and how does it fit with the Internet of Things?

In the beginning, there was Bitcoin, the first blockchain implementation, which used Proof of Work to establish consensus. And for the next few years, all the newer implementations also used Proof of Work for consensus. But this algorithm has a few shortcomings, and as the various blockchains started expanding in size and being used in different applications, a lot of issues emerged. Fortunately, research on consensus algorithms continued, and today we have many to choose from, each with different strengths and weaknesses. In this article and the next ones, I will talk about the Proof of Elapsed Time algorithm, the underlying hardware required to use it and how it fits in the emerging applications of blockchains on the IoT.

Proof of Work consensus

I’m going to start by discussing Proof of Work here, despite the fact that it is not mentioned anywhere in my really clickbaity title. You will have to trust me for now and keep reading and hopefully, by the end of the article, everything will make sense.
So, the Proof of Work consensus algorithm. Everyone who has read even a little bit about blockchains, and especially cryptocurrencies, has heard about it. It's the way the Bitcoin and Ethereum networks, and most blockchain-based cryptocurrencies, decide which block will be added to the ledger next. The idea (at a really high level) behind the algorithm is the following:

  • A miner in the network gathers a bunch of transactions into one block. Then she tries to hash the contents of the block in such a way as to find a value that fits some criterion. The criterion could be, for example, that the value should be less than a certain target. This is an implementation detail and we will not concern ourselves with it.
  • The way a miner tries to come up with a proper value is to append some string (a nonce) to the data, hash the result and check whether it fits the criterion.
  • If the hash is correct, the miner commits the new block; otherwise, she generates a new random string and retries.

So there are three possible outcomes:

  1. The miner is the first to find a proper hash and commits her block.
  2. The miner fails to find a hash fast enough, someone else commits the block and everyone moves on and tries to mine the next block.
  3. The miner finds a hash at pretty much the same time as some other miner. In this case, it is more or less random who wins; it depends on which of the two chains the rest of the network chooses to build on.

But how does a miner choose which string to append to the data? The answer is that the miner has no idea what string to use. The hashing algorithms currently used are strong and secure enough that there is no mathematical technique to give any hints as to what the string should look like in order to get a proper value, which means that the whole process is completely random. All the miners in the network try randomly generated strings and someone eventually gets a correct result. The more processors a miner has, the more strings she can try every second and the better her chances of finding the right combination.
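To make this loop concrete, here is a minimal sketch of a simplified mining round in Python. It is only an illustration: the block contents, the way the difficulty target is encoded and the nonce format are all placeholders, not how Bitcoin actually does it.

```python
import hashlib
import os

def mine(block_data: bytes, target: int, max_attempts: int = 1_000_000):
    """Try random nonces until the hash of the block falls below the target.

    Returns the winning nonce and hash, or None if we give up (in a real
    network, another miner has probably committed the block by then).
    """
    for _ in range(max_attempts):
        nonce = os.urandom(8)                    # the random string appended to the data
        digest = hashlib.sha256(block_data + nonce).digest()
        value = int.from_bytes(digest, "big")    # interpret the hash as a number
        if value < target:                       # does it fit the criterion?
            return nonce, digest
    return None

# A deliberately easy target so the example finishes quickly; real targets
# are astronomically harder, which is where all the energy goes.
easy_target = 2 ** 250
result = mine(b"some transactions", easy_target)
print("found a valid nonce" if result else "gave up")
```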
The reason that this algorithm works well for consensus is the following:

The probability someone will find the right solution is equal for all the participants in the network.

Think about why this is true. Every miner will go through a random number of calculations before succeeding. There is definitely a correct result, and there is no way for anyone to guess even a part of the answer if the hashing algorithm used is a cryptographic one. Every round of calculations takes pretty much the same time (I know I'm also oversimplifying here, but bear with me), and since computing a hash is a fixed number of steps, each miner has to wait a random amount of time before getting the chance to commit a block. The miner with the minimum amount of time wins the lottery, gets elected leader and commits her block to the ledger.
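A quick way to convince yourself of this is to simulate the lottery. The sketch below is purely illustrative (the success probability and the number of trials are made up): each miner repeats fixed-length hashing rounds until a geometrically distributed attempt succeeds, and the miner with the minimum total time wins the round.

```python
import math
import random

def rounds_until_success(p_success: float) -> int:
    """Geometrically distributed number of fixed-length hashing rounds
    a miner needs before finding a valid hash."""
    u = 1.0 - random.random()                    # uniform in (0, 1]
    return max(1, math.ceil(math.log(u) / math.log(1.0 - p_success)))

def run_lottery(num_miners: int = 5, p_success: float = 1e-4, trials: int = 10_000):
    """Count how often each (equally powerful) miner finishes first."""
    wins = [0] * num_miners
    for _ in range(trials):
        times = [rounds_until_success(p_success) for _ in range(num_miners)]
        wins[times.index(min(times))] += 1
    return [w / trials for w in wins]

# With equal hash rates every miner wins roughly 1/5 of the rounds,
# even though each individual round is completely unpredictable.
print(run_lottery())
```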

This type of algorithm is also called a Nakamoto consensus algorithm, after the pseudonym of the creator(s) of Bitcoin.

The problem with it is the massive amount of energy needed to run the hashing calculations, which also tends to rise with the amount of data that needs to be hashed. This is one of the roadblocks, for example, that has prevented the Bitcoin community from reaching consensus on an increase to the block size, which would increase the throughput of the network.

Another issue that has come up is that new specialized processors have appeared, which are able to perform the hashing operations a lot faster than standard CPUs. These ASICs are also more expensive, which means that the algorithm favors miners who have a lot of capital to invest in expensive chips and are located in countries where electricity is cheap and usually produced by non-renewable means. Add to that the fact that all these operations also send a lot of energy into cooling the processors, and you get a good idea of why the Bitcoin network uses as much energy as a small country.

Finally, there is an issue with the finality of the algorithm. As I mentioned in the list of possible outcomes, if two miners find a correct hash for a block at around the same time, then which block ends up in the ledger is random. Practically, this means that even if a Bitcoin transaction you make gets committed to the ledger, you still have to wait around an hour (roughly six blocks) to be reasonably sure that it is permanent. Ideally, a consensus algorithm provides instant finality for blocks.

Proof of Elapsed Time

And after this small intro, we finally get to the Proof of Elapsed Time consensus algorithm. We talked about how Proof of Work operates, and I said that the main problem with the algorithm is the amount of energy it needs. In a world where we are trying to make everything more energy efficient, having a system that uses so much energy for only a few transactions per second is a major dealbreaker. Just think what would happen if the Bitcoin or the Ethereum network needed to handle the number of transactions that Visa handles, which is a few thousand per second.
Wouldn’t it be awesome if we could have a Proof of Work algorithm but without the Work thingie and the massive energy consumption? And this is where Intel said “Hold my beer” and came up with Proof of Elapsed Time. This algorithm was first introduced as part of Hyperledger Sawtooth, which is Intel’s contribution to the Hyperledger ecosystem. Here is how the algorithm works:

  • Suppose that we have a number of nodes in a blockchain network.
  • Each node will produce a random wait time and will go to sleep.
  • The node with the minimum wait time wakes up, commits a block to the blockchain and notifies the rest of the peers.
  • The end.

Seems simple, right? So what we get here is pretty much the same workflow as with the Proof of Work algorithm, but with almost no energy spent. It’s actually a very efficient algorithm, since it puts the miner thread to sleep and the processor can switch to other tasks until it needs to be woken up. Again we have three possible outcomes:

  1. A miner has the minimum waiting time, which means that she is elected leader and is able to commit a block.
  2. A miner does not have the minimum waiting time and waits until the next round to retry.
  3. A miner wakes up at pretty much the same time as another peer and tries to commit a block. Whether her version of the chain or another version will be finalized is somewhat random.

If you produce the waiting times truly randomly, then you have a fair lottery algorithm that distributes the probability of committing a block equally across the network. So you have a Nakamoto-style consensus that doesn’t waste energy. Hooray!
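Here is a minimal sketch of that workflow in Python, just to make the idea tangible. It is not Sawtooth’s implementation: the wait-time distribution, the set of peers and the commit step are stand-ins, and nothing here is protected by trusted hardware yet, which is exactly the problem discussed below.

```python
import random
import threading
import time

class Peer:
    """A toy PoET participant: draw a random wait time, sleep, then try to commit."""

    def __init__(self, name: str, ledger: list, lock: threading.Lock):
        self.name = name
        self.ledger = ledger
        self.lock = lock

    def run_round(self, round_id: int):
        wait = random.expovariate(1.0)   # random wait time (stand-in distribution)
        time.sleep(wait)                 # the CPU is free to do other work meanwhile
        with self.lock:
            # Only the first peer to wake up commits the block for this round.
            if len(self.ledger) == round_id:
                self.ledger.append(f"block {round_id} committed by {self.name}")

ledger, lock = [], threading.Lock()
peers = [Peer(f"peer-{i}", ledger, lock) for i in range(5)]
threads = [threading.Thread(target=p.run_round, args=(0,)) for p in peers]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(ledger)   # e.g. ['block 0 committed by peer-3']
```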
One minor issue with this algorithm is that, just like Proof of Work, it does not have very good finality. Even after a block is committed, a client will have to wait for a few minutes to ensure that there is no other chain in the system that is longer than the one that contains her transaction.
One giant issue though is this:

But how do the peers know that no one is cheating?

Well, that’s an excellent question. The answer to this question is the juicy part.
Proof of Elapsed Time is implemented at the low level using Intel’s Software Guard Extensions, or SGX. SGX is a set of processor instructions that allows code to be executed inside a Secure Enclave, a Trusted Execution Environment running inside the processor that uses special hardware to isolate itself from any interference by the operating system and all the `Ring -[0-9]*` layers of code. This part is especially important, because it is the only thing that allows Proof of Elapsed Time to function properly as a consensus algorithm and its results to be verifiable by external entities.
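To give a flavour of why that matters, here is a hypothetical sketch of the missing piece: the enclave produces a signed “wait certificate” stating how long the node actually slept, and any peer can verify it without trusting the node’s operating system. Everything below is invented for illustration; the certificate format and the key handling are placeholders (real SGX attestation uses asymmetric keys and Intel’s attestation infrastructure, not a shared secret), and the next articles cover how it actually works.

```python
import hashlib
import hmac
import json

# Stand-in for key material that only the trusted enclave can use.
# Real SGX attestation relies on asymmetric keys, not a shared secret.
ENCLAVE_KEY = b"secret-held-inside-the-enclave"

def enclave_sign_wait_certificate(peer_id: str, wait_time: float, block_hash: str) -> dict:
    """What the enclave would do: attest that this peer really waited `wait_time`."""
    payload = json.dumps({"peer": peer_id, "wait": wait_time, "block": block_hash},
                         sort_keys=True)
    signature = hmac.new(ENCLAVE_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify_wait_certificate(certificate: dict) -> bool:
    """What any peer can do: check the certificate without trusting the sender's OS."""
    expected = hmac.new(ENCLAVE_KEY, certificate["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, certificate["signature"])

cert = enclave_sign_wait_certificate("peer-3", 2.41, "abc123")
print(verify_wait_certificate(cert))   # True

cert["payload"] = cert["payload"].replace("2.41", "0.01")   # a cheater shortens the wait
print(verify_wait_certificate(cert))   # False
```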
There are a lot of details in how Intel SGX works, so I’ll go through them in the next articles of the series, along with more use cases for SGX. Stay tuned!



If you want to dig a little bit more into PoET and Hyperledger Sawtooth, check this out. Also, you might want to check out this whitepaper for a rigorous mathematical proof of why PoW and PoET work. Note that it talks about PoW, but its conclusions can be extended to PoET.
