Time in enclaves

In general we know that any given crypto algorithm will be broken in X years’ time. The usual way to mitigate this is certificate expiration: if a peer with an expired certificate tries to connect, we reject it in order to enforce freshness of its key.

In order to check certificate expiration we need some notion of calendar time. However, in SGX’s threat model the host of the enclave is considered malicious, so we cannot rely on its notion of time. Intel provides trusted time through its PSW, but this relies on the Management Engine, a proprietary component with a history of vulnerabilities.

Therefore, to check calendar time in general we need some kind of time oracle. We can burn the oracle’s identity into the enclave and request timestamped signatures from it. This already raises questions about the oracle’s identity itself, but for the time being let’s assume we have something like this in place.
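
To make that assumption concrete, here is a minimal sketch in Python (using the cryptography package) of an oracle that signs the current time together with a caller-supplied payload, and an enclave that verifies the signature against a burned-in oracle public key. The Ed25519 choice, the “timestamp || payload” layout, and all names are illustrative assumptions, not a fixed design.

    # Sketch only: the oracle signs "timestamp || payload", and the enclave
    # verifies the signature against a public key burned in at build time.
    # Ed25519 and the message layout are illustrative choices.
    import struct
    import time

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
        Ed25519PublicKey,
    )

    def oracle_sign(oracle_key: Ed25519PrivateKey, payload: bytes) -> tuple[int, bytes]:
        """Oracle side: sign the payload together with the current time (Unix seconds)."""
        timestamp = int(time.time())
        signature = oracle_key.sign(struct.pack(">Q", timestamp) + payload)
        return timestamp, signature

    def enclave_verify(burned_in_key: Ed25519PublicKey, timestamp: int,
                       payload: bytes, signature: bytes) -> bool:
        """Enclave side: accept the timestamp only if the burned-in oracle key signed it."""
        try:
            burned_in_key.verify(signature, struct.pack(">Q", timestamp) + payload)
        except InvalidSignature:
            return False
        return True

    if __name__ == "__main__":
        oracle = Ed25519PrivateKey.generate()   # the oracle's long-term key
        burned_in = oracle.public_key()         # what the enclave would ship with
        ts, sig = oracle_sign(oracle, b"example payload")
        assert enclave_verify(burned_in, ts, b"example payload", sig)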

Timestamped nonces

The most straightforward way to implement calendar time checks is to generate a nonce after the DH exchange, send it to the oracle, and have the oracle sign it together with a timestamp. The nonce is required to avoid replay attacks. A malicious host may delay delivery of the signature indefinitely, even until after the certificate expires. Note, however, that the DH exchange happened before the nonce was generated, which means that even if an attacker cracks the expired key they cannot steal the existing DH session; they can only try to create new ones, and those will fail the timestamp check.
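
As a rough illustration, here is a sketch of where the nonce check could sit, again in Python with the cryptography package. The freshness rule (the oracle’s timestamp must precede the peer certificate’s expiry), the Unix-second timestamps, and all names are assumptions made for the example, not a wire format.

    # Sketch of the timestamped-nonce flow (all names illustrative):
    #   1. enclave and peer complete the DH exchange,
    #   2. the enclave generates a fresh nonce,
    #   3. the peer obtains an oracle signature over "timestamp || nonce",
    #   4. the enclave verifies the signature and checks the timestamp
    #      against the peer certificate's expiry.
    import os
    import struct

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    class ExpiredPeer(Exception):
        pass

    def new_session_nonce() -> bytes:
        # Generated *after* the DH exchange, so cracking an expired key later
        # does not let an attacker replay this session's check.
        return os.urandom(32)

    def check_peer_freshness(oracle_key: Ed25519PublicKey, nonce: bytes,
                             timestamp: int, signature: bytes,
                             peer_cert_not_after: int) -> None:
        """Reject the session unless the oracle vouches that `nonce` was seen
        before the peer's certificate expired (times are Unix seconds)."""
        message = struct.pack(">Q", timestamp) + nonce
        try:
            oracle_key.verify(signature, message)
        except InvalidSignature:
            raise ExpiredPeer("bad oracle signature over the nonce")
        if timestamp >= peer_cert_not_after:
            raise ExpiredPeer("peer certificate had already expired at oracle time")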

This seems workable; note, however, that it imposes a full round trip to the oracle per DH exchange.

Timestamp-encrypted channels

In order to reduce the round trips required for timestamp checking we can invert the responsibility for checking the timestamp. We do this by encrypting the channel traffic with an additional key that is generated by the enclave but can only be revealed by the time oracle. The enclave encrypts this key with the oracle’s public key, so a peer trying to communicate with the enclave must forward the encrypted key to the oracle. The oracle in turn checks the timestamp and reveals the contents (perhaps double-encrypted with a DH-derived key). The peer can cache the key and later reuse it with the enclave; it is then the peer’s responsibility to get rid of the key after a while.
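
A rough sketch of this flow, under some illustrative assumptions: the enclave wraps the channel key for the oracle with RSA-OAEP, embeds the deadline it wants enforced inside the wrapped blob, and the oracle releases the key only while that deadline has not passed (the double encryption under a DH-derived key is omitted). All names, formats and algorithm choices are placeholders.

    # Sketch of the timestamp-encrypted channel (illustrative choices:
    # RSA-OAEP for wrapping the key to the oracle, an AES-256-GCM channel key,
    # a "deadline || key" layout, Unix-second timestamps).
    import struct
    import time

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    def enclave_make_channel_key(oracle_pub, deadline: int) -> tuple[bytes, bytes]:
        """Enclave side: generate the extra channel key and wrap it, together with
        the deadline the oracle should enforce, so that only the oracle can open it."""
        channel_key = AESGCM.generate_key(bit_length=256)
        blob = oracle_pub.encrypt(struct.pack(">Q", deadline) + channel_key, OAEP)
        return channel_key, blob

    def oracle_reveal(oracle_priv, blob: bytes) -> bytes:
        """Oracle side: open the blob and release the key only while the deadline
        has not passed.  (In the full scheme this would be double-encrypted under
        a DH-derived key for the requesting peer.)"""
        plaintext = oracle_priv.decrypt(blob, OAEP)
        deadline = struct.unpack(">Q", plaintext[:8])[0]
        if time.time() >= deadline:
            raise ValueError("deadline passed; refusing to reveal the channel key")
        return plaintext[8:]

    if __name__ == "__main__":
        oracle_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        key, blob = enclave_make_channel_key(oracle_priv.public_key(),
                                             deadline=int(time.time()) + 3600)
        assert oracle_reveal(oracle_priv, blob) == key  # the peer caches this key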

Note that this mitigates attacks where the attacker is a third party trying to exploit an expired key, but it does not protect against peers that keep the encryption key around until after expiration (i.e. they “become” malicious).

Oracle key break

So, given an oracle, we can secure a channel against expired keys and potentially improve performance by trusting once-authorized enclave peers not to become malicious.

However, what happens if the oracle key itself is broken? There’s a chicken-and-egg problem: we can’t check the expiration of the time oracle’s certificate itself! Once the oracle’s key is broken, an attacker can forge timestamping replies (or decrypt the timestamp-encrypted channel key), which in turn allows it to bypass the expiration check.

The main issue here concerns sealed secrets and sealed-secret provisioning between enclaves. If an attacker can impersonate, say, an authorized enclave, it can extract old secrets. We have yet to come up with a solution to this, and I don’t think one is possible.

Knowing that current crypto algorithms are bound to be broken at some point in the future, instead of trying to make sealing future-proof we can be explicit about the time-boundedness of our security guarantees.

Sealing epochs

Let’s call the time period during which a certain set of algorithms is considered safe a sealing epoch. During this period sealed data at rest is considered secure; once the epoch ends, old sealed data is considered potentially compromised. We can then think of sealed data as an append-only log of secrets with overlapping epoch intervals, where the “breaking” of old epochs is constantly catching up with newer ones.

To make this work we need to enforce an invariant: secrets only flow from old epochs to newer ones, never the other way around.
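
A minimal sketch of the bookkeeping this implies, with illustrative names and fields: sealed records carry the epoch they were sealed under, and the reseal step refuses to move a secret into an older epoch.

    # Sketch of sealing-epoch bookkeeping: sealed records are tagged with the
    # epoch they were sealed under, and secrets may only be re-sealed into the
    # same or a newer epoch.  Names and fields are illustrative.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class SealingEpoch:
        number: int        # monotonically increasing epoch id
        algorithms: str    # e.g. "AES-256-GCM + Ed25519"
        not_after: int     # when this epoch's guarantees are expected to lapse

    @dataclass(frozen=True)
    class SealedRecord:
        epoch: SealingEpoch
        ciphertext: bytes  # sealed under the epoch's algorithms

    def reseal(record: SealedRecord, target: SealingEpoch,
               new_ciphertext: bytes) -> SealedRecord:
        """Enforce the flow invariant: secrets move from old epochs to newer
        ones, never backwards."""
        if target.number < record.epoch.number:
            raise ValueError("refusing to move a secret into an older epoch")
        return SealedRecord(epoch=target, ciphertext=new_ciphertext)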

This translates to the ledger nicely: data in old epochs is generally no longer valuable, so it’s safe to consider it compromised. Note, however, that in the privacy model an epoch transition requires a full re-provisioning of the ledger to the new set of algorithms/enclaves.

In any case this is an involved problem, and I think we should defer fleshing it out for now, as we won’t need it for the first round of stateless enclaves.