
fix minor typos #13

Open · wants to merge 1 commit into base: main
8 changes: 4 additions & 4 deletions qtee.md
@@ -106,7 +106,7 @@ roughly translated to:
For example, the Secure Cryptographic Implementation Association (SIMPLE-Crypto Association) aims to apply
Kerckhoffs's Principle to hardware and lays out their vision at https://www.simple-crypto.org/about/vision/:

- > **[...] our vision is that as research advances, the security by obscurity paradigm becomes less justified and its benefits are outweighted by its drawbacks.** That is, while a closed source approach can limit the adversary's understanding of the target implementations as long as their specifications remain opaque, it also limits the public understanding of the mechanims on which security relies, and therefore the possibility to optimize them. By contrast, an open approach to security can lead to a better evaluation of the worst-case security level that is targeted by cryptographic designs.
+ > **[...] our vision is that as research advances, the security by obscurity paradigm becomes less justified and its benefits are outweighed by its drawbacks.** That is, while a closed source approach can limit the adversary's understanding of the target implementations as long as their specifications remain opaque, it also limits the public understanding of the mechanisms on which security relies, and therefore the possibility to optimize them. By contrast, an open approach to security can lead to a better evaluation of the worst-case security level that is targeted by cryptographic designs.
:::

:::danger
@@ -457,7 +457,7 @@ See perhaps [Cryptographically Assured Information Flow: Assured Remote Execution]
#### :thought_balloon: Thought experiment
If somehow we have managed to manufacture a chip, in a "decentralized" way, such that it can be verified, then perhaps the "decentralized" manufacturing process could log public metadata about the chip that would uniquely identify it. For instance, the metadata could be tied to a fingerprint generated via a PUF, in the chip. The metadata would contain the proof of correct manufacturing with respect to the requirements discussed earlier, such as matching a (formally verified) open source hardware design, and not leaking secret bits.

- Then remote attestation in this case would involve first requesting from the device that it provides its unique fingerpring which could then be verified against the public metadata ... but how could we prevent devices from providing a fake fingerprint? Perhaps the public records of correctly manufactured devices should not be public afterall. That is, a chip's fingerprint should not be publicly linkable to the metadata (proofs of correct manufacturing). Said differently, a verifier should just needs to know that the chip it is interacting with has been manufactured correctly, and the verification process should not reveal information that could be used by a malicious chip to forge a fake identity.
+ Then remote attestation in this case would involve first requesting from the device that it provides its unique fingerprint, which could then be verified against the public metadata ... but how could we prevent devices from providing a fake fingerprint? Perhaps the public records of correctly manufactured devices should not be public after all. That is, a chip's fingerprint should not be publicly linkable to the metadata (proofs of correct manufacturing). Said differently, a verifier just needs to know that the chip it is interacting with has been manufactured correctly, and the verification process should not reveal information that could be used by a malicious chip to forge a fake identity.

We also need a proof that it loaded the expected software for execution ...
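The forgery concern raised above can be made concrete with a toy model (all names here are hypothetical illustration, not any real protocol): if the registry of correctly manufactured chips is public and "attestation" just means presenting a logged fingerprint, then any device that has read the registry can impersonate an honest chip. A minimal sketch:

```python
import os

# Public log written by the hypothetical "decentralized" manufacturing
# process: fingerprint -> proof of correct manufacturing.
public_log = {}

def manufacture():
    """Toy fab step: the chip gets a PUF-derived fingerprint, and the
    public log records it next to its manufacturing proof."""
    fingerprint = os.urandom(16).hex()
    public_log[fingerprint] = "proof-of-correct-manufacturing"
    return fingerprint

def attest(claimed_fingerprint):
    """Naive verifier: accept any fingerprint present in the public log."""
    return claimed_fingerprint in public_log

honest_chip = manufacture()
assert attest(honest_chip)  # honest chip passes

# The flaw: the log is public, so a malicious device can simply replay
# any logged fingerprint -- the verifier cannot tell the two apart.
stolen = next(iter(public_log))
assert attest(stolen)  # forged identity also accepted
```

This is exactly why the paragraph above concludes that verification must not reveal linkable, replayable identity material; unlinkable attestation schemes (e.g. group signatures, as in Intel's EPID) are one standard way out.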

@@ -550,7 +550,7 @@ From https://github.com/keystone-enclave/keystone?tab=readme-ov-file#status:


### Intel SGX's Root of Trust
- If we take Intel as an example, trusting the chip manufacturer means many things. Intel SGX's so-called root of trust rests on two secret keys (seal secret and provisionaing secret), and an attestation key, as shown in the figure below, from [Intel SGX Explained]. Note that this may have changed since the writing of the [Intel SGX Explained] paper, but at the time at least, the two secrets were said to be stored in e-fuses inside the processor's die. Moreover, the two secret keys, stored in e-fuses, were encrypted with a global wrapping logic key (GWK). The GWK is a 128-bit AES key that is hard-coded in the processor's circuitry, and serves to increase the cost of extracting the keys from an SGX-enabled processor. The Provisioning Secret was said to be generated at the key generation facility - burned into the processor's e-fuses and stored in Intel's Provisioning Service DB. The Seal Secret was said to be generated inside the processor chip, and claimed not to be known to Intel. Hence, trusting Intel meant to trust that they do not leak the attestation key, and the provisioning key as they have access to them. Trusting Intel also meant that the manufacturing process that generates and embeds the Seal Secret did not leak the secret key. Trusting Intel also meant that once a chip is made, they did not attempt to extract the Seal Key, which is the only key, out of three, which they did not know.
+ If we take Intel as an example, trusting the chip manufacturer means many things. Intel SGX's so-called root of trust rests on two secret keys (seal secret and provisioning secret), and an attestation key, as shown in the figure below, from [Intel SGX Explained]. Note that this may have changed since the writing of the [Intel SGX Explained] paper, but at the time at least, the two secrets were said to be stored in e-fuses inside the processor's die. Moreover, the two secret keys, stored in e-fuses, were encrypted with a global wrapping logic key (GWK). The GWK is a 128-bit AES key that is hard-coded in the processor's circuitry, and serves to increase the cost of extracting the keys from an SGX-enabled processor. The Provisioning Secret was said to be generated at the key generation facility, burned into the processor's e-fuses, and stored in Intel's Provisioning Service DB. The Seal Secret was said to be generated inside the processor chip, and claimed not to be known to Intel. Hence, trusting Intel meant trusting that they do not leak the attestation key and the provisioning key, as they have access to them. Trusting Intel also meant trusting that the manufacturing process that generates and embeds the Seal Secret did not leak it, and that once a chip is made, Intel did not attempt to extract the Seal Key, the only one of the three keys it did not know.

![image](https://hackmd.io/_uploads/rydXhPCTa.png)
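The key-storage relationship described above can be sketched as follows. This is a toy model only: it uses a SHA-256-based stream cipher as a stand-in for the AES-128 wrapping that the paper says real hardware uses, and all variable names are hypothetical. The point it illustrates is that reading the e-fuses alone is not enough; an attacker also needs the GWK from the die's circuitry, which is why the GWK raises the cost of extraction rather than preventing it.

```python
import hashlib
import os

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream derived from SHA-256; real SGX wraps fuse keys with
    # AES-128 under the GWK.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def wrap(gwk: bytes, secret: bytes) -> bytes:
    """Model of storing a fuse key encrypted under the global wrapping key."""
    return bytes(a ^ b for a, b in zip(secret, keystream(gwk, len(secret))))

unwrap = wrap  # XOR stream cipher: wrapping and unwrapping are the same op

gwk = os.urandom(16)                  # hard-coded in circuitry on real parts
seal_secret = os.urandom(16)          # generated inside the chip
provisioning_secret = os.urandom(16)  # generated at the key-gen facility

# The e-fuses hold only the wrapped forms of the two secrets.
efuses = {
    "seal": wrap(gwk, seal_secret),
    "provisioning": wrap(gwk, provisioning_secret),
}

# Reading the fuses alone does not yield the secrets ...
assert efuses["seal"] != seal_secret
# ... but fuses plus the GWK does.
assert unwrap(gwk, efuses["seal"]) == seal_secret
```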

@@ -694,4 +694,4 @@ You should also be able to make comments on this document.
[Traceable Secret Sharing: Strong Security and Efficient Constructions]: https://eprint.iacr.org/2024/405
[The battle for Ring Zero]: https://pluralistic.net/2022/01/30/ring-minus-one/#drm-political-economy
[Physical One-Way Functions]: https://cba.mit.edu/docs/theses/01.03.pappuphd.powf.pdf
[ideal functionality]: https://en.wikipedia.org/wiki/Universal_composability#Ideal_functionality