When data is loaded into memory for processing, the host system can, in principle, read it. This creates issues for:

- regulated data, such as medical or financial records,
- proprietary AI models that an infrastructure operator could extract,
- multi-party computations where no participant should see the others' inputs.
> "The most sensitive moment in the lifecycle of data is the moment it is used."
Super Protocol addresses this through hardware-enforced enclaves that isolate workloads from the surrounding operating system and infrastructure.
Enclaves rely on processor-level isolation. They decrypt data inside a secure boundary, run the computation, and re-encrypt the output before it leaves.
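This decrypt–compute–re-encrypt flow can be sketched in a few lines. The sketch below is illustrative only: it uses a one-time-pad XOR as a stand-in for a real authenticated cipher such as AES-GCM, and `enclave_run` is a hypothetical placeholder for code executing inside the trusted boundary.

```python
import secrets

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # One-time-pad XOR as a stand-in for a real AEAD cipher (e.g. AES-GCM).
    return bytes(k ^ p for k, p in zip(key, plaintext))

decrypt = encrypt  # XOR is its own inverse

def enclave_run(sealed_input: bytes, key: bytes) -> bytes:
    """Runs *inside* the trusted boundary: decrypt, compute, re-encrypt."""
    plaintext = decrypt(key, sealed_input)
    result = plaintext.upper()          # placeholder computation
    return encrypt(key, result)

# Host side: only ciphertext is ever visible here.
key = secrets.token_bytes(32)           # provisioned to the enclave out of band
message = b"patient record: anonymized"
sealed = encrypt(key, message)

sealed_output = enclave_run(sealed, key)
print(decrypt(key, sealed_output))      # b'PATIENT RECORD: ANONYMIZED'
```

The point of the pattern is that the host only ever handles `sealed` and `sealed_output`; the plaintext exists solely inside the enclave boundary.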
A typical workflow includes:

1. The data owner encrypts inputs before they leave their environment.
2. The encrypted inputs are delivered to an attested enclave.
3. The enclave decrypts the data inside its protected memory.
4. The computation runs entirely within the secure boundary.
5. Results are re-encrypted before they leave the enclave.
Attestation lets a client verify what is actually running before releasing any secrets; for example, a workload's attestation might be requested with a command such as `superctl attest workload`.
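The check behind such an attestation command can be illustrated with a toy measurement comparison. This is a sketch of the general idea only, not the actual report format: real attestation involves hardware-signed quotes, but at its core a verifier compares a hash of the loaded code against an expected value. The `workload` bytes and function names here are hypothetical.

```python
import hashlib
import hmac

def measure(workload_binary: bytes) -> str:
    # A measurement is a cryptographic hash of the code loaded into the enclave.
    return hashlib.sha256(workload_binary).hexdigest()

def verify_attestation(reported: str, expected: str) -> bool:
    # Constant-time comparison of the reported vs. expected measurement.
    return hmac.compare_digest(reported, expected)

workload = b"model-server v1.2"            # hypothetical workload image
expected_measurement = measure(workload)   # published by the workload author

# The enclave reports the measurement of what it actually loaded.
report = measure(workload)
print(verify_attestation(report, expected_measurement))  # True
print(verify_attestation(measure(b"tampered build"), expected_measurement))  # False
```

If the reported measurement does not match the expected one, the client refuses to send keys or data to that enclave.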
To illustrate the differences between conventional and enclave-backed execution, consider the following:
| Feature | Standard Cloud | Enclave Execution |
|---|---|---|
| Data visibility during use | Visible | Hidden |
| Operator access | Full | None |
| Execution integrity | Hard to prove | Hardware-attested |
| Model extraction risk | High | Minimal |
Many valuable AI scenarios require sharing insights without sharing raw data. Super Protocol enables this by allowing multiple parties to load their inputs into the same enclave while keeping their individual assets private.
An example scenario: two hospitals want to train a shared diagnostic model. Each uploads encrypted patient data into a common enclave, where the model is trained on the combined set. Neither hospital, and not the infrastructure operator, ever sees the other's raw records; only the trained model or agreed aggregate results leave the enclave.
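The multi-party pattern can be sketched with a toy aggregate. As before, this is illustrative only: XOR stands in for real authenticated encryption, each byte stands in for one private measurement, and the party names and function are hypothetical.

```python
import secrets

def xor(key: bytes, data: bytes) -> bytes:
    # One-time-pad XOR as a stand-in for real authenticated encryption.
    return bytes(k ^ b for k, b in zip(key, data))

def enclave_joint_average(sealed_inputs, keys):
    """Inside the enclave: decrypt every party's values, return only the mean."""
    values = []
    for sealed, key in zip(sealed_inputs, keys):
        values.extend(xor(key, sealed))   # each byte is one private measurement
    return sum(values) / len(values)      # only the aggregate leaves the enclave

# Each party encrypts its readings; neither can read the other's data.
key_a, key_b = secrets.token_bytes(4), secrets.token_bytes(4)
hospital_a = xor(key_a, bytes([70, 72, 68, 75]))   # private readings
hospital_b = xor(key_b, bytes([80, 82, 78, 85]))

print(enclave_joint_average([hospital_a, hospital_b], [key_a, key_b]))  # 76.25
```

Each party holds only its own key, so outside the enclave the other party's inputs remain opaque ciphertext; the only value that crosses the boundary is the agreed aggregate.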
Super Protocol provides a compute environment where confidentiality, integrity, and verifiability are built into the execution layer. Data remains protected, workloads can be independently verified, and organizations can collaborate without exposing internal assets. As AI becomes more regulated and more valuable, these guarantees form the foundation for secure, large-scale deployment.