WeTEE Grant #2065
Conversation
CLA Assistant Lite bot: All contributors have signed the CLA ✍️ ✅
I have read and hereby sign the Contributor License Agreement.
Thanks for the application. I have one quick initial question: How is your solution different from existing solutions like https://github.com/Acurast or https://github.com/integritee-network? Could you integrate this also into the application?
Thank you very much. I studied Acurast's and Integritee's solutions very carefully. I feel our solution is different in a few ways:
Thanks for the quick reply. I mostly meant whether you could reuse some of their code for your deliverables, or how your project differs on a technical level. In any case, could you integrate a comparison into the application?
@Noc2 Thank you for pointing that out. We have done some further research on these two projects and have come to the following conclusions: According to the description of the Processors section in the official Acurast documentation:
Acurast uses the processors of Android devices at the hardware layer and provides the Acurast Secure Hardware Runtime (ASHR). Currently, WeTEE adopts the TEEs of Intel SGX/AMD SEV, which target x86-architecture devices, and WeTEE's application runtime is developed in-house. According to the description of the Application Layer section in the official Acurast documentation:
According to the description in this part of the Acurast documentation and the use cases provided in the Acurast Console, after a preliminary trial we found that Acurast primarily provides JavaScript Functions for acquiring Web2 interface data and synchronizing cross-chain data. The computation demand of such JavaScript Functions is typically measured in seconds, and the operational load on the physical devices is relatively low, so Acurast's overall computational model is closer to the AWS Lambda service. This indirectly explains why Acurast can choose mobile SoCs as its computing units. WeTEE's use of x86-architecture devices means that, once decentralized privacy computing is achieved technologically, the types of applications it serves are more "universal" than those of Acurast. This is also why WeTEE supports C, Python, Go, Rust, JavaScript, and any program that Gramine supports.
From a technical reuse perspective, the Zero Trust architecture is a very good security solution, and WeTEE will also attempt to integrate a zero-trust approach with its existing solutions in the future. However, since Acurast's business attributes differ greatly from those of WeTEE, we are currently not considering reusing this project's code.
Like Integritee, WeTEE has adopted a TEE-based solution. However, WeTEE is not only a highly scalable, privacy-enabling network in the Polkadot ecosystem; it has also built a decentralized application deployment infrastructure on top of decentralized servers. According to the information provided by the Integritee official documents Sidechain SDK and Custom Business Logic / STF, and the integritee-worker repository code, application developers must use Integritee's SDK when building applications on Integritee. The business code must also be written in Rust and integrated with Integritee's sidechain. During the application deployment phase, Integritee's TEE environment comes from its subsidiary company, Securitee, whose infrastructure is hosted in the open, scalable, and reliable cloud infrastructure of OVHcloud, Europe's leading cloud provider.
WeTEE adopts a different technical route and strategy for privacy computing. Application developers can use their preferred programming languages without extensive code modification. WeTEE's TEE environment is provided by miners, and application deployment requests are matched with TEE computing nodes by an algorithm. While WeTEE's worker is similar to Integritee's in terms of business logic, WeTEE's worker is developed in Golang. Reusing code from Integritee's worker would therefore introduce unnecessary business code modules and could cause some degree of project delay.
@BurnWW thanks for the work you put into this so far, this looks very interesting already. I have a few questions as well:
- Can you talk a bit more about how your governance model will work? For example, how will you incentivise people to vote?
(..) allowing more people to participate in the chain governance and thus promoting better community development
- When moving a web2 project to your platform, what adjustments are necessary, if any?
- Can web2 users deploy their apps on your K3/K8 systems or are you using K3/K8 only to run the platform in the background, thus requiring web2 users to deploy their apps differently?
- Are there any other technical limitations to be expected when compared to using web2 infrastructure, e.g. performance?
- It'd be nice if you could include one or two demo apps in the milestones that can be directly deployed on your infrastructure. Doesn't have to be anything sophisticated, a hello-world type app would do, just as a proof-of-concept that and how web2 integrations with your platform work.
- How does your solution compare to ankr?
- Expanding on the future roadmap beyond the grant would give more confidence in the long-term vision.
@takahser Thanks for your response.
Q1: Can you talk a bit more about how your governance model will work? For example, how will you incentivise people to vote?
WeTEE aims to provide a decentralized application deployment platform that integrates a trusted execution environment for decentralized applications, allowing the deployment environment of applications to gradually break free from the constraints of centralized data centers. As a startup product, there will inevitably be a huge gap compared with today's mature Web2 public clouds, which also means a huge amount of development work and demand for developers. Looking back at the growth of the public cloud, each vendor's development roadmap was centered on the needs that applications in the market had for the cloud at that time; but the demand side was mostly the public cloud's large customers, and development priorities were tilted towards them. This means that problems in the public cloud with little impact on large customers may not be fixed or improved promptly, and features that are in high demand among ordinary public cloud users but not aligned with the vendors' interests may receive no development support at all. The WeTEE DAO organization hopes to change this situation, with the following main strategies:
The WeTEE DAO hopes to evolve naturally and form an open Domain Driven Design pattern under the DAO model, where each domain can collaborate and develop relatively independently, promoting the progress of WeTEE together.
The expected result after a period of natural evolution is the birth of two groups:
WeTEE is organized and operates as a DAO, which has greater advantages in financial and development transparency, allowing developers and service providers to establish trust with the project more quickly. In terms of implementation, WeTEE has created sudo, gov, guild, project, asset, treasury, and other modules based on Substrate for on-chain governance. We have also created the DTIM tool, based on the Matrix protocol, for work collaboration and organizational governance. In order to make the community more active, WeTEE DAO has adopted the following strategies:
In order to attract more developers to participate in the development of WeTEE, WeTEE DAO has adopted the following strategies:
In order to attract more developers to participate in the voting of WeTEE, WeTEE DAO has adopted the following strategies:
Q2: When moving a web2 project to your platform, what adjustments are necessary, if any?
We currently support two encryption schemes, with the following deployment scenarios:
Q3: Can web2 users deploy their apps on your K3/K8 systems or are you using K3/K8 only to run the platform in the background, thus requiring web2 users to deploy their apps differently?
Web2 users can directly deploy programs on the K3/K8 clusters connected to WeTEE; they can use WeTEE in the same way as they use k8s/k3s. In the future, we will also provide one-click deployment tools, where users only need to execute one command or click one button to complete a deployment, as well as decentralized CI/CD services that let users modify their code and then deploy it with a single click.
Q4: Are there any other technical limitations to be expected when compared to using web2 infrastructure, e.g. performance?
Different confidential schemes will have some limitations.
Q5: It'd be nice if you could include one or two demo apps in the milestones that can be directly deployed on your infrastructure. Doesn't have to be anything sophisticated, a hello-world type app would do, just as a proof-of-concept that and how web2 integrations with your platform work.
In the Milestone 2 delivery, "01 App Example" will provide a program example that can be run directly on WeTEE.
Q6: How does your solution compare to ankr?
According to the official documentation provided by Ankr, as stated in Ankr Docs:
Its main functions are as follows:
Judging from Ankr's main structure, Ankr is a hub that integrates and encapsulates various blockchain RPC/REST APIs. Application developers can quickly develop applications by calling the APIs provided by Ankr, and business data that needs to be persisted can be stored on IPFS or STORJ as the situation requires. This makes Ankr closer to a SaaS-like platform that provides integrated blockchain-related services and acts as a gateway for blockchain APIs and storage services. However, the official documentation of Ankr and STORJ does not explain the distribution of their servers, nor does it clearly explain how their physical servers are managed, so it is impossible to infer how their backends process customer business data. Based on the above two points, the main differences between WeTEE and Ankr are as follows:
From a business scenario perspective, Ankr is more focused on areas such as GameFi/DeFi/parachain construction, and its users are currently unable to freely deploy applications. Moreover, from the perspective of project operation, Ankr adopts a corporate operation model, which is the same as the public cloud company model mentioned in the first question, and thus carries the same potential problems and risks.
Q7: Expanding on the future roadmap beyond the grant would give more confidence in the long-term vision.
When you express interest in the future development of WeTEE, our team is deeply encouraged, as your support leads us to believe that our code and products will achieve meaningful and valuable results. This grant will allow us to focus on developing core features, laying a reliable business foundation for WeTEE. When reviewing the outcomes of our submitted grant, you will be able to easily verify WeTEE's core business using the documentation and Docker examples provided by WeTEE. The R&D of WeTEE can be divided into three major stages, each of which contains several smaller R&D stages or iterative cycles:
During the current core development stage, WeTEE will concentrate all research and development resources on WeTEE's own development, including the on-chain worker, app, and task modules, as well as the K3S/K8S operators for physical servers, the app deployment model, the task deployment model, and the worker attestation model. Once this part is completed, WeTEE will spend a small amount of time reorganizing the code and conducting a retrospective on the completed work, after which development will move into the next core R&D phase. At that stage, the R&D content mainly includes the WeTEE test/main networks, the WeTEE Dapp SDK, access to the Polkadot mainnet using Coretime, the WeTEE monitoring system, and the WeTEE web user interface. After completing this stage, WeTEE will invite seed users to conduct usability and user acceptance testing, and operational work will be carried out in accordance with the requirements of the WeTEE DAO. Following initial user feedback, targeted fixes and optimizations will be implemented to address any issues within WeTEE. WeTEE will then enter the third core R&D phase, which is also the last one in the current planning; in this phase, WeTEE will dynamically allocate R&D resources to the 'blockchain-related' or 'hardware-related' fields based on user and market feedback.
@BurnWW thanks for your very detailed and helpful answers. I have a few final questions, but I'm already going to mark your proposal as ready for review.
While the Gramine doc mentions "1-10% overhead", I didn't find such information for Ego. Did you get this number (10%) from their resources or did you evaluate it yourself, e.g. by testing it?
Does that mean that anybody in the network can theoretically read their data?
Would it be possible to maintain a "soft-fork mentality" that doesn't require the user to do any changes on their containers (i.e. no hard fork of the relevant code, so to speak), while still being protected? While I'm not sure if it's technically possible, this would be the ideal scenario because it'd significantly lower the barriers of entry for web2 users. I imagine that converting their containers or using golang with ego would discourage a lot of web2 teams. My personal experience shows that these kinds of conversion tools often are not as simple as advertised (not in the containerisation space in particular, but in general).
@BurnWW Finally, it'd be good to have a bit more information about the feature set of each deliverable included.
@takahser Thanks for your response.
Q1: While the Gramine doc mentions "1-10% overhead", I didn't find such information for Ego. Did you get this number (10%) from their resources or did you evaluate it yourself, e.g. by testing it?
During the WeTEE technology selection phase, we carried out evaluations across multiple dimensions such as hardware types, operating systems, virtualization, encryption methods, performance overhead, software ecosystem, and user learning costs. Ultimately, we decided to adopt libOS + K3S/K8S + Intel SGX (AMD SEV) as the architectural direction for the WeTEE technology stack. The libOS layer, i.e. Gramine and Ego as mentioned, strikes a balance between ease of use for application developers and the performance requirements of WeTEE.
We selected several different yet representative applications to run on the WeTEE technical stack and observed their performance overhead. Ultimately, we arrived at conclusions similar to those in the official Gramine documentation, namely an overhead of "1-10%". According to our analysis, the performance overhead can be roughly divided into two categories: the overhead introduced by the libOS and the overhead introduced by cryptographic computations. The overhead is also influenced by factors such as CPU model, operating system kernel version, system component versions, and current system load, and different types of applications exhibit significant differences as well. Gramine or Ego, acting as a libOS, presents applications with an interface consistent with the kernel's by intercepting and emulating system calls (syscalls), so applications can run on the libOS without being aware of it. Leveraging the interface abstraction layer provided by the libOS, similar to glibc or musl, applications can directly perform the required syscall operations in user space, resulting in minimal performance overhead. The primary impact on performance comes from the overhead of Intel SGX itself and the types of system calls required at runtime. Due to differences between applications, the performance loss of single-process programs in SGX mode generally ranges from 5% to 10%. This aligns with the results of our own testing after migrating applications to Gramine and Ego. To test the performance overhead, we used a simple program implemented in Golang that calculates prime numbers up to 100,000:

package main
import (
    "fmt"
    "time"
)

func main() {
    firstDate := time.Now()
    defer func() {
        fmt.Println("Time-consuming:", time.Since(firstDate))
    }()

    // Task queue channel
    intChan := make(chan int, 1000)
    // Output channel; all calculation results are placed here.
    primeChan := make(chan int, 2000)
    // Exit-marker channel
    exitChan := make(chan int, 4)

    // Distribute tasks
    go putNum(intChan)

    // Start four goroutines to calculate prime numbers and put them into the result channel.
    for i := 0; i < 4; i++ {
        go cal(intChan, primeChan, exitChan)
    }

    // Start a goroutine that keeps reading end flags from exitChan; once it has received 4, it closes primeChan.
    go closeWork(primeChan, exitChan)

    // The main goroutine ranges over the result channel.
    for i := range primeChan {
        fmt.Println(i)
    }
    fmt.Println("Traversal ends")
}

/*
putNum puts all the numbers that need to be checked into the intChan channel.
Note: after everything has been put in, close intChan so that consumers ranging over it do not block forever.
*/
func putNum(intChan chan int) {
    for i := 1; i <= 100000; i++ {
        intChan <- i
    }
    close(intChan)
}

/*
closeWork determines whether all worker goroutines have ended. If they have, it closes primeChan to notify the main goroutine.
*/
func closeWork(primeChan chan int, exitChan chan int) {
    for i := 0; i < 4; i++ {
        <-exitChan
    }
    close(primeChan)
    close(exitChan)
}

/*
cal ranges over intChan and checks whether each value is prime; the range loop runs until the channel is closed.
When the loop ends, it puts a marker into exitChan to indicate that the current goroutine has finished.
*/
func cal(intChan chan int, primeChan chan int, exitChan chan int) {
    for v := range intChan {
        flag := true
        for i := 2; i < v; i++ {
            if v%i == 0 {
                flag = false
                break
            }
        }
        if flag {
            primeChan <- v
        }
    }
    exitChan <- 0
}

Compile and run using Golang:
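A typical invocation looks like the following (the file name main.go and binary name prime are illustrative, not necessarily those from our test run):

go build -o prime main.go
./prime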
Compile and run using Ego with the same source code:
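Following EGo's documented build/sign/run workflow (file and binary names again illustrative):

ego-go build -o prime main.go
ego sign prime
ego run prime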
Our test hardware is as follows:
The test results are as follows:
Q2: Does that mean that anybody in the network can theoretically read their data?
If the application itself exposes unprotected data interfaces (APIs) or listening ports, then anyone on the network can retrieve data through those interfaces or ports. However, if we understand it correctly, your question mainly concerns the following scenario:
WeTEE primarily focuses on container security by implementing defense measures in two aspects: remote access and physical access.
Security reinforcement settings are relatively complex, and due to space limitations we cannot elaborate in detail here. In summary, data protection for applications running in non-TEE environments relies on the data protection policies inherent to K3S/K8S, while applications in TEE environments benefit from strict data protection measures.
Q3: Would it be possible to maintain a "soft-fork mentality" that doesn't require the user to do any changes on their containers (i.e. no hard fork of the relevant code, so to speak), while still being protected? While I'm not sure if it's technically possible, this would be the ideal scenario because it'd significantly lower the barriers of entry for web2 users. I imagine that converting their containers or using golang with ego would discourage a lot of web2 teams. My personal experience shows that these kinds of conversion tools often are not as simple as advertised (not in the containerisation space in particular, but in general).
Currently, a large number of Intel SGX-enabled hardware assets need to be supported. In this regard, Gramine stands out as a promising libOS: it provides an abstraction layer over glibc/musl without intrusive modifications to programs, allowing developers to smoothly migrate existing applications to the Intel SGX hardware platform. Meanwhile, we will continue to optimize WeTEE's functionality and keep reducing the difficulty of application migration for developers.
• In the first stage, WeTEE provides a Gramine base image, built from a Dockerfile like the following:

ARG UBUNTU_IMAGE=ubuntu:20.04
FROM ${UBUNTU_IMAGE}
# ARGs cannot be grouped since each FROM in a Dockerfile initiates a new build
# stage, resulting in the loss of ARG values from earlier stages.
ARG UBUNTU_CODENAME=focal
RUN apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get install -y curl gnupg2 binutils
RUN curl -fsSLo /usr/share/keyrings/gramine-keyring.gpg https://packages.gramineproject.io/gramine-keyring.gpg && \
echo 'deb [arch=amd64 signed-by=/usr/share/keyrings/gramine-keyring.gpg] https://packages.gramineproject.io/ '${UBUNTU_CODENAME}' main' > /etc/apt/sources.list.d/gramine.list
RUN curl -fsSLo /usr/share/keyrings/intel-sgx-deb.key https://download.01.org/intel-sgx/sgx_repo/ubuntu/intel-sgx-deb.key && \
echo 'deb [arch=amd64 signed-by=/usr/share/keyrings/intel-sgx-deb.key] https://download.01.org/intel-sgx/sgx_repo/ubuntu '${UBUNTU_CODENAME}' main' > /etc/apt/sources.list.d/intel-sgx.list
RUN apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get install -y gramine \
sgx-aesm-service \
libsgx-aesm-launch-plugin \
libsgx-aesm-epid-plugin \
libsgx-aesm-quote-ex-plugin \
libsgx-aesm-ecdsa-plugin \
libsgx-dcap-quote-verify \
psmisc && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
RUN mkdir -p /var/run/aesmd/
COPY restart_aesm.sh /restart_aesm.sh
ENTRYPOINT ["/bin/sh", "-c"]
CMD ["/restart_aesm.sh ; exec /bin/bash"] Developers can directly use the base image to package a specialized Dockerfile for Gramine. An example is shown below. FROM wetee/ubuntu:22.04
# Developer-defined build steps
.....

# Run the application under Gramine-SGX
CMD ["gramine-sgx", "redis-server"]

Developers can complete the construction of confidential containers without needing to focus extensively on the details of Gramine. Additionally, developers have the option to directly use Gramine Shielded Containers (GSC) to generate the image; a brief command sketch is included at the end of this reply.

• In the second phase, once WeTEE provides support for AMD SEV and Intel TDX, users will no longer need to modify their code, prepare Dockerfiles, or worry about compatibility issues. SEV and TDX provide a confidential computing environment for programs in the form of confidential virtual machines.
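As a note on the GSC option mentioned above, a typical GSC flow, run from a checkout of the gramineproject/gsc repository, looks roughly like the following; the image name, manifest file, signing key, and device flag are illustrative, and the exact options should be taken from the GSC documentation:

# Graminize an existing Docker image and sign the resulting enclave
./gsc build redis-app redis-app.manifest
./gsc sign-image redis-app enclave-key.pem
# Run the graminized image on an SGX-capable host
docker run --device=/dev/sgx_enclave gsc-redis-app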
@takahser Thanks for your response. We have updated the descriptions of all Milestones.
@BurnWW thanks for another very helpful reply. I can see that you have a good plan here, and you seem to have thought of everything, and since you made the deliverables clearer, I'm happy to approve it. The decentralized image building services you mention for the next phase sound promising as well - if it works, that could help drive adoption of this kind of platform significantly.
BTW, another question that popped up in my mind is, whether current K8 Tooling would be compatible with your platform, e.g. ArgoCD, OpenShift, etc.
Thanks for the detailed reply. I'm happy to approve it as well.
@takahser Thanks for your response. In the grant application, the following business was mentioned:
To achieve this business goal, the WeTEE team has researched the common CI/CD software in the current market, such as the tools listed in the List of Continuous Integration Services. These are all excellent works of software engineering, but in order to meet WeTEE's demand for ease of use for application developers, we are still looking for an appropriate open-source CI/CD solution, striving to strike a balance between being lightweight and being easy to use. Currently, WeTEE's requirements for CI/CD are as follows:
When you mentioned ArgoCD, we decided to put ArgoCD at the top of our CI/CD candidate list, because we currently know that ArgoCD runs well in TEE environments. The core R&D of WeTEE's CI/CD is expected to begin in the 'user experience optimization stage':
The CI/CD functionality is expected to be delivered in sync with the WeTEE mainnet, the WeTEE Dapp SDK, the WeTEE monitoring system, and the WeTEE web user interface. Sorry for clicking the wrong close button just now; it has been corrected.
Thanks for the thorough answers @BurnWW I'm happy to approve as well.
Congratulations and welcome to the Web3 Foundation Grants Program! Please refer to our Milestone Delivery repository for instructions on how to submit milestones and invoices, our FAQ for frequently asked questions and the support section of our README for more ways to find answers to your questions.
Project Abstract
WeTEE is a decentralized web2 application deployment platform integrated with Trusted Execution Environment (TEE).
WeTEE consists of blockchain networks and multiple confidential computing clusters, collectively providing an efficient decentralised solution for confidential computing.
Thread-level confidential container service providers need to provide hardware devices that support Intel SGX, and they are required to use IPv4 / IPv6 to access the Internet.
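For service providers, a quick sanity check that a machine meets the SGX requirement (a minimal sketch, assuming a Linux kernel 5.11+ with the in-tree SGX driver enabled; WeTEE's worker may additionally perform its own attestation-based checks) looks like:

# The sgx flag appears in /proc/cpuinfo when the CPU and BIOS expose SGX
grep -o sgx /proc/cpuinfo | head -n 1
# The in-kernel SGX driver creates these device nodes when SGX is usable
ls /dev/sgx_enclave /dev/sgx_provision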