
WeTEE Grant #2065

Merged
7 commits merged into w3f:master on Nov 28, 2023

Conversation

@BurnWW (Contributor) commented Oct 22, 2023

Project Abstract

WeTEE is a decentralized web2 application deployment platform integrated with Trusted Execution Environment (TEE).
WeTEE consists of blockchain networks and multiple confidential computing clusters, which together provide an efficient decentralized solution for confidential computing.
Providers of thread-level confidential container services must supply hardware that supports Intel SGX and must be able to access the Internet over IPv4/IPv6.

  1. At this stage, Substrate is mainly used as the blockchain framework to implement application deployment and billing-related smart contract functions.
  2. After the application is deployed, its workload will be safeguarded by hardware protection to prevent data leakage. Even confidential computing providers will not be able to access the data.
  3. It integrates with the existing cloud-native toolchain for developers; typically no code modifications are required, and in special cases only a small amount of code needs to change.
  4. Developers can view information such as the resource usage and health status of applications in the Web interface provided by WeTEE.

Grant level

  • Level 1: Up to $10,000, 2 approvals
  • Level 2: Up to $30,000, 3 approvals
  • Level 3: Unlimited, 5 approvals (for >$100k: Web3 Foundation Council approval)

Application Checklist

  • The application template has been copied and aptly renamed (project_name.md).
  • I have read the application guidelines.
  • Payment details have been provided (bank details via email or Polkadot (USDC & USDT) or BTC address in the application).
  • The software delivered for this grant will be released under an open-source license specified in the application.
  • The initial PR contains only one commit (squash and force-push if needed).
  • The grant will only be announced once the first milestone has been accepted (see the announcement guidelines).
  • I prefer the discussion of this application to take place in a private Element/Matrix channel. My username is: @yueyefengxu:matrix.org (change the homeserver if you use a different one)

@github-actions bot commented Oct 22, 2023

CLA Assistant Lite bot: All contributors have signed the CLA ✍️ ✅

@BurnWW (Contributor, Author) commented Oct 22, 2023

I have read and hereby sign the Contributor License Agreement.

@Noc2 (Collaborator) left a comment

Thanks for the application. I have one quick initial question: how is your solution different from existing solutions like https://github.com/Acurast or https://github.com/integritee-network? Could you also integrate this into the application?

@BurnWW (Contributor, Author) commented Oct 22, 2023

Thank you very much. I studied Acurast's and Integritee's solutions very carefully. I feel our solution differs in a few ways:

  1. Our solution is more like an Amazon-style cloud. We hope users can deploy their original web2 programs to WeTEE without modifying code, changing programming languages, or altering their habits. Users can run programs such as MySQL, a Golang web server, Vue/React web apps, etc. This is an aspect that existing solutions have not covered yet.
  2. We are also focused on business (B-end) users. We provide governance mechanisms similar to OpenGov for each organization, enabling decentralized organizations to better manage their web2 programs.

@BurnWW BurnWW requested a review from Noc2 October 22, 2023 22:26
@Noc2 (Collaborator) left a comment

Thanks for the quick reply. I mostly meant whether you could reuse some of their code for your deliverables, or how your project differs on a technical level. In any case, could you integrate a comparison into the application?

@BurnWW (Contributor, Author) commented Oct 24, 2023

@Noc2 Thank you for pointing that out. We have done some further research on these two projects and have come to the following conclusions:

According to the description of the Processors section in the official Acurast documentation:

Acurast Processors offer their computational capacity to Consumers. In return for the confidential execution and verifiable fulfillment of jobs, processors are rewarded by the Consumers. At this stage, processors utilize dedicated Android smartphones as the off-chain infrastructure behind the Acurast Secure Hardware Runtime (ASHR). Become a processor within minutes.

At the hardware layer, Acurast uses the processors of Android devices and provides the Acurast Secure Hardware Runtime (ASHR). WeTEE, in contrast, currently adopts the TEEs of Intel SGX/AMD SEV, which target x86-architecture devices, and WeTEE's application runtime is developed in-house.

According to the description of the Application Layer section in the official Acurast documentation:

Acurast's Zero Trust architecture transforms the way applications are designed and deployed.

Based on this part of the Acurast documentation and the use cases provided in the Acurast Console, our preliminary testing found that Acurast primarily provides JavaScript Functions for fetching Web2 interface data and synchronizing cross-chain data. The computation demands of such JavaScript Functions are typically measured in seconds, and the load they place on physical devices is relatively low, so Acurast's overall computational model is closer to the AWS Lambda service. This indirectly explains why Acurast can choose a mobile SoC as its computing unit.

WeTEE, by contrast, uses x86-architecture devices; having achieved decentralized privacy computing technologically, the types of applications it serves are more "universal" than those of Acurast. This is also why WeTEE supports C, Python, Go, Rust, JavaScript, and any program code that Gramine supports.

From a technical reuse perspective, Zero Trust architecture is a very good security solution. WeTEE will also attempt to integrate the zero-trust solution with existing solutions in the future. However, since Acurast's business attributes greatly differ from those of WeTEE, we are currently not considering reusing this project's code.

Like Integritee, WeTEE has adopted a solution based on TEE. However, WeTEE is not only a highly scalable, privacy-enabling network in the Polkadot ecosystem, but it has also built a decentralized application deployment infrastructure based on decentralized servers.

According to the information provided by the official Integritee documents (Sidechain SDK, Custom Business Logic / STF) and the integritee-worker repository code, application developers must use its SDK when building applications for Integritee. The business code must also be written in Rust and integrate with Integritee's sidechain. During the application deployment phase, Integritee's TEE environment comes from its subsidiary company, Securitee. Securitee's infrastructure is hosted in the open, convertible, scalable, and reliable cloud infrastructure of OVHcloud, Europe's leading cloud provider.

WeTEE adopts different technical routes and strategies for privacy computing. Application developers can use their preferred programming languages for application development without the need for excessive code modification. WeTEE's TEE environment is provided by miners and matches application deployment requests with TEE computing nodes based on an algorithm.

While WeTEE and Integritee's worker are similar in terms of business logic, the worker of WeTEE is developed in Golang. Reusing the code related to Integritee's worker would introduce unnecessary business code modules and may cause some degree of project delay.


| Project | Technical solutions | Worker programming language | Execution hardware | Supported deployment languages |
| --- | --- | --- | --- | --- |
| WeTEE | Kubernetes and Docker as computing-cluster solutions; Gramine or Ego as confidential-container solutions; a Worker with TEE (run as a K8s Operator) that provides SGX remote attestation, key management, and program confidentiality injection, and uploads the SGX remote attestation as part of the proof of work to blockchain network consensus | Golang | x86 server with Intel SGX | C, Python, Go, Rust, JavaScript, and all program code supported by Gramine |
| Integritee | Worker with TEE provides Sidechain, Off-Chain Worker, and Oracle as confidential solutions | Rust | x86 server with Intel SGX | Rust (fork integritee-worker and add your own Rust code) |
| Acurast | Acurast Zero-Trust Execution Layer providing end-to-end zero-trust job execution as the confidential solution | Rust | Mobile device | JavaScript |

@BurnWW BurnWW requested a review from Noc2 October 25, 2023 01:46
@takahser takahser self-assigned this Oct 25, 2023
@takahser takahser self-requested a review October 30, 2023 15:13
@BurnWW changed the title from WeTEE Network to WeTEE Network Grant on Nov 1, 2023
@takahser (Collaborator) left a comment

@BurnWW thanks for the work you put into this so far, this looks very interesting already. I have a few questions as well:

  • Can you talk a bit more about how your governance model will work? For example, how will you incentivise people to vote?

    (..) allowing more people to participate in the chain governance and thus promoting better community development

  • When moving a web2 project to your platform, what adjustments are necessary, if any?
  • Can web2 users deploy their apps on your K3/K8 systems or are you using K3/K8 only to run the platform in the background, thus requiring web2 users to deploy their apps differently?
  • Are there any other technical limitations to be expected when compared to using web2 infrastructure, e.g. performance?
  • It'd be nice if you could include one or two demo apps in the milestones that can be directly deployed on your infrastructure. It doesn't have to be anything sophisticated, a hello-world type app would do, just as a proof-of-concept of whether and how web2 integrations with your platform work.
  • How does your solution compare to ankr?
  • Expanding on the future roadmap beyond the grant would give more confidence in the long-term vision.

@takahser takahser added the changes requested The team needs to clarify a few things first. label Nov 9, 2023
@BurnWW (Contributor, Author) commented Nov 14, 2023

@takahser Thanks for your response

Q1: Can you talk a bit more about how your governance model will work? For example, how will you incentivise people to vote?

WeTEE aims to provide a decentralized application deployment platform that integrates a trusted execution environment for decentralized applications, allowing the deployment environment of applications to gradually break free from the constraints of centralized data centers.

As a startup product, there will inevitably be a large gap between WeTEE and today's mature Web2 public clouds, which also means a large amount of development work and a strong demand for developers. Looking back at how the public cloud grew, each vendor's roadmap was driven by the needs that the applications in the market at the time had for the cloud; however, the demand side mostly consisted of the vendors' large customers, so development priorities were also tilted towards large customers.

This also means that problematic public-cloud features that have little impact on large customers are not fixed or improved promptly. Features of this type, which are in high demand among ordinary public-cloud users but not in the vendors' interest, may never receive development support at all.

The WeTEE DAO hopes to change this situation, with the following main strategies:

  • Make WeTEE's R&D more community-driven and transparent
  • The Core Team is mainly responsible for the development of the WeTEE base, but will open the authority to modify functions to the community for voting.

The WeTEE DAO hopes to evolve naturally and form an open Domain Driven Design pattern under the DAO model, where each domain can collaborate and develop relatively independently, promoting the progress of WeTEE together.
The overall style of the WeTEE DAO will practice the K.I.S.S. principle, so the current organizational structure of the WeTEE DAO is as follows:

  • Core Group: WeTEE team, composed of R&D, capital, and marketing personnel
  • Advisory Group: external assistance team, composed of advisors, sponsors, and partners
  • R&D Group: composed of individuals who submit product suggestions, code, and designs to WeTEE.

The expected result after a period of natural evolution is the birth of two groups:

  • Domain: naturally formed research domains in WeTEE
  • Domain core: naturally formed opinion leaders in the domain, consisting of one person or multiple people.

WeTEE is organized and operated as a DAO, which offers greater financial and development transparency and allows developers and service providers to establish trust with the project more quickly.

In terms of details, WeTEE has created sudo, gov, guild, project, asset, treasury, and other modules based on Substrate for on-chain governance. We have also created the DTIM tool based on the Matrix protocol for work collaboration and organizational governance.

In order to make the community more active, WeTEE DAO has adopted the following strategies:

  1. Attract more like-minded people through the Social DAO approach, increasing community activity.
  2. Create on-chain governance tools with Substrate, as well as DTIM, an instant-messaging, work-collaboration, and organizational-governance tool designed specifically for Web3, simplifying the voting process and lowering the barrier to participation.
  3. Each group and main project adopts an internal governance approach, allowing participants to focus on their own areas.

In order to attract more developers to participate in the development of WeTEE, WeTEE DAO has adopted the following strategies:

  1. Individuals who contribute new features or bug fixes to WeTEE will receive rewards.
  2. Individuals engaging in white hat activities and submitting them to WeTEE will receive rewards.
  3. Individuals involved in adapting applications for WeTEE will receive rewards.
  4. Individuals who become the core of their domain will be rewarded.
  5. If an application is integrated or used by other WeTEE applications, the original developer will be rewarded.

In order to attract more developers to participate in the voting of WeTEE, WeTEE DAO has adopted the following strategies:

  1. The workflow, on-chain parameters, and addition or deletion of functions of WeTEE can all be adjusted through voting.
  2. Regular proposal discussion meetings or conferences will be held to allow members to express their opinions and make suggestions.
  3. Each vote must be preceded by a reasonable discussion.
  4. Voting itself will set a certain participation threshold, but the threshold will not be high.
  5. For a passed vote, the proposer of the vote will receive rewards such as identity and credibility value.
  6. For a passed vote, voters who voted in favor will receive rewards.
  7. WeTEE will try to judge the actual value of a vote based on the final outcome its content produces, and feed adjustments back into voters' "Fair Points", which affect their rewards.
  8. "Fair Points" will influence the growth of the voter's identity. Upon reaching a certain accumulation threshold, they can apply to join the Core Team.

Q2: When moving a web2 project to your platform, what adjustments are necessary, if any?

We currently support two encryption schemes, with the following deployment scenarios:

  • If the user does not use any tool for container migration and deploys directly to the WeTEE platform, the program will run in normal mode, but the program and its data will not have the protection of confidential computing.
  • If developers use Golang, they need to follow the EGo documentation and use ego-go to compile the program, package it into a Docker image, and then run it on the platform. Almost no code needs to be changed.
  • If developers use other programming languages, they can use Gramine's Docker image conversion tool, Gramine Shielded Containers (GSC), to automatically convert their programs into confidential containers. Developers do not need to perform any additional operations, and most programs do not require code modification. A sketch of this flow is shown below.
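For illustration only, here is a minimal sketch of that GSC flow, assuming an existing Docker image named my-app, an app-specific manifest, and an enclave signing key (all names below are placeholders, and the exact SGX device flags depend on the host driver):

### Fetch the GSC tool
$ git clone https://github.com/gramineproject/gsc.git && cd gsc

### Graminize the existing image using an application-specific manifest
$ ./gsc build my-app my-app.manifest

### Sign the graminized image with an enclave signing key
$ ./gsc sign-image my-app enclave-key.pem

### Run the resulting confidential container on an SGX-enabled host
$ docker run --device=/dev/sgx_enclave gsc-my-app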

Q3: Can web2 users deploy their apps on your K3/K8 systems or are you using K3/K8 only to run the platform in the background, thus requiring web2 users to deploy their apps differently?

Web2 users can directly deploy programs on the K3/K8 cluster connected to WeTEE, and they can use WeTEE in the same way they use k8s/k3s (see the sketch below). In the future, we will also provide one-click deployment tools, where users only need to execute one command or click one button to complete a deployment. We will also provide decentralized CI/CD services, allowing users to modify their code and then deploy it with just one click.
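To make the "same way as k8s/k3s" point concrete, a rough sketch of a deployment with plain kubectl follows; the deployment name, image reference, and port are hypothetical, and WeTEE's own tooling may wrap these steps:

### Deploy a prebuilt (confidential) image to the WeTEE-connected cluster
$ kubectl create deployment my-app --image=registry.example.com/my-app:latest

### Expose the application on the port it listens on
$ kubectl expose deployment my-app --port=8080

### Check the rollout and the running pods
$ kubectl rollout status deployment/my-app
$ kubectl get pods -l app=my-app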

Q4: Are there any other technical limitations to be expected when compared to using web2 infrastructure, e.g. performance?

Different confidential schemes will have some limitations.

  • EGo programs consume about 10% more machine performance than normal Golang programs, and there are also some limitations.
  • Gramine programs likewise consume about 10% more machine performance than normal native programs, and Gramine provides an official performance tuning document.

Q5: It'd be nice if you could include one or two demo apps in the milestones that can be directly deployed on your infrastructure. It doesn't have to be anything sophisticated, a hello-world type app would do, just as a proof-of-concept of whether and how web2 integrations with your platform work.

In Milestone 2, Deliverable "01 App Example" will provide an example program that can run directly on WeTEE.
Because we adopted two relatively mature solutions, EGo and Gramine, in theory all Gramine and EGo examples can run directly on our platform.

Q6: How does your solution compare to ankr?

According to the official documentation provided by Ankr, as stated in Ankr Docs:

Ankr is the leading Web3 infrastructure company. It has a set of different products for building, earning, gaming, and more — all on blockchain.

Its main functions are as follows:
RPC Service

RPC Service (https://www.ankr.com/rpc/) — a platform that provides access to our top-class nodes infrastructure to query the vast list of supported chains, monitor requested data telemetry, and test the RPC API methods you require before actually using them.


Ankr Staking

Ankr Staking aims to bring the benefits of DeFi to the masses with Liquid Staking, Delegated Staking, DeFi, Bridge, Switch, and Parachain Crowdloan.

Judging from Ankr's overall structure, Ankr is a hub that integrates and encapsulates various blockchain RPC/REST APIs. Application developers can quickly develop applications by calling the APIs Ankr provides. For business data that needs to be persisted, developers can choose IPFS or STORJ for storage according to their actual situation.

This makes Ankr closer to a SaaS-like platform that provides integrated blockchain-related services and acts as a gateway for blockchain APIs and storage services. However, the official documentation of Ankr and STORJ does not explain how their servers are distributed or how their physical servers are managed, so it is impossible to tell how their backends process customers' business data.

Therefore, based on the above two points, the main differences between WeTEE and Ankr are as follows:

  1. For application development, developers have complete autonomy over the development paradigm of their applications and are not necessarily limited to API-oriented development.
  2. WeTEE not only supports the deployment of blockchain-based DApps, but also supports the deployment of Web2 applications.
  3. WeTEE strives to be closer to native K3S/K8S during the application deployment phase.
  4. WeTEE adopts a technical approach and strategy for privacy computing, allowing application developers to use their familiar programming languages and development paradigms without having to modify code excessively.
  5. The TEE environment of WeTEE is provided by miners and matched with application deployment requests and TEE computing nodes based on algorithms, making it more transparent compared to Ankr.

From a business-scenario perspective, Ankr is more focused on areas such as GameFi/DeFi/parachain construction, and its users currently cannot freely deploy applications. Moreover, from an operational perspective, Ankr adopts a corporate operating model, the same as the public-cloud company model mentioned in the first question, and thus carries the same potential problems and risks.

Q7: Expanding on the future roadmap beyond the grant would give more confidence in the long-term vision.

Your interest in the future development of WeTEE is deeply encouraging to our team; your support leads us to believe that our code and products will achieve meaningful and valuable results.

This grant will allow us to focus on developing core features, laying a reliable business foundation for WeTEE. Upon reviewing the outcomes of our submitted grant, you will be able to easily verify WeTEE's core business using the documentation guidance and Docker examples provided by WeTEE.

The R&D of WeTEE can be divided into three major stages, each of which contains several small R&D stages or iterative cycles:

  • Core R&D stage
  • User experience optimization stage
  • Ecosystem construction stage

During the current core development stage, WeTEE will concentrate all research and development resources on WeTEE's own development, including on-chain workers, apps, tasks, as well as K3S/K8S operators relevant to physical servers, app deployment models, task deployment models, and worker attestation models.

Once this part is completed, WeTEE will spend a small amount of time reorganizing the code and conducting a retrospective on the completed work, after which development will move into the next phase of core research and development.

At this stage, the R&D content mainly includes the WeTEE test/main network, the WeTEE Dapp SDK, accessing the Polkadot mainnet using Coretime, as well as the WeTEE monitoring system and the WeTEE Web user interface.

After completing this stage of development, WeTEE will invite seed users to conduct usability testing and user acceptance testing, and operational work will be carried out in accordance with the requirements of WeTEE DAO. Following initial user feedback, targeted fixes and optimizations will be implemented to address any issues within WeTEE.

Development will then enter the third phase of core R&D, which is also the last core R&D phase in the current plan. In this phase, WeTEE will dynamically allocate R&D resources to the "blockchain-related" or "hardware-related" fields based on user and market feedback.

  • Blockchain-related:
    • Integrate https://github.com/paritytech/frontier and support wallet applications such as MetaMask
    • Improve the multi-language version of the SDK to help users better integrate accounts and business between Web2 apps and Web3 DApps
    • Improve the decentralized collaboration system of WeTEE, allowing WeTEE users to view the status of application clusters and application deployment events at any time
  • Hardware-related:
    • Compatibility with AMD SEV confidential solution
    • Compatibility with Intel TDX confidential solution
    • Performance optimization of confidential computing, distributed storage, and network for server K3S/K8S.

@BurnWW BurnWW requested a review from takahser November 14, 2023 10:05
@takahser (Collaborator) commented

@BurnWW thanks for your very detailed and helpful answers.

I have a few final questions, but I'm already going to mark your proposal as ready for review.

EGo programs consume about 10% more machine performance than normal Golang programs, and there are also some limitations.
Gramine programs likewise consume about 10% more machine performance than normal native programs, and Gramine provides an official performance tuning document.

While the Gramine doc mentions "1-10% overhead", I didn't find such information for Ego. Did you get this number (10%) from their resources or did you evaluate it yourself, e.g. by testing it?

If the user does not use any tool for container migration and deploys directly to the WeTEE platform, the program will run in normal mode, but the program and its data will not have the protection of confidential computing.

Does that mean that anybody in the network can theoretically read their data?

If developers use golang to develop programs, they need to follow the ego document to use ego-go to compile the program, package it into a docker image, and then run it on the platform. Almost no code needs to be changed.
If developers use other programming languages, they can use Gramine’s Docker image conversion tool, Gramine Shielded Containers (GSC), to automatically convert their programs into confidential containers. Developers do not need to perform any additional operations, and most programs do not require code modification.

Would it be possible to maintain a "soft-fork mentality" that doesn't require the user to make any changes to their containers (i.e. no hard fork of the relevant code, so to speak), while still being protected? While I'm not sure if it's technically possible, this would be the ideal scenario because it'd significantly lower the barriers of entry for web2 users. I imagine that converting their containers or using Golang with EGo would discourage a lot of web2 teams. My personal experience shows that these kinds of conversion tools are often not as simple as advertised (not in the containerisation space in particular, but in general).

@takahser takahser added ready for review The project is ready to be reviewed by the committee members. and removed changes requested The team needs to clarify a few things first. labels Nov 15, 2023
@takahser (Collaborator) left a comment

@BurnWW Finally, it'd be good to have a bit more information about the feature set of each deliverable included.

applications/WeTEE_Network.md: inline review comments (outdated, resolved)
@BurnWW (Contributor, Author) commented Nov 19, 2023

@takahser Thanks for your response

Q1: While the Gramine doc mentions "1-10% overhead", I didn't find such information for Ego. Did you get this number (10%) from their resources or did you evaluate it yourself, e.g. by testing it?

During the WeTEE technology selection phase, we carried out evaluations across multiple dimensions such as hardware types, operating systems, virtualization, encryption methods, performance overhead, software ecosystem, and user learning costs.

Ultimately, we decided to adopt libos + K3S/K8S + Intel SGX (AMD SEV) as the architectural direction for the WeTEE technology stack.

The libos, including Gramine and Ego as mentioned, strikes a balance between the ease of use for application developers and the performance requirements of WeTEE.

libos is an operating system design pattern that moves the implementation of application program interfaces (APIs) from the operating system kernel to user space libraries.
The goal of this design pattern is to provide a more flexible and lightweight operating system solution by reducing the functionality of the operating system kernel and offering higher-level application program interfaces to improve performance and applicability.

We have selected several different yet representative applications to run on the WeTEE technical stack and observed the performance overhead of the applications. Ultimately, we arrived at performance overhead conclusions similar to those in the Gramine official documentation, namely an overhead of "1-10%".

According to our analysis, the performance overhead can be roughly divided into two categories: the overhead brought by libos and the overhead brought by cryptographic computations. The performance overhead is also influenced by various factors such as CPU model, operating system kernel version, system component versions, and current system load. Of course, different types of applications also exhibit significant differences in performance overhead.

Gramine or EGo, acting as a libos, provides applications with an interface consistent with the kernel's by intercepting and emulating system calls (syscalls), allowing applications to run on the libos without being aware of it. Leveraging the interface abstraction layer provided by the libos, similar to glibc or musl, applications can perform the required syscall operations directly in user space, resulting in minimal performance overhead.

The primary impact on performance comes from the overhead of Intel SGX and the type of system calls required during application runtime. Due to the differences in applications, in general, the performance loss of single-process programs in SGX mode ranges from 5% to 10%. This aligns with the results of our testing after migrating applications to Gramine and Ego.

To test performance overhead, we used a simple program implemented in Golang that calculates prime numbers up to 100,000.

package main

import (
	"fmt"
	"time"
)

func main() {
	firstDate := time.Now()
	defer func() {
		fmt.Println("Time-consuming:", time.Since(firstDate))
	}()
	// Task Queue Channel
	intChan := make(chan int, 1000)
	// The output channel, all the calculation results are placed here.
	primeChan := make(chan int, 2000)
	// Exit marked pipe
	exitChan := make(chan int, 4)
	// Distribute tasks
	go putNum(intChan)
	// Start four coroutines to calculate prime numbers and put them into the result channel.
	for i := 0; i < 4; i++ {
		go cal(intChan, primeChan, exitChan)
	}
	// Start a coroutine that keeps reading end flags from exitChan; once it has received 4 flags, it closes primeChan
	go closeWork(primeChan, exitChan)
	// The main thread traverses the result set in range.
	for i := range primeChan {
		fmt.Println(i)
	}
	fmt.Println("Traversal ends")
}

/*
*
putNum: this coroutine is responsible for putting all the numbers that need to be checked into the intChan channel. Note: after everything is put in, close intChan so that consumers iterating over it with for-range do not block forever
*/
func putNum(intChan chan int) {
	for i := 1; i <= 100000; i++ {
		intChan <- i
	}
	close(intChan)
}

/*
*
Determine whether all work coroutines have ended. If they have ended, close primeChan to notify the main thread.
*/
func closeWork(primeChan chan int, exitChan chan int) {
	for i := 0; i < 4; i++ {
		<-exitChan
	}
	close(primeChan)
	close(exitChan)
}

/*
*
The for-range loop traverses intChan and checks whether each value is prime; for-range keeps reading until the channel is closed. When the loop ends, a flag is put into exitChan to indicate that the current coroutine has finished
*/
func cal(intChan chan int, primeChan chan int, exitChan chan int) {
	for v := range intChan {
		flag := true
		for i := 2; i < v; i++ {
			if v%i == 0 {
				flag = false
				break
			}
		}
		if flag {
			primeChan <- v
		}
	}
	exitChan <- 0
}

Compile and run using Golang:

### Program Compilation
$ go build -o wetee_prime_test_native wetee_prime_test.go

### Run
$ ./wetee_prime_test_native

Compile and run using Ego with the same source code:

### Program Compilation
$ ego-go build wetee_prime_test.go

### Sign
$ ego sign wetee_prime_test

### Run
$ ego run wetee_prime_test

Our test hardware is as follows:

### CPU 
Intel(R) Core(TM) i7-9700F

### Operating System
ubuntu 22.04

### Golang version
go1.21.0 linux/amd64

### Ego version
EGo v1.4.1 (8b99356398dd3bcb5f74e5194d20ce421f607404)

The test results are as follows:

| Test | Go (native) | EGo | Overhead |
| --- | --- | --- | --- |
| 1 | 861.689517ms | 887.994298ms | 3% |
| 2 | 871.182404ms | 879.994342ms | 1% |
| 3 | 869.130757ms | 863.994439ms | -1% |
| 4 | 819.968487ms | 875.994353ms | 6% |
| 5 | 854.097488ms | 887.994268ms | 3% |
| 6 | 779.094409ms | 819.9947ms | 5% |
| 7 | 865.685818ms | 879.994304ms | 1% |
| 8 | 869.595235ms | 879.994298ms | 1% |
| 9 | 790.125787ms | 875.994316ms | 10% |
| 10 | 864.461084ms | 895.99418ms | 3% |

Q2: Does that mean that anybody in the network can theoretically read their data?

If the application itself exposes unprotected data interfaces (APIs) or listening ports, then anyone on the network can retrieve data through those interfaces or ports. However, if we understand it correctly, your question mainly concerns the following scenario:

When a third party (an individual or the owner of a physical server in the network) illegally accesses the data in the container through technical means, what are the disposal measures of WeTEE?

WeTEE primarily focuses on container security by implementing defense measures in two aspects: remote access and physical access.

  1. Remote Access to Physical Machines:
    ◦ WeTEE provides network security reinforcement guidelines to block privileged port access.
  2. Remote Access to K3S/K8S or Containers:
    ◦ The Agent sets up RBAC for the local K3S/K8S to manage user and service-account access to resources.
    ◦ Fine-grained control of Pod network traffic is achieved through Network Policies (see the example after this list).
    ◦ Container data protection is implemented with reference to the Pod Security Standards provided by K8S.
    ◦ Each Pod is assigned an appropriate ServiceAccount to restrict its access to other resources in the cluster.
    ◦ Miners are prompted and guided to update K3S/K8S when necessary.
  3. Remote Access to Applications:
    ◦ Inbound requests are only allowed on the specific ports designated during application deployment.
  4. Physical Access to Servers:
    ◦ WeTEE provides physical-server security reinforcement guidelines, guiding miners to implement basic security measures on their servers.
    ◦ Periodic system checks are conducted, and miners are prompted to update the system to mitigate potential zero-day vulnerabilities.
    ◦ At the current stage, the Agent must run in a TEE environment with hash verification.
    ◦ Applications running in the TEE environment encrypt their data, rendering it inaccessible to third parties.
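As a reference for the Network Policies point above, here is a minimal default-deny ingress policy of the kind such reinforcement could start from; the namespace and policy name are hypothetical, and the policies actually applied by the Agent may differ:

### Apply a default-deny ingress policy to the application namespace
$ kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: wetee-apps
spec:
  podSelector: {}
  policyTypes:
    - Ingress
EOF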

Security reinforcement settings are relatively complex, and due to space limitations, it is not possible to provide detailed elaboration. In summary, data protection for applications running in non-TEE environments relies on the data protection policies inherent to K3S/K8S, while applications in TEE environments benefit from strict data protection measures.
Furthermore, WeTEE continuously tracks relevant CVE events to further optimize security reinforcement settings.

Q3: Would it be possible to maintain a "soft-fork mentality" that doesn't require the user to make any changes to their containers (i.e. no hard fork of the relevant code, so to speak), while still being protected? While I'm not sure if it's technically possible, this would be the ideal scenario because it'd significantly lower the barriers of entry for web2 users. I imagine that converting their containers or using Golang with EGo would discourage a lot of web2 teams. My personal experience shows that these kinds of conversion tools are often not as simple as advertised (not in the containerisation space in particular, but in general).

Currently, there is a large installed base of Intel SGX-enabled hardware that needs to be supported. In this regard, Gramine stands out as a promising libos: it provides an abstraction layer over glibc/musl without intrusive modifications to programs, allowing developers to smoothly migrate existing applications to the Intel SGX hardware platform. The emergence of Gramine offers developers a more convenient way to leverage the advantages of Intel SGX hardware.

Meanwhile, we will continue to optimize the functionality of WeTEE, continuously improving and reducing the difficulty for developers in application migration.

• In the first stage, WeTEE will provide Gramine base images built on Ubuntu Server / Rocky Linux. Here is an example of the base-image Dockerfile using Ubuntu Server.

ARG UBUNTU_IMAGE=ubuntu:20.04

FROM ${UBUNTU_IMAGE}

# ARGs cannot be grouped since each FROM in a Dockerfile initiates a new build
# stage, resulting in the loss of ARG values from earlier stages.
ARG UBUNTU_CODENAME=focal

RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y curl gnupg2 binutils

RUN curl -fsSLo /usr/share/keyrings/gramine-keyring.gpg https://packages.gramineproject.io/gramine-keyring.gpg && \
    echo 'deb [arch=amd64 signed-by=/usr/share/keyrings/gramine-keyring.gpg] https://packages.gramineproject.io/ '${UBUNTU_CODENAME}' main' > /etc/apt/sources.list.d/gramine.list

RUN curl -fsSLo /usr/share/keyrings/intel-sgx-deb.key https://download.01.org/intel-sgx/sgx_repo/ubuntu/intel-sgx-deb.key && \
    echo 'deb [arch=amd64 signed-by=/usr/share/keyrings/intel-sgx-deb.key] https://download.01.org/intel-sgx/sgx_repo/ubuntu '${UBUNTU_CODENAME}' main' > /etc/apt/sources.list.d/intel-sgx.list

RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y gramine \
    sgx-aesm-service \
    libsgx-aesm-launch-plugin \
    libsgx-aesm-epid-plugin \
    libsgx-aesm-quote-ex-plugin \
    libsgx-aesm-ecdsa-plugin \
    libsgx-dcap-quote-verify \
    psmisc && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

RUN mkdir -p /var/run/aesmd/

COPY restart_aesm.sh /restart_aesm.sh

ENTRYPOINT ["/bin/sh", "-c"]
CMD ["/restart_aesm.sh ; exec /bin/bash"]

Developers can directly use the base image to package a specialized Dockerfile for Gramine. An example is shown below.

FROM wetee/ubuntu:22.04

# Developer-defined workflow
# ...

# Run the application under Gramine SGX
CMD gramine-sgx redis-server

Developers can complete the construction of confidential containers without needing to focus extensively on the details of Gramine. Additionally, developers have the option to directly use Gramine Shielded Containers (GSC) to generate the image.

• In the second phase, WeTEE will provide decentralized image-building services, enabling developers to upload their code to GitHub or to image-compilation nodes. The system will then build images automatically based on the Dockerfile. This will free developers from the tedious task of image building, allowing fully automated application deployment and upgrades.

Once WeTEE provides support for AMD SEV and Intel TDX, users will no longer need to modify their code, prepare Dockerfiles, or worry about compatibility issues. SEV and TDX, in the form of confidential virtual machines, provide a confidential computing environment for programs.

@BurnWW (Contributor, Author) commented Nov 19, 2023

@takahser Thanks for your response. We have updated the descriptions of all milestones.

@BurnWW BurnWW requested a review from takahser November 19, 2023 11:01
@takahser (Collaborator) left a comment

@BurnWW thanks for another very helpful reply. I can see that you have a good plan here and seem to have thought of everything, and since you made the deliverables clearer, I'm happy to approve it. The decentralized image-building services you mention for the next phase sound promising as well - if it works, that could help drive adoption of this kind of platform significantly.
BTW, another question that popped up in my mind is whether current K8s tooling would be compatible with your platform, e.g. ArgoCD, OpenShift, etc.

@Noc2 (Collaborator) left a comment

Thanks for the detailed reply. I'm happy to approve it as well.

@BurnWW BurnWW closed this Nov 21, 2023
@BurnWW BurnWW reopened this Nov 21, 2023
@BurnWW (Contributor, Author) commented Nov 21, 2023

@takahser Thanks for your response.

In the grant application, the following business was mentioned:

  • WeTEE incorporates an intelligent compiler with automatic code-analysis capabilities. This compiler can deeply parse the code of Web2 applications and astutely identify the compilable sections related to business logic. WeTEE extracts these parts of the code and automatically compiles them into TEE application containers.
  • When application developers need to update their applications to provide new features or security patches, they can fully leverage the hot update mechanism provided by WeTEE for rapid application updates.

To achieve this business goal, the WeTEE team has researched common CI/CD software in the current market, such as those in the List of Continuous Integration Services. These are all excellent works of software engineering, but to meet WeTEE's ease-of-use requirements for application developers, we are still looking for an appropriate open-source CI/CD solution, striving to strike a balance between being lightweight and being easy to use for WeTEE.

Currently, the requirements of WeTEE for CI/CD are as follows:

  • Interchangeability: The CI/CD should be lightweight and atomically replaceable.
  • Encryption: CI/CD needs to run in the TEE environment of WeTEE.
  • Ecological Compatibility: CI/CD system needs to have a well-designed API for easy invocation by K3S/K8S and WeTEE.
  • Component Independence: Minimize invasive modifications of the K3S/K8S system by the CI/CD.
  • Security: CI/CD needs to have certain access control policies, which can achieve security reinforcement through configuration modification.

Since you mentioned ArgoCD, we have decided to put ArgoCD at the top of our CI/CD candidate list, because we currently know that ArgoCD runs well in TEE environments.

The core R&D of WeTEE's CI/CD is expected to begin in the 'User experience optimization stage':

The R&D of WeTEE can be divided into three major stages, each of which contains several small R&D stages or iterative cycles:

  • Core R&D stage
  • User experience optimization stage
  • Ecosystem construction stage

The CI/CD functionality is expected to be delivered in sync with the WeTEE mainnet, the WeTEE Dapp SDK, the WeTEE monitoring system, and the WeTEE web user interface. An illustrative example follows.
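Purely as an illustration of the declarative GitOps flow that adopting ArgoCD would enable, the sketch below shows an ArgoCD Application manifest; the repository URL, application name, and namespaces are placeholders rather than an actual WeTEE configuration:

### Register an application with ArgoCD so it is synced from Git automatically
$ kubectl apply -f - <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-wetee-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-wetee-app.git
    targetRevision: main
    path: deploy
  destination:
    server: https://kubernetes.default.svc
    namespace: wetee-apps
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
EOF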

Sorry for clicking the wrong close button just now. It's been corrected.

@BurnWW changed the title from WeTEE Network Grant to WeTEE Grant on Nov 28, 2023
@keeganquigley (Contributor) left a comment

Thanks for the thorough answers @BurnWW I'm happy to approve as well.

@Noc2 Noc2 merged commit 10064ac into w3f:master Nov 28, 2023
12 of 13 checks passed

Congratulations and welcome to the Web3 Foundation Grants Program! Please refer to our Milestone Delivery repository for instructions on how to submit milestones and invoices, our FAQ for frequently asked questions and the support section of our README for more ways to find answers to your questions.

Before you start, take a moment to read through our announcement guidelines for all communications related to the grant or make them known to the right person in your organisation. In particular, please don't announce the grant publicly before at least the first milestone of your project has been approved. At that point or shortly before, you can get in touch with us at [email protected] and we'll be happy to collaborate on an announcement about the work you’re doing.

Lastly, please remember to let us know in case you run into any delays or deviate from the deliverables in your application. You can either leave a comment here or directly request to amend your application via PR. We wish you luck with your project! 🚀
