Interactive Zero-Knowledge Proof

This project is a practical demonstration of the Chaum-Pedersen Zero-Knowledge Proof protocol. It implements the interactive model, in which the prover (the client) and the verifier (the server) exchange a sequence of messages to validate a claim without revealing the underlying secret. Communication between prover and verifier is handled over gRPC, providing efficient, structured data transmission for the verification process.

Table of Contents

  • Introduction
  • Overview of Solution
  • Approach
  • Using the Project
    • Setting up your Environment
    • Run Locally
    • Run it on Docker (docker-compose)
    • Test it
    • Development
      • pre-commit
      • protobuf
  • Deployment to AWS: Strategy and Steps
    • Simple AWS-Native Approach
    • Complex, Flexible Approach
  • Future Considerations
  • Diving Deeper
  • Nuances

Introduction

The Chaum-Pedersen protocol is a zero-knowledge proof (ZKP) mechanism that enables one party to prove possession of a discrete logarithm without revealing its actual value. The protocol is particularly valued for its completeness and soundness properties: completeness ensures that an honest prover can always convince a verifier of a valid statement, while soundness guarantees that a dishonest prover cannot convince the verifier of a false statement, except with negligible probability.

The protocol is grounded in modular exponentiation over the integers, which forms the backbone of the commitment step, where the prover sends a value to the verifier without disclosing the secret. This is followed by the challenge step, where the verifier sends a randomly chosen challenge to the prover, and the response step, where the prover sends a response that allows the verifier to check the validity of the initial commitment. The Chaum-Pedersen protocol is an interactive ZKP, demanding an active dialogue between the prover and verifier, yet it maintains the zero-knowledge property: no information is leaked beyond the validity of the statement being proven.
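To make the three steps concrete, here is a minimal, self-contained sketch of one protocol round in Python. The toy parameters and variable names are illustrative assumptions only; the project's actual values are generated via src/lib.py and the protocol logic is split across src/client.py and src/server.py.

  import secrets

  # Toy public parameters: p is prime, q divides p - 1, and g, h both
  # generate the subgroup of order q. Far too small for real use.
  p, q, g, h = 23, 11, 4, 9

  x = 6                                  # prover's secret
  y1, y2 = pow(g, x, p), pow(h, x, p)    # registration: public values

  # Commitment: prover picks a random k and sends (r1, r2).
  k = secrets.randbelow(q)
  r1, r2 = pow(g, k, p), pow(h, k, p)

  # Challenge: verifier replies with a random c.
  c = secrets.randbelow(q)

  # Response: prover sends s = (k - c * x) mod q.
  s = (k - c * x) % q

  # Verification: the commitment must be reconstructable from (s, c),
  # since g^s * y1^c = g^(k - c*x) * g^(x*c) = g^k = r1 (mod p).
  assert r1 == (pow(g, s, p) * pow(y1, c, p)) % p
  assert r2 == (pow(h, s, p) * pow(y2, c, p)) % p
  print("proof verified")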

Overview of Solution

With the requirements understood, the solution can be summarized as follows:

  • Directory Structure:

    • src/: Contains the core Python files for the service, including:
      • client.py: Manages the CLI for user interaction (registration, login).
      • lib.py: Handles the mathematical operations for generating ( g, h, p, q ) values.
      • server.py: Implements gRPC for server-client communication.
      • settings.py: Reads and validates the YAML configuration.
    • proto/: Includes .proto definition files and autogenerated Python gRPC files for protocol buffers.
    • tests/: Stores unit tests for the various components of the project.
    • configs/: Contains YAML configuration for initial service variables.
    • scripts/: Houses scripts for service startup using docker-compose.
  • Key Components:

    • Configuration and Validation:
      • The YAML configuration file sets initial variables and is validated by settings.py to adhere to the required number of bits.
    • Mathematical Library (lib.py):
      • Approach 1 is used to generate the cryptographic values ( g, h, p, q ), based on the number of bits specified in the YAML config (a sketch of this kind of generation follows this list).
      • Approach 2 was the result of my curiosity leading me down a rabbit hole...
      • This script operates independently and outputs values that can be manually entered into the YAML config.
    • Client-Server Interaction:
      • The server.py and client.py manage the gRPC-based communication.
      • Chaum-Pedersen protocol mathematics are distributed across these files corresponding to the protocol steps.
      • Includes basic exception handling for robust process flow.
    • Command-Line Interface (CLI):
      • Provided by client.py to allow user interaction with the system for functions like registration and login.
  • Protobuf and gRPC:

    • proto/zkp_auth.proto: Protocol buffers definition file.
    • Autogenerated files (zkp_auth_pb2.py, zkp_auth_pb2_grpc.py, zkp_auth_pb2.pyi) are detailed under the Development section in the documentation, explaining their generation.
  • Containerization and Deployment:

    • Scripts in the scripts/ directory facilitate starting the service in a containerized environment using Docker.
  • Code Quality and Standards:

    • Utilization of a pre-commit hook alongside linting and formatting tools (flake8, black, isort, mypy, etc.) ensures high code quality and consistency.

This summary captures the project's directory layout, the nature of its key components, gRPC and Protobuf usage, the containerization strategy, and code quality measures.
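To make the parameter generation concrete, the sketch below shows one conventional construction: search for a safe prime p = 2q + 1 using the Miller-Rabin test (referenced under Diving Deeper), then derive generators of the order-q subgroup by squaring random elements. The function names and construction are illustrative assumptions, not the actual contents of lib.py.

  import secrets

  def is_probable_prime(n, rounds=40):
      # Miller-Rabin probabilistic primality test (see Smart, Chapter 12).
      if n < 2:
          return False
      for small in (2, 3, 5, 7, 11, 13):
          if n % small == 0:
              return n == small
      d, r = n - 1, 0
      while d % 2 == 0:
          d //= 2
          r += 1
      for _ in range(rounds):
          a = secrets.randbelow(n - 3) + 2
          x = pow(a, d, n)
          if x in (1, n - 1):
              continue
          for _ in range(r - 1):
              x = pow(x, 2, n)
              if x == n - 1:
                  break
          else:
              return False
      return True

  def generate_parameters(bits):
      # Search for a safe prime p = 2q + 1 of the requested size.
      while True:
          q = secrets.randbits(bits - 1) | (1 << (bits - 2)) | 1
          p = 2 * q + 1
          if is_probable_prime(q) and is_probable_prime(p):
              break
      # Squaring a random element of [2, p-2] lands in the subgroup
      # of order q, so g and h are generators of that subgroup.
      g = pow(secrets.randbelow(p - 3) + 2, 2, p)
      h = pow(secrets.randbelow(p - 3) + 2, 2, p)
      return g, h, p, q

  g, h, p, q = generate_parameters(32)   # e.g. 32-bit toy parameters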

Approach

Below is the order in which I approached this particular project.

  1. Comprehensive review of all relevant documentation.
  2. Conceptualization of the core principles and objectives.
  3. Detailed examination of the mathematical foundations.
  4. Assessment of system requirements and specifications.
  5. Drafting of a preliminary system architecture plan.
  6. Creation of a foundational script to grasp and apply the protocol:
    • 6.1. Exploration of the Chaum-Pedersen protocol mechanics.
    • 6.2. Generation of publicly agreed-upon variables ( g, h, p, q ).
  7. Incremental development process:
    • 7.1. Establishment of foundational functions and classes derived from zkp_auth.proto.
    • 7.2. Facilitation of communication between the client and server using gRPC.
    • 7.3. Integration and testing of the cryptographic mathematics.
    • 7.4. Automation of user authentication to streamline the verification process.
  8. Conducting preliminary tests and refinements.
  9. Construction and execution of a comprehensive suite of unit tests.
  10. Progressive enhancements:
    • 10.1. Development of a Command Line Interface (CLI) for user interaction.
    • 10.2. Configuration management using a YAML file accessible to all services.
    • 10.3. Containerization of the project for deployment ease and reproducibility.

Using the Project

Things to know to get up and running.

Setting up your Environment

In order to run this project you will require:

  • Python with Poetry installed (dependencies are managed through poetry install).
  • Docker and docker-compose, should you wish to run the containerized services.
  • protoc, the Protocol Buffer Compiler, should you wish to regenerate the protobuf files (see Development).

Run Locally

Set up the environment.

  poetry shell
  poetry install

Next we will start our client and server in separate terminals - be sure to start the server first.

Open Terminal 1

python -m src.server

Open Terminal 2

python -m src.client local

See python -m src.client -h for a list of choices.

Run it on Docker (docker-compose)

You are encouraged to have a look at the contents of the script below, as well as the docker-entrypoint.sh files inside the server and client ci directories respectively.

bash scripts/start_docker_compose_services.sh

Test it

python -m unittest discover -s tests

Development

If you're going to be fiddling with this, please consult the subsections below.

pre-commit

Should you want to work on this project, it is necessary to make use of pre-commit.

Please note that all commands below that begin with pre-commit need to be run from within the virtual environment.

  1. Ensure that it is installed on your device - documentation
  2. Install this project's specific pre-commit hooks.
  pre-commit install
  3. Run all of the pre-commit hooks.
  pre-commit run --all-files
  4. Run a specific hook.

    See the hook id inside the .pre-commit-config.yaml file. Examples: mypy, isort, flake8, etc.

  pre-commit run <hook_name> --all-files

protobuf

This project communicates on port 50051 using gRPC with Protocol Buffers - see ./proto/zkp_auth.proto for the protobuf definition.
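For orientation, the sketch below shows how a client might open a channel to that port. The stub class name AuthStub is an assumption, not taken from this project; check the generated zkp_auth_pb2_grpc.py for the actual name.

  import grpc

  from proto import zkp_auth_pb2_grpc

  # Open an insecure channel to the server's default port.
  channel = grpc.insecure_channel("localhost:50051")

  # Assumption: the generated stub class is named AuthStub; verify
  # against the actual zkp_auth_pb2_grpc.py.
  stub = zkp_auth_pb2_grpc.AuthStub(channel)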

Additionally, should you want to update or generate the other files in the ./proto directory perform the following:

  1. To generate zkp_auth_pb2.py and zkp_auth_pb2_grpc.py
  python -m grpc_tools.protoc -I=. --python_out=. --grpc_python_out=. ./proto/zkp_auth.proto
  2. To generate zkp_auth_pb2.pyi
  protoc --mypy_out=. proto/zkp_auth.proto

Note that this requires protoc, the Protocol Buffer Compiler, to be installed on your device - installation.

Deployment to AWS: Strategy and Steps

Deploying to AWS involves a thoughtful consideration of the technologies in use and their alignment with our project goals and infrastructure. The following is an overview of the technological stack and deployment strategies:

  • Continuous Integration / Continuous Deployment:
    • Options include GitLab CI/CD, GitHub Actions, or AWS CodeBuild/CodePipeline.
  • Infrastructure:
    • Infrastructure as Code (IaC) with Terraform for provisioning.
    • AWS services, weighing options like EC2 versus ECS and ECR versus Docker Hub.

Given these considerations, two deployment approaches are proposed - a simpler AWS-native strategy and a more complex, flexible approach.

Simple AWS-Native Approach

The simple approach leverages AWS services exclusively to streamline the process:

  1. Prepare a buildspec.yml for AWS CodeBuild to define the build process (a minimal sketch follows this list).
  2. Set up an Amazon Elastic Container Registry (ECR) for Docker image storage.
  3. Configure AWS CodeBuild for continuous integration.
  4. Establish AWS CodePipeline to automate code retrieval and deployment processes.
  5. Use Amazon Elastic Container Service (ECS) for container orchestration and deployment.
  6. Enable automated deployments through CodePipeline, connecting the build artifacts to ECS services.
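For step 1, a minimal buildspec.yml might look like the sketch below. The image name interactive-zkp and the environment variables AWS_REGION and ECR_REGISTRY are illustrative assumptions, not values from this repository.

  version: 0.2
  phases:
    pre_build:
      commands:
        # Authenticate Docker against the (assumed) ECR registry.
        - aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $ECR_REGISTRY
    build:
      commands:
        # Build the image; the tag name is an illustrative assumption.
        - docker build -t $ECR_REGISTRY/interactive-zkp:latest .
    post_build:
      commands:
        - docker push $ECR_REGISTRY/interactive-zkp:latest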

Complex, Flexible Approach

The more intricate strategy demands a solid DevOps background but offers a robust and finely-tuned deployment process:

  1. Set up a .gitlab-ci.yml for the GitLab CI/CD pipeline.
  2. Use Terraform to define and create the AWS infrastructure, including:
    • 2.1. Amazon ECR for Docker image registry.
    • 2.2. Amazon EC2 instances for scalable compute capacity.
    • 2.3. GitLab runners configured on AWS to execute the CI/CD pipeline.
  3. Implement Ansible playbooks for consistent setup and configuration of the deployment environment.
  4. Establish monitoring and observability with tools such as DataDog or Prometheus for system health insights.
  5. Push code changes to trigger the pipeline and deploy automatically onto the configured AWS infrastructure.

In both strategies, consider the security implications of each step, ensure IAM roles and policies are tightly scoped, and validate that networking configurations (like VPCs and security groups) align with best practices for a secure, scalable cloud environment.

Future Considerations

Moving forward, the following could be improved:

  • Integrate a Persistent Storage Solution:

    • Implementing a database such as SQLite would address the challenge of synchronizing public variable updates across client and server containers. This shared, persistent store would facilitate dynamic updates.
    • Transitioning from a transient in-memory dictionary to a persistent storage mechanism would prevent data loss on service restarts and maintain the state of registered users.
  • Session Management Enhancements:

    • Introducing a session timeout would greatly enhance security for a production-ready system.
    • Generating unique and non-repeatable session IDs for each login instance would mitigate replay attacks and session hijacking risks.
  • Secure Public Variables Selection:

    • Ensuring the uniqueness of the prover's first-step value k is crucial to prevent predictability and enhance security. Implementing a more sophisticated hashing strategy could enforce this uniqueness.
  • Address Protobuf Limitations:

    • The current use of int64 in protobuf restricts the system to 63-bit integers. Considering alternative representations, like strings, could enable the use of larger numbers (see the sketch after this list).
    • For the proof-of-concept phase, the current setup is adequate, but for advanced security, this limitation must be revisited.
  • Optimizations for Handling Large Numbers:

    • For larger numbers, such as 512-bit values, performing algorithmic complexity analysis (Big O notation) would identify performance bottlenecks.
    • Exploring parallel processing techniques like threading or multiprocessing could significantly reduce computation times for these large numbers.
  • Security and Performance Improvements:

    • Adopting elliptic curve cryptography (ECC) could dramatically improve both the security and efficiency of the protocol, making it suitable for higher security demands and modern computational standards.
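To illustrate the string-based workaround mentioned above, here is a minimal sketch; the idea is simply to convert at the process boundaries, assuming the int64 fields were swapped for string fields in the message definition:

  # A value wider than 63 bits cannot travel in an int64 field.
  big_value = 2**300 + 12345

  # Sent as a protobuf string field (assumed), it round-trips losslessly.
  wire_value = str(big_value)
  assert int(wire_value) == big_value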

Diving Deeper

To further understand the Chaum-Pedersen Interactive Zero-Knowledge Protocol, please consult:

  • Cryptography: An Introduction (3rd Edition) by Nigel Smart - book
    • Sigma Protocols, Chaum–Pedersen Protocol - Chapter 25. Zero-Knowledge Proof, Section 3.2, page 377
    • Discrete Logarithms - Chapter 13, page 203
    • Prime Numbers, Miller–Rabin Test - Chapter 12. Primality Testing and Factoring, Section 1.3, page 188
    • Basic Algorithms, Greatest Common Divisors - Chapter 1. Modular Arithmetic, Groups, Finite Fields and Probability, Section 3.1, page 10
    • Commitments and Oblivious Transfer - Chapter 24, page 363
  • Chaum-Pedersen zero-knowledge protocol - video

Nuances

Error: "ModuleNotFoundError: No module named '<module_name>'" Answer: Ensure that your poetry environment is active and that you have run export PYTHONPATH='src':$PYTHONPATH.
