add localnet support #2
base: master
Conversation
- new chain id
- remove binary from git
- make `config.yaml.sample` instead of default
Walkthrough

The changes in this pull request involve modifications to the …

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Server
    participant Config
    participant EigenPod
    User->>Config: Request configuration
    Config->>Server: Provide chain_id
    Server->>EigenPod: Validate chainID
    EigenPod-->>User: Return validation result
```
Actionable comments posted: 2
🧹 Outside diff range and nitpick comments (5)
server/config.yaml.sample (3)
Line 3: Document the provider URL configuration.
The localhost provider URL uses port 3500, which is non-standard. Consider adding:
- A comment explaining the expected service running on this port
- Any setup requirements for the local provider
Line range hint 6-7: Consider adding security configuration options.
The configuration exposes gRPC and JSON-RPC ports without any visible security settings. Consider:
- Adding commented examples for TLS configuration
- Including authentication options
- Adding warning comments about security implications in production
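For illustration, such commented-out security options might look like the following in the sample file. These field names are hypothetical placeholders, not the server's actual configuration schema:

```yaml
# --- illustrative only: hypothetical fields, adapt to the server's actual config schema ---
# tls:
#   enabled: true
#   cert_file: "/etc/proofs/server.crt"
#   key_file: "/etc/proofs/server.key"
# auth:
#   # read the API token from an environment variable rather than from this file
#   api_token_env: "PROOFS_API_TOKEN"
# WARNING: do not expose the gRPC/JSON-RPC ports publicly without TLS and authentication.
```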
Line range hint 1-7: Enhance the sample configuration with more guidance.
As this is a sample configuration file, it would be helpful to:
- Add a header comment explaining:
  - The purpose of this sample file
  - Instructions for creating the actual config.yaml
  - Required modifications for production use
- Add comments for each section explaining the configuration options
Here's a suggested enhancement:
```diff
+# Configuration sample for proofs-generation server
+# To use:
+#   1. Copy this file to config.yaml
+#   2. Modify values according to your environment
+#   3. DO NOT commit the actual config.yaml file
+
 # Chain configuration
+# chain_id: ID of the blockchain network
+#   - 32382: Local development network
+#   - See documentation for other supported networks
+# provider: URL of the Ethereum JSON-RPC provider
 chain_id: 32382
 provider: "http://localhost:3500"

 # Server ports
+# WARNING: In production, ensure proper security measures are in place
+# grpc_port: Port for gRPC server (default: 50051)
+# json_rpc_port: Port for JSON-RPC server (default: 8989)
 grpc_port: "50051"
 json_rpc_port: "8989"
```

eigen_pod_proofs.go (2)
Line 32: Consider improving the chainID validation and error messaging.
While the addition of chainID 32382 for localnet support is valid, there are a few suggestions to enhance this implementation:
- The error message could be more descriptive by listing the supported chain IDs
- Consider using a constant or enum for the chain IDs to improve maintainability
Here's a suggested improvement:
```diff
+const (
+	MainnetChainID  = 1
+	GoerliChainID   = 5
+	HoleskyChainID  = 17000
+	LocalnetChainID = 32382
+)
+
 func NewEigenPodProofs(chainID uint64, oracleStateCacheExpirySeconds int) (*EigenPodProofs, error) {
-	if chainID != 1 && chainID != 5 && chainID != 17000 && chainID != 32382 {
-		return nil, errors.New("chainID not supported")
+	supportedChains := map[uint64]string{
+		MainnetChainID:  "Mainnet",
+		GoerliChainID:   "Goerli",
+		HoleskyChainID:  "Holesky",
+		LocalnetChainID: "Localnet",
+	}
+	if _, ok := supportedChains[chainID]; !ok {
+		return nil, fmt.Errorf("unsupported chainID %d. Supported chains: %v", chainID, supportedChains)
 	}
```

(Note: chain ID 17000 is Holesky; Sepolia is 11155111.)
Lines 32-34: Consider adding documentation for the localnet chainID.
Since this is adding support for a new network type (localnet), it would be helpful to add documentation explaining:
- The purpose of chainID 32382
- When and how to use the localnet configuration
Add a comment block above the function:
```go
// NewEigenPodProofs creates a new instance for managing EigenPod proofs.
// Supported chain IDs:
//   - 1: Mainnet
//   - 5: Goerli
//   - 17000: Holesky
//   - 32382: Localnet (for local development and testing)
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (3)
- .gitignore (1 hunks)
- eigen_pod_proofs.go (1 hunks)
- server/config.yaml.sample (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- .gitignore
Actionable comments posted: 4
🧹 Outside diff range and nitpick comments (1)
server/proof_server.go (1)
Line range hint 213-227: Fix error handling in return statement.
The function returns `err` directly, which could be nil. Consider explicitly handling the error:
```diff
-}, err
+}, nil
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
⛔ Files ignored due to path filters (1)
- server/proof.pb.go is excluded by `!**/*.pb.go`
📒 Files selected for processing (6)
- eigen_pod_proofs.go (1 hunks)
- proof_utils.go (0 hunks)
- proto/proof.proto (1 hunks)
- prove_withdrawal.go (2 hunks)
- server/config.yaml.sample (1 hunks)
- server/proof_server.go (1 hunks)
💤 Files with no reviewable changes (1)
- proof_utils.go
🚧 Files skipped from review as they are similar to previous changes (2)
- eigen_pod_proofs.go
- server/config.yaml.sample
🔇 Additional comments (4)
proto/proof.proto (2)
Lines 19-21: LGTM! Well-structured field additions.
The new fields follow good protobuf practices:
- Proper sequential field numbering
- Consistent type usage (uint64)
- Meaningful field names
- Request context preservation (slot and validator_index)
Line 21: Verify the timestamp field usage.
The timestamp field has been added to the response, but its purpose and source need verification.
Let's check how this field is being used:
✅ Verification successful
The timestamp field is correctly used and verified in the codebase
The timestamp field in the ValidatorProofResponse is properly integrated into the system:
- It originates from the execution payload of withdrawal blocks (both Deneb and Capella versions)
- It's used in critical proof verification workflows:
- Proven against execution payload using
ProveTimestampAgainstExecutionPayload
- Verified through
verifyTimestampAgainstExecutionPayload
- Included in withdrawal proofs for smart contract verification
- Proven against execution payload using
- The field is consistently handled across the codebase with proper root calculations and proof generations
The implementation aligns with the protocol's requirements for validator proof generation and verification.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```shell
#!/bin/bash
# Description: Check the usage of the timestamp field in the codebase
# Expected: Find where the timestamp is populated in the server implementation

# Search for timestamp field usage in proof server implementation
rg -A 5 "timestamp.*ValidatorProofResponse"

# Search for any timestamp-related logic near validator proof handling
rg -B 5 -A 5 "GetValidatorProof.*timestamp"
```
Length of output: 91
Script:

```shell
#!/bin/bash
# Let's try a broader search to understand the context and usage of timestamp

# Search for files containing ValidatorProof
fd -e proto -e go -e rs -e js -e ts . | xargs rg -l "ValidatorProof"

# Search for timestamp usage in those contexts
rg -B 3 -A 3 "timestamp"

# Look for the proto definition and its context
rg -B 5 -A 5 "message.*ValidatorProof"
```
Length of output: 25632
server/proof_server.go (2)
Lines 120-122: LGTM: New fields enhance response context.
The addition of Slot, ValidatorIndex, and Timestamp fields to ValidatorProofResponse provides useful context for proof verification.
Line range hint 213-227: LGTM: New withdrawal proof fields provide necessary context.
The addition of HistoricalSummaryIndex, BlockRootIndex, and WithdrawalIndexWithinBlock fields to WithdrawalProofResponse enhances the proof verification context.
Actionable comments posted: 47
🧹 Outside diff range and nitpick comments (36)
generation/generate_validator_proof.go (3)
Lines 21-25: Consider adding spec content validation.
While the error handling for file parsing is good, consider adding validation for the parsed spec content to ensure it contains all required fields and valid values before proceeding.
```diff
 spec, err := commonutils.ParseSpecJSONFile(specFile)
 if err != nil {
 	log.Debug().AnErr("Error with parsing spec file", err)
 	return err
 }
+if err := validateSpec(spec); err != nil {
+	log.Debug().AnErr("Invalid spec content", err)
+	return err
+}
```
Line 47: Document or extract the magic number.
The constant value 1000 used in `NewEigenPodProofs(chainID, 1000)` should be documented or extracted as a named constant to explain its significance.

```diff
+// Maximum number of validators supported
+const maxValidators = 1000
+
-epp, err := eigenpodproofs.NewEigenPodProofs(chainID, 1000)
+epp, err := eigenpodproofs.NewEigenPodProofs(chainID, maxValidators)
```
Lines 16-19: Consider documenting the spec file format.
Since the proof generation now depends on a specification file, consider adding documentation (e.g., in README.md or as code comments) that describes:
- The required format and schema of the spec file
- Example spec file contents for different networks (mainnet, testnet, localnet)
- Impact of different spec configurations on proof generation
generation/generate_balance_update_proof.go (2)
Lines 16-18: Consider grouping parameters into a configuration struct.
The function has 6 parameters, which reduces readability. Consider grouping related parameters into a configuration struct for better maintainability.
```diff
+type BalanceUpdateProofConfig struct {
+	SpecFile              string
+	OracleBlockHeaderFile string
+	StateFile             string
+	ValidatorIndex        uint64
+	ChainID               uint64
+	OutputPath            string
+}
+
-func GenerateBalanceUpdateProof(
-	specFile string, oracleBlockHeaderFile string, stateFile string, validatorIndex uint64, chainID uint64, output string,
-) error {
+func GenerateBalanceUpdateProof(config BalanceUpdateProofConfig) error {
```
Lines 20-24: Enhance error logging for spec file parsing.
Consider using Error level instead of Debug for parsing errors, as these are likely to be configuration issues that need attention. Also, include the file path in the error message for better debugging.
```diff
 spec, err := commonutils.ParseSpecJSONFile(specFile)
 if err != nil {
-	log.Debug().AnErr("Error with parsing spec file", err)
+	log.Error().Str("file", specFile).AnErr("Failed to parse network specification file", err)
 	return err
 }
```

go.mod (1)
Line 23: Consider yaml.v3 instead of yaml.v2.
The addition of `gopkg.in/yaml.v2` is notable, as yaml.v3 is available and recommended for new projects. Version 3 includes better error messages, security fixes, and improved handling of aliases.
Consider upgrading to `gopkg.in/yaml.v3`, which is already included as an indirect dependency (see line 82).

common/merkle_utils.go (1)
Lines 192-198: Consider enabling structured logging instead of commented-out print statements.
While storing the result in a variable improves debuggability, the commented-out print statements suggest a need for better observability. Consider:
- Using a proper logging framework with different log levels
- Adding structured logging that can be enabled/disabled via configuration
Example implementation:
```diff
 res := target_hash == root
-
-// fmt.Printf(
-// 	"Validated proof root: %#x, leaf: %#x, index: %d, result: %t\n",
-// 	root, element, index, res,
-// )
+if logger.IsDebugEnabled() {
+	logger.Debug("Validated Merkle proof",
+		"root", fmt.Sprintf("%#x", root),
+		"leaf", fmt.Sprintf("%#x", element),
+		"index", index,
+		"result", res,
+	)
+}
 return res
```

tx_submitoor/tx_submitter/submit_withdrawal_credential_proof.go (1)
Lines 86-92: Consider adding file path validation.
While the implementation is functionally correct, consider adding validation for the spec file path to prevent potential security issues:
- Validate file existence before parsing
- Ensure the path is within expected boundaries
- Add size limits for the spec file
Here's a suggested improvement:
```diff
+func validateFilePath(path string) error {
+	if path == "" {
+		return fmt.Errorf("empty file path")
+	}
+
+	fileInfo, err := os.Stat(path)
+	if err != nil {
+		return fmt.Errorf("invalid file path: %w", err)
+	}
+
+	if fileInfo.Size() > 10*1024*1024 { // 10MB limit
+		return fmt.Errorf("file too large: %d bytes", fileInfo.Size())
+	}
+
+	return nil
+}
+
 func (u *EigenPodProofTxSubmitter) SubmitVerifyWithdrawalCredentialsTx(withdrawalCredentialsProofConfig string, submitTransaction bool) ([]byte, error) {
 	// ... existing code ...
+	if err := validateFilePath(cfg.SpecFile); err != nil {
+		log.Debug().AnErr("Invalid spec file", err)
+		return nil, err
+	}
 	networkSpec, err := commonutils.ParseSpecJSONFile(cfg.SpecFile)
 	// ... rest of the code ...
 }
```

data/deneb_goerli_spec.json (1)
Lines 1-133: Architectural recommendation for configuration management.
Consider the following recommendations for managing this configuration:
- Add validation scripts to verify the configuration against official specs
- Document the source and version of each parameter
- Implement configuration validation in the application startup
- Add a version field to track configuration changes
- Consider using a configuration management system for different networks
Would you like me to help create validation scripts or documentation templates?
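A startup-time validation check along these lines could enforce the recommendation above. This is a minimal sketch: `validateSpec` is a hypothetical helper, and the required-key list is illustrative rather than the project's actual set of mandatory parameters.

```go
package main

import "fmt"

// validateSpec is a hypothetical startup check: it verifies that a parsed
// network spec contains every key the proof code relies on. The required-key
// list below is illustrative, not exhaustive.
func validateSpec(spec map[string]any) error {
	required := []string{
		"SECONDS_PER_SLOT",
		"SLOTS_PER_EPOCH",
		"SLOTS_PER_HISTORICAL_ROOT",
	}
	var missing []string
	for _, key := range required {
		if _, ok := spec[key]; !ok {
			missing = append(missing, key)
		}
	}
	if len(missing) > 0 {
		return fmt.Errorf("spec is missing required keys: %v", missing)
	}
	return nil
}

func main() {
	// A spec missing two required keys fails validation with a descriptive error.
	fmt.Println(validateSpec(map[string]any{"SECONDS_PER_SLOT": uint64(12)}))
}
```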
eigen_pod_proofs.go (3)
Lines 31-33: Consider using a more specific type for networkSpec.
The `map[string]any` type for networkSpec is very generic. Consider defining a specific struct type that represents the expected network specification structure to improve type safety and documentation.
Lines 51-57: Consider enhancing default hashRooter error handling.
The default hashRooter implementation could be more informative by including the type of the source in the error message when hashing is not supported.

```diff
 hashRooter: func(source any) ([32]byte, error) {
 	if hashable, ok := source.(fastssz.HashRoot); ok {
 		return hashable.HashTreeRoot()
 	}
-	return phase0.Root{}, errors.New("object does not support hashing")
+	return phase0.Root{}, fmt.Errorf("object of type %T does not support hashing", source)
 },
```
Lines 191-202: Add documentation for new methods.
The new methods `WithNetworkSpec` and `HashTreeRoot` lack documentation. Consider adding GoDoc comments to explain their purpose, parameters, and return values.

```diff
+// WithNetworkSpec configures the EigenPodProofs instance with the provided network specification.
+// It initializes the dynamic SSZ hasher and updates the hash root computation strategy.
+// Returns the modified instance for method chaining.
 func (epp *EigenPodProofs) WithNetworkSpec(networkSpec map[string]interface{}) *EigenPodProofs {

+// HashTreeRoot computes the SSZ hash tree root of the given source using the configured hasher.
+// Returns an error if the source cannot be hashed with the current configuration.
 func (epp *EigenPodProofs) HashTreeRoot(source any) ([32]byte, error) {
```

tx_submitoor/tx_submitter/submit_withdrawal_proof.go (2)
Lines 115-121: Enhance error handling and add network spec validation.
While the network spec parsing is implemented well, consider these improvements:
- Include the spec file path in the error message for better debugging
- Add validation of the parsed network spec before using it

```diff
 networkSpec, err := commonutils.ParseSpecJSONFile(cfg.SpecFile)
 if err != nil {
-	log.Debug().AnErr("Error with parsing spec file", err)
+	log.Debug().Str("file", cfg.SpecFile).AnErr("Error parsing network spec file", err)
 	return nil, err
 }
+
+// Add basic validation of the network spec
+if err := validateNetworkSpec(networkSpec); err != nil {
+	log.Debug().Str("file", cfg.SpecFile).AnErr("Invalid network spec", err)
+	return nil, fmt.Errorf("invalid network spec: %w", err)
+}
 u.eigenPodProofs = u.eigenPodProofs.WithNetworkSpec(networkSpec)
```
Lines 137-140: Improve error messages with operation context.
The error handling is good, but the error messages could be more descriptive for better debugging.

```diff
 versionedOracleState, err := beacon.CreateVersionedState(&oracleState)
 if err != nil {
-	log.Debug().AnErr("Error with CreateVersionedState", err)
+	log.Debug().AnErr("Failed to create versioned oracle state", err).
+		Str("state_file", cfg.BeaconStateFiles.OracleStateFile).
+		Msg("State version creation failed")
 	return nil, err
 }

 // ... (later in the code)

 versionedSignedBlock, err := beacon.CreateVersionedSignedBlock(block)
 if err != nil {
-	log.Debug().AnErr("Error with CreateVersionedSignedBlock", err)
+	log.Debug().AnErr("Failed to create versioned signed block", err).
+		Str("block_file", file).
+		Msg("Block version creation failed")
 	return nil, err
 }
```

Also applies to: 163-166
beacon/shared.go (4)
Line range hint 16-20: LGTM! Consider enhancing the documentation.
The comment about validator structure consistency is helpful. Consider expanding it to include:
- Which versions it's been tested with
- A reference to the spec that guarantees this consistency

```diff
-// the validator structure does not change across any versions, so we can use fastssz
+// The validator structure is stable across all consensus layer versions (tested with phase0 through capella)
+// as per the consensus specs, so we can safely use fastssz for hashing
```
Line range hint 116-120: LGTM! Maintain consistency in documentation style.
The comment about historical summaries structure stability follows the same pattern as the validator structure comment. For consistency, consider using the same documentation style across all similar comments.

```diff
-// historical summaries structure is not expected to change across versions, so we can use fastssz
+// The historical summaries structure is stable across all consensus layer versions (tested with phase0 through capella)
+// as per the consensus specs, so we can safely use fastssz for hashing
```
Line range hint 183-187: LGTM! Enhance comment clarity and consistency.
The comment about the withdrawal structure is less specific than similar comments in the file. Consider providing the same level of detail about version compatibility and spec compliance.

```diff
-// not a dynamic structure, so we can use fastssz
+// The withdrawal structure is stable across all consensus layer versions (tested with phase0 through capella)
+// as per the consensus specs, so we can safely use fastssz for hashing
```
Line range hint 1-204: Consider documenting SSZ hashing design decisions.
The file contains multiple comments about using fastssz for hashing stable consensus layer data structures. Consider documenting these design decisions in:
- A package-level comment in beacon/doc.go
- The project's technical documentation
- A design document explaining the performance benefits and safety considerations

This would help future maintainers understand the rationale behind using fastssz across the codebase.

generation/generate_withdrawal_fields_proof.go (3)
Line range hint 21-33: Consider refactoring the function parameters for better maintainability.
The function has 12 parameters, which makes it difficult to maintain and use correctly. Consider:
- Creating a configuration struct to group related parameters
- Moving specFile after the main input files for better readability

Example refactor:

```go
type WithdrawalFieldsConfig struct {
	SpecFile                   string
	OracleBlockHeaderFile      string
	StateFile                  string
	HistoricalSummaryStateFile string
	BlockHeaderFile            string
	BlockBodyFile              string
	ValidatorIndex             uint64
	WithdrawalIndex            uint64
	HistoricalSummariesIndex   uint64
	BlockHeaderIndex           uint64
	ChainID                    uint64
	OutputFile                 string
}

func GenerateWithdrawalFieldsProof(config WithdrawalFieldsConfig) error {
```
Lines 55-62: Improve logging and error handling for spec file parsing.
The current implementation has several areas for improvement:
- Debug statements using `fmt.Println` should be replaced with structured logging
- The error message could be more specific about what went wrong during parsing

```diff
-fmt.Println("start parsing spec")
+log.Debug().Msg("Starting spec file parsing")
 spec, err := commonutils.ParseSpecJSONFile(specFile)
 if err != nil {
-	log.Debug().AnErr("GenerateWithdrawalFieldsProof: error with parsing spec file", err)
+	log.Debug().Err(err).Str("file", specFile).Msg("Failed to parse network spec file")
 	return fmt.Errorf("failed to parse network spec file %s: %w", specFile, err)
 }
-fmt.Println("end parsing spec")
+log.Debug().Msg("Completed spec file parsing")
```
Lines 128-134: Remove misleading comment and improve error handling specificity.
The comment "not a dynamic structure" is misleading and unrelated to the hash tree root calculation. Additionally, the error handling could be more specific.

```diff
-// not a dynamic structure, so we can use fastssz
 blockHeaderRoot, err := withdrawalBlockHeader.HashTreeRoot()
 if err != nil {
-	log.Debug().AnErr("GenerateWithdrawalFieldsProof: error with HashTreeRoot of blockHeader", err)
+	log.Debug().Err(err).
+		Uint64("slot", uint64(withdrawalBlockHeader.Slot)).
+		Msg("Failed to compute block header root")
 	return err
 }
 executionPayloadRoot, err := epp.HashTreeRoot(withdrawalBlock.Body.ExecutionPayload)
 if err != nil {
-	log.Debug().AnErr("GenerateWithdrawalFieldsProof: error with HashTreeRoot of executionPayload", err)
+	log.Debug().Err(err).
+		Uint64("timestamp", withdrawalBlock.Body.ExecutionPayload.Timestamp).
+		Msg("Failed to compute execution payload root")
 	return err
 }
```

server/proof_server.go (1)
Line range hint 237-266: Simplify error return.
The final return statement includes an unnecessary error variable.

```diff
 return &WithdrawalProofResponse{
 	StateRoot: "0x" + hex.EncodeToString(oracleBlockHeader.StateRoot[:]),
 	// ... other fields ...
 	HistoricalSummaryIndex:     targetBlockRootsGroupSummaryIndex,
 	BlockRootIndex:             withdrawalBlockRootIndexInGroup,
 	WithdrawalIndexWithinBlock: withdrawalIndex,
-}, err
+}, nil
```

common_utils/utils.go (3)
Lines 321-324: Enhance error handling with input validation and more descriptive messages.
The error handling could be improved by:
- Validating the filePath parameter
- Providing more context in error messages

```diff
+if filePath == "" {
+	return nil, errors.New("spec file path cannot be empty")
+}
 data, err := os.ReadFile(filePath)
 if err != nil {
-	log.Debug().Str("file", filePath).Msgf("error with reading file: %v", err)
+	log.Debug().Str("file", filePath).Msgf("failed to read spec file: %v", err)
 	return nil, err
 }
```
Lines 402-410: Extract magic numbers into named constants.
The default values for MAX_TRANSACTIONS_PER_PAYLOAD, MAX_BYTES_PER_TRANSACTION, and MAX_BLOB_COMMITMENTS_PER_BLOCK should be defined as constants.

```diff
+const (
+	DefaultMaxTransactionsPerPayload  = 1048576
+	DefaultMaxBytesPerTransaction     = 1073741824
+	DefaultMaxBlobCommitmentsPerBlock = 4096
+)
+
 if _, exists := res["MAX_TRANSACTIONS_PER_PAYLOAD"]; !exists {
-	res["MAX_TRANSACTIONS_PER_PAYLOAD"] = 1048576
+	res["MAX_TRANSACTIONS_PER_PAYLOAD"] = DefaultMaxTransactionsPerPayload
 }
 if _, exists := res["MAX_BYTES_PER_TRANSACTION"]; !exists {
-	res["MAX_BYTES_PER_TRANSACTION"] = 1073741824
+	res["MAX_BYTES_PER_TRANSACTION"] = DefaultMaxBytesPerTransaction
 }
 if _, exists := res["MAX_BLOB_COMMITMENTS_PER_BLOCK"]; !exists {
-	res["MAX_BLOB_COMMITMENTS_PER_BLOCK"] = 4096
+	res["MAX_BLOB_COMMITMENTS_PER_BLOCK"] = DefaultMaxBlobCommitmentsPerBlock
 }
```
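The fill-in-defaults pattern above can also be written as a single loop over a defaults table, which keeps the key names and values in one place. This is a sketch of that alternative, not the project's actual code; the `setDefaultValues` name and the keys mirror the review's suggestion:

```go
package main

import "fmt"

// setDefaultValues fills in preset defaults only for keys the parsed spec
// did not supply. The constant values mirror the review's suggested defaults
// and are assumptions about the final implementation.
func setDefaultValues(res map[string]any) {
	defaults := map[string]any{
		"MAX_TRANSACTIONS_PER_PAYLOAD":   1048576,
		"MAX_BYTES_PER_TRANSACTION":      1073741824,
		"MAX_BLOB_COMMITMENTS_PER_BLOCK": 4096,
	}
	for k, v := range defaults {
		if _, exists := res[k]; !exists {
			res[k] = v
		}
	}
}

func main() {
	// An existing value is preserved; missing keys get their defaults.
	res := map[string]any{"MAX_BYTES_PER_TRANSACTION": 42}
	setDefaultValues(res)
	fmt.Println(res["MAX_TRANSACTIONS_PER_PAYLOAD"], res["MAX_BYTES_PER_TRANSACTION"])
}
```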
Lines 316-412: Consider breaking down the ParseSpecJSONFile function for better maintainability.
The function is quite complex with multiple responsibilities. Consider breaking it down into smaller, focused functions:
: Consider breaking down the ParseSpecJSONFile function for better maintainabilityThe function is quite complex with multiple responsibilities. Consider breaking it down into smaller, focused functions:
- readAndUnmarshalSpec
- parseSpecialTypes (domains, fork versions)
- parseTimeRelatedFields
- setDefaultValues
This would improve readability and make the code easier to maintain and test.
Example structure:
```go
func ParseSpecJSONFile(filePath string) (map[string]any, error) {
	data, err := readAndUnmarshalSpec(filePath)
	if err != nil {
		return nil, err
	}

	result := make(map[string]any, len(data))

	if err := parseSpecialTypes(data, result); err != nil {
		return nil, err
	}

	if err := parseTimeRelatedFields(data, result); err != nil {
		return nil, err
	}

	setDefaultValues(result)
	return result, nil
}
```

proof_utils.go (3)
Lines 381-383: Add more detailed documentation about the expected JSON format.
The current comment explains how to download the spec file but doesn't describe its expected format or structure.
Add a comment block describing:
- Expected JSON structure
- Required and optional fields
- Format of domain types and fork versions
Lines 458-477: Consider extracting default values into constants.
The default values for domain types and other parameters could be moved to package-level constants for better maintainability and documentation.
Example refactor:
```diff
+// Composite literals cannot be declared as Go constants, so the domain
+// types are package-level vars while the numeric defaults stay constants.
+var (
+	DefaultDomainApplicationMask      = phase0.DomainType{0x00, 0x00, 0x00, 0x01}
+	DefaultDomainBlsToExecutionChange = phase0.DomainType{0x0a, 0x00, 0x00, 0x00}
+	DefaultDomainApplicationBuilder   = phase0.DomainType{0x00, 0x00, 0x00, 0x01}
+)
+
+const (
+	DefaultMaxTransactionsPerPayload  = 1048576
+	DefaultMaxBytesPerTransaction     = 1073741824
+	DefaultMaxBlobCommitmentsPerBlock = 4096
+)
```

Then use these constants in the code:

```diff
-res["DOMAIN_APPLICATION_MASK"] = phase0.DomainType{0x00, 0x00, 0x00, 0x01}
+res["DOMAIN_APPLICATION_MASK"] = DefaultDomainApplicationMask
```
Lines 381-479: Consider adding unit tests for the spec parsing logic.
The ParseSpecJSONFile function handles complex parsing logic for different data types. Adding comprehensive unit tests would help ensure:
- Correct parsing of different data types
- Proper handling of edge cases
- Validation of default values
Would you like me to help create unit tests for this function?
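One of the conversions such tests would need to cover is decoding the 0x-prefixed hex strings in the spec JSON into fixed 4-byte domain types. The sketch below shows that conversion in isolation; `parseDomainType` is a hypothetical helper, not the project's actual function:

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// parseDomainType decodes a 0x-prefixed hex string (as found in spec JSON
// fields like DOMAIN_BLS_TO_EXECUTION_CHANGE) into a fixed 4-byte array.
// Hypothetical helper for illustration.
func parseDomainType(s string) ([4]byte, error) {
	var out [4]byte
	b, err := hex.DecodeString(strings.TrimPrefix(s, "0x"))
	if err != nil {
		return out, err
	}
	if len(b) != 4 {
		return out, fmt.Errorf("domain type must be 4 bytes, got %d", len(b))
	}
	copy(out[:], b)
	return out, nil
}

func main() {
	d, err := parseDomainType("0x0a000000")
	fmt.Printf("%v %v\n", d, err) // [10 0 0 0] <nil>
}
```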
beacon/spec.go (3)
Lines 30-34: Remove unnecessary `isType` helper function.
After simplifying the GetValueFromKey function, the isType helper function is no longer needed. Removing it will clean up the code and reduce clutter.
Apply this diff to remove the helper function:

```diff
-// Helper function to check the type
-func isType[T, V any]() bool {
-	var t T
-	var v V
-	return fmt.Sprintf("%T", t) == fmt.Sprintf("%T", v)
-}
```
Lines 36-140: Reduce code duplication by generalizing retrieval functions.
The multiple Get functions are similar in functionality, leading to code duplication. Consider creating a generalized function to handle the retrieval of unsigned integer values from the spec map.
Introduce a generic retrieval function:

```go
func GetUint64Value(spec map[string]any, key string) (uint64, error) {
	return GetValueFromKey[uint64](spec, key)
}
```

Then, replace specific functions:

```diff
-func GetMaxAttestations(spec map[string]any) (uint64, error) {
-	return GetValueFromKey[uint64](spec, "MAX_ATTESTATIONS")
-}
+func GetMaxAttestations(spec map[string]any) (uint64, error) {
+	return GetUint64Value(spec, "MAX_ATTESTATIONS")
+}
```

This approach simplifies the codebase and makes it easier to maintain and extend in the future.
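The generic lookup underlying this suggestion can be sketched as a self-contained function. This is an illustrative reimplementation under the assumption that spec values are stored with their exact Go types, not the project's actual `GetValueFromKey`:

```go
package main

import "fmt"

// GetValueFromKey is a minimal sketch of a generic typed lookup over a spec
// map, assuming values were stored with the exact type being requested.
func GetValueFromKey[T any](spec map[string]any, key string) (T, error) {
	var zero T
	raw, ok := spec[key]
	if !ok {
		return zero, fmt.Errorf("key %q not found in spec", key)
	}
	val, ok := raw.(T)
	if !ok {
		return zero, fmt.Errorf("key %q has type %T, want %T", key, raw, zero)
	}
	return val, nil
}

func main() {
	spec := map[string]any{"MAX_ATTESTATIONS": uint64(128)}
	v, err := GetValueFromKey[uint64](spec, "MAX_ATTESTATIONS")
	fmt.Println(v, err) // 128 <nil>
}
```

The type assertion makes mismatches explicit: requesting a `string` for a `uint64` value returns an error instead of a zero value, which is the failure mode the `isType` helper was trying to guard against.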
Lines 8-140: Add unit tests for spec retrieval functions.
Currently, there are no unit tests covering the spec retrieval functions. Adding tests will help ensure that the functions work correctly and handle error cases appropriately.
Would you like assistance in creating unit tests for these functions?
solidityProofGen/main.go (1)
Lines 406-409: Simplify error handling for `ComputeBeaconStateTopLevelRoots`.
For better readability, you can handle the error directly in the if statement.
Apply this diff to simplify error handling:

```diff
-oracleBeaconStateTopLevelRoots, err := epp.ComputeBeaconStateTopLevelRoots(&versionedOracleState)
-if err != nil {
-	fmt.Println("Error computing beacon state top level roots", err)
-	return err
-}
+if oracleBeaconStateTopLevelRoots, err := epp.ComputeBeaconStateTopLevelRoots(&versionedOracleState); err != nil {
+	fmt.Println("Error computing beacon state top level roots", err)
+	return err
+}
```

solidityProofGen/utils.go (1)
Lines 152-156: Refactor duplicated code to improve maintainability.
The code for parsing the spec file and handling errors is duplicated in multiple functions.
Consider creating a helper function to parse the spec file and handle errors:

```go
func parseSpec(specFile string, networkSpec map[string]any) error {
	if err := eigenpodproofs.ParseSpecJSONFile(specFile, networkSpec); err != nil {
		return fmt.Errorf("error parsing spec file: %v", err)
	}
	return nil
}
```

Then, in your functions:

```go
if err := parseSpec(specFile, networkSpec); err != nil {
	return nil, nil, err
}
```

This reduces code duplication and improves readability.
Also applies to: 272-277, 365-368
beacon/deneb.go (1)
Lines 284-288: Remove redundant commented-out code for clarity.
There are several instances of commented-out code like `// copy(beaconStateTopLevelRoots.ForkRoot[:], hh.Hash())`. If this code is no longer needed, consider removing it to improve readability.

merkle_util_test.go (2)
Line 170: Fix incorrect assertion message.
In your assertion, the format string contains %v but no argument is provided to fill it. This can lead to formatting errors in the test output.
Apply this diff to correct the assertion:

```diff
-assert.True(t, flag, "Validator Proof %v failed")
+assert.True(t, flag, "Validator Proof failed")
```

Alternatively, if you intended to include an argument:

```go
assert.True(t, flag, "Validator Proof %v failed", validatorIndex)
```
Line range hint 580-584: Remove commented code and handle errors consistently.
In `TestGetHistoricalSummariesBlockRootsProofProofCapellaAgainstDeneb`, there's commented code and inconsistent error handling.
Apply this diff:

```diff
-//oldBeaconStateTopLevelRoots, _ := ComputeBeaconStateTopLevelRoots(&oldBeaconState)
 currentBeaconStateTopLevelRoots, err := beacon.ComputeBeaconStateTopLevelRootsDeneb(&currentBeaconState, epp.networkSpec, epp.dynSSZ)
 if err != nil {
-	fmt.Println("error")
+	t.Fatalf("Failed to compute beacon state top-level roots: %v", err)
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
⛔ Files ignored due to path filters (1)
- go.sum is excluded by `!**/*.sum`
📒 Files selected for processing (22)
beacon/capella.go
(27 hunks)beacon/deneb.go
(35 hunks)beacon/shared.go
(3 hunks)beacon/spec.go
(1 hunks)common/merkle_utils.go
(1 hunks)common_utils/utils.go
(2 hunks)data/deneb_goerli_spec.json
(1 hunks)eigen_pod_proofs.go
(5 hunks)generation/generate_balance_update_proof.go
(2 hunks)generation/generate_validator_proof.go
(2 hunks)generation/generate_withdrawal_fields_proof.go
(5 hunks)generation/main.go
(2 hunks)go.mod
(4 hunks)merkle_util_test.go
(12 hunks)proof_utils.go
(2 hunks)prove_withdrawal.go
(5 hunks)server/proof_server.go
(8 hunks)solidityProofGen/main.go
(12 hunks)solidityProofGen/utils.go
(12 hunks)tx_submitoor/main.go
(1 hunks)tx_submitoor/tx_submitter/submit_withdrawal_credential_proof.go
(2 hunks)tx_submitoor/tx_submitter/submit_withdrawal_proof.go
(4 hunks)
✅ Files skipped from review due to trivial changes (1)
- tx_submitoor/main.go
🚧 Files skipped from review as they are similar to previous changes (1)
- prove_withdrawal.go
🔇 Additional comments (34)
generation/main.go (1)
Line range hint 1-67: Verify chain ID handling for localnet support.
Since this PR adds localnet support, we should verify that the chain ID handling is consistent across the codebase.
✅ Verification successful
Let me gather more context about the chain ID validation patterns to ensure a comprehensive verification.
Chain ID validation is properly implemented for localnet support
The codebase already includes proper chain ID validation that explicitly supports localnet (chain ID 31337) along with other networks (1 for mainnet, 5 for goerli, and 17000). The validation is centralized in eigen_pod_proofs.go
and the chain ID parameter is consistently passed through the proof generation functions in generation/main.go
.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for chain ID usage and validation
echo "Searching for chain ID validation patterns..."
rg -l "chainID|chain_id" --type go
# Search for hardcoded chain IDs that might conflict with localnet
echo "Searching for hardcoded chain IDs..."
rg -n "[^a-zA-Z0-9]31337[^a-zA-Z0-9]" --type go
# Look for config loading patterns
echo "Searching for config loading patterns..."
rg -n "config.*yaml" --type go
Length of output: 924
Script:
#!/bin/bash
# Check chain ID validation patterns in detail
echo "Checking chain ID validation in eigen_pod_proofs.go..."
rg -A 5 -B 2 "chainID.*31337" eigen_pod_proofs.go
echo "Checking chain ID related function signatures..."
ast-grep --pattern 'func $$$($$ chainID $$ uint64 $$) $$$'
echo "Checking chain ID parameter validation..."
rg -A 5 -B 2 "chainID.*!=" --type go
Length of output: 1772
generation/generate_validator_proof.go (2)
16-19
: LGTM! Function signature updated appropriately.
The addition of the specFile
parameter is well-structured and maintains backward compatibility.
49-49
: Verify all callers are updated with spec file parameter.
The state root computation now depends on the network spec. Let's verify that all calling code has been updated to provide the spec file.
✅ Verification successful
Let me check the actual implementation and calls to verify if they are properly updated.
All callers are properly updated with spec file parameter
Looking at the implementation and all callers, I can confirm that:
- The function
GenerateValidatorFieldsProof
acceptsspecFile
as its first parameter - Both callers in
solidityProofGen/main.go
andgeneration/main.go
provide the spec file parameter correctly
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for all calls to GenerateValidatorFieldsProof
# Expect: All calls should include the spec file parameter
rg "GenerateValidatorFieldsProof\(" --type go
Length of output: 386
Script:
#!/bin/bash
# Check the implementation and calls with context
rg "GenerateValidatorFieldsProof\(" --type go -B 2 -A 2
# Also check the function signature in the implementation file
rg "func GenerateValidatorFieldsProof" generation/generate_validator_proof.go -A 5
Length of output: 1607
go.mod (4)
18-18
: LGTM: zerolog version update
The update of github.com/rs/zerolog
to v1.32.0 is a reasonable version bump that includes performance improvements and bug fixes.
50-50
: LGTM: Indirect dependency updates
The updates to indirect dependencies github.com/klauspost/cpuid/v2
and github.com/prysmaticlabs/go-bitfield
appear to be routine maintenance updates.
Also applies to: 60-60
85-86
: Verify the use of forked go-eth2-client
The replacement of the official attestantio/go-eth2-client
with a fork needs justification:
- Using a fork of a critical Ethereum client library could impact security and compatibility
- The fork is pinned to a specific commit rather than a release tag
#!/bin/bash
# Check the differences between the original and forked repos
gh repo view pk910/go-eth2-client --json description,createdAt,updatedAt,isArchived
gh repo view attestantio/go-eth2-client --json description,createdAt,updatedAt,isArchived
# Check commit history to understand fork divergence
gh api repos/pk910/go-eth2-client/commits?per_page=5 --jq '.[].commit.message'
16-16
: Verify the use of forked dynamic-ssz package
The addition of github.com/pk910/dynamic-ssz
and its replacement with a fork github.com/ExocoreNetwork/dynamic-ssz
requires careful consideration:
- The fork is pinned to a specific commit rather than a release tag
- Using a fork instead of the original package could pose maintenance challenges
Also applies to: 87-87
✅ Verification successful
The previous attempt failed because we got results for a different repository. Let's try to understand how the dynamic-ssz package is being used in the codebase to make an informed decision about the fork usage.
Fork usage of dynamic-ssz appears justified and well-integrated
The dynamic-ssz package is actively used across multiple core components (eigen_pod_proofs.go, beacon/deneb.go, beacon/capella.go, solidityProofGen/utils.go) with consistent import patterns. The fork is pinned to a specific commit (v0.0.6-0.20241114202339-72e6e55258eb) which is a recent version from November 2024, indicating active maintenance. The widespread usage and consistent integration suggest this is a deliberate architectural choice rather than a maintenance concern.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check the differences between the original and forked repos
gh repo view ExocoreNetwork/dynamic-ssz --json description,createdAt,updatedAt,isArchived
gh repo view pk910/dynamic-ssz --json description,createdAt,updatedAt,isArchived
# Check commit history to understand fork divergence
gh api repos/ExocoreNetwork/dynamic-ssz/commits?per_page=5 --jq '.[].commit.message'
Length of output: 697
Script:
#!/bin/bash
# Let's try searching for dynamic-ssz usage in the codebase
rg -l "dynamic-ssz"
# Search for SSZ-related imports and usage
rg "\"github.com/.*dynamic-ssz\""
# Check if there are any direct references to ExocoreNetwork/dynamic-ssz
rg "ExocoreNetwork/dynamic-ssz"
# Let's also check the replace directive context
rg -A 2 -B 2 "replace.*dynamic-ssz"
Length of output: 1095
common/merkle_utils.go (1)
192-198
: Enhance test coverage for the validation logic.
Given the changes to the validation logic and the introduction of a new chain ID (31337), consider adding the following test cases:
- Edge cases for proof validation
- Integration tests with the new chain ID
Let's verify the current test coverage:
data/deneb_goerli_spec.json (6)
18-20
: Verify deposit contract configuration.
The deposit contract address, chain ID, and network ID are critical security parameters that must be validated.
#!/bin/bash
# Description: Verify deposit contract configuration
# Check against Goerli testnet deposit contract
# Verify deposit contract address
curl -s "https://api-goerli.etherscan.io/api?module=contract&action=getcontractcreation&contractaddresses=0xff50ed3d0ec03aC01D4C79aAd74928BFF48a7b2b" | jq .
# Verify chain matches Goerli
if [[ "5" != "$(jq -r .DEPOSIT_CHAIN_ID data/deneb_goerli_spec.json)" ]]; then
  echo "Chain ID mismatch!"
fi
34-36
: Review economic parameters for consistency.
The economic parameters (balances, penalties, rewards) need careful review as they affect validator economics:
EFFECTIVE_BALANCE_INCREMENT
: 1 GweiEJECTION_BALANCE
: 16 ETH- Slashing penalties increase across forks (128 → 64 → 32)
- Slashing multipliers increase across forks (1 → 2 → 3)
These parameters appear to align with Ethereum's economic model, but should be verified against the official specifications.
#!/bin/bash
# Description: Verify economic parameters
# Compare with ethereum/consensus-specs repository
gh api repos/ethereum/consensus-specs/contents/specs/deneb/beacon-chain.md -q '.content' | base64 -d | grep -E "EFFECTIVE_BALANCE|EJECTION_BALANCE|MIN_SLASHING|PROPORTIONAL_SLASHING"
Also applies to: 51-54, 89-91, 96-98
107-111
: Verify timing parameters.
Critical timing parameters that affect network synchronization:
SECONDS_PER_ETH1_BLOCK
: 14 secondsSECONDS_PER_SLOT
: 12 secondsSLOTS_PER_EPOCH
: 32SHUFFLE_ROUND_COUNT
: 90
These values are standard for Goerli but should be verified.
#!/bin/bash
# Description: Verify timing parameters
# Compare with ethereum/consensus-specs repository
gh api repos/ethereum/consensus-specs/contents/presets/mainnet/phase0.yaml -q '.content' | base64 -d | grep -E "SECONDS_PER|SLOTS_PER_EPOCH|SHUFFLE_ROUND"
21-33
: Verify domain constants against consensus specs.
These domain constants are critical for message signing and verification. They must match the official Ethereum consensus specifications exactly.
#!/bin/bash
# Description: Verify domain constants against consensus specifications
# Compare with ethereum/consensus-specs repository
gh api repos/ethereum/consensus-specs/contents/specs/phase0/beacon-chain.md -q '.content' | base64 -d | grep -A 15 "Domain types"
120-122
:
Critical: Verify terminal block parameters.
The terminal block parameters are crucial for the transition:
TERMINAL_BLOCK_HASH
: Zero hashTERMINAL_BLOCK_HASH_ACTIVATION_EPOCH
: Max uint64TERMINAL_TOTAL_DIFFICULTY
: 10790000
These parameters must be validated against the Goerli merge configuration.
#!/bin/bash
# Description: Verify terminal block parameters
# Compare with ethereum/consensus-specs repository
gh api repos/ethereum/consensus-specs/contents/configs/goerli/config.yaml -q '.content' | base64 -d | grep -E "TERMINAL_"
2-17
: Verify fork configuration alignment with Goerli testnet.
The fork epochs and versions need to be validated against the official Goerli testnet specifications to ensure network compatibility.
✅ Verification successful
Fork configuration appears to be correctly aligned with Goerli/Prater testnet
Based on the repository context and configuration file, this is a specialized configuration for EigenLayer's proof generation system. The fork epochs and versions are consistent with the Goerli/Prater testnet configuration, as evidenced by:
- The CONFIG_NAME is correctly set to "prater"
- The fork sequence follows the correct order: Altair -> Bellatrix -> Capella -> Deneb
- The fork versions follow the consistent pattern with Goerli's version bytes (0x..001020)
- The configuration is used in conjunction with Goerli testnet examples shown in the README
- The DEPOSIT_CHAIN_ID and DEPOSIT_NETWORK_ID are correctly set to 5 (Goerli)
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify fork configuration against Goerli testnet specifications
# Compare with ethereum/consensus-specs repository
gh api repos/ethereum/consensus-specs/contents/configs/goerli/config.yaml -q '.content' | base64 -d | grep -E "ALTAIR_FORK|BELLATRIX_FORK|CAPELLA_FORK|DENEB_FORK"
Length of output: 391
Script:
#!/bin/bash
# Let's try searching within the repository for any related configuration files or documentation
# that might help validate these fork versions and epochs
# Search for files containing fork configuration
fd -e yaml -e json -e md | xargs rg -l "DENEB_FORK|ALTAIR_FORK|BELLATRIX_FORK|CAPELLA_FORK"
# Search specifically for Goerli/Prater related configuration files
fd -e yaml -e json -e md | xargs rg -l "goerli|prater" -i
# Look for any documentation about fork configurations
rg "fork.*version|fork.*epoch" -g "*.md" -i
Length of output: 289
Script:
#!/bin/bash
# Let's check the content of the README files and the spec file to understand the context better
# Check main README for any fork configuration context
cat README.md
# Check onchain tests README
cat onchain_tests/README.md
# Look at the complete spec file to understand the full context
cat data/deneb_goerli_spec.json
Length of output: 16643
eigen_pod_proofs.go (3)
14-15
: LGTM: Import additions for SSZ hashing support
The new imports for fastssz and dynamic-ssz packages are correctly added to support the enhanced hashing functionality.
65-65
: LGTM: Method updates for dynamic SSZ support
The modifications to use hashRooter and pass networkSpec/dynSSZ to the computation methods are consistent and maintain proper error handling.
Also applies to: 105-105, 107-107
37-37
: Verify chainID 31337 usage across the codebase
The addition of chainID 31337 for localnet support is consistent with the PR objectives. However, ensure this chainID is consistently handled across the codebase.
✅ Verification successful
ChainID 31337 is consistently implemented across the codebase
The chainID 31337 (for localnet) is properly handled in all relevant places:
- Used in chainID validation in
eigen_pod_proofs.go
- Consistently handled in
prove_withdrawal.go
for Capella slot settings - Follows the same pattern as other supported networks (mainnet:1, goerli:5, holesky:17000)
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for chainID usage to ensure consistency
echo "Searching for chainID comparisons..."
rg -n "chain[_-]?id.*31337" --type go --type yaml
Length of output: 134
Script:
#!/bin/bash
# Let's try a broader search for chainID validations and 31337
echo "Searching for chainID validations..."
rg -n "chainID.*==" --type go -A 2
echo -e "\nSearching for hardcoded 31337..."
rg -n "31337" --type go --type yaml
Length of output: 1471
tx_submitoor/tx_submitter/submit_withdrawal_proof.go (2)
34-35
: LGTM: Well-structured configuration enhancement
The addition of the required SpecFile
field is clean and consistent with the PR's objective to enhance network support.
115-121
:
Add security validations for file paths and withdrawal operations
The code processes user-provided file paths and withdrawal data without sufficient security validations:
- File paths should be validated to prevent path traversal attacks
- Consider adding withdrawal amount limits and rate limiting
Let's verify the current file path handling:
Also applies to: 137-140, 163-166
server/proof_server.go (4)
32-32
: LGTM: Proper initialization of spec provider
The addition of the spec provider follows the established pattern with proper error handling and type assertions.
Also applies to: 46-46, 70-74
99-104
: LGTM: Proper spec initialization
The network spec retrieval and initialization is properly implemented with appropriate error handling.
Also applies to: 122-122
158-163
: LGTM: Proper spec initialization
The network spec retrieval and initialization is properly implemented with appropriate error handling.
Also applies to: 169-170
135-137
:
Add null checks for Deneb and LatestExecutionPayloadHeader
There's a potential nil pointer dereference when accessing versionedState.Deneb.LatestExecutionPayloadHeader.Timestamp
. Add appropriate null checks before accessing these fields.
```diff
+if versionedState.Deneb == nil || versionedState.Deneb.LatestExecutionPayloadHeader == nil {
+	return nil, fmt.Errorf("invalid state: missing Deneb or LatestExecutionPayloadHeader")
+}
 return &ValidatorProofResponse{
 	StateRoot:               "0x" + hex.EncodeToString(beaconBlockHeader.StateRoot[:]),
 	StateRootProof:          commonutils.ConvertBytesToStrings(stateRootProof.StateRootProof.ToBytesSlice()),
 	ValidatorContainer:      commonutils.GetValidatorFields(versionedState.Deneb.Validators[req.ValidatorIndex]),
 	ValidatorContainerProof: commonutils.ConvertBytesToStrings(validatorContainerProof.ToBytesSlice()),
 	Slot:                    req.Slot,
 	ValidatorIndex:          req.ValidatorIndex,
 	Timestamp:               versionedState.Deneb.LatestExecutionPayloadHeader.Timestamp,
 }, nil
```
Likely invalid or redundant comment.
proof_utils.go (1)
387-388
: LGTM! Good improvements to error handling.
The error handling has been improved with:
- Proper use of structured logging with context
- Descriptive error messages
- Consistent error handling patterns
Also applies to: 392-393
solidityProofGen/utils.go (2)
260-262
: Ensure all function calls to SetUpWithdrawalsProofCapella
are updated
The function SetUpWithdrawalsProofCapella
now has new parameters. Verify that all calls to this function include the new specFile
and networkSpec
arguments.
Run the following script:
#!/bin/bash
# Description: Find all calls to SetUpWithdrawalsProofCapella to verify they have been updated.
rg 'SetUpWithdrawalsProofCapella\(' --glob '!solidityProofGen/utils.go' -A 2 -B 2
This will help identify any function calls that need updating.
148-150
: Ensure all function calls are updated with new parameters
The function SetUpWithdrawalsProof
now includes new parameters specFile
and networkSpec
. Ensure all calls to this function are updated accordingly to prevent compilation errors.
Run the following script to find all calls to SetUpWithdrawalsProof
:
This will help you locate usages that may need updating.
beacon/capella.go (4)
29-31
: Function call updated correctly
The function ProveExecutionPayloadAgainstBlockBodyCapella
is now called with networkSpec
and dynSSZ
as arguments, matching its updated signature.
50-50
: Verify compatibility of the new hasher implementation
The hasher hh
is now initialized using dynssz.NewHasher()
. Ensure that all subsequent method calls on hh
are compatible with the dynssz
hasher and that it behaves as expected in this context.
42-46
: Ensure function signature update is reflected throughout the codebase
The function ProveExecutionPayloadAgainstBlockBodyCapella
now accepts networkSpec
and dynSSZ
as additional parameters. Verify that all calls to this function have been updated to include the new arguments.
Run the following script to identify function calls that may need updating:
✅ Verification successful
Let me search for more context around the function call to verify if it's passing the required parameters.
Function calls are properly updated with the new parameters
The verification shows that the only call to ProveExecutionPayloadAgainstBlockBodyCapella
in the codebase (in beacon/capella.go
) correctly passes all three required parameters: withdrawalBeaconBlockBody
, networkSpec
, and dynSSZ
. No issues were found.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Find all calls to ProveExecutionPayloadAgainstBlockBodyCapella not passing the new parameters.
# Search for function calls without the new parameters
rg -A 2 'ProveExecutionPayloadAgainstBlockBodyCapella\(' --type go | rg -v 'networkSpec'
Length of output: 377
Script:
#!/bin/bash
# Get more context around the function call
rg -B 3 -A 3 'executionPayloadAgainstBlockHeaderProof, executionPayloadRoot, err := ProveExecutionPayloadAgainstBlockBodyCapella\(' --type go
Length of output: 511
19-20
: Verify all callers are updated to match new function signature
The function ProveExecutionPayloadAgainstBlockHeaderCapella
now accepts two new parameters: networkSpec map[string]any
and dynSSZ *dynssz.DynSsz
. Please ensure that all callers of this function have been updated to pass the necessary arguments.
Run the following script to identify function calls that may need updating:
✅ Verification successful
Let me gather more context about the function calls by checking the actual implementation and callers.
Based on the code analysis, I can now generate the final response:
Function call is correctly updated with new parameters
The only caller of ProveExecutionPayloadAgainstBlockHeaderCapella
in prove_withdrawal.go
is already passing the required parameters epp.networkSpec
and epp.dynSSZ
when invoking the function. The code is consistent and no updates are needed.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Find all calls to ProveExecutionPayloadAgainstBlockHeaderCapella not passing the new parameters.
# Search for function calls without the new parameters
rg -A 2 'ProveExecutionPayloadAgainstBlockHeaderCapella\(' --type go | rg -v 'networkSpec'
Length of output: 477
Script:
#!/bin/bash
# Get the full implementation of ProveExecutionPayloadAgainstBlockHeaderCapella
ast-grep --pattern 'func ProveExecutionPayloadAgainstBlockHeaderCapella($$$) ($$$) {
$$$
}'
# Get the full context of the caller
cat prove_withdrawal.go
Length of output: 18227
beacon/deneb.go (4)
11-11
: Verify the reliability and compatibility of the new dependency
The import of dynssz "github.com/pk910/dynamic-ssz"
introduces a new external dependency. Ensure that this library is reliable, well-maintained, and compatible with the project's licensing requirements.
17-18
: Ensure all function calls include new parameters
The addition of networkSpec
and dynSSZ
as parameters requires all calls to ProveExecutionPayloadAgainstBlockHeaderDeneb
to be updated accordingly. Verify that all invocations of this function throughout the codebase have been updated to include these new parameters.
862-865
:
Correct error logging when withdrawal count exceeds maximum
Ensure that fmt.Println
is logging the correct error variable.
Apply this diff:
if num > maxSize {
err := ssz.ErrIncorrectListSize
- fmt.Println(err)
+ fmt.Println(err)
return nil, err
}
Likely invalid or redundant comment.
784-788
:
Fix improper error logging and potential nil pointer dereference
The error variable err
may be nil
when calling fmt.Println(err)
, which can lead to a nil pointer dereference. Additionally, using fmt.Println
for logging in production code is not recommended.
Apply this diff to correct the error handling:
if err != nil {
retErr = err
- fmt.Println(err)
+ fmt.Println(retErr)
}
Consider using a logging framework or returning the error for proper handling.
Likely invalid or redundant comment.
Actionable comments posted: 5
🧹 Outside diff range and nitpick comments (5)
generation/generate_balance_update_proof.go (1)
41-47
: Consider combining EPP initialization steps
The EPP initialization and network spec application are currently split into two steps. This could lead to a partially initialized EPP if there's an error between the steps. Consider combining them for atomic initialization.
```diff
-epp, err := eigenpodproofs.NewEigenPodProofs(chainID, 1000)
+epp, err := eigenpodproofs.NewEigenPodProofs(chainID, 1000, spec)
 if err != nil {
-	log.Debug().AnErr("Error creating EPP object", err)
+	log.Error().AnErr("Failed to create EigenPodProofs instance", err)
 	return err
 }
-epp = epp.WithNetworkSpec(spec)
```
beacon/capella.go (1)
72-75
: Enhance error messages for size validation failures
The size validation error messages could be more descriptive to aid in debugging.
Consider adding more context to error messages:
```diff
-return nil, err
+return nil, fmt.Errorf("failed to validate %s: max size %d, got %d: %w",
+	"ProposerSlashings", maxSize, num, err)
```
Also applies to: 95-98, 118-121, 141-144, 164-167, 207-210, 300-304
beacon/deneb.go (1)
805-808
: Simplify endianness conversion
The manual byte-by-byte endianness conversion could be simplified using a helper function or standard library utilities.
Consider using a helper function:
```diff
-// reverse the endian-ness to little
-for i := 0; i < 32; i++ {
-	baseFeePerGas[i] = baseFeePerGasBE[31-i]
-}
+func reverseBytes32(in [32]byte) [32]byte {
+	out := [32]byte{}
+	for i := range in {
+		out[i] = in[31-i]
+	}
+	return out
+}
+
+baseFeePerGas = reverseBytes32(baseFeePerGasBE)
```
merkle_util_test.go (1)
95-106
: Consider parameterizing test data file paths
The hardcoded file paths could make the tests less portable across different environments. Consider moving these paths to test configuration variables or environment variables, making it easier to run tests with different data sets.
```diff
-specFile := "data/deneb_goerli_spec.json"
-stateFile := "data/deneb_goerli_slot_7413760.json"
-oracleHeaderFile := "data/deneb_goerli_block_header_7413760.json"
-headerFile := "data/deneb_goerli_block_header_7426113.json"
-bodyFile := "data/deneb_goerli_block_7426113.json"
+specFile := os.Getenv("TEST_SPEC_FILE")
+if specFile == "" {
+	specFile = "data/deneb_goerli_spec.json" // default path
+}
+// Similar pattern for other file paths
```
prove_withdrawal.go (1)
239-251
: Abstract execution payload processing to eliminate duplication
The processing of the `ExecutionPayload` for both Deneb and Capella versions is similar and can be unified to reduce code duplication. This improves readability and maintainability.
Consider introducing a helper function:

```go
// Helper function to process execution payload
func processExecutionPayload(withdrawalBlock *spec.VersionedSignedBeaconBlock, withdrawalBlockHeader *phase0.BeaconBlockHeader, epp *EigenPodProofs) (common.Proof, phase0.Root, []phase0.Root, []*capella.Withdrawal, uint64, error) {
	var executionPayload *ExecutionPayload
	switch withdrawalBlock.Version {
	case spec.DataVersionDeneb:
		if withdrawalBlock.Deneb == nil || withdrawalBlock.Deneb.Message == nil || withdrawalBlock.Deneb.Message.Body == nil {
			return common.Proof{}, phase0.Root{}, nil, nil, 0, errors.New("no deneb block")
		}
		executionPayload = withdrawalBlock.Deneb.Message.Body.ExecutionPayload
	case spec.DataVersionCapella:
		if withdrawalBlock.Capella == nil || withdrawalBlock.Capella.Message == nil || withdrawalBlock.Capella.Message.Body == nil {
			return common.Proof{}, phase0.Root{}, nil, nil, 0, errors.New("no capella block")
		}
		executionPayload = withdrawalBlock.Capella.Message.Body.ExecutionPayload
	default:
		return common.Proof{}, phase0.Root{}, nil, nil, 0, errors.New("unsupported version")
	}

	// Prove the execution payload against the block header
	executionPayloadProof, executionPayloadRoot, err := beacon.ProveExecutionPayloadAgainstBlockHeader(withdrawalBlockHeader, executionPayload, epp.networkSpec, epp.dynSSZ)
	if err != nil {
		return common.Proof{}, phase0.Root{}, nil, nil, 0, err
	}

	// Calculate execution payload field roots
	executionPayloadFieldRoots, err := beacon.ComputeExecutionPayloadFieldRoots(executionPayload, epp.networkSpec)
	if err != nil {
		return common.Proof{}, phase0.Root{}, nil, nil, 0, err
	}

	withdrawals := executionPayload.Withdrawals
	timestamp := executionPayload.Timestamp
	return executionPayloadProof, executionPayloadRoot, executionPayloadFieldRoots, withdrawals, timestamp, nil
}
```

Use this function in `ProveWithdrawal`:

```diff
-// Existing version-specific code for Deneb and Capella
+withdrawalProof.ExecutionPayloadProof, withdrawalProof.ExecutionPayloadRoot, withdrawalExecutionPayloadFieldRoots, withdrawals, timestamp, err :=
+	processExecutionPayload(withdrawalBlock, withdrawalBlockHeader, epp)
+if err != nil {
+	return nil, nil, err
+}
```

This change consolidates the execution payload processing, making the code more concise and easier to manage.
Also applies to: 264-276
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
⛔ Files ignored due to path filters (1)
go.sum
is excluded by!**/*.sum
📒 Files selected for processing (7)
- `beacon/capella.go` (27 hunks)
- `beacon/deneb.go` (35 hunks)
- `common/merkle_utils.go` (1 hunk)
- `generation/generate_balance_update_proof.go` (2 hunks)
- `merkle_util_test.go` (13 hunks)
- `prove_withdrawal.go` (7 hunks)
- `server/proof_server.go` (9 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- common/merkle_utils.go
- server/proof_server.go
🧰 Additional context used
📓 Learnings (1)
merkle_util_test.go (1)
Learnt from: MaxMustermann2
PR: ExocoreNetwork/proofs-generation#2
File: merkle_util_test.go:141-141
Timestamp: 2024-11-18T15:48:34.395Z
Learning: In `merkle_util_test.go`, it's acceptable to ignore the error returned by `ComputeExecutionPayloadFieldRootsDeneb`, following the existing codebase precedent.
🪛 golangci-lint
merkle_util_test.go
138-138: Error return value of epp.ComputeBeaconStateTopLevelRoots
is not checked
(errcheck)
139-139: Error return value of epp.ComputeBeaconStateRoot
is not checked
(errcheck)
🔇 Additional comments (10)
generation/generate_balance_update_proof.go (2)
48-52
: LGTM! Error logging follows established pattern
The error logging for HashTreeRoot operation correctly uses Error level, which is consistent with the established pattern for critical operations.
16-18
: Verify consistent spec file parameter usage
The addition of the specFile
parameter is part of a broader change. Let's verify that other proof generation functions have been updated consistently.
✅ Verification successful
Based on the results, I can now generate the final response as I have sufficient information to verify the concern.
All proof generation functions consistently use the spec file parameter
The verification shows that all proof generation functions have been updated consistently:
GenerateValidatorFieldsProof
in generation/generate_validator_proof.goGenerateBalanceUpdateProof
in generation/generate_balance_update_proof.goGenerateWithdrawalFieldsProof
in generation/generate_withdrawal_fields_proof.go
All these functions include the specFile
parameter and properly utilize it by calling ParseSpecJSONFile
. The implementation is consistent across the codebase.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify that all proof generation functions use the spec file parameter consistently
# Check for proof generation function signatures
rg -A 2 "^func Generate.*Proof\(" --type go
# Check for ParseSpecJSONFile usage
rg "ParseSpecJSONFile" --type go
Length of output: 2448
beacon/capella.go (3)
11-11
: LGTM: Function signature changes are consistent
The addition of networkSpec
and dynSSZ
parameters across functions enables dynamic network configuration support.
Also applies to: 19-20, 44-45, 235-235, 392-392
88-88
: LGTM: Proper SSZ merkleization implementation
The SSZ hashing implementation correctly uses merkleization with dynamic sizes and follows the specification.
Also applies to: 111-111, 134-134, 157-157, 180-180, 223-223, 358-358, 382-382, 515-515, 551-551, 582-582, 606-606, 675-675, 698-698, 767-767, 836-836
19-20
: Verify network specification usage
Ensure that all callers of these functions provide the correct network specification parameters.
✅ Verification successful
Let me check the actual usage of these functions to verify the network specification parameter.
Let me check how the networkSpec is being passed to these functions from the callers.
Based on the analysis of the code, I can now provide the final response:
Network specification parameter is properly handled
All callers of these Capella functions are correctly initializing and passing the network specification:
- The
networkSpec
is properly initialized from spec JSON files usingParseSpecJSONFile
- It's correctly passed through the
EigenPodProofs
struct which manages the network configuration - The server implementation obtains it from the beacon client's spec response
- The parameter is consistently used across all implementations
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Find all callers of these functions to verify network spec usage
rg -l "ProveExecutionPayloadAgainstBlockHeaderCapella|ComputeExecutionPayloadFieldRootsCapella|ComputeBeaconStateTopLevelRootsCapella"
Length of output: 220
Script:
#!/bin/bash
# Check the implementation and usage of these functions
rg "ProveExecutionPayloadAgainstBlockHeaderCapella|ComputeExecutionPayloadFieldRootsCapella|ComputeBeaconStateTopLevelRootsCapella" -A 5
Length of output: 2672
Script:
#!/bin/bash
# Check how networkSpec is initialized and passed
rg "networkSpec.*:=|networkSpec.*=" -B 2 -A 2
Length of output: 4036
beacon/deneb.go (3)
11-11
: LGTM: Dynamic SSZ integration
The addition of dynamic SSZ support and network specification parameters enhances the flexibility of the implementation.
Also applies to: 17-18
127-129
: LGTM: Well-documented dynamic SSZ usage
The code clearly documents which types are spec-dependent and appropriately uses dynamic SSZ for these cases. The comments provide good context for maintainers.
Also applies to: 150-152, 185-186, 195-197, 646-647, 659-660
310-313
: Verify network spec parameter getter implementations
The code relies heavily on various getter functions to retrieve parameters from the network specification. We should ensure these functions are properly implemented and return consistent values.
Also applies to: 335-338, 360-363, 399-402, 431-434, 456-459, 480-483, 505-508, 526-529, 549-552, 621-624, 692-695
merkle_util_test.go (2)
136-141
: LGTM: Network spec integration looks good
The integration of network specifications through WithNetworkSpec
is well-implemented and aligns with the PR's objective of adding localnet support.
🧰 Tools
🪛 golangci-lint
138-138: Error return value of epp.ComputeBeaconStateTopLevelRoots
is not checked
(errcheck)
139-139: Error return value of epp.ComputeBeaconStateRoot
is not checked
(errcheck)
358-361
: LGTM: Consistent error handling across test functions
The error handling in the modified test functions maintains consistency with the existing codebase patterns and successfully integrates the network specifications.
Also applies to: 436-437, 458-461, 517-520, 580-580, 638-641, 715-718, 778-778, 926-929
Summary by CodeRabbit

- Updated `.gitignore` to include additional server configuration and generated data files.
- Added support for a new `chainID` value of `31337` in the `NewEigenPodProofs` function.
- Updated `chain_id` in the sample configuration from `17000` to `31337`.
- Updated `ValidatorProofResponse` and `WithdrawalProofResponse` to include new fields: `slot`, `validator_index`, `timestamp`, `HistoricalSummaryIndex`, `BlockRootIndex`, and `WithdrawalIndexWithinBlock`.
- Added a new configuration file, `deneb_goerli_spec.json`, containing parameters related to the Ethereum consensus mechanism.
- Introduced a new parameter, `specFile`, for specifying a specification JSON file in various proof generation functions.