implement app-host functionalities, save as draft #7
base: main
Conversation
Currently there are still errors and unimplemented code, but the main structures and workflow have been implemented.
crates/app-host/src/block_tracer.rs (Outdated)
```rust
// todo: use a singleton tokio runtime with limited worker to spawn task
let handle = self.rt.spawn(async move {
    BlockTracer::get_block_trace(l2_client, block).await
});
```
Why is it necessary that `BlockTracer` has its own runtime? Can't you just use the current runtime?
Suggested change (`self.rt.spawn` → `tokio::spawn`):

```rust
// todo: use a singleton tokio runtime with limited worker to spawn task
let handle = tokio::spawn(async move {
    BlockTracer::get_block_trace(l2_client, block).await
});
```
I want to limit the concurrency of this procedure. Currently batches are proved sequentially by batch_index, and there may be tens of blocks within one batch, so the limit may already be useful. What's more, if batches can later be proved concurrently, this code would not need to change.
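A minimal sketch of how the concurrency limit could be kept while still using the ambient runtime, via a `tokio::sync::Semaphore` instead of a dedicated runtime. The limit value and the `trace_blocks` wrapper are illustrative; only `L2Client`, `BlockTrace`, and `BlockTracer::get_block_trace` come from the PR:

```rust
use std::sync::Arc;
use tokio::sync::Semaphore;

// Illustrative limit; the PR does not specify a value.
const MAX_CONCURRENT_TRACES: usize = 8;

async fn trace_blocks(l2_client: Arc<L2Client>, blocks: Vec<u64>) -> Vec<BlockTrace> {
    let semaphore = Arc::new(Semaphore::new(MAX_CONCURRENT_TRACES));
    let mut handles = Vec::with_capacity(blocks.len());

    for block in blocks {
        let l2_client = l2_client.clone();
        // Acquiring the permit before spawning bounds the number of in-flight tasks.
        let permit = semaphore.clone().acquire_owned().await.expect("semaphore closed");
        handles.push(tokio::spawn(async move {
            let _permit = permit; // released when the task finishes
            BlockTracer::get_block_trace(l2_client, block).await
        }));
    }

    let mut traces = Vec::with_capacity(handles.len());
    for handle in handles {
        traces.push(handle.await.expect("trace task panicked"));
    }
    traces
}
```

This keeps the bound on concurrency without paying for a second runtime and its extra worker threads.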
```rust
    handles.push(handle);
}

let mut block_traces = Vec::with_capacity(handles.len());
```
I think this could be simplified to something like `Result::Ok(tokio::join!(handles))`
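Note that `tokio::join!` takes a fixed set of futures; for a `Vec` of `JoinHandle`s, `futures::future::join_all` is the closest equivalent. A sketch, assuming the `futures` crate is available:

```rust
use futures::future::join_all;

// Await every spawned trace task at once rather than looping manually.
let block_traces: Vec<BlockTrace> = join_all(handles)
    .await
    .into_iter()
    .collect::<Result<Vec<_>, _>>() // each JoinHandle yields Result<BlockTrace, JoinError>
    .expect("a trace task panicked");

Result::Ok(block_traces)
```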
```rust
    Result::Ok(block_traces)
}

async fn get_block_trace(l2_client: Arc<L2Client>, block: u64) -> BlockTrace {
```
Suggested change (return `Result<BlockTrace>` instead of `BlockTrace`):

```rust
async fn get_block_trace(l2_client: Arc<L2Client>, block: u64) -> Result<BlockTrace> {
```
If it can fail, it's better to return `Result`.
This is intentional: I plan to handle errors and retry within this method. The block trace is a must-have, and a missing one blocks the whole process, so I plan to handle the error here rather than propagate it.
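A sketch of the retry-inside-the-method approach described above, with an illustrative capped exponential backoff; `fetch_block_trace` is a hypothetical RPC helper, and the `tracing` crate is assumed:

```rust
use std::{sync::Arc, time::Duration};

async fn get_block_trace(l2_client: Arc<L2Client>, block: u64) -> BlockTrace {
    let mut backoff = Duration::from_secs(1);
    loop {
        // Hypothetical fallible RPC call; the real method name may differ.
        match l2_client.fetch_block_trace(block).await {
            Ok(trace) => return trace,
            Err(err) => {
                tracing::warn!("failed to fetch trace for block {block}: {err}, retrying");
                tokio::time::sleep(backoff).await;
                // Cap the backoff so a long outage doesn't stall retries indefinitely.
                backoff = (backoff * 2).min(Duration::from_secs(60));
            }
        }
    }
}
```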
```rust
}

async fn fetch_logs(&self) -> Result<(), EventLogError> {
    let last_finalize_block = self.get_latest_finalized_block().await?;
```
Why only fetch up to the latest finalized block? To avoid reorg handling?
Yep, it's the same as the l1_event fetch logic in our go-ethereum.
I don't think this is suitable here; the 12-15 minute finality delay would be too large. We want to submit the SGX proof shortly after the ZK proof is submitted.
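One possible middle ground, sketched here as an assumption rather than anything in the PR: fetch up to the latest block minus a small confirmation depth, which bounds reorg risk without waiting the full finality delay. `get_latest_block` is a hypothetical counterpart to `get_latest_finalized_block`, and the depth value is illustrative:

```rust
// Illustrative: a few blocks of depth instead of the ~13-minute finality lag.
const CONFIRMATION_DEPTH: u64 = 6;

async fn upper_fetch_bound(&self) -> Result<u64, EventLogError> {
    // Hypothetical helper returning the latest (unfinalized) L1 block number.
    let latest = self.get_latest_block().await?;
    Ok(latest.saturating_sub(CONFIRMATION_DEPTH))
}
```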
```rust
    ScrollChain::FinalizeBatch::SIGNATURE_HASH,
];

let from: u64 = self.fetched_block_number;
```
Where is `fetched_block_number` updated?
Currently the fetcher logic is not robust yet; the detailed logic will be checked again.
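For reference, a minimal sketch of where the cursor update would typically go, assuming `fetch_logs` processes the inclusive range `[from, to]`:

```rust
// At the end of a successful fetch_logs pass over [from, to]:
// advance the cursor so the next poll starts after the processed blocks.
self.fetched_block_number = to + 1;
```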
```rust
pub fn new(
    commit_batch_tx: Sender<CommitBatchEvent>,
    finalize_batch_tx: Sender<FinalizeBatchEvent>,
) -> Self {
    todo!()
```
We should also initialize `fetched_block_number` to avoid syncing from genesis.
```rust
    _teeProof: tee_proof,
};

let tx = self.eth.transact(self.scroll_chain_address, &call).await?;
```
Note: we might need to add more robust resend logic here
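A minimal sketch of a bounded resend loop around the existing `transact` call; `MAX_ATTEMPTS` and the backoff are illustrative, and a production version would also need to bump gas and check whether an earlier attempt already landed:

```rust
// Illustrative retry bound; not part of the PR.
const MAX_ATTEMPTS: u32 = 5;

let mut attempt = 0;
let tx = loop {
    match self.eth.transact(self.scroll_chain_address, &call).await {
        Ok(tx) => break tx,
        Err(err) if attempt + 1 < MAX_ATTEMPTS => {
            attempt += 1;
            tracing::warn!("transact failed (attempt {attempt}): {err}");
            // Simple exponential backoff between resend attempts.
            tokio::time::sleep(std::time::Duration::from_secs(2u64.pow(attempt))).await;
        }
        Err(err) => return Err(err.into()),
    }
};
```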
crates/app-host/src/state_manager.rs (Outdated)
```rust
    })
}

fn collect_batch_infos(&self, begin_batch_index: u64, end_batch_index: u64) -> Result<ProveBundleRequest> {
```
And how do we construct `ProveBatchRequest`?
It's constructed within `on_batch_commit_event_received`.
Co-authored-by: Péter Garamvölgyi <[email protected]>