
Small cleanups #1836

Merged: nazar-pc merged 3 commits into main from small-cleanups on Aug 17, 2023
Conversation

nazar-pc (Member):

Just a variety of tiny things that bothered me, no logic changes or anything like that.

```diff
@@ -134,11 +131,11 @@ impl<Block: BlockT> Validator<Block> for PotGossipValidator {
         match PotProof::decode(&mut data) {
             Ok(proof) => {
                 // Perform AES verification only if the proof is a candidate.
-                if let Err(err) = self.pot_state.is_candidate(*sender, &proof) {
-                    trace!("gossip::validate: not a candidate: {err:?}");
+                if let Err(error) = self.pot_state.is_candidate(*sender, &proof) {
```
Member:

I like `err` as an instance of `Error` rather than `error`, though non-blocker.

@nazar-pc merged commit 0746a6f into main on Aug 17, 2023
@nazar-pc deleted the small-cleanups branch on August 17, 2023 at 12:08
```diff
@@ -98,17 +98,19 @@ where

     /// Initializes the chain state from the consensus tip info.
     async fn initialize(&self) {
-        info!("time_keeper::initialize: waiting for initialization ...");
+        debug!("Waiting for initialization");
```
Contributor:

Why was this changed? Without a target and no prefix, the logs don't have much context.

nazar-pc (Member, Author):

They actually do. As mentioned in previous PR reviews, these prefixes are excessive and hard to read; the logger already prints the module path when you log something, so you don't need to add it manually. Here is an example:

```
2023-08-17T18:22:46.374851Z DEBUG single_disk_farm{disk_farm_index=0}: subspace_farmer::single_disk_farm::farming: Solution found slot=846148283 sector_index=9
2023-08-17T18:22:46.379039Z  INFO single_disk_farm{disk_farm_index=0}: subspace_farmer::reward_signing: Successfully signed reward hash 0x1f4767f4785b35cf0766f783e076356a3f0794aed2cfc10ad8647396aad173d0
```

And while I understand you're still developing it, the info level should be used sparingly, for things users would actually care about and understand, hence the downgrade to debug.
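To illustrate the point being made here, below is a minimal sketch (not from the PR; the module and function names are placeholders) of how the `tracing` crate, used across Substrate-based projects, attaches the module path as the event target automatically:

```rust
use tracing::{debug, info};

// Imagine this lives in a module such as `my_crate::time_keeper`
// (hypothetical path, for illustration only).
fn initialize() {
    // A typical subscriber renders this as
    // `DEBUG my_crate::time_keeper: Waiting for initialization`,
    // so the module context comes for free, without a manual prefix.
    debug!("Waiting for initialization");

    // Reserve `info!` for events end users actually care about and understand.
    info!("Initialization complete");
}
```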

(Two review threads on crates/sc-proof-of-time/src/time_keeper.rs were marked resolved.)
```diff
             } else {
-                trace!("time_keeper::on gossip: {proof}, time=[{elapsed:?}], {sender}");
-                self.gossip.gossip_message(proof.encode());
```
Contributor:

This is needed: this is a difference between the time keeper and the node client.

nazar-pc (Member, Author):

Hm... I'm sorry, I think I missed it when copy-pasting from one file to another 😕

Why, though? The whole point of a gossip engine is to handle gossip; I don't think you need to manually re-gossip what you have received.

Contributor:

Substrate gossip requires you to manually resend against the topic if you want to propagate to the peers (which we do want in the case of the time keeper). ValidationResult::ProcessAndKeep and ValidationResult::Discard only indicate whether a message should be kept in the per-peer queue or not (to check future messages for duplicates, etc.), from when I looked at their code recently. Please correct me if that is not so.
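As a minimal sketch of that reading of `sc-network-gossip` (the helper function and its wiring are hypothetical, not the PR's actual code): a validator returning `ProcessAndKeep(topic)` only records the message for per-peer deduplication, so propagation still needs an explicit `gossip_message` call:

```rust
use sc_network_gossip::GossipEngine;
use sp_runtime::traits::Block as BlockT;

// Hypothetical helper, called after our `Validator` has accepted a proof
// with `ValidationResult::ProcessAndKeep(topic)`. Keeping the message only
// feeds the per-peer known-messages set; it does not forward anything.
fn propagate_proof<Block: BlockT>(
    gossip_engine: &mut GossipEngine<Block>,
    topic: Block::Hash,
    encoded_proof: Vec<u8>,
) {
    // Explicitly re-send under the topic so the proof reaches peers beyond
    // the original sender; `force = false` skips peers already known to
    // have the message.
    gossip_engine.gossip_message(topic, encoded_proof, false);
}
```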

nazar-pc (Member, Author):

Yeah, I think you're right, this probably needs to be restored. At the same time, why should this only be done by the time keeper? The client should also re-gossip received messages, right?

Contributor:

Consensus nodes already receive O(num time keepers) messages per second, which cannot be avoided. It would be good to avoid adding more volume on top of that. And if they gossip, messages will go back to the time keepers as well. We could possibly consider two different topics in the future.

In the case of time keepers, the gossip was needed for node sync IIRC. This part is still changing though.

nazar-pc (Member, Author):

I do not think that is correct. You assume that every consensus node is connected to at least one time keeper; that is not guaranteed to be the case, very far from it actually.

Contributor:

I see, I was assuming a close-to-full-mesh topology. In that case we would indeed need to gossip from consensus nodes as well.
