Conversation
try {
  const currentEpoch = await this.blockchainService.getCurrentCapacityEpoch();
  const [event, eventMap] = await this.blockchainService
    .createExtrinsic({ pallet: 'frequencyTxPayment', extrinsic: 'payWithCapacityBatchAll' }, { eventPallet: 'utility', event: 'BatchCompleted' }, providerKeys, batch)
Maybe not in MVP 1, but we would rather pass a list of expected events to the blockchain service so it can catch errors specifically, instead of using try-catch.
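A rough sketch of that expected-events idea. All names here (the `ChainEvent` shape, `findMissingEvents`, the `CapacityWithdrawn` event) are hypothetical illustrations, not an existing blockchain-service API:

```typescript
// Hypothetical sketch: instead of wrapping the extrinsic call in try/catch,
// the caller passes the events it expects and checks which were not observed.
interface ChainEvent {
  pallet: string;
  event: string;
}

// Return the expected events that never appeared in the emitted list.
function findMissingEvents(emitted: ChainEvent[], expected: ChainEvent[]): ChainEvent[] {
  return expected.filter(
    (exp) => !emitted.some((ev) => ev.pallet === exp.pallet && ev.event === exp.event),
  );
}

// Example: a batch that completed but never produced the (illustrative)
// capacity event, so the caller can raise a specific error for it.
const emitted: ChainEvent[] = [{ pallet: 'utility', event: 'BatchCompleted' }];
const expected: ChainEvent[] = [
  { pallet: 'utility', event: 'BatchCompleted' },
  { pallet: 'frequencyTxPayment', event: 'CapacityWithdrawn' },
];
const missing = findMissingEvents(emitted, expected);
```

This turns "something threw" into "event X was expected but not emitted", which is the kind of specific error handling the comment asks for.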
I'm not totally sure about the benefits of using a batch here, since we are already batching in prior steps. I think using payWithCapacityBatchAll might complicate the exact events and errors we get back from the chain. Since the batching is already done I wouldn't want to change it, but if it complicates error handling in the future we might need to revisit it.
This is capacity batching: instead of sending one IPFS message to Frequency at a time, we send as many as capacity accepts in a single batch (10 for now), rather than one tx at a time.
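A minimal sketch of the chunking described above, assuming a per-batch limit of 10 and a generic message type (both illustrative, not the service's actual code):

```typescript
// Split pending messages into groups of up to `batchSize`, so each group
// can go out as one payWithCapacityBatchAll transaction instead of one
// transaction per message.
function chunkIntoBatches<T>(messages: T[], batchSize = 10): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < messages.length; i += batchSize) {
    batches.push(messages.slice(i, i + batchSize));
  }
  return batches;
}

// 25 pending messages -> 3 batches of sizes 10, 10, and 5.
const batches = chunkIntoBatches(Array.from({ length: 25 }, (_, i) => i));
```

The payoff is fewer extrinsics submitted for the same number of messages, at the cost of coarser-grained events and errors per transaction, which is the trade-off discussed in this thread.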
Also, currently we are only sending one message at a time. I kept it like the reconnection service, in case we want to batch more than one later.
I see. I think what I might have been missing is the benefit of capacity batching for our scenario. I know we can use it for transactional purposes when we want a few calls to apply at the same time, but that isn't relevant for us since each message is treated independently. Are there any other benefits to capacity batching besides transactionality?
Overall looks good. Added some questions and suggestions.
private async handleCapacityExhausted() {
  this.logger.debug('Received capacity.exhausted event');
  this.capacityExhausted = true;
  await this.publishQueue.pause();
Is this pause applied in Redis, or only in the current worker instance?
It's only applied to the publishQueue, not to any other queues, I believe.
Is that applied to the queue representation inside Redis, or inside the worker's memory? To put it another way: if we have two workers, does the other worker also pause its processing of that queue, or not?
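For context on this question: as I understand BullMQ, `Queue.pause()` marks the queue as paused in Redis, so every worker stops picking up new jobs from it, while `Worker.pause()` affects only that one worker instance. A toy in-memory model of the distinction (not real BullMQ code; the class names here are purely illustrative):

```typescript
// Shared state stands in for the paused marker BullMQ keeps in Redis;
// a queue-level pause lives there, while a worker-level pause is private
// to one worker instance.
class SharedQueueState {
  paused = false;
}

class WorkerModel {
  private locallyPaused = false;
  constructor(private shared: SharedQueueState) {}

  pauseLocally() {
    this.locallyPaused = true;
  }

  // A worker only picks up jobs when neither pause applies.
  canProcess(): boolean {
    return !this.shared.paused && !this.locallyPaused;
  }
}

const shared = new SharedQueueState();
const workerA = new WorkerModel(shared);
const workerB = new WorkerModel(shared);

// Analogous to queue.pause(): the shared marker stops both workers.
shared.paused = true;
```

Under that model, the `publishQueue.pause()` call above would stop all worker instances, not just the one that handled the event.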
@OnEvent('capacity.refilled', { async: true, promisify: true })
private async handleCapacityRefilled() {
  this.logger.debug('Received capacity.refilled event');
  this.capacityExhausted = false;
where do we unpause a paused queue? Is it only this flag?
This is the refilled event. The queue is paused in the capacity.exhausted and unknown.error handlers, and the pause lasts until the next epoch.
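One way to picture "pause until the next epoch": given the current block number, the block the current epoch started at, and the epoch length in blocks, the remaining wait can be computed directly. All names and numbers below are illustrative assumptions; the real service presumably derives these values from getCurrentCapacityEpoch:

```typescript
// How many blocks remain before the next capacity epoch begins, after
// which a paused queue could be resumed.
function blocksUntilNextEpoch(
  currentBlock: number,
  epochStartBlock: number,
  epochLengthBlocks: number,
): number {
  const elapsed = currentBlock - epochStartBlock;
  return epochLengthBlocks - (elapsed % epochLengthBlocks);
}

// Example: epoch length 100 blocks, epoch started at block 500, currently
// at block 560 -> 40 blocks remain in this epoch.
const remaining = blocksUntilNextEpoch(560, 500, 100);
```

A scheduler could convert the remaining block count into a delay (blocks times the expected block time) and unpause the queue then, which relates to the question below about how the next epoch is detected.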
How do we find out about the next epoch? Is the scheduler used for that?
Merging this to unblock other stories; some items will be tackled in future stories.
Requirement

Content publisher is intended to process a publishQueue and post IPFS messages on the Frequency blockchain.

Closes #6

Details

- PublishingService: processes the publishQueue
- Publisher: the actual IPFS batch processor, executing transactions for publication to Frequency via a publish function that takes batches
- PublisherModule: module exporting the publisher processor
- Blockchain Module: moved within worker and removed from api

Acceptance Criteria:

- Tests
- publishQueue: ensure a message is registered on Frequency