Add E2E Testing for NFT SDK #148
Conversation
Force-pushed b8f0b32 to 7740e40
Force-pushed bf777d1 to 6ccc921
The overall logic here looks good! The only bigger issue is that, even though I think I suggested it myself, I'm having second thoughts about including all these tests in the `record-tester` flow. Most if not all of the logic here will be the same anyway; the difference is that I think we should have a separate command/tester just for that.
The main reasons for that are:
- Including these tests in the same flow will introduce a big pause between test streams. There's a whole minute until a recording starts being processed to become an asset, so I expect that to add at least 5 minutes before new livestreams are made. This will increase our time to get an alert when a region goes down.
- Our recording (and probably transcoding) output is not deterministic. This means that by exporting the transcoded recording to IPFS, we are going to create a new file every time. Piñata storage is pretty expensive (I think 5x the price of S3), so this bill could grow quickly. We already pay thousands of dollars a month for old test recordings saved on Google Cloud Storage, and on Piñata it's gonna 📈. We could find some way to unpin the file after the test is done, but I think it will be simpler if we just always export the same file to IPFS.
So a simpler VOD (NFT)-specific tester could work like this (see the sketch after the list):
- Start from the same video file as `record-tester`
- Import that as asset 1
- Try transcoding that into another asset 2
- Try exporting asset 1 (not asset 2) to IPFS
- Make sure all that works
- Success, sleep and try again in a few (just like `continuous_record_tester` here)
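A rough sketch of that loop in Go, assuming the usual `context`/`fmt`/`time` imports; `uploadTestFile`, `transcodeAsset`, and `exportToIPFS` are hypothetical helper names for illustration, not the real API surface:

```go
// Hypothetical outline of the proposed VOD (NFT) tester loop.
func (vt *vodTester) Start(ctx context.Context, testInterval time.Duration) error {
	ticker := time.NewTicker(testInterval)
	defer ticker.Stop()
	for {
		// 1. Import the same source video used by record-tester as asset 1.
		asset1, err := vt.uploadTestFile(ctx)
		if err != nil {
			return fmt.Errorf("import failed: %w", err)
		}
		// 2. Try transcoding asset 1 into another asset (asset 2).
		if _, err := vt.transcodeAsset(ctx, asset1); err != nil {
			return fmt.Errorf("transcode failed: %w", err)
		}
		// 3. Try exporting asset 1 (not asset 2) to IPFS, so the same file
		//    is pinned on every run and Piñata storage doesn't keep growing.
		if err := vt.exportToIPFS(ctx, asset1); err != nil {
			return fmt.Errorf("IPFS export failed: %w", err)
		}
		// 4. Success: sleep and go again, like continuous_record_tester.
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}
```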
Let's talk more about this next week! I'll bring it up in our planning and we can talk about it ad hoc or schedule additional meetings to go through it as well.
```go
wait_interval = 5 * time.Second
elapsed_time = 0 * time.Second
for {
```
For these wait intervals, you should also consider using a `time.Ticker` instead, for go-ishness. E.g.:
```go
ticker := time.NewTicker(5 * time.Second)
defer ticker.Stop()
for range ticker.C {
	...
```
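If the `elapsed_time` bound from the original snippet still matters, one way to keep it alongside the ticker (a sketch, assuming `fmt` is imported, an arbitrary 5-minute overall timeout, and the surrounding function's `(0, err)`-style return) is:

```go
ticker := time.NewTicker(5 * time.Second)
defer ticker.Stop()
timeout := time.After(5 * time.Minute) // assumed bound, standing in for elapsed_time
for {
	select {
	case <-timeout:
		return 0, fmt.Errorf("timed out waiting for task")
	case <-ticker.C:
		// poll the task status here; return once it reaches a terminal state
	}
}
```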
Will follow up on this.
```go
	return 0, err
}

task, err := rt.lapi.GetTask(exportTask.ID)
```
We should probably use the same mechanism to wait for both the asset transcode task and the asset export task. This means that in both cases we can just wait for the task and get the asset at the end of the process, instead of polling for the task as you did in the transcode case.
You could use this as an example and create a similar helper here that you can re-use: https://github.com/livepeer/video-nft/blob/main/src/minter.ts#L480
With a separate helper that only waits for a task (which could live in the `go-api-client` lib as well, since it's pretty common logic), the logic here should become a little cleaner as well.
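A sketch of what such a wait-for-task helper might look like, assuming a `*api.Task` return type and `Status.Phase`/`Status.ErrorMessage` fields on the go-api-client `Task` type (the real type and field names may differ):

```go
// waitTaskProcessed polls the API every 5s until the task reaches a terminal
// state. Field names here are assumptions about the go-api-client Task type.
func (rt *vodTester) waitTaskProcessed(taskID string) (*api.Task, error) {
	ticker := time.NewTicker(5 * time.Second)
	defer ticker.Stop()
	for range ticker.C {
		task, err := rt.lapi.GetTask(taskID)
		if err != nil {
			return nil, err
		}
		switch task.Status.Phase {
		case "completed":
			return task, nil
		case "failed":
			return nil, fmt.Errorf("task %s failed: %s", taskID, task.Status.ErrorMessage)
		}
	}
	return nil, nil // unreachable; the ticker channel never closes
}
```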
Force-pushed 721b3ae to e797a25
Force-pushed e797a25 to 6511b62
Force-pushed 939b137 to db0989f
LGTM!
```go
func (rt *vodTester) Clean() {
	// TODO
}
```
Instead of having this clean function, you could simply do a `defer api.DeleteAsset(asset.ID)` after every asset you create in the test routine above.
We probably don't want to infinitely stack up the account with so many assets from these tests, although we won't really be deleting the files from cloud storage, nor even the objects from the DB, anyway 🤷
So it's more about not polluting the API responses/dashboard than any concrete resource optimization.
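A minimal sketch of that pattern, assuming stdlib `log` and a `DeleteAsset(id string) error` method on the client (which, per the follow-up below, still needs to be added to `go-api-client`); `uploadTestFile` is a hypothetical stand-in for the import step:

```go
asset, err := rt.uploadTestFile() // hypothetical helper for the import step
if err != nil {
	return err
}
// Delete the asset when the test routine returns, instead of a separate Clean():
defer func() {
	if err := rt.lapi.DeleteAsset(asset.ID); err != nil {
		log.Printf("failed to delete test asset id=%s err=%v", asset.ID, err)
	}
}()
```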
Will follow up on this; need to add `DeleteAsset` to `go-api-client`.
#146 Refactor `record-tester` to use `go-api-client` (after `record-tester` work finishes).
Testing locally: go-api-client PR: livepeer/go-api-client#7