This repository has been archived by the owner on Sep 26, 2023. It is now read-only.
Data transfer to MCP #145
Comments
We should probably push to the prod bucket rather than the staging bucket.
Misc TODOs
Some pgstac database quirks ☠️ realised while migrating the datasets that we should be aware of:
We are having a longer discussion on Slack, so we will follow up with next steps.
Epic: #118
Description
One of the most important tasks in migrating VEDA to MCP is transferring the data files to the MCP S3 bucket.
Based on decisions made in #118, this will be carried out by running the data transformation and ingestion pipeline with a flag set to transfer the data to the MCP S3 bucket.
A role-based policy will be used to grant the ingestion pipeline access to the MCP S3 bucket.
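The role-based access described above could be sketched as the usual cross-account pair of IAM documents: a trust policy on the role in the MCP account, plus a permissions policy granting bucket access. All account IDs, role names, and bucket names below are hypothetical placeholders, not values from this issue.

```python
import json

# Hypothetical trust policy: lets the ingestion pipeline's role (placeholder
# account ID and role name) assume a role in the MCP account.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111111111111:role/veda-ingest-pipeline"
            },
            "Action": "sts:AssumeRole",
        }
    ],
}

# Hypothetical permissions policy attached to the assumed role, granting
# read/write access to the MCP S3 bucket (bucket name is a placeholder).
access_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::mcp-veda-data",
                "arn:aws:s3:::mcp-veda-data/*",
            ],
        }
    ],
}

print(json.dumps(access_policy, indent=2))
```

The pipeline would call STS `AssumeRole` against the role carrying this policy and use the temporary credentials for its S3 writes; the exact role and bucket names would come from the MCP account setup.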
Examples
The current data links point to a UAH bucket; that is where the data files exist today.
At the end of the data migration, the links should look like the following, and the data should exist at that location:
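As a concrete illustration of the link rewrite, the href update amounts to swapping the bucket while keeping the object key unchanged. The bucket names and key below are hypothetical; the issue's actual example links are not reproduced here.

```python
from urllib.parse import urlparse

# Hypothetical bucket names; the real UAH and MCP bucket names
# are not stated in this issue.
UAH_BUCKET = "uah-veda-data"
MCP_BUCKET = "mcp-veda-data"

def rewrite_href(href: str) -> str:
    """Point an s3:// asset href at the MCP bucket, keeping the key unchanged."""
    parsed = urlparse(href)
    if parsed.scheme == "s3" and parsed.netloc == UAH_BUCKET:
        return f"s3://{MCP_BUCKET}{parsed.path}"
    return href  # leave non-UAH links untouched

print(rewrite_href("s3://uah-veda-data/nightlights-hd-3bands/item.tif"))
# → s3://mcp-veda-data/nightlights-hd-3bands/item.tif
```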
Acceptance Criteria:
Checklist for collections
HLSS30.002 (these are in LP DAAC protected buckets; @anayeaye is working on creating our own version of these)
HLSL30.002 (same)
nightlights-hd-3bands
Checklist:
Concept diagrams