Create ticket for April '23 manual Wagtail global search index
Summary
What we're after:
Update the global search.gov index with the new pages published since the last index in March 2023.
Related issues
#5394
Completion criteria:
- [ ] Wiki instructions for getting a production CMS database dump
- [ ] Put a copy of the db dump in the private FEC Google Docs

--- OR ---
Do this in production in the Cloud Foundry CLI; wiki below (ask @johnnyporkchops with questions).

WIKI: scraping and indexing CMS pages in production using the Cloud Foundry CLI

This is basically the same as the local process, but you do not need to get a database dump first.
Log in to Cloud Foundry and target the prod space:

```sh
cf target -s prod
```
SSH into the cms app:

```sh
cf ssh cms
```
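If you are starting from scratch, the full connection sequence looks roughly like this; the API endpoint is an assumption based on cloud.gov's documented endpoint, so substitute your own if it differs:

```sh
# Authenticate via browser-based SSO (cloud.gov API endpoint assumed)
cf login -a api.fr.cloud.gov --sso

# Point the CLI at the production space, then open a shell in the cms app
cf target -s prod
cf ssh cms
```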
Configure the shell for Python by setting the deps directory and sourcing the buildpack profile scripts:

```sh
export DEPS_DIR=/home/vcap/deps
for f in /home/vcap/profile.d/*.sh; do source "$f"; done
```
Change into the app directory and run the scraper:

```sh
cd app/fec
./manage.py scrape_cms_pages
```
This will create output.json in search/management/data/ (relative to app/fec).
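Before indexing, it can be worth a quick sanity check that the scrape actually wrote something; paths here are relative to app/fec, matching the steps above:

```sh
# Confirm the scrape output exists and is non-trivial in size
ls -lh search/management/data/output.json

# Spot-check the start of the JSON payload
head -c 500 search/management/data/output.json
```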
Export the env vars for the drawer for the shell session:

```sh
export SEARCHGOV_DRAWER_KEY_MAIN=xxxx
export SEARCHGOV_DRAWER_HANDLE=xxxx
```
Note: You can get these creds by running `cf env cms` in a new tab.

Verify that the env var creds are there:

```sh
echo $SEARCHGOV_DRAWER_KEY_MAIN
```
Note: You cannot verify these by simply typing `env` as you would locally, because that shows local env vars, not the ones available to the cloud shell session you are in.
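If you need to double-check the actual values, you can filter the `cf env cms` output down to just the drawer creds in your local tab:

```sh
# Run locally (not in the SSH session); prints the drawer creds
# among the app's configured environment variables
cf env cms | grep SEARCHGOV_DRAWER
```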
Run the indexer:

```sh
./manage.py index_pages
```
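For context, `index_pages` uses the drawer creds exported above to push the scraped pages into the Search.gov drawer. A rough single-document equivalent via the public i14y API is sketched below; the endpoint and field names come from Search.gov's i14y docs, and the payload values are hypothetical, not what the command actually sends:

```sh
# Hypothetical single-document push to the Search.gov i14y API.
# Basic auth: drawer handle as username, drawer key as password.
curl -X POST "https://i14y.usa.gov/api/v1/documents" \
  -u "$SEARCHGOV_DRAWER_HANDLE:$SEARCHGOV_DRAWER_KEY_MAIN" \
  -H "Content-Type: application/json" \
  -d '{
        "document_id": "example-page-1",
        "title": "Example page title",
        "path": "https://www.fec.gov/example-page/",
        "created": "2023-04-01T00:00:00Z",
        "content": "Example body text for the page."
      }'
```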
Remove the output.json file for good measure, since it's git-ignored and not part of the repo:

```sh
rm search/management/data/output.json
```
Note: If you are sharing a screenshot of the dashboard as confirmation of the latest index dates, be sure NOT to include the key in your screenshot.