Replace pod name from 1.x #897

Merged: 1 commit merged on Jul 6, 2020
20 changes: 10 additions & 10 deletions xml/cap_admin_backup-restore.xml
@@ -654,14 +654,14 @@
key will depend on whether <literal>current_key_label</literal> has been
defined on the source cluster. This value is defined in
<filename>/var/vcap/jobs/cloud_controller_ng/config/cloud_controller_ng.yml</filename>
-of the <literal>api-group-0</literal> pod and also found in various
+of the <literal>api-0</literal> pod and also found in various
tables of the &mysql; database.
</para>
<para>
Begin by examining the configuration file for
the <literal>current_key_label</literal> setting:
</para>
-<screen>&prompt.user;kubectl exec --stdin --tty --namespace kubecf api-group-0 -- bash -c "cat /var/vcap/jobs/cloud_controller_ng/config/cloud_controller_ng.yml | grep -A 3 database_encryption"</screen>
+<screen>&prompt.user;kubectl exec --stdin --tty --namespace kubecf api-0 -- bash -c "cat /var/vcap/jobs/cloud_controller_ng/config/cloud_controller_ng.yml | grep -A 3 database_encryption"</screen>
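As a quick local illustration of what that command surfaces (the sample file and all values below are invented for this sketch, not taken from a real cluster): `grep -A 3` prints the matching line plus the three lines after it, which is enough to cover the `keys` and `current_key_label` entries.

```shell
# Throwaway file shaped like the database_encryption section of
# cloud_controller_ng.yml; label and key value are placeholders.
cat > /tmp/cc_sample.yml <<'EOF'
external_host: api.example.com
database_encryption:
  keys:
    encryption_key_0: "PLACEHOLDER_KEY"
  current_key_label: "encryption_key_0"
EOF

# Same grep as in the step above: the match plus 3 trailing lines.
grep -A 3 database_encryption /tmp/cc_sample.yml
```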
<itemizedlist>
<listitem>
<para>
@@ -676,7 +676,7 @@
setting, run the following command and save the output for the
restoration process:
</para>
-<screen>&prompt.user;kubectl exec api-group-0 --namespace kubecf -- bash -c 'echo $DB_ENCRYPTION_KEY'</screen>
+<screen>&prompt.user;kubectl exec api-0 --namespace kubecf -- bash -c 'echo $DB_ENCRYPTION_KEY'</screen>
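Since the restore procedure needs this value later, it can help to redirect it straight into a file. A sketch (the output path and the `KUBECTL` override are inventions of this example, added so the line can be dry-run without a cluster):

```shell
# Save the key printed by the command above; fall back gracefully when
# no cluster (or no kubectl binary) is available.
KUBECTL="${KUBECTL:-kubectl}"
"$KUBECTL" exec api-0 --namespace kubecf -- bash -c 'echo $DB_ENCRYPTION_KEY' \
  > /tmp/db-encryption-key.txt 2>/dev/null || echo "kubectl unavailable; nothing saved"
```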
</listitem>
</itemizedlist>
</step>
@@ -762,12 +762,12 @@ secrets:
</step>
<step>
<para>
-Stop the monit services on the <literal>api-group-0</literal>,
+Stop the monit services on the <literal>api-0</literal>,
<literal>cc-worker-0</literal>, and <literal>cc-clock-0</literal> pods:
</para>
<!-- CAP 1.3 NOTE
It appears that on 2.14.5 I need to manually terminate the loggregator_agent processes whenever I do a monit stop all, as they were not shutting down correctly (monit was just losing reference to the processes instead). Once I ensure that they were all actually terminated before proceeding to whatever the next step was, things worked fine. -->
-<screen>&prompt.user;for n in api-group-0 cc-worker-0 cc-clock-0; do
+<screen>&prompt.user;for n in api-0 cc-worker-0 cc-clock-0; do
kubectl exec --stdin --tty --namespace kubecf $n -- bash -l -c 'monit stop all'
done
</screen>
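The commented note in the hunk above warns that on 2.14.5 some processes (the loggregator_agent ones) could outlive `monit stop all`. The general pattern it suggests is: signal, then wait until the process is really gone before proceeding. A minimal local sketch, with a throwaway `sleep` standing in for a pod process:

```shell
# Start a stand-in process, ask it to stop, and block until it has exited.
sleep 30 &
pid=$!
kill "$pid"               # the analogue of 'monit stop all'
wait "$pid" 2>/dev/null   # returns once the process has truly exited
if kill -0 "$pid" 2>/dev/null; then
  echo "still running"
else
  echo "process gone"    # this branch is reached: the PID no longer exists
fi
```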
@@ -811,10 +811,10 @@ It appears that on 2.14.5 I need to manually terminate the loggregator_agent pro
</step>
<step>
<para>
-Start the monit services on the <literal>api-group-0</literal>,
+Start the monit services on the <literal>api-0</literal>,
<literal>cc-worker-0</literal>, and <literal>cc-clock-0</literal> pods
</para>
-<screen>&prompt.user;for n in api-group-0 cc-worker-0 cc-clock-0; do
+<screen>&prompt.user;for n in api-0 cc-worker-0 cc-clock-0; do
kubectl exec --stdin --tty --namespace kubecf $n -- bash -l -c 'monit start all'
done
</screen>
@@ -830,7 +830,7 @@
<para>
Run the rotation for the encryption keys:
</para>
-<screen>&prompt.user;kubectl exec --namespace kubecf api-group-0 -- bash -c \
+<screen>&prompt.user;kubectl exec --namespace kubecf api-0 -- bash -c \
"source /var/vcap/jobs/cloud_controller_ng/bin/ruby_version.sh; \
export CLOUD_CONTROLLER_NG_CONFIG=/var/vcap/jobs/cloud_controller_ng/config/cloud_controller_ng.yml; \
cd /var/vcap/packages/cloud_controller_ng/cloud_controller_ng; \
@@ -839,9 +839,9 @@
bundle exec rake rotate_cc_database_key:perform"
</step>
<step>
<para>
-Restart the <literal>api-group</literal> pod.
+Restart the <literal>api</literal> pod.
</para>
-<screen>&prompt.user;kubectl delete pod api-group-0 --namespace kubecf --force --grace-period=0</screen>
+<screen>&prompt.user;kubectl delete pod api-0 --namespace kubecf --force --grace-period=0</screen>
</step>
</substeps>
</step>