Kubernetes deployment for the ForgeRock platform. Branches:
- Recommended for production: `release/6.5.2` branch
- Technology preview: `skaffold-6.5` branch
- Under development: `master` branch
Note: The charts in the helm/ directory are deprecated and will be removed in the future. The Helm charts are being replaced with Kustomize.
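For example, once the Kustomize layout is in place, the manifests can be rendered and applied without Helm. A minimal sketch, assuming the `kustomize/env/dev` overlay referenced later in this README:

    # Render the overlay with kustomize and apply it to the current cluster context
    kustomize build kustomize/env/dev | kubectl apply -f -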
This GitHub repository is a read-only mirror of ForgeRock's Bitbucket Server repository. Users with BackStage accounts can make pull requests on our Bitbucket Server repository. ForgeRock does not accept pull requests on GitHub.
These samples are provided on an “as is” basis, without warranty of any kind, to the fullest extent permitted by law. ForgeRock does not warrant or guarantee the individual success developers may have in implementing the code on their development platforms or in production configurations. ForgeRock does not warrant, guarantee or make any representations regarding the use, results of use, accuracy, timeliness or completeness of any data or information relating to these samples. ForgeRock disclaims all warranties, expressed or implied, and in particular, disclaims all warranties of merchantability, and warranties related to the code, or any service or software related thereto. ForgeRock shall not be liable for any direct, indirect or consequential damages or costs of any type arising out of any action taken by you or others related to the samples.
The draft ForgeRock DevOps Developer's Guides (minikube | shared cluster) track the master branch, including information on the newer Kustomize/Skaffold workflow. If you are just getting started, this is the recommended path.
The documentation for the current release can be found on Backstage.
The `skaffold-6.5` branch is a preview of the upcoming 7.x workflow, which simplifies deployment by bundling the product configuration into the Docker image. This workflow speeds up iterative development and greatly simplifies the Kubernetes runtime manifests. It eliminates the need for Git init containers and the complexity of configuring different Git repositories and branches in the Helm charts.
The new workflow combines the previously independent forgeops and forgeops-init repositories into a single Git repository that holds both the configuration and the Kubernetes manifests. Documentation for this workflow is in progress; please see the early access documentation.
This preview branch enables the use of supported ForgeRock binaries in your deployment. Adopting this workflow now is recommended, as it will ease the transition to the 7.x platform.
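A minimal sketch of the iterative loop this enables, assuming a clone of this repository and a working Kubernetes context:

    # Switch to the preview branch
    git checkout skaffold-6.5
    # Build the configuration-bundled images, deploy, and redeploy on changes
    skaffold dev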
The provided configuration is a basic installation that developers can extend to meet their requirements. Developers should fork this repository and modify the various configuration files.
The configuration provides the following features:
- Deployments for ForgeRock AM, IDM, DS and IG. IG is not deployed by default.
- AM and IDM are integrated and share a common repository for users. The directory server instance (ds-idrepo) is used as the user store for both products, and as the managed repository for IDM objects. A separate PostgreSQL database is NOT required.
- AM protects the IDM administration and end user UI pages.
- AM is configured with a single root realm.
- A number of OIDC clients are configured for the AM/IDM integration and the smoke tests.
  **Note:** the `idm-provisioning`, `idmAdminClient`, and `endUserUI` client configurations are required for the integration of IDM and AM.
- Directory service instances are configured for:
- The shared AM/IDM repo (ds-idrepo)
- The AM dynamic runtime data store for policies and agents (currently the ds-idrepo is also used for this purpose).
- The Access Manager Core Token Service (ds-cts).
- A very simple landing page (/web)
- A Python test harness. This test harness (forgeops-test) exercises the basic deployment and can be modified to include additional tests.
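Once the sample is deployed, you can confirm that the components above are running. A minimal check, using the instance names from this list:

    # Expect pods for am, idm, ds-idrepo, and ds-cts (plus ig, if deployed)
    kubectl get pods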
When deployed, the following URLs are available (the domain name below is the default for Minikube and can be modified for your environment):
- https://default.iam.example.com/web - web landing page
- https://default.iam.example.com/am - Access Manager admin (admin/password)
- https://default.iam.example.com/admin - IDM admin (log in with the amadmin credentials)
- https://default.iam.example.com/enduser - End User UI page
- https://default.iam.example.com/ig - Identity Gateway (Optional)
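A quick reachability check against the default FQDN (adjust the host for your environment; `-k` skips verification of the self-signed certificate typically used on Minikube):

    curl -k https://default.iam.example.com/web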
The various configuration files are located in the `docker/` directory and bundled with their respective products (amster, idm, ig, am).
Refer to the troubleshooting chapter in the DevOps Guide.
Troubleshooting suggestions:
- The script `bin/debug-log.sh` will generate an HTML file with log output. Useful for troubleshooting.
- Simplify. Deploy a single product at a time (for example, ds), and make sure it is working correctly before deploying the next product.
- Describe a failing pod using `kubectl get pods; kubectl describe pod pod-xxx`:
  - Look at the event log for failures. For example, the image can't be pulled.
  - Examine any init containers. Did each init container complete with a zero (success) exit code? If not, examine the logs from that failed init container using `kubectl logs pod-xxx -c init-container-name`.
  - Did the main container enter a crash loop? Retrieve the logs using `kubectl logs pod-xxx`.
  - Did a Docker image fail to be pulled? Check for the correct Docker image name and tag. If you are using a private registry, verify that your image pull secret is correct.
- You can use `kubectl logs -p pod-xxx` to examine the logs of previous (exited) pods.
- If the pods are coming up successfully but you can't reach the service, you likely have ingress issues:
  - Use `kubectl describe ing` and `kubectl get ing ingress-name -o yaml` to view the ingress object.
  - Describe the service using `kubectl get svc; kubectl describe svc xxx`. Does the service have an `Endpoints:` binding? If the service endpoint binding is not present, the service did not match any running pods.
- Determine if your cluster is having issues (not enough memory, failing nodes). Watch for pods killed with OOM (Out of Memory). Commands to check: `kubectl describe node` and `kubectl get events -w`.
- Most images provide the ability to exec into the pod using bash and examine processes and logs. Use `kubectl exec -it pod-name -- bash`.
- If `skaffold dev` fails because it does not have permission to push a Docker image, it may be trying to push to Docker Hub (the reported image name will be something like `docker.io/am`). When running on Minikube, Skaffold assumes that a push is not required, as it can `docker build` directly to the Docker machine. If Skaffold is attempting to push to Docker Hub, it thinks it is not running on Minikube; make sure your Minikube context is named `minikube`. An alternate solution is to modify the docker build in `skaffold.yaml` and set `local.push` to false (see the fragment after this list). See the skaffold.dev documentation.
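For reference, the relevant `skaffold.yaml` fragment has roughly the following shape (a sketch of the standard Skaffold schema, not a verbatim copy from this repository):

    build:
      local:
        # Build with the local Docker daemon and do not push to a registry
        push: false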
Create a copy of one of the environments. Example:

    cd kustomize/env
    cp -r dev test-gke
- Using a text editor or sed, change all the occurrences of the FQDN to your desired target FQDN. For example, change `default.iam.forgeops.com` to `test.iam.forgeops.com` (see the sed sketch after these steps).
- Update the DOMAIN in platform-config.yaml to the proper cookie domain for AM.
- Update kustomization.yaml with your desired target namespace (example: `test`). The namespace must be the same as the FQDN prefix.
- Copy skaffold.yaml to skaffold-dev.yaml. This file is in .gitignore, so it does not get checked in or overlaid on a Git checkout.
- In skaffold-dev.yaml, edit the `path` for kustomize to point to your new environment folder (example: `kustomize/env/test-gke`).
- Run your new configuration: `skaffold dev -f skaffold-dev.yaml [--default-repo gcr.io/your-default-repo]`
- Warning: The AM install and config utility parameterizes the FQDN, but you may need to fix up other configurations in IDM, IG, the end user UI, and so on. This is a work in progress.
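For the FQDN change in the first step above, a minimal sed sketch (GNU sed shown; paths and FQDNs follow the example values used in these steps):

    cd kustomize/env/test-gke
    # Rewrite the FQDN in every file that references it
    grep -rl 'default.iam.forgeops.com' . \
      | xargs sed -i 's/default\.iam\.forgeops\.com/test.iam.forgeops.com/g'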
To shut down the deployment, run `skaffold delete` or `skaffold delete -f skaffold-dev.yaml`.
If you want to delete the persistent volumes for the directory: `kubectl delete pvc --all`
The file `cloudbuild.yaml` is a sample Google Cloud Build project that performs a continuous deployment to a running GKE cluster. Until AM file-based configuration supports upgrade, the deployment is done fresh each time. The deployment is triggered by a git commit to forgeops. See the documentation on automated build triggers for more information. You can also manually submit a build using:

    cd forgeops
    gcloud builds submit
Track the build progress in the GCP console.
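For orientation, such a build file has roughly the following shape (an illustrative sketch, not the repository's actual `cloudbuild.yaml`; the zone and cluster name are placeholders):

    steps:
      # Deploy the Kustomize output to the target GKE cluster
      - name: 'gcr.io/cloud-builders/kubectl'
        args: ['apply', '-k', 'kustomize/env/dev']
        env:
          - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'
          - 'CLOUDSDK_CONTAINER_CLUSTER=my-cluster'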
Once deployed, the following URLs are available: