From e61b5081d05b129c08bf8f38dcf45a86298bb156 Mon Sep 17 00:00:00 2001
From: Lagom Build Server

Starting with Lagom 1.5 your application will include Akka management HTTP out of the box, with health checks enabled by default. Akka management HTTP is a supporting tool for health checks, cluster bootstrap and a few other new features in Lagom 1.5.
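As a sketch, assuming Akka management HTTP's usual defaults (verify the keys and the port against the version bundled with your Lagom release), the management endpoint that serves the health checks can be tuned in application.conf:

# Sketch: the Akka management HTTP endpoint; 8558 is its conventional default
# port, and the health checks are served under /alive and /ready.
akka.management.http {
  port = 8558
}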
Cluster formation now also supports Cluster Bootstrapping as a new way to form a cluster. These new defaults may require at least two changes in your codebase.

First, if you want to opt in to cluster bootstrapping you must make sure you don’t set seed-nodes: seed-nodes always takes precedence over any other cluster formation mechanism. Second, if you use Cluster Bootstrapping, you will have to set up a discovery mechanism (see the Lagom Cluster reference guide for more details).
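In application.conf terms, a sketch of what “don’t set seed-nodes” means when you want bootstrap to take over:

# Sketch: leave the seed node list empty (or absent) so that Cluster Bootstrap
# is the mechanism that forms the cluster.
akka.cluster.seed-nodes = []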
§Service Location

You no longer have a ServiceLocator provided by the tooling libraries, so you will have to provide one of your choice. We recommend using the new lagom-akka-discovery-service-locator, which is implemented using Akka Service Discovery.

§Application extensions

This means you will have to change your application Loader code:

// before
import com.lightbend.rp.servicediscovery.lagom.scaladsl.LagomServiceLocatorComponents

override def load(context: LagomApplicationContext) =
  new MyApplication(context) with LagomServiceLocatorComponents
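As a sketch of the corresponding “after”, using the AkkaDiscoveryComponents mixin that the lagom-akka-discovery-service-locator module provides for the Scala API (check the class name against the module version you use):

// after (sketch; mixin name assumed from the module's Scala API)
import com.lightbend.lagom.scaladsl.akka.discovery.AkkaDiscoveryComponents

override def load(context: LagomApplicationContext) =
  new MyApplication(context) with AkkaDiscoveryComponents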
Read the docs of the new lagom-akka-discovery-service-locator for more details and for how to set up the Akka Service Discovery method. For example:

akka {
  discovery {
    method = akka-dns
  }
}

§Docker images and deployment specs

With the removal of ConductR and Lightbend Orchestration, the Docker images and deployment specs will have to be maintained manually. The recommended migration is therefore to take ownership of the Dockerfile, deployment scripts and orchestration specs. We have written a comprehensive guide on how to deploy a Lagom application in Kubernetes or OpenShift. We also found that such maintenance can be made easier by using kustomize.
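If your image is produced by sbt-native-packager, as is typical for Lagom projects, taking ownership can start from pinning the Docker settings in build.sbt; a sketch, where the base image and port are assumptions to adapt:

// build.sbt (sketch): settings read by sbt-native-packager's DockerPlugin;
// the base image and port below are assumptions, not Lagom defaults.
dockerBaseImage := "adoptopenjdk/openjdk8"
dockerExposedPorts ++= Seq(9000)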
§Secrets

Lightbend Orchestration supported declaring secrets in build.sbt, which user code could then read from a file in the pod. Starting from Lagom 1.5 there is no specific support for secrets, and the recommendation is to use the default option suggested by each target orchestrator. For example, when deploying to Kubernetes or OpenShift, declare the secret as an environment variable in your Deployment and inject the environment variable in your application.conf:

my-database {
  password = ${DB_PASSWORD}
}
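With that in place, the service reads the secret like any other configuration value; a minimal sketch using the my-database.password path from the example above:

// Sketch: reading the injected secret like any other configuration value.
import com.typesafe.config.ConfigFactory

val config = ConfigFactory.load()                        // resolves ${DB_PASSWORD} from the environment
val dbPassword = config.getString("my-database.password")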
§Service Discovery

When opting in to Akka Cluster Bootstrapping as the mechanism for cluster formation you will have to set up a Service Discovery method for nodes to locate each other.
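For instance, on Kubernetes a common choice is Akka Management's kubernetes-api discovery method; a sketch, with the service name being a placeholder you must adapt:

# Sketch: how Cluster Bootstrap discovers its contact points.
akka.management.cluster.bootstrap.contact-point-discovery {
  discovery-method = kubernetes-api
  service-name = "my-lagom-service"   # placeholder
}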
§Production Settings

New defaults have been added to the Lagom clustering configuration.

The first concerns how long a node should try to join a cluster. This is configured by the setting akka.cluster.shutdown-after-unsuccessful-join-seed-nodes, which Lagom defaults to 60 seconds. After that period, the Actor System will shut down if it has failed to join a cluster.

The second important change is the default value for lagom.cluster.exit-jvm-when-system-terminated. This was previously off, but we always recommended turning it on in production environments. As of Lagom 1.5.0, the setting defaults to on. When enabled, Lagom will exit the JVM when the application leaves the cluster or fails to join the cluster. In Dev and Test mode, this setting is automatically set to off.

These two properties together are essential for recovering applications in production environments like Kubernetes. Without them, a Lagom node could reach a zombie state in which it provides no functionality but stays around consuming resources. The desired behavior for a node that is not participating in a cluster is to shut itself down and let the orchestration infrastructure restart it.
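Both settings live in application.conf; a sketch that simply restates the new defaults described above, should you need to tune them:

# Lagom 1.5 defaults, shown explicitly; adjust only if you must.
akka.cluster.shutdown-after-unsuccessful-join-seed-nodes = 60s
lagom.cluster.exit-jvm-when-system-terminated = on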
§Upgrading a production system

There are no changes affecting a production upgrade. If you are running a Lagom 1.4 cluster you can perform a rolling upgrade; just make sure you are using the latest version of the 1.4.x series and, from that, migrate to the latest version available of the 1.5.x series.

§Downtime upgrade

If you still haven’t adopted ddata as the cluster sharding mode, and your application can tolerate a one-time downtime upgrade, we recommend you enable ddata. Taking advantage of that downtime, we also recommend enabling the serializers for akka.Done, akka.actor.Address and akka.remote.UniqueAddress. Once this upgrade is complete, further downtime is not required. Read all the details of this migration in the 1.4 Migration Guide.
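As a sketch of the settings involved (the authoritative steps, including the exact serializer bindings, are in the 1.4 Migration Guide, so verify there before enabling them in production):

# Sketch: ddata sharding mode plus Akka 2.5's additional serialization
# bindings, which cover akka.Done, akka.actor.Address and akka.remote.UniqueAddress.
akka.cluster.sharding.state-store-mode = ddata
akka.actor.enable-additional-serialization-bindings = on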