Any project, large or small, ultimately wants to follow industry best practices, such as continuous deployment, which requires applications to be deployed early and often. Each deployment, however, typically causes downtime that affects users: they may be logged out of the website or, worse, lose their work because the application's intermediate state is not saved – never mind the actual downtime during the deployment process itself.
Rolling upgrades solve this problem in an efficient way!
Existing solutions
- Clusters / High Availability
HA clusters are a great way to deploy applications: they are easy to manage and provide high availability and distributed sessions.
What is missing is the ability to deploy new versions of applications without cluster downtime or losing sessions.
- Blue-Green deployment strategies
This strategy eliminates downtime. It allows you to set up a “spare” (“Green”) cluster for the new application version, which enables complete testing of the new application before going live. When it comes time to deploy, just switch the load balancer to the new (“Green”) cluster. Once that’s done, the other (“Blue”) cluster can be upgraded to an even newer version, and the process can be repeated.
There are two issues with blue-green deployment. The first is the extra cluster itself and the resources required to maintain it, such as cloud / VM nodes, database connections, etc. The second is that application sessions are not preserved across the clusters, so users’ logins and sessions will be lost during application upgrades.
There are, of course, mitigation strategies for these downsides, but they are not easy to implement, don’t solve the problems completely, and are out of reach for most organizations.
What is a rolling upgrade?
- Seamless upgrade, unnoticeable to the users
- Maintains application availability
- Possible to upgrade one node at a time
- Maintains application state (user isn’t logged out)
- Able to deploy different versions of the same application in the same cluster (A/B testing)
- Seamless rollback if necessary
Why use rolling upgrades?
- Easy to implement with Payara Server
- Saves application session state across upgrades
- Very efficient in terms of resource utilization (e.g. cloud instances, database connections, etc.)
- Supports continuous deployment (deploy early and often)
Differences between rolling and blue-green upgrades
- Doesn’t consume as many resources / machines / cloud servers / database connections
- Maintains application state (blue-green does not)
- Doesn’t require additional maintenance of hot backup clusters
Steps required to set up rolling upgrade in Payara Server
Keep in mind that the clusters referenced here are not the Payara Server clusters as defined in the Admin GUI or domain.xml, but Hazelcast-based clusters made up of Payara Server standalone or Payara Micro instances.
Traditional clusters could also be used for rolling upgrades, but make sure that Hazelcast is used as the replication mechanism and not “replicated”.
Before beginning a rolling upgrade of an application, ensure the following prerequisites are met:
$ asadmin set <cluster_config>.availability-service.availability-enabled=true
$ asadmin set <cluster_config>.availability-service.web-container-availability.persistence-type=hazelcast
$ asadmin set <cluster_config>.availability-service.ejb-container-availability.sfsb-ha-persistence-type=hazelcast
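Note that these settings assume Hazelcast itself is already enabled on the configuration in question. If it is not, it can be switched on via asadmin as well; the sketch below assumes the `set-hazelcast-configuration` subcommand and flag names available in Payara 4.1.x and later:

```shell
# Enable Hazelcast; --dynamic=true applies the change without
# requiring a server restart (Payara 4.1.x flag names assumed).
$ asadmin set-hazelcast-configuration --enabled=true --dynamic=true
```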
Once you have prepared your Payara Server configuration, use the following steps to add a new application and gradually transition instances to the new version.
1. Using asadmin, add the following options when deploying your application:
a) Use an application version suffix.
$ asadmin deploy --name my_application:1.0 my_application.war
$ asadmin deploy --name my_application:2016-01-01.prod my_application.war
b) Deploy with the --availabilityenabled option.
This is required for application session state replication to work, thus ensuring high availability and seamless upgrades:
$ asadmin deploy --availabilityenabled <other_options> myapp.war
2. Disable the instance’s HTTP (AJP) listener
asadmin set <instance_name>.network-config.network-listeners.network-listener.ajp-listener.enabled=false
3. Remove the old version from the instance by deleting the application reference (optional, only needed for standalone instances)
asadmin delete-application-ref --target <instance_name> <app_name_with_old_version_suffix>
4. Add the newly deployed application to the instance by creating a new application reference (optional, only needed for standalone instances)
asadmin create-application-ref --target <instance_name> <app_name_with_new_version_suffix>
5. Re-enable the previously disabled listener for the instance
asadmin set <instance_name>.network-config.network-listeners.network-listener.ajp-listener.enabled=true
6. Once the old application version is removed from all instances, undeploy it
asadmin undeploy <app_name_with_old_version_suffix>
Putting it all together
<instance_name> in the example below stands for one of the Payara Server instances to be upgraded. The per-instance commands will most likely be run in a loop to upgrade all instances in the cluster.
# Deploy Application
$ asadmin deploy --availabilityenabled --enabled=false --name my_application:2016-01-01.prod --contextroot /my_application my_app-2016-01-01.war
# Disable instance HTTP (AJP) listener for each instance to be upgraded
$ asadmin set <instance_name>.network-config.network-listeners.network-listener.ajp-listener.enabled=false
# Optional, only for standalone instances - Remove old version of application from instance
$ asadmin delete-application-ref --target <instance_name> my_application:2010-01-01.prod
# Add new version of application to instance
$ asadmin create-application-ref --target <instance_name> my_application:2016-01-01.prod
# Enable instance HTTP (AJP) listener for each instance to be upgraded
$ asadmin set <instance_name>.network-config.network-listeners.network-listener.ajp-listener.enabled=true
# After all instances have been upgraded and old version is no longer necessary
$ asadmin undeploy my_application:2010-01-01.prod
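Since the per-instance steps are identical for every node, they are usually scripted. A minimal sketch of such a loop, assuming standalone instances (the instance names and version suffixes are placeholders to adapt):

```shell
#!/bin/sh
# Roll the new application version across the cluster, one instance
# at a time. Instance names and version strings are placeholders.
OLD=my_application:2010-01-01.prod
NEW=my_application:2016-01-01.prod

for INSTANCE in instance1 instance2 instance3; do
  # Take the instance out of the load balancer rotation
  asadmin set ${INSTANCE}.network-config.network-listeners.network-listener.ajp-listener.enabled=false
  # Swap the application references (standalone instances only)
  asadmin delete-application-ref --target ${INSTANCE} ${OLD}
  asadmin create-application-ref --target ${INSTANCE} ${NEW}
  # Put the instance back into rotation
  asadmin set ${INSTANCE}.network-config.network-listeners.network-listener.ajp-listener.enabled=true
done

# Once no instance references the old version, remove it
asadmin undeploy ${OLD}
```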
I have been attempting to set up a Payara environment following this blog post, but I have been unsuccessful so far. The first thing I am noticing is that when I have the DAS and a node running on the localhost, and a second node running on a separate server, I am not able to see all the instances as members of a single Hazelcast cluster. I have set up two standalone instances, each belonging to a separate node, and have enabled Hazelcast on each instance. When I use the list-hazelcast-members command, it only shows the two instances on the localhost (the DAS): the domain and instance-1. The second instance, instance-2, is never shown. Starting up the second instance, I see it list its members (consisting only of itself) in the log. I have tried several things and I fear that I am missing some fundamental piece. If there is some more documentation to go through, or any helpful push in the right direction, it would be greatly appreciated.
Scott
Hi, Scott,
This sounds like a Hazelcast setup issue. Are you running on AWS or another cloud provider, by any chance? If so, they don’t support multicast, which is the default Hazelcast discovery configuration.
I would suggest switching to the AWS-specific or another node discovery method in Hazelcast.
Please see Hazelcast documentation here for your reference: https://payara.gitbooks.io/payara-server/content/documentation/extended-documentation/hazelcast.html
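As an illustration, one alternative is fixed TCP-IP member discovery. A minimal hazelcast.xml fragment along those lines (the member addresses are placeholders, and the file has to be supplied to each instance as a custom Hazelcast configuration) might look roughly like this:

```xml
<!-- Disables multicast and lists the cluster members explicitly.
     The IP addresses below are placeholders for your own nodes. -->
<hazelcast xmlns="http://www.hazelcast.com/schema/config">
  <network>
    <join>
      <multicast enabled="false"/>
      <tcp-ip enabled="true">
        <member>192.168.1.10</member>
        <member>192.168.1.11</member>
      </tcp-ip>
    </join>
  </network>
</hazelcast>
```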
Hope that helps