
Adding/Editing health checks to deployments in OpenShift

What are health checks in OpenShift?

Health checks are required to ensure that an application is resilient and can be recovered without manual intervention. In OpenShift, these health checks are configured using probes. Kubernetes provides two types of probes: the readiness probe and the liveness probe.

Liveness probe

The liveness probe checks that the application is still running and handles situations where the application gets into a broken state; when the probe fails, the container is restarted.

Readiness probe

The readiness probe checks whether the application is ready to handle traffic; a pod that fails this probe stops receiving requests until it passes again.

Another benefit of having these health checks is that you can restart or redeploy the application with zero downtime when you have at least 2 pods running for your application, since OpenShift will not bring all the pods down at once.
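As a minimal sketch (the replica count and strategy values here are illustrative assumptions), a DeploymentConfig that keeps two replicas with the rolling strategy lets OpenShift replace pods one at a time, sending traffic only to pods whose readiness probe passes:

{
   "kind": "DeploymentConfig",
   "spec": {
      "replicas": 2,
      "strategy": {
         "type": "Rolling"
      }
   }
}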

Supported health check types

You can configure any of the below health check types for both of the above probes; a short sketch of each follows the list.

  • HTTP Health Check: calls a web URL, and if it returns an HTTP status code between 200 and 399 the check is considered a success, otherwise a failure.
  • Container Exec: executes a command inside the container, and the check is considered a success only if the command returns 0 as its exit code.
  • TCP Socket: tries to open a socket to the container, and if the connection can be established the check is considered a success, otherwise a failure.
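For reference, here is a minimal sketch of how the HTTP and Container Exec variants look inside a probe definition (the /healthz path, the cat command, and port 8080 are illustrative assumptions; the TCP Socket variant appears in the template examples further below):

"livenessProbe": {
   "httpGet": {
      "path": "/healthz",
      "port": 8080
   },
   "timeoutSeconds": 1
},
"readinessProbe": {
   "exec": {
      "command": ["cat", "/tmp/healthy"]
   },
   "timeoutSeconds": 1
}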

Configuring health checks

These health checks can be configured in multiple ways, such as through the OpenShift console UI or using templates.

Using the OpenShift console UI

To add or edit a health check, log in to the OpenShift console and go to your deployment config using the option "Application/Deployments/<deployment_name>".
[Screenshot: health checks]

In the screen below, you can provide the readiness probe details.
[Screenshot: readiness probe]

In the screen below, you can provide the liveness probe details.
[Screenshot: liveness probe]

Finally, save the changes; you can then see those health checks on your deployment under the Configuration tab.
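You can also verify the probes from the command line (assuming the oc client is installed and logged in; <deployment_name> is a placeholder):

oc describe dc <deployment_name>

The Liveness and Readiness entries appear in the container section of the output.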

Using templates

You can directly add or edit the blocks for the readiness and liveness probes in your templates, and then execute those templates to apply the changes to your deployment configurations. These blocks need to be added in the containers section of your DeploymentConfig template.
Readiness probe
{
   "kind": "DeploymentConfig",
   "spec": {
      "template": {
         "spec": {
            "containers": [
               {
                  "readinessProbe": {
                     "tcpSocket": {
                        "port": 8080
                     },
                     "timeoutSeconds": 1,
                     "periodSeconds": 10,
                     "successThreshold": 1,
                     "failureThreshold": 3
                  }
               }
            ]
         }
      }
   }
}

Liveness probe
{
   "kind": "DeploymentConfig",
   "spec": {
      "template": {
         "spec": {
            "containers": [
               {
                  "livenessProbe": {
                     "tcpSocket": {
                        "port": 8080
                     },
                     "timeoutSeconds": 1,
                     "periodSeconds": 10,
                     "successThreshold": 1,
                     "failureThreshold": 3
                  }
               }
            ]
         }
      }
   }
}
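Once the template is updated, one way to roll the changes out is to process and apply it with the oc client (the template file name here is an assumption):

oc process -f my-template.json | oc apply -f -

Alternatively, oc set probe can patch a probe onto an existing deployment config directly, without editing the template:

oc set probe dc/<deployment_name> --readiness --open-tcp=8080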

Here we will see how we can configure different datasource for application and batch. By default, Spring batch stores the job details and execution details in database. If separate data source is not configured for spring batch then it will use the available data source in your application if configured and create batch related tables there. Which may be the unwanted burden on application database and we would like to configure separate database for spring batch. To overcome this situation we will configure the different datasource for spring batch using in-memory database, since we don't want to store batch job details permanently. Other thing is the configuration of  spring cloud task in case of multiple datasource and it must point to the same data source which is pointed by spring batch. In below sections, we will se how to configure application, batch and cloud task related data sources. Application Data Source Define the data source in application properties or yml con...