
A lot of enterprise applications are not yet cloud-ready or designed as microservices. Because of this, session stickiness is required for many enterprise applications.
Let me explain how OpenShift Enterprise (= OCP) and Origin can help you solve this.

« Management summary »

The most frequent question I get from customers is:

❓
Can I use Session stickiness in OpenShift and Kubernetes?

and the clear answer is

💥
Yes

Nevertheless, you should consider getting rid of this behaviour and using a shared session store instead.

History

The first version of OpenShift v3 was released on 24 June 2015 (OpenShift Enterprise 3: Evolving PaaS for the Future) with the following snippets in the haproxy-config.template:

https://github.com/openshift/origin/blob/release-1.2/images/router/haproxy/conf/haproxy-config.template#L210-L219

  {{ if (eq $cfg.TLSTermination "") }}
    cookie OPENSHIFT_{{$cfgIdx}}_SERVERID insert indirect nocache httponly
  {{ else }}
    cookie OPENSHIFT_EDGE_{{$cfgIdx}}_SERVERID insert indirect nocache httponly secure
  {{ end }}
  http-request set-header Forwarded for=%[src];host=%[req.hdr(host)];proto=%[req.hdr(X-Forwarded-Proto)]
                {{ range $idx, $endpoint := endpointsForAlias $cfg $serviceUnit }}
  server {{$endpoint.IdHash}} {{$endpoint.IP}}:{{$endpoint.Port}} check inter 5000ms cookie {{$endpoint.IdHash}}
                {{ end }}
{{ end }}

https://github.com/openshift/origin/blob/release-1.2/images/router/haproxy/conf/haproxy-config.template#L237-L241
  cookie OPENSHIFT_REENCRYPT_{{$cfgIdx}}_SERVERID insert indirect nocache httponly secure
                {{ range $idx, $endpoint := endpointsForAlias $cfg $serviceUnit }}
  server {{$endpoint.IdHash}} {{$endpoint.IP}}:{{$endpoint.Port}} ssl check inter 5000ms verify required ca-file {{ $workingDir }}/cacerts/{{$cfgIdx}}.pem cookie {{$endpoint.IdHash}}
                {{ end }}
{{ end }}

These Go template snippets mean that HAProxy adds the cookie OPENSHIFT_{{$cfgIdx}}_SERVERID, OPENSHIFT_EDGE_{{$cfgIdx}}_SERVERID or OPENSHIFT_REENCRYPT_{{$cfgIdx}}_SERVERID to the client and removes it before the request goes to the backend server.

A more detailed description can be found in the upstream HAProxy documentation for the cookie keyword.

Because this configuration has been in place from the beginning, it is still the default setup today.
Since OpenShift 3.5, this behavior can be disabled via the route annotation haproxy.router.openshift.io/disable_cookies.
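As a sketch, the annotation can be set directly on the route object; the route name my-app below is made up for illustration:

```yaml
# Fragment of an OpenShift route; "my-app" is a hypothetical name.
metadata:
  name: my-app
  annotations:
    # Tell the HAProxy router not to inject the sticky cookie (OpenShift >= 3.5)
    haproxy.router.openshift.io/disable_cookies: "true"
```

An existing route can also be annotated on the command line with oc annotate route my-app haproxy.router.openshift.io/disable_cookies=true.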

10000 foot view

Let’s take a look at how the request travels to the Pod’s ( OCP Pod Doc , K8S Pod Doc ) IP address (= Endpoint).

High level flow view


  • The user requests www.MY_CLOUD.DOMAIN.TLD, and the request terminates on the Border control device(s).

Now comes the tricky part, because these Border control device(s) can be almost anything, from a Raspberry Pi to a full-blown, highly available network farm.

💥
But whichever setup is in front of the OCP Router, at the end of the day you will have a route in OpenShift.
  • The OCP Router looks up the route in its configuration and selects the right backend pods.
❗️
The OCP Router DOES NOT make requests via the Kubernetes Service!
  • The request is forwarded to the Kubernetes Endpoint, and therefore to the application server.

What’s a « route »

A « route » is the **external entrypoint** to a Kubernetes Service.
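To make this concrete, here is a minimal sketch of a route manifest; the host and the name my-app are assumptions for illustration:

```yaml
apiVersion: v1
kind: Route
metadata:
  name: my-app                    # hypothetical route name
spec:
  host: www.MY_CLOUD.DOMAIN.TLD   # externally visible host name
  to:
    kind: Service
    name: my-app                  # the Service whose endpoints the router targets
```

Note that the router only uses the Service to discover the endpoints; the traffic itself goes straight to the pod IPs.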

This is one of the biggest differences between Kubernetes and OpenShift Enterprise (= OCP) and Origin.

The OpenShift Router is part of the solution; the Kubernetes Ingress, on the other hand, is an additional component which you need to install.
Both approaches have their pros and cons.

OpenShift Router

Until 9 May 2018, the HAProxy-based router and the F5-based router were the only supported « routers ».
Since 9 May 2018, NGINX is also available as a « router ».

The Router Overview and Routes documents describe the concept and the setup in OpenShift.

The main reason the stickiness works is that the OpenShift router has the Endpoints as targets, and therefore talks directly to the pods of the application.

Kubernetes Ingress

The Kubernetes Ingress kind is the Kubernetes solution for handling external requests to applications in a Kubernetes cluster via a Kubernetes Service.

There are several solutions available as ingress handlers, so you can pick the one that suits you best.
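Whichever controller you pick, the Ingress object itself looks the same. A minimal sketch (host and names are made up; apiVersion as of Kubernetes 1.10):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app                      # hypothetical name
spec:
  rules:
  - host: www.MY_CLOUD.DOMAIN.TLD   # externally visible host name
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app       # the Service behind this ingress
          servicePort: 80
```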

The stickiness

If you ask yourself:

❓
After all this router, ingress, loadbalancer stuff what’s now the solution for my stickiness?

Well, the answer is, as so often in IT, a multi-level one.

To be able to use session stickiness, the following is required:

  1. You must have an HTTP/HTTPS endpoint.
  2. The session handling is cookie-based.

OpenShift solution

In OpenShift, cookie stickiness is currently active by default.

You can only use this stickiness with the following route types.

Quote from Secured Routes:

❗️
TLS termination in OpenShift Container Platform relies on SNI for serving custom certificates. Any non-SNI traffic received on port 443 is handled with TLS termination and a default certificate (which may not match the requested host name, resulting in validation errors).

Kubernetes solution

Here, the solution depends on which ingress controller you chose.

If you choose haproxy-ingress, you can use all the features of HAProxy, and therefore session stickiness is easily possible.
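As a sketch, haproxy-ingress enables cookie affinity via annotations on the Ingress object. The annotation names below may differ between controller versions, so treat them as assumptions to verify against your controller's documentation:

```yaml
metadata:
  annotations:
    # Ask haproxy-ingress for cookie-based session affinity
    ingress.kubernetes.io/affinity: cookie
    # Name of the cookie HAProxy will set (hypothetical choice)
    ingress.kubernetes.io/session-cookie-name: SERVERID
```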

For the other solutions mentioned under Kubernetes Ingress, the approaches differ, or stickiness is not possible at all.

For example, cookie stickiness is only available in NGINX Plus (sticky).

My personal opinion/conclusion

When you need cookie-based session stickiness out of the box, OpenShift is a handy way to go.

Even if this probably sounds like I'm a Red Hat sales guy, I'm not.
I am just a partner who likes OpenShift ;-)

Updates

  • 2018-05-27 Add Træfik ingress

You can contact me with any further questions or business inquiries.