
In most enterprise environments, a lot of systems are separated by firewalls, and the CSO does not want to allow the whole PaaS access to external services.

Introduction

With the egress solutions described in this blog it is possible to keep that separation AND still use all the benefits of the PaaS:

  • High Availability
  • Scale out
  • Reproducible images/router

In the old days, around one year ago, there was no egress router in OpenShift.

Nowadays you have several options with different techniques.
The egress router acts as a dedicated outgoing border to destinations outside of the PaaS.

This schema shows a possible setup.

[Figure: OpenShift egress options]

Current options

I will describe the following solutions:

  • builtin via iproute2 package
  • builtin via squid
  • external via a generic haproxy image
  • your own solution (out of scope)

Okay, what’s the difference ❓

Solution | Multi home | configmap/Environment | Handicaps | Source | Documentation
---------|------------|-----------------------|-----------|--------|--------------
iproute2 | partly | partly | based on macvlan | https://goo.gl/ZLA7Xx | https://goo.gl/ghjcdC
squid | partly | partly | | https://goo.gl/iv6EC4 | https://goo.gl/ghjcdC
haproxy | ✔️ full | partly | requires knowledge about haproxy | https://goo.gl/6zZTnm | https://goo.gl/6zZTnm

Every solution has its valid use case.

Node selector

All of the above solutions can be placed on a dedicated node with a “nodeSelector” (see Assigning Pods to Specific Nodes) in the deploymentconfig of a project.

❗ | A dedicated “nodeSelector” is only possible when the project “annotations” key “openshift.io/node-selector” is set to an empty string
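
For example, a minimal sketch with oc; the project name egress-demo, the node name egress-node-1, and the deploymentconfig name egress-router are placeholders:

# remove the project-wide default node selector
oc annotate namespace egress-demo openshift.io/node-selector="" --overwrite
# label the node which should carry the egress traffic
oc label node egress-node-1 egress=true
# pin the egress pod to that node via the deploymentconfig
oc patch dc/egress-router -p '{"spec":{"template":{"spec":{"nodeSelector":{"egress":"true"}}}}}'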

iproute2 (kernel based router)

Red Hat calls this redirect mode and documented it at Deploying an Egress Router Pod in Redirect Mode.
This is the “easiest” option and one of the fastest, because the traffic is handled by the kernel.

There are some pitfalls when you use this option, so please read the hints in Using an Egress Router to Allow External Resources to Recognize Pod Traffic very carefully.

The default configuration and script are not prepared for a multi-homed and multi-route solution.
If you want to use another router script in the image, you will need to rebuild the image from the existing solution.

⚠️ | The egress router and the app MUST be in the same VNID (project/namespace)
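
For reference, a minimal redirect-mode pod sketch along the lines of the Red Hat documentation; all IP addresses are placeholders and must match your node network:

apiVersion: v1
kind: Pod
metadata:
  name: egress-1
  annotations:
    pod.network.openshift.io/assign-macvlan: "true"
spec:
  initContainers:
  - name: egress-router
    image: registry.access.redhat.com/openshift3/ose-egress-router
    securityContext:
      privileged: true
    env:
    - name: EGRESS_SOURCE       # a free IP on the node subnet, reserved for this router
      value: 192.168.12.99
    - name: EGRESS_GATEWAY      # the gateway of the node subnet
      value: 192.168.12.1
    - name: EGRESS_DESTINATION  # the external service the traffic is redirected to
      value: 203.0.113.25
    - name: EGRESS_ROUTER_MODE
      value: init
  containers:
  - name: egress-router-wait
    image: registry.access.redhat.com/openshift3/ose-pod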

squid (http proxy)

Red Hat calls this http-proxy mode and documented it at Deploying an Egress Router HTTP Proxy Pod.

The default configuration and script are not prepared for a multi-homed solution. If you want to use another router script in the image, you will need to rebuild the image from the existing solution, or you can build your own solution.

This egress router CAN be in another project, because it’s just a “normal” service with a running pod.
This option can easily be used as a generic http proxy with whitelist and blacklist ACLs.

For the full syntax of the ACLs, please take a look at the squid reference manual.
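
The allowed destinations are passed to the proxy pod as a specification in the EGRESS_HTTP_PROXY_DESTINATION environment variable. A short sketch, where the domain and the network are placeholder examples (a leading ! denies, a plain * allows everything else):

env:
- name: EGRESS_HTTP_PROXY_DESTINATION
  value: |
    !*.example.com
    !192.168.1.0/24
    *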

haproxy (generic proxy)

This option is a generic TCP proxy based on haproxy 1.8.
You can use this solution like any other app, due to the fact that I use the standard features of OpenShift.

I have created a Docker image which you can use out of the box:
haproxy18, which is based on this source: haproxy18-centos.

There is also a Dockerfile for RHEL7 which you can use to build your own haproxy image based on RHEL7.
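
A build sketch, assuming the RHEL7 Dockerfile is named Dockerfile.rhel7 in the repository root and that you name the image haproxy18-rhel7:

# build the image from the RHEL7 Dockerfile
docker build -f Dockerfile.rhel7 -t haproxy18-rhel7 .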

Components

This image has the following components.

The versions in place are visible in the Dockerfile.

LUA

HAProxy has had the possibility to use Lua scripts since version 1.6.
Lua can be used at several stages. I have added this feature here just in case I will need it.

Socklog

I have described socklog in this blog post: syslog in a container world

haproxy 1.8

The main component for this egress solution. I strongly suggest reading the detailed documentation, due to the huge amount of features in haproxy.

haproxy_exporter

The haproxy_exporter offers an interface for Prometheus to scrape the haproxy statistics.
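
To verify that the exporter is up you can query its metrics endpoint; a quick sketch, assuming the default listen address :9101, which you will also see in the exporter logs later in this post:

# inside the pod, e.g. via oc rsh
curl -s http://localhost:9101/metrics | grep '^haproxy_up'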

Setup in OpenShift

Here I describe the following scenario.

[Figure: Generic egress setup]

I create a new project for the example; in case you can’t create a new project, you can use the current one.

Prerequisites

  • Internet Access
  • Docker Hub is usable
  • default domain is set

In short

Here is a short example which you can copy and paste if the prerequisites are fulfilled.

# oc new-project test-haproxy
# oc process -f https://gitlab.com/aleks001/haproxy17-centos/raw/master/haproxy-osev3.yaml \
    -p PROXY_SERVICE=test-scraper \
    -p SERVICE_NAME=tst-scr-svc \
    -p SERVICE_TCP_PORT=8443 \
    -p SERVICE_DEST_PORT=443 \
    -p SERVICE_DEST=www.google.com \
    | oc create -f -
# oc rsh -c test-scraper $( oc get po -o jsonpath='{.items[*].metadata.name }')
sh-4.2$ curl -v  http://test-scraper:${SERVICE_TCP_PORT}
...

logs from haproxy

oc logs -c test-scraper $( oc get po -o jsonpath='{.items[*].metadata.name }') -f
00000001:public_tcp.accept(0007)=0009 from [172.18.12.1:36314]
00000001:be_generic_tcp.srvcls[0009:000a]
00000001:be_generic_tcp.clicls[0009:000a]
00000001:be_generic_tcp.closed[0009:000a]
...

logs from haproxy-exporter

oc logs -c haproxy-exporter $( oc get po -o jsonpath='{.items[*].metadata.name }') -f

time="2017-09-28T17:20:01+02:00" level=info msg="Starting haproxy_exporter (version=0.8.0, branch=HEAD, revision=4ce06e84e1701827f2706fd58b1e1320a52e3967)" source="haproxy_exporter.go:476"
time="2017-09-28T17:20:01+02:00" level=info msg="Build context (go=go1.8.3, user=root@27187aec7434, date=20170824-21:39:12)" source="haproxy_exporter.go:477"
time="2017-09-28T17:20:01+02:00" level=info msg="Listening on :9101" source="haproxy_exporter.go:502"

Prometheus Stats

The template creates a route test-scraper which is reachable like any other route in the cluster. You can now configure Prometheus to scrape the metrics from the generic router.
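
A minimal scrape job sketch for prometheus.yml; the route hostname is an assumption and depends on your project name and default domain:

scrape_configs:
  - job_name: 'egress-haproxy'
    # haproxy_exporter serves the metrics on /metrics, the Prometheus default
    static_configs:
      - targets: ['test-scraper-test-haproxy.apps.example.com:80']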

Changes

  • 2018-11-06 update to haproxy 1.8 links and images

You can contact me for any further questions and orders.