With OpenShift 3.11, Prometheus Cluster Monitoring is fully supported.

Below I explain how to customize the alertmanager.yaml with your own receivers.

Please keep in mind that this solution is primarily meant for the platform itself and not for the applications running on it.


Before you start, let me explain what prerequisites and setup are expected for the solution below.

  • a [bastion] host for your OCP environment where you can run the playbook
  • three infrastructure nodes (inventory group [infranodes]) where the prometheus/alertmanager pods are running
  • a prepared alertmanager.yaml
  • a user/ServiceAccount that has permission to modify the openshift-monitoring project
  • the oc tool installed and a user with the right permissions logged in

I personally prefer to use the oc tool, as it is compatible out of the box with the installed OCP version.
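
If you want to verify the login and permissions up front, two quick oc calls are enough (standard commands, nothing specific to this setup):

oc whoami
oc get pods -n openshift-monitoring | grep alertmanager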

You should read and understand how the setup works in OpenShift, as described in the Prometheus Cluster Monitoring documentation, what the default configuration of the OCP Alertmanager looks like, and how the Alertmanager can be configured.
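
To see what you are about to change, you can dump the currently active configuration straight from the alertmanager-main secret; this is the same command the playbook below uses for its backup:

oc get secrets -n openshift-monitoring \
  -o go-template='{{ index .data "alertmanager.yaml" }}' alertmanager-main \
  | base64 -d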

Ansible solution

Run on bastion host

The following line executes the playbook that creates the new Alertmanager configuration. The new config is deployed automatically once the secret has been replaced. Replace the placeholder values for webhook_endpoint and email_receivers with your own endpoint and mail addresses, and pass your playbook file (the filename below is just an example).

ANSIBLE_LOG_PATH=ansible_log_$(date +%Y_%m_%d-%H_%M) ansible-playbook \
  -e ocp_env=dev \
  -e webhook_endpoint=<your-webhook-url> \
  -e email_receivers=<your-mail-address> \
  alertmanager-receiver.yaml
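
The playbook expects an inventory that provides the [bastion] host and an [infranodes] group, for example (the hostnames here are made up):

[bastion]
bastion.ocpdev.cloud.internal

[infranodes]
infra01.ocpdev.cloud.internal
infra02.ocpdev.cloud.internal
infra03.ocpdev.cloud.internal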

The playbook that handles the modification of the alertmanager.yaml:

- name: Add webhook receiver to alertmanager
  hosts: bastion

  tasks:
    # Fail early if the webhook endpoint is not reachable from the infra nodes
    - name: Check that webhook receiver is reachable
      uri:
        body: '{"NodeAlias":"nodetest","Identifier":"MYID"}'
        body_format: json
        method: POST
        url: "{{ webhook_endpoint }}"
      with_items: "{{ groups['infranodes'] }}"
      delegate_to: "{{ item }}"
      changed_when: False

    - name: ALERTS | Create alertman backup tmpfile
      tempfile:
        prefix: "ocp{{ ocp_env }}_alertman_backup"
        suffix: ".tmp"
      register: alertman_back_tmp

    - name: ALERTS | Create alertman all backup tmpfile
      tempfile:
        prefix: "ocp{{ ocp_env }}_alertman_all_backup"
        suffix: ".tmp"
      register: alertman_all_back_tmp

    # Create a backup of the currently active alertmanager.yaml
    - name: ALERTS | Get alertman secret
      shell: |
        oc get secrets -n openshift-monitoring \
        -o go-template='{% raw %}{{ index .data "alertmanager.yaml" }}{% endraw %}' alertmanager-main \
        | base64 -d > {{ alertman_back_tmp.path }}

    - name: ALERTS | Create receiver snippet
      template:
        dest: /tmp/alert-man-snipplet
        src: templates/alert-receiver.j2

    # Back up the whole secret; this file is also used to build the replacement
    - name: ALERTS | Get complete alertman secret
      shell: |
        oc get secrets -n openshift-monitoring -o yaml alertmanager-main > {{ alertman_all_back_tmp.path }}

    - name: ALERTS | Replace alertman config with new value
      replace:
        path: "{{ alertman_all_back_tmp.path }}"
        regexp: "^  alertmanager.yaml:.*$"
        replace: "  alertmanager.yaml: {{ lookup('file', '/tmp/alert-man-snipplet') | b64encode }}"

    - name: ALERTS | Replace alertman secret
      shell: |
        oc replace -n openshift-monitoring -f {{ alertman_all_back_tmp.path }}

    - name: ALERTS | Remove receiver snippet tmpfile
      file:
        path: /tmp/alert-man-snipplet
        state: absent

    - name: ALERTS | Remove alertman backup tmpfile
      file:
        path: "{{ alertman_all_back_tmp.path }}"
        state: absent
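
Note that the first backup tmpfile (the decoded alertmanager.yaml) is intentionally left in place by the playbook above. Should the new configuration misbehave, you can roll back with it; a minimal sketch, with the random tempfile name replaced by a placeholder:

oc create secret generic alertmanager-main -n openshift-monitoring \
  --from-file=alertmanager.yaml=/tmp/ocpdev_alertman_backup<random>.tmp \
  --dry-run -o yaml | oc replace -n openshift-monitoring -f -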


The Alertmanager template alert-receiver.j2:

global:
  resolve_timeout: 5m
route:
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 12h
  receiver: default
  routes:
  - receiver: myconf
  - match:
      alertname: DeadMansSwitch
    repeat_interval: 5m
    receiver: deadmansswitch
receivers:
- name: default
- name: deadmansswitch
- name: myconf
  email_configs:
  - to: '{{ email_receivers }}'
    from: 'admin@ocp{{ ocp_env }}.cloud.internal'
    smarthost: 'SMTPRelay.MyDomain:25'
    send_resolved: true
    #require_tls: false
  webhook_configs:
  - url: "{{ webhook_endpoint }}"
    send_resolved: true
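
Once the secret has been replaced, the config-reloader sidecar picks the change up automatically; it can take a minute or two for the updated secret to propagate into the pod. You can confirm the reload in the Alertmanager log (the exact log wording may vary between versions):

oc logs -n openshift-monitoring alertmanager-main-0 -c alertmanager | grep -i 'loading configuration'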


  • 17.05.2019 - add catch all receiver and require_tls

You can contact me for any further questions or inquiries.