Insight into a
Continuous Delivery Pipeline:

Automated Test Result

Aggregation and Evaluation

Benjamin Nothdurft

- Case Study -

Introduction

What are we going to learn?

Why should you listen to us?

Specific Takeaways

  • When do you need to automate?

  • How to figure out who your customers are?

  • How to find your requirements?

  • How to find the fitting tools?

  • What tools are trending these days?

  • How to kickstart things in your project!

Background Story

Business Model

Pipeline Visualization


Test Pyramid

Demo Time

Analyze the test results

Static HTML Pages

Jenkins Job Hierarchy

Jenkins Job Queue & Artifacts

Individual HTML Test Report

Feature Team vs. Release Automation Team

Feature Team:
- No idea where to find documents
- No experience with Jenkins
- Very limited time for integration

Release Automation Team:
- Inspect a lot of Jenkins jobs
- All documents at different places
- Results stored only for 30 days

Delivery Pipeline in Jenkins

Daily Fight

Solution Approach

Vision & Requirements

1) Central Storage

2) Simple & Maintainable

3) Easy to view via website

Current State

Solution Blueprint

Part #1: Test Object

Test Aggregation Workflow

{
  "browser":"firefox",
  "timestamp":"2016-06-13T19:23:32.227Z",
  "pos":"1",
  "result":"FAILURE",
  "test":"EbayTest.ebayConfigurationBBOTest",
  "class":"com.epages.cartridges.de_epages.ebay.tests.EbayTest",
  "method":"ebayConfigurationBBOTest",
  "runtime":"67",
  "team":"ePages6",
  "test_url":"/20160613T192332227Z/esf-test-reports/
     com/epages/cartridges/de_epages/ebay/tests/
     EbayTest/ebayConfigurationBBOTest/test-report.html",
  "stacktrace":"java.lang.NullPointerException
     at com.epages.cartridges.de_epages.ebay.tests.EbayTest.ebayConfigurationBBOTest(EbayTest.java:490)
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.lang.reflect.Method.invoke(Method.java:498)
     at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:86)
     at org.testng.internal.Invoker.invokeMethod(Invoker.java:643)
     at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:820)
     at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1128)"
 }

Test Object from Test Suite

{
    "epages_version": "6.17.48",
    "epages_repo_id": "6.17.48/2016.05.19-00.17.26",
    "env_os": "centos",
    "env_identifier": "distributed_three_hosts",
    "env_type": "install",
    "browser": "firefox",
    "timestamp": "20160519T011223091Z",
    "pos": "3",
    "result": "FAILURE",
    "test": "DigitalTaxmatrixBasketTest.testDigitalTaxmatrixBasket",
    "class": "com.epages.cartridges.de_epages.tax.tests.DigitalTaxmatrixBasketTest",
    "method": "testDigitalTaxmatrixBasket",
    "runtime": "275",
    "report_url": "http://myserver.epages.de:8080/job/Run_ESF_tests/3778/artifact/esf/
      esf-epages6-1.15.0-SNAPSHOT/log/20160519T001726091Z/
      esf-test-reports/com/epages/cartridges/de_epages/tax/tests/DigitalTaxmatrixBasketTest/
      testDigitalTaxmatrixBasket/test-report.html",
    "stacktrace": "org.openqa.selenium.TimeoutException: 
    Timed out after 30 seconds waiting for presence of element located by: By.className: 
    Saved Build info: version: '2.47.1', System info: host: 'ci-vm-ui-test-004', 
    ip: '127.0.1.1', os.name: 'Linux', os.arch: 'amd64', os.version: '3.13.0-43-generic', java.version: '1.8.0_45-internal' Driver info:
      org.openqa.selenium.support.events.EventFiringWebDriver at 
      org.openqa.selenium.support.ui.WebDriverWait.timeoutException(WebDriverWait.java:80) at 
      org.openqa.selenium.support.ui.FluentWait.until(FluentWait.java:229) at 
      com.epages.esf.controller.ActionBot.waitFor(ActionBot.java:491) at com.epages.esf.controller.ActionBot.waitFor(ActionBot.java:468) at 
      com.epages.cartridges.de_epages.coupon.pageobjects.mbo.ViewCouponCodes.createmanualCouponCode(ViewCouponCodes.java:159) at 
      com.epages.cartridges.de_epages.tax.tests.DigitalTaxmatrixBasketTest.setupCoupon(DigitalTaxmatrixBasketTest.java:882) at 
      com.epages.cartridges.de_epages.tax.tests.DigitalTaxmatrixBasketTest.testDigitalTaxmatrixBasket(DigitalTaxmatrixBasketTest.java:172)"
}

Test Object in Elasticsearch
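
Indexed this way, every test run is a plain JSON document in Elasticsearch. As a hedged sketch, such a document could be pushed and fetched by hand with curl (host and document id are placeholders; index and type follow the conventions used later in the Jenkins integration):

# Index a single test object by hand (hypothetical host and document id).
curl -u "${ES_USER}:${ES_PASSWORD}" -XPUT \
  "http://host.de:9200/esf-cdp-ui-tests/6.17.48/some-document-id" \
  -d @test-object.json

# Fetch it back to verify.
curl -u "${ES_USER}:${ES_PASSWORD}" -XGET \
  "http://host.de:9200/esf-cdp-ui-tests/6.17.48/some-document-id?pretty"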

Part #2: Elasticsearch

Test Aggregation Workflow

Implementation

Dockerfile

docker-entrypoint.sh

config/

     elasticsearch.yml.j2

circle.yml

scripts/

     build.sh

     start.sh

     stop.sh

     deploy.sh

[Diagram: the CI project runs one CI job per branch; each job builds the image and pushes img:dev, img:master, img:stable, or img:latest to the Docker Hub repo]

# Use official Elasticsearch image.
FROM elasticsearch:2.2.3

##################
# Install Jinja2 #
##################

ENV JINJA_SCRIPT="/render_jinja_template.py" \
    REPO_API_PATH="https://api.github.com/repos/gh-acc/gh-repo/contents/scripts/templating" \
    REPO_PROD_BRANCH="master"

# Install packages and clean-up
RUN apt-get update && apt-get install -y curl python-setuptools && \
    easy_install Jinja2 && \
    apt-get -y clean && \
    rm -rf /var/lib/apt/lists/*

# Add jinja templating script from repo epages-infra
RUN curl --retry 5 -H "Authorization: token ${REPO_ACCESS_TOKEN}" \
      -H 'Accept: application/vnd.github.v3.raw' \
      -o ${JINJA_SCRIPT} -L ${REPO_API_PATH}${JINJA_SCRIPT}?ref=${REPO_PROD_BRANCH} && \
    chown elasticsearch:elasticsearch ${JINJA_SCRIPT} && \
    chmod +x ${JINJA_SCRIPT}
    
...

to-elasticsearch/Dockerfile

#################
# Elasticsearch #
#################

ENV ES_PATH="/usr/share/elasticsearch" \
    ES_HTTP_BASIC="https://github.com/Asquera/elasticsearch-http-basic/releases/download/v1.5.1/elasticsearch-http-basic-1.5.1.jar"

RUN $ES_PATH/bin/plugin install mobz/elasticsearch-head

RUN mkdir -p $ES_PATH/plugins/http-basic && \
    cd $ES_PATH/plugins/http-basic && \
    wget $ES_HTTP_BASIC
    
ENV ES_CONFIG_VOL="/usr/share/elasticsearch/config" \
    ES_DATA_VOL="/usr/share/elasticsearch/data" \
    ES_LOGS_VOL="/usr/share/elasticsearch/logs"

COPY config/ ${ES_CONFIG_VOL}/
RUN chown -R elasticsearch:elasticsearch ${ES_CONFIG_VOL}

VOLUME ["${ES_CONFIG_VOL}", "${ES_LOGS_VOL}"]

RUN rm /docker-entrypoint.sh

COPY docker-entrypoint.sh / 
RUN chown elasticsearch:elasticsearch /docker-entrypoint.sh && \
    chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]

CMD ["elasticsearch"]

to-elasticsearch/Dockerfile
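
The repository's build.sh is not shown on the slides; as an assumption based on the image name and tag defined in circle.yml, it essentially wraps a docker build (note that REPO_ACCESS_TOKEN has to be available at build time):

#!/bin/bash
# Hypothetical sketch of scripts/build.sh.
set -e
: "${ES_IMAGE_NAME:?ES_IMAGE_NAME is not set}"
: "${ES_IMAGE_TAG:?ES_IMAGE_TAG is not set}"
docker build -t "${ES_IMAGE_NAME}:${ES_IMAGE_TAG}" .

# Example: build the dev image locally.
# ES_IMAGE_NAME="epages/to-elasticsearch" ES_IMAGE_TAG="dev" ./scripts/build.sh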

#!/bin/bash

set -e

# Add elasticsearch as command if needed
if [ "${1:0:1}" = '-' ]; then
  set -- elasticsearch "$@"
fi

# Drop root privileges if we are running elasticsearch
if [ "$1" = 'elasticsearch' ]; then

  # Change the ownership of /usr/share/elasticsearch/data to elasticsearch
  chown -R elasticsearch:elasticsearch ${ES_CONFIG_VOL} ${ES_DATA_VOL} ${ES_LOGS_VOL}

  # Find env file in docker
  ES_ENV_PATH=$( find "${ES_CONFIG_VOL}" -maxdepth 3 -iname "${ES_ENV}" )

  # Render jinja templates of elasticsearch.yaml and logging.yml
  python ${JINJA_SCRIPT} -f "${ES_ENV_PATH}" \
                         -t "${ES_CONFIG_VOL}"/elasticsearch.yml.j2 \
                            "${ES_CONFIG_VOL}"/logging.yml.j2
  
  set -- gosu elasticsearch "${@}"
fi

# If the argument is not related to elasticsearch,
# assume the user wants to run their own process,
# for example a `bash` shell to explore this image
exec "${@}"

to-elasticsearch/docker-entrypoint.sh
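
Thanks to the exec fall-through at the end, the image can be used both as a service and interactively (image tag assumed):

# Default: render the config templates and start elasticsearch via gosu.
docker run -d epages/to-elasticsearch:dev

# Fall-through: any non-elasticsearch command is exec'd directly,
# e.g. a bash shell to explore the image.
docker run -it --rm epages/to-elasticsearch:dev bash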

###########
# Cluster #
###########

# Set the cluster name
cluster.name: {{ CLUSTER_NAME }}

########
# Node #
########

# Prevent Elasticsearch from choosing a new name on every startup.
node.name: {{ NODE_NAME }}

# Allow this node to be eligible as a master node
node.master: {{ NODE_MASTER }}

# Allow this node to store data
node.data: {{ NODE_DATA }}

########
# Path #
########

path.config: /usr/share/elasticsearch/config
path.plugins: /usr/share/elasticsearch/plugins
path.data: /usr/share/elasticsearch/data
path.logs: /usr/share/elasticsearch/logs
path.work: /usr/share/elasticsearch/work

./config/elasticsearch.yml.j2

###########
# Network #
###########

network.bind_host: 0.0.0.0
network.publish_host: 0.0.0.0
transport.tcp.port: 9300
http.port: 9200
http.enabled: true

###############
# HTTP Module #
###############

http.cors.enabled: {{ HTTP_ENABLED }}
http.cors.allow-origin: {{ HTTP_ALLOW_ORIGIN }}
http.cors.allow-methods: {{ HTTP_ALLOW_METHODS }}
http.cors.allow-headers: {{ HTTP_ALLOW_HEADERS }}

#####################
# HTTP Basic Plugin #
#####################

http.basic.enabled: true
http.basic.user: {{ ES_USER }}
http.basic.password: {{ ES_PASSWORD }}

./config/elasticsearch.yml.j2
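
With the http-basic plugin enabled, every request has to carry credentials. A quick smoke test against a running node, mirroring the check circle.yml performs (host and port as in the environment above):

# Unauthenticated requests are rejected by the http-basic plugin.
curl -XGET "http://0.0.0.0:9200?pretty"

# Authenticated requests return the cluster info.
curl --retry 5 -u "${ES_USER}:${ES_PASSWORD}" "http://0.0.0.0:9200?pretty"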

##################
# Slowlog Module #
##################

# Set threshold for shard level query execution logging
index.search.slowlog.threshold.query.warn    : 10s
index.search.slowlog.threshold.query.info    : 5s
index.search.slowlog.threshold.query.debug   : 2s
index.search.slowlog.threshold.query.trace   : 500ms

# Set threshold for shard level fetch phase logging
index.search.slowlog.threshold.fetch.warn    : 1s
index.search.slowlog.threshold.fetch.info    : 800ms
index.search.slowlog.threshold.fetch.debug   : 500ms
index.search.slowlog.threshold.fetch.trace   : 200ms

# Set threshold for shard level index logging
index.indexing.slowlog.threshold.index.warn  : 10s
index.indexing.slowlog.threshold.index.info  : 5s
index.indexing.slowlog.threshold.index.debug : 2s
index.indexing.slowlog.threshold.index.trace : 500ms

###########
# GC Logs #
###########

# Set threshold for young garbage collection logging
monitor.jvm.gc.young.warn  : 1000ms
monitor.jvm.gc.young.info  : 700ms
monitor.jvm.gc.young.debug : 400ms

./config/elasticsearch.yml.j2

# The variables used for rendering of jinja templates.

#################
# env variables #
#################

ES_ENV
ES_HEAP_SIZE

#####################
# elasticsearch.yml #
#####################

CLUSTER_NAME=to-elasticsearch
NODE_NAME=to-es-master-01
NODE_MASTER=true
NODE_DATA=true
HTTP_ENABLED=true
HTTP_ALLOW_ORIGIN=/.*/
HTTP_ALLOW_METHODS=OPTIONS, HEAD, GET, POST, PUT, DELETE
HTTP_ALLOW_HEADERS=Authorization
ES_USER
ES_PASSWORD

###############
# logging.yml #
###############

LOG_LEVEL=INFO

./config/env-to-master-01.list


usage: render_jinja_template.py [-h] [-v] [-e ENV [ENV ...]] [-f FILES [FILES ...]]
                                -t TEMPLATES [TEMPLATES ...] [-d DEST]


Script to render Jinja templates with env variables and write the rendered files.


invocation:

  render_jinja_template.py -v 
                           -t <filename>.<ext>.j2
                           -e <key> <key>=<value> 
                           -f <env.list> 
                           -d </dest/dir>

  render_jinja_template.py --verbose 
                           --template <filename>.<ext>.j2 
                           --env <key> <key>=<value> 
                           --env-file <env.list> 
                           --dest </dest/dir>

./scripts/render_jinja_template.py
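
To make the rendering step concrete, a minimal example: one template variable, one env file entry. The script writes the rendered file next to the template without the .j2 suffix (this output convention is an assumption based on how the entrypoints consume the result):

# Minimal templating example (hypothetical file names).
echo 'cluster.name: {{ CLUSTER_NAME }}' > elasticsearch.yml.j2
echo 'CLUSTER_NAME=to-elasticsearch' > env.list

python render_jinja_template.py --verbose \
                                --env-file env.list \
                                --template elasticsearch.yml.j2

cat elasticsearch.yml   # -> cluster.name: to-elasticsearch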

machine:
  services:
    - docker
  environment:
    # Test uses a dedicated docker container.
    TEST_TO_MASTER: "to-es-master"
    TEST_TO_MASTER_ENV: "env-to-es-master-01.list"
    # Docker run options are set to detach to background and share network addresses from host to container.
    LS_DOCKER_REMOVE: false
    LS_DOCKER_DETACH: true
    # Docker build image.
    ES_IMAGE_NAME: "epages/to-elasticsearch"
    ES_IMAGE_TAG: ${CIRCLE_BRANCH//\//-}
    # Host connection details.
    ES_HOST_URL: "http://0.0.0.0"
    ES_HOST_HTTP: 9200
    ES_HOST_TCP: 9300
    # Test connection times.
    SLEEP_BEFORE_TESTING: 15
    # Git merge script is needed for auto-merging dev to master branch.
    MERGE_SCRIPT: "merge-to.sh"
    MERGE_SCRIPT_URL_PREFIX: "https://raw.githubusercontent.com/ePages-de/repo/master/scripts/git"
    GIT_UPSTREAM_URL: "git@github.com:ePages-de/to-elasticsearch.git"
    GIT_UPSTREAM_BRANCH_MASTER: "master"
    GIT_UPSTREAM_BRANCH_PRODUCTION: "stable"
...

to-elasticsearch/circle.yml

...
dependencies:
  cache_directories:
    - "~/docker"
  override:
    # Docker environment used.
    - docker info
    # Load cached images, if available.
    - if [[ -e ~/docker/image.tar ]]; then docker load --input ~/docker/image.tar; fi
    # Build our image.
    - ./build.sh
    # Save built image into cache.
    - mkdir -p ~/docker; docker save ${ES_IMAGE_NAME}:${ES_IMAGE_TAG} > ~/docker/image.tar

test:
  override:
...

to-elasticsearch/circle.yml

...
test:
  override:
    - |
        printf "\n%s\n\n" "+++ Begin test of docker container [${TEST_TO_MASTER}] +++"
        export ES_DOCKER_CONTAINER="${TEST_TO_MASTER}"
        export ES_ENV="${TEST_TO_MASTER_ENV}"
        export ES_CONFIG="/tmp/${TEST_TO_MASTER}/config"
        export ES_DATA="/tmp/${TEST_TO_MASTER}/data"
        export ES_LOGS="/tmp/${TEST_TO_MASTER}/logs"
        mkdir -v -p ${ES_CONFIG} ${ES_DATA} ${ES_LOGS}
        cp -v -r config/* ${ES_CONFIG}/
        # Fire up our container for testing.
        ./start.sh || exit $?
        # Test the access to our Elasticsearch instance.
        sleep ${SLEEP_BEFORE_TESTING}; curl --retry 5 -u ${TEST_ES_USER}:${TEST_ES_PASSWORD} "${ES_HOST_URL}:${ES_HOST_HTTP}?pretty"
        # Stop running container.
        ./stop.sh || exit $?
        # Test our deployment script as well.
        export ES_DOCKER_CONTAINER="${TEST_TO_MASTER}-production"
        ./deploy.sh
        sleep ${SLEEP_BEFORE_TESTING}; curl --retry 5 -u "${ES_USER}:${ES_PASSWORD}" "${ES_HOST_URL}:${ES_HOST_HTTP}?pretty"
        printf "\n%s\n" "+++ End test of docker container [${TEST_TO_MASTER}] +++"
  post:
    - |
        printf "\n%s\n\n" "=== Archive artifacts of [${TEST_TO_MASTER}] ==="
        sudo mv -v -f "/tmp/${TEST_TO_MASTER}" "${CIRCLE_ARTIFACTS}/"

deployment:
...

to-elasticsearch/circle.yml

...
deployment:
  dev_actions:
    branch: dev
    commands:
      # Push image to Docker Hub.
      - docker login -u "${DOCKER_LOGIN_USERNAME}" -p "${DOCKER_LOGIN_PASSWORD}" -e "${DOCKER_LOGIN_EMAIL}"
      - docker push "${ES_IMAGE_NAME}:${ES_IMAGE_TAG}"
      # Merge tested commit into master.
      - wget -O "/tmp/${MERGE_SCRIPT}" "${MERGE_SCRIPT_URL_PREFIX}/${MERGE_SCRIPT}" && chmod 750 "/tmp/${MERGE_SCRIPT}"
      - /tmp/${MERGE_SCRIPT} -c "${CIRCLE_SHA1}" -e "${CIRCLE_BRANCH}" -t "${GIT_UPSTREAM_BRANCH_MASTER}" -r "${GIT_UPSTREAM_URL}"
  master_actions:
    branch: master
    commands:
      # Push image to Docker Hub.
      - docker login -u "${DOCKER_LOGIN_USERNAME}" -p "${DOCKER_LOGIN_PASSWORD}" -e "${DOCKER_LOGIN_EMAIL}"
      - docker push "${ES_IMAGE_NAME}:${ES_IMAGE_TAG}"
      # Merge tested commit into stable.
      - wget -O "/tmp/${MERGE_SCRIPT}" "${MERGE_SCRIPT_URL_PREFIX}/${MERGE_SCRIPT}" && chmod 750 "/tmp/${MERGE_SCRIPT}"
      - /tmp/${MERGE_SCRIPT} -c "${CIRCLE_SHA1}" -e "${CIRCLE_BRANCH}" -t "${GIT_UPSTREAM_BRANCH_PRODUCTION}" -r "${GIT_UPSTREAM_URL}"
  stable_actions:
    branch: stable
    commands:
      # Push image to Docker Hub.
      - docker login -u "${DOCKER_LOGIN_USERNAME}" -p "${DOCKER_LOGIN_PASSWORD}" -e "${DOCKER_LOGIN_EMAIL}"
      - docker push "${ES_IMAGE_NAME}:${ES_IMAGE_TAG}"
      # Tag with 'latest' and push image to Docker Hub.
      - docker tag "${ES_IMAGE_NAME}:${ES_IMAGE_TAG}" "${ES_IMAGE_NAME}:latest"
      - docker push "${ES_IMAGE_NAME}:latest"

to-elasticsearch/circle.yml

#!/bin/bash

export SCRIPT_DIR=$(dirname "$0")

# Half of the available RAM should be used for ES directly. The other half can
# be consumed by Lucene (via the OS' filesystem cache).
export ES_HEAP_SIZE=4g

# Do we have an Image and Tag to be used?
if [[ -z "${ES_IMAGE_NAME}" ]] ; then
    echo 'Variable ES_IMAGE_NAME is not set.'
    exit 1
fi

if [[ -z "${ES_IMAGE_TAG}" ]] ; then
    echo 'Variable ES_IMAGE_TAG is not set.'
    exit 1
fi

# We pull the official image and trigger the local build step
# only if the pull fails.
which docker > /dev/null 2>&1
if [[ $? -ne 0 ]] ; then echo 'Docker is not installed.' ; exit 1 ; fi
docker pull ${ES_IMAGE_NAME}:${ES_IMAGE_TAG}
if [[ $? -ne 0 ]] ; then
   echo 'Pulling Image was not successful. Triggering local Image build...'
   ${SCRIPT_DIR}/build.sh || exit 1
fi

# Stop running instance.
${SCRIPT_DIR}/stop.sh

# Start new instance.
${SCRIPT_DIR}/start.sh

to-elasticsearch/deploy.sh

Part #3: Logstash

Test Aggregation Workflow

Implementation

Dockerfile

docker-entrypoint.sh

config/

     logstash-esf.conf.j2

circle.yml

scripts/

     build.sh . . .

test/

     metrics-from-files.sh

     metrics-from-elasticsearch.sh

[Diagram: the CI project runs one CI job per branch; each job builds the image and pushes img:dev, img:master, img:stable, or img:latest to the Docker Hub repo]



input {

    # Read esf log as events
    
    # Wrap events as message in JSON object

}



filter {

    # Process/transform/enrich events
    
}



output {
	
    # Log to console
    
    # Ship events to elasticsearch
    # and index them as documents
    
    # Write info/debug/error log

}

to-logstash/config/logstash-esf.conf

input {
    {#- only if esf log should be processed #}
    {%- if "log" in LS_INPUT %}

    ################
    # Read esf log #
    ################

    # read from files via pattern
    file {
        path => ["{{ LS_LOG_VOL }}/{{ LS_PATTERN }}"]
        start_position => "beginning"
    } 
    {%- endif %}
}

to-logstash/config/logstash-esf.conf

filter {
    {#- only if esf log should be processed #}
    {%- if "log" in LS_INPUT %}

    # exclude empty and whitespace lines
    if [message] != "" and [message] !~ /^[\s]*$/ {

        ######################################
        # Add source fields in desired order #
        ######################################
        
        # only if no error tags were created
        if (![tags]) {

            # add needed env variables to event
            mutate {
                add_field => {
                    "note" => ""
                    "epages_version" => "{{ EPAGES_VERSION }}"
                    "epages_repo_id" => "{{ EPAGES_REPO_ID }}"
                    "env_os" => "{{ ENV_OS }}"
                    "env_identifier" => "{{ ENV_IDENTIFIER }}"
                    "env_type" => "{{ ENV_TYPE }}"
                }
            }
        }

        # extract esf fields from message; the content wrapper
        json { source => "message" }
...
}

to-logstash/config/logstash-esf.conf

filter {
...
        # only if no error tags were created
        if (![tags]) {
        
            # add needed env variables to event
            mutate {
                add_field => {
                    "report_url" => "{{ ENV_URL }}%{test_url}"
                }
            }
        }

        ###################################
        # Remove not needed source fields #
        ###################################

        # only if no error tags were created
        if (![tags]) {

            # remove not needed fields from extraction of message
            mutate { remove_field => [ "host", "message", "path", 
                    "test_url", "@timestamp", "@version" ] }
        }
...
}

to-logstash/config/logstash-esf.conf

filter {
...
        ######################
        # Create document id #
        ######################
        
            if [env_identifier] != "zdt" {
                # generate document logstash id from several esf fields
                fingerprint {
                    target => "[@metadata][ES_DOCUMENT_ID]"
                    source => ["epages_repo_id", "env_os", "env_type", 
                                "env_identifier", "browser", "class", "method"]
                    concatenate_sources => true
                    key => "any-long-encryption-key"
                    method => "SHA1"    # return the same hash if all values of source fields are equal
                }
            } else {
                # do not overwrite results for zdt environment identifier
                fingerprint {
                    target => "[@metadata][ES_DOCUMENT_ID]"
                    source => ["epages_repo_id", "env_os", "env_type", 
                                "env_identifier", "browser", "class", "method", "report_url"]
                    concatenate_sources => true
                    key => "any-long-encryption-key"
                    method => "SHA1"    # return the same hash if all values of source fields are equal
                }
            }
    } # end exclude whitespace
    {%- endif %}
}

to-logstash/config/logstash-esf.conf
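
Keying the document id on the test coordinates means a re-run of the same test for the same build overwrites the old document instead of piling up duplicates; only the zdt environment keeps every run by mixing report_url into the hash. Logstash's fingerprint filter with method SHA1 and a key computes an HMAC-SHA1; a rough bash analogue (the plugin's exact field concatenation format is an assumption):

# Rough analogue of the fingerprint filter; separator and order are assumptions.
SOURCE="6.17.48/2016.05.19-00.17.26|centos|install|distributed_three_hosts|firefox|com.epages.cartridges.de_epages.ebay.tests.EbayTest|ebayConfigurationBBOTest"
echo -n "${SOURCE}" | openssl dgst -sha1 -hmac "any-long-encryption-key"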

output {

    {%- if "verbose" in LS_OUTPUT or "console" in LS_OUTPUT %}
    
    #################################
    # Output for verbose or console #
    #################################

    # print all esf events as pretty json (info and error)
    stdout { codec => rubydebug { metadata => true } }
    {%- endif %} 
...
}

to-logstash/config/logstash-esf.conf

output {
... {%- if "elasticsearch" in LS_OUTPUT or "document" in LS_OUTPUT or "template" in LS_OUTPUT %}    

    ############################
    # Output for elasticsearch #
    ############################

        elasticsearch {

           hosts => {{ ES_HOSTS }}
           
           {%- if ES_USER and ES_PASSWORD %}
           user => "{{ ES_USER }}"
           password => "{{ ES_PASSWORD }}"
           {%- endif %}
           
           {%- if "elasticsearch" in LS_OUTPUT or "document" in LS_OUTPUT %}
           index => "{{ ES_INDEX }}"
           document_type => "{{ ES_DOCUMENT_TYPE }}"
           document_id => "%{[@metadata][ES_DOCUMENT_ID]}"
           {%- endif %}
           
           {%- if "elasticsearch" in LS_OUTPUT or "template" in LS_OUTPUT %}
           manage_template => true
           template => "{{ LS_CONFIG_VOL }}/template-esf.json"
           template_name => "{{ ES_INDEX }}"
           template_overwrite => true
           {%- endif %}
        }
    {%- endif %}
...
}

to-logstash/config/logstash-esf.conf

output {
... 
    {%- if "log" in LS_OUTPUT or "info" in LS_OUTPUT %}
    #######################
    # Output for info log #
    #######################
    
    # only if no error tags were created
    if (![tags]) {
            # log esf events to logstash output data
            file {
                path => "{{ LS_LOG_VOL }}/{{ LS_INFO }}"
                codec => "json"   # cannot be changed
            }
    }
    {%- endif %}
    {%- if "log" in LS_OUTPUT or "error" in LS_OUTPUT %}
    ########################
    # Output for error log #
    ########################

    # if error tags were created during input processing
    if [tags] {
        # log failed esf events to logstash filter errors
        file {
            path => "{{ LS_LOG_VOL }}/{{ LS_ERROR }}"
            codec => "json"    # cannot be changed
        }
    }
    {%- endif %}
}

to-logstash/config/logstash-esf.conf

machine:
  pre:
    # Configure elasticsearch circle service.
    - sudo cp -v "/home/ubuntu/to-logstash/test/service-elasticsearch.yml" "/etc/elasticsearch/elasticsearch.yml"; cat $_
  hosts:
    elasticsearch.circleci.com: 127.0.0.1
  services:
    - elasticsearch
    - docker
  environment:
    # Circle run tests with parallelism.
    CIRCLE_PARALLEL: true
    # Tests use dedicated docker containers, log directories and elasticsearch indexes.
    TEST_SAMPLE: "to-logstash-test-process-sample"
    TEST_PRODUCTION: "to-logstash-test-deploy-production"
    ... # SET ENV VARS

dependencies:
  override:
    ... # CONFIGURE DOCKER
    # Make sure circle project parallelism is set to at least 2 nodes.
    - | 
        if [[ "${CIRCLE_NODE_TOTAL}" -eq "1" ]]; then {
          echo "Parallelism [${CIRCLE_NODE_TOTAL}x] needs to be 2x to fasten execution time." 
          echo "You also need to set our circle env CIRCLE_PARALLEL [${CIRCLE_PARALLEL}] to true." 
        }; fi
test:
   ...

to-logstash/circle.yml

test:
  override:
    - ? >
        case $CIRCLE_NODE_INDEX in 
        0)
          printf "\n%s\n" "+++ Begin test of docker container [${TEST_SAMPLE}] +++"
          printf "\n%s\n\n" "=== Prepare test and setup config and log dirs on host ==="
          export LS_DOCKER_CONTAINER="${TEST_SAMPLE}"
          export LS_LOG="/tmp/${TEST_SAMPLE}/log"
          export LS_CONFIG="/tmp/${TEST_SAMPLE}/config"
          export ES_INDEX="${TEST_SAMPLE}"
          mkdir -v -p ${LS_LOG} ${LS_CONFIG}
          cp -v -r config/* ${LS_CONFIG}/
          cp -v test/${TEST_LOG} ${LS_LOG}/
          printf "\n%s\n" "--- Prepare test completed."
          # Fire up the container
          ./start.sh; [[ $? -eq 1 ]] && exit 1
          # Sleep is currently needed as file input is handled as a data stream
          # see: https://github.com/logstash-plugins/logstash-input-file/issues/52
          sleep 50; 
          # Stop the container.
          ./stop.sh; [[ $? -eq 1 ]] && exit 1
          # Test metrics from files including input, output and errors.
          ./test/metrics-from-files.sh; [[ $? -eq 1 ]] && exit 1
          # Test metrics from elasticsearch including input, template and documents.
          ./test/metrics-from-elasticsearch.sh; [[ $? -eq 1 ]] && exit 1
          printf "\n%s\n" "+++ End test of docker container [${TEST_SAMPLE}] +++"
          # Exit case statement if run in parallel else proceed to next case.
          $CIRCLE_PARALLEL && exit 0
          ;&
        1)
          printf "\n%s\n" "+++ Begin test of [${TEST_PRODUCTION}] +++"
          printf "\n%s\n\n" "=== Prepare test and setup config and log dirs on host ==="
          export LS_DOCKER_CONTAINER="${TEST_PRODUCTION}"
          export LS_LOG="/tmp/${TEST_PRODUCTION}/log"
          export LS_CONFIG="/tmp/${TEST_PRODUCTION}/config"
          export ES_INDEX="${TEST_PRODUCTION}"
          mkdir -v -p ${LS_LOG} ${LS_CONFIG}
          cp -v -r config/* ${LS_CONFIG}/
          cp -v test/${TEST_LOG} ${LS_LOG}/
          printf "\n%s\n" "--- Prepare test completed."
          # Run the full deploy script as used in jenkins.
          ./deploy.sh; [[ $? -eq 1 ]] && exit 1
          # Test metrics from files including input, output and errors.
          ./test/metrics-from-files.sh; [[ $? -eq 1 ]] && exit 1
          # Test metrics from elasticsearch including input, template and documents.
          ./test/metrics-from-elasticsearch.sh; [[ $? -eq 1 ]] && exit 1
          printf "\n%s\n" "+++ End test of [${TEST_PRODUCTION}] +++"
          # Exit case statement if run in parallel else proceed to next case.
          $CIRCLE_PARALLEL && exit 0
          ;&
        esac
      : parallel: true
  post:
    - ? >
        case $CIRCLE_NODE_INDEX in 
        0)
          printf "\n%s\n\n" "=== Archive artifacts of [${TEST_SAMPLE}] ==="
          sudo mv -v -f "/tmp/${TEST_SAMPLE}" "${CIRCLE_ARTIFACTS}/"
          mkdir -v -p "${CIRCLE_ARTIFACTS}/${TEST_SAMPLE}/services"
          sudo cp -v "${ES_CONF}" "${ES_LOG}" $_
          # Exit case statement if run in parallel else proceed to next case.
          $CIRCLE_PARALLEL && exit 0
          ;&
        1)
          printf "\n%s\n\n" "=== Archive artifacts of [${TEST_PRODUCTION}] ==="
          sudo mv -v -f "/tmp/${TEST_PRODUCTION}" "${CIRCLE_ARTIFACTS}/"
          mkdir -v -p "${CIRCLE_ARTIFACTS}/${TEST_PRODUCTION}/services"
          sudo cp -v "${ES_CONF}" "${ES_LOG}" $_
          # Exit case statement if run in parallel else proceed to next case.
          $CIRCLE_PARALLEL && exit 0
          ;&
        esac
      : parallel: true


deployment:
  dev_actions:
    branch: dev
    commands:
      # Push image to Docker Hub.
      - docker login -u "${DOCKER_LOGIN_USERNAME}" -p "${DOCKER_LOGIN_PASSWORD}" -e "${DOCKER_LOGIN_EMAIL}"
      - docker push "${LS_DOCKER_REPO}:${LS_DOCKER_TAG}"
      # Merge tested commit into master.
      - wget -t 3 -O "/tmp/${MERGE_SCRIPT}" "${MERGE_SCRIPT_URL_PREFIX}/${MERGE_SCRIPT}" && chmod 750 "/tmp/${MERGE_SCRIPT}"
      - /tmp/${MERGE_SCRIPT} -c "${CIRCLE_SHA1}" -e "${CIRCLE_BRANCH}" -t "${GIT_UPSTREAM_BRANCH_MASTER}" -r "${GIT_UPSTREAM_URL}"
  master_actions:
    branch: master
    commands:
      # Push image to Docker Hub.
      - docker login -u "${DOCKER_LOGIN_USERNAME}" -p "${DOCKER_LOGIN_PASSWORD}" -e "${DOCKER_LOGIN_EMAIL}"
      - docker push "${LS_DOCKER_REPO}:${LS_DOCKER_TAG}"
      # Merge tested commit into stable.
      - wget -t 3 -O "/tmp/${MERGE_SCRIPT}" "${MERGE_SCRIPT_URL_PREFIX}/${MERGE_SCRIPT}" && chmod 750 "/tmp/${MERGE_SCRIPT}"
      - /tmp/${MERGE_SCRIPT} -c "${CIRCLE_SHA1}" -e "${CIRCLE_BRANCH}" -t "${GIT_UPSTREAM_BRANCH_PRODUCTION}" -r "${GIT_UPSTREAM_URL}"
  stable_actions:
    branch: stable
    commands:
      # Push image to Docker Hub.
      - docker login -u "${DOCKER_LOGIN_USERNAME}" -p "${DOCKER_LOGIN_PASSWORD}" -e "${DOCKER_LOGIN_EMAIL}"
      - docker push "${LS_DOCKER_REPO}:${LS_DOCKER_TAG}"
      # Tag with latest and push to Docker Hub.
      - docker tag "${LS_DOCKER_REPO}:${LS_DOCKER_TAG}" "${LS_DOCKER_REPO}:latest"
      - docker push "${LS_DOCKER_REPO}:latest"

general:
  artifacts:
    - "${CIRCLE_ARTIFACTS}/${LS_DOCKER_TEST_SAMPLE}"
    - "${CIRCLE_ARTIFACTS}/${LS_DOCKER_TEST_PRODUCTION}"

to-logstash/circle.yml


#!/bin/bash
# Test metrics of logstash files:  LS_ERRORS_FILE, LS_INPUT_FILE, LS_OUTPUT_FILE.

# Set flag for exit error.
EXIT_ERROR=0

# Path to input, output and errors.
[[ "${LS_LOG}" ]] || { echo "ERROR: LS_LOG is not set"; exit 1; }
[[ "${TEST_LOG}" ]] && LS_INPUT_PATH="${LS_LOG}/${TEST_LOG}" || { echo "ERROR: TEST_LOG is not set"; exit 1; }
[[ "${LS_INFO}" ]]  && LS_OUTPUT_PATH="${LS_LOG}/${LS_INFO}" || { echo "ERROR: LS_INFO is not set"; exit 1; }
[[ "${LS_ERROR}" ]] && LS_ERROR_PATH="${LS_LOG}/${LS_ERROR}" || { echo "ERROR: LS_ERROR is not set"; exit 1; }

#########
# Files #
#########

# The input file with esf test results should exist.
printf "\n%s\n" "=== Find logstash input ===";
test -f ${LS_INPUT_PATH} && { printf "\n%s\n\n" "--- Following input log found: ${LS_INPUT_PATH}"; cat ${LS_INPUT_PATH}; } || { printf "\n%s\n" "--- No input found."; EXIT_ERROR=1; }

# The info log with logstash events should exist.
printf "\n%s\n" "=== Find logstash output === ";
test -f ${LS_OUTPUT_PATH} && { printf "\n%s\n\n" "--- Following info log found: ${LS_OUTPUT_PATH}"; cat ${LS_OUTPUT_PATH}; } || { printf "\n%s\n" "--- No output found."; EXIT_ERROR=1; }

# The errors file with incorrectly transformed logstash events should not exist.
printf "\n%s\n" "=== Find logstash errors ===";
test -e ${LS_ERROR_PATH} && { printf "\n%s\n\n" "--- Following error log found: ${LS_ERROR_PATH}"; cat ${LS_ERROR_PATH}; EXIT_ERROR=1; } || { printf "\n%s\n" "--- No errors found."; }

...

.../test/metrics-from-files.sh

...

###########
# Metrics #
###########

# The esf test results are transformed to logstash events.
# The esf test results are enriched with jenkins env variables.

# Collect metrics.
printf "\n%s\n" "=== Test metrics from log files ==="
LS_INPUT_LINES=`wc --lines < ${LS_INPUT_PATH}`
LS_INPUT_LENGTH=`wc --max-line-length < ${LS_INPUT_PATH}`
LS_OUTPUT_LINES=`wc --lines < ${LS_OUTPUT_PATH}`
LS_OUTPUT_LENGTH=`wc --max-line-length < ${LS_OUTPUT_PATH}`

# Print metrics.
printf "\n%s\n" "--- Count of lines from input log (${LS_INPUT_LINES}) and output log (${LS_OUTPUT_LINES}) should be equal."
printf "\n%s\n" "--- Maximum length from input log (${LS_INPUT_LENGTH}) should be less than ouput log (${LS_OUTPUT_LENGTH})."

# Test metrics.
test "${LS_INPUT_LINES}" -eq "${LS_OUTPUT_LINES}" || EXIT_ERROR=1
test "${LS_INPUT_LENGTH}" -lt "${LS_OUTPUT_LENGTH}" || EXIT_ERROR=1

# Use exit error flag.
exit "${EXIT_ERROR}"

.../test/metrics-from-files.sh

...
#############
# Documents #
#############

# Fetch documents from all hosts.
[[ $LS_OUTPUT == *"elasticsearch"* || $LS_OUTPUT == *"documents"* ]] && {
  printf "\n%s\n" "=== Fetch documents from elasticsearch index [${ES_INDEX}] ==="
  ES_DOCUMENT_COUNTER=0
  for host in "${HOSTS[@]}";
  do
    printf "\n%s\n\n" "--- Following document count fetched: ${host}/${ES_INDEX}"
    ES_DOCUMENT_COUNTER=$((ES_DOCUMENT_COUNTER \
      + `curl --silent -u ${ES_USER}:${ES_PASSWORD} -XGET "${host}/${ES_INDEX}/_count?pretty" \
      | grep -E '.*count.*' | grep -E -o '[0-9]{1,}'`))
  done

  # Collect metrics.
  printf "%s\n" "=== Test metrics for elasticsearch documents ==="
  LS_INPUT_COUNT_LINES=`wc -l < ${LS_INPUT_PATH}`
  ES_DOCUMENT_COUNT_AVG=`expr ${ES_DOCUMENT_COUNTER} / ${#HOSTS[@]}`

  # Print metrics.
  printf "\n%s\n" "--- Count of lines from input log (${LS_INPUT_COUNT_LINES}) and average documents from all hosts (${ES_DOCUMENT_COUNT_AVG}) should be equal."
  
  # Test metrics.
  test "${LS_INPUT_COUNT_LINES}" -eq "${ES_DOCUMENT_COUNT_AVG}" || EXIT_ERROR=1
}

# Use exit error flag.
exit "${EXIT_ERROR}"

.../test/metrics-from-elasticsearch.sh

Part #4: Integration

Test Aggregation Workflow

#!/bin/bash

# Run puppet (exit code 2 means changes were applied successfully)
/usr/bin/puppet agent --test
if [[ $? -eq 2 ]] ; then exit 0 ; fi


# Create mounted directories
if [[ -n "${ES_DATA}" && ! -d "${ES_DATA}" ]] ; then
    echo "Creating data directory ${ES_DATA} for Elasticsearch..."
    mkdir -p "${ES_DATA}"
fi
if [[ -n "${ES_LOGS}" && ! -d "${ES_LOGS}" ]] ; then
    echo "Creating log directory ${ES_LOGS} for Elasticsearch..."
    mkdir -p "${ES_LOGS}"
fi

# Run deploy script for elasticsearch cluster
export BUILD_ID=dontKillMe
/jenkins/git/to-elasticsearch/deploy.sh

Jenkins - Deploy_Elasticsearch


Setup:

- Checkout repo from Github
- Set ES_DATA, ES_LOGS


Build Steps:
#!/bin/bash
export DISPLAY=":0"

SARGUMENT=
if [[ "${STORE}" ]] ; then SARGUMENT="--store-name ${STORE}" ; fi
SDARGUMENT=
if [[ "${SHOP_DOMAIN}" ]] ; then SDARGUMENT="--shop-domain ${SHOP_DOMAIN}" ; fi
SUARGUMENT=
if [[ "${SITE_URL}" ]] ; then SUARGUMENT="--site-url http://${SITE_URL}/epages" ; fi
SSLPARGUMENT=
if [[ "${SSL_PROXY}" ]] ; then SSLPARGUMENT="--ssl-proxy ${SSL_PROXY}" ; fi
WSARGUMENT=
if [[ "${WSADMIN_PASSWORD}" ]] ; then WSARGUMENT="--soap-system-password ${WSADMIN_PASSWORD}" ; fi
RARGUMENT=
if [[ ${RETRY_TESTS} == 'true' ]]; then RARGUMENT='--retry ' ; fi
QARGUMENT=
if [[ "${RUN_QUARANTINE_TESTS}" ]] ; then QARGUMENT="--quarantine" ; fi
SKIPARGUMENT=
if [[ "${SKIPPRECONDITIONS}" ]] ; then SKIPARGUMENT="-ap 0 -sp" ; fi

if [[ -x bin/esf-epages6 ]] ; then
      echo "bin/esf-epages6 -browser firefox -groups ${TESTGROUPS} --restart-browser -shop ${SHOP} -url http://${TARGET_DOMAIN}/epages -email max.mustermann@epages.com --csv-report log/esf-report.csv ${RARGUMENT} ${SARGUMENT} ${SDARGUMENT} ${SUARGUMENT} ${SSLPARGUMENT} ${WSARGUMENT} ${QARGUMENT} ${SKIPARGUMENT}"
      bin/esf-epages6 --language en_GB -browser firefox -groups ${TESTGROUPS} --restart-browser ${RARGUMENT} -shop ${SHOP} \
      -url http://${TARGET_DOMAIN}/epages -email max.mustermann@epages.com --csv-report log/esf-report.csv \
      ${SARGUMENT} ${SDARGUMENT} ${SUARGUMENT} ${SSLPARGUMENT} ${WSARGUMENT} ${QARGUMENT} ${SKIPARGUMENT}
      EXIT_CODE_ESF="$?"
else
   exit 1
fi
...

Jenkins - Run_ESF and forward logs

if [[ $VERSION && $REPO && $ENV_TYPE && $ENV_IDENTIFIER && $ENV_OS ]] ; then
    # push the esf-test-results.json to our elasticsearch server via logstash docker container
    
    # mount dirs
    export LS_LOG="$(find ${WORKSPACE} -mindepth 3 -maxdepth 3 -name "log" -type d)"
    export LS_CONFIG="${WORKSPACE}/to-logstash/config"
    
    # logstash.conf
    export LS_INPUT="log,esf"
    export LS_OUTPUT="log,elasticsearch"
    
    # epages6
    export EPAGES_VERSION=${VERSION}
    export EPAGES_REPO_ID=${REPO}
    
    # env url to dir ".../esf/.../log"
    export ENV_URL="${BUILD_URL}artifact/esf/${LS_LOG#*/esf/}"
    
    # elasticsearch connection details
    export ES_HOSTS="[ 'host.de:9200' ]"
    export ES_USER
    export ES_PASSWORD
    
    # elasticsearch document path
    export ES_INDEX="esf-cdp-ui-tests"
    export LS_DOCKER_CONTAINER="to-logstash-run-esf-tests-${BUILD_NUMBER}"
    ${WORKSPACE}/to-logstash/deploy.sh || EXIT_CODE_LOGSTASH=1
    sudo chown -R jenkins:jenkins "${WORKSPACE}" || EXIT_CODE_LOGSTASH=1
fi

if [[ ${EXIT_CODE_ESF} -ne 0 || ${EXIT_CODE_LOGSTASH} -ne 0 ]] ; then exit 1 ; fi

Jenkins - Run_ESF and forward logs

Part #5: Clients

Test Aggregation Workflow

ESF Results for each pipeline run

Client 1: Static Page

via Lucene Request Query
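
Since the index is served straight from Elasticsearch, the static page only needs Lucene query strings against the URI search API, for example (host and index as configured in the Jenkins integration):

# All failed tests on centos, straight from the browser or curl.
curl -u "${ES_USER}:${ES_PASSWORD}" \
  "http://host.de:9200/esf-cdp-ui-tests/_search?q=result:FAILURE+AND+env_os:centos&pretty"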

Client 2: Elasticsearch Client

via Drop-Down Menu

Client 3: Angular App

Retrospective

Result

Where did we succeed?

Result

Where did we mess up?

What does it mean for ePages?

  • Abolish Release & Test Automation Team

  • COP Integration/Testing/Languages

What does it mean for us?

  • Switch to new platform with Microservices Architecture
  • Be "real" DevOps with Automation Skills

Checklist

Where to start?

People & Processes

1) Analyze your issue!

2) Find common sense on technical implementation

3) Use implementation techniques

Learnings

FROM ubuntu:15.10

ENV DEBIAN_FRONTEND noninteractive
ENV DEBCONF_NONINTERACTIVE_SEEN true

RUN apt-get update && apt-get -y install sudo && \
    sudo useradd esf --shell /bin/bash --create-home && \
    sudo usermod -a -G sudo esf && \
    echo 'ALL ALL = (ALL) NOPASSWD: ALL' >> /etc/sudoers && \
    echo 'esf:secret' | chpasswd

# Install firefox here only to pull in its dependencies; a pinned
# version is installed later from the ubuntuzilla build.
RUN apt-get update && apt-get -y --no-install-recommends install \
      ca-certificates \
      ca-certificates-java \
      chromium-browser \
      firefox \
      openjdk-8-jre-headless \
      wget \
      xvfb && \
    apt-get -y purge firefox

# By the time the package 'ca-certificates-java' is about to be configured,
# the java command has not been set up, thus leading to configuration errors.
# Therefore we call the configuration steps explicitly.
RUN /var/lib/dpkg/info/ca-certificates-java.postinst configure

RUN apt-get -y clean && \
    rm -Rf /var/lib/apt/lists/*

COPY docker-entrypoint.sh /opt/bin/docker-entrypoint.sh
RUN chmod +x /opt/bin/docker-entrypoint.sh

esf/Dockerfile

ENV SCREEN_WIDTH 1730
ENV SCREEN_HEIGHT 1600
ENV SCREEN_DEPTH 24
ENV DISPLAY :99.0

ENV FIREFOX_VERSION="46.0"
RUN wget -O /tmp/firefox-mozilla-build_${FIREFOX_VERSION}-0ubuntu1_amd64.deb "https://sourceforge.net/projects/ubuntuzilla/files/mozilla/apt/pool/main/f/firefox-mozilla-build/firefox-mozilla-build_${FIREFOX_VERSION}-0ubuntu1_amd64.deb/download" && \
    dpkg -i /tmp/firefox-mozilla-build_${FIREFOX_VERSION}-0ubuntu1_amd64.deb && \
    rm -f /tmp/firefox-mozilla-build_${FIREFOX_VERSION}-0ubuntu1_amd64.deb

#################
# Configure esf #
#################

ENV USER_HOME_DIR=/home/esf

WORKDIR ${USER_HOME_DIR}

ADD build/distributions/esf-epages6-*.tar .

USER root
RUN chown -R esf:esf esf-epages6-*
USER esf

# Create a symlink.
RUN ln -s $(basename $(find . -mindepth 1 -maxdepth 1 -name "esf-epages6*" -type d)) esf

VOLUME ${USER_HOME_DIR}/esf/log

WORKDIR ${USER_HOME_DIR}/esf

ENTRYPOINT ["/opt/bin/entry_point.sh"]

esf/Dockerfile

#!/bin/bash
export GEOMETRY="$SCREEN_WIDTH""x""$SCREEN_HEIGHT""x""$SCREEN_DEPTH"

cd "${WORKDIR}"
xvfb-run --server-args="$DISPLAY -screen 0 $GEOMETRY -ac +extension RANDR" \
  /home/esf/esf/bin/esf-epages6 $@

esf/docker-entrypoint.sh
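
With the entrypoint wrapping xvfb-run, the container is started with the esf CLI arguments appended; a hypothetical invocation (image name and shop parameters are placeholders, arguments as in the Jenkins job):

# Run the UI tests headless inside the container.
docker run --rm \
  -v "$(pwd)/log:/home/esf/esf/log" \
  epages/esf:latest \
  -browser firefox -groups ${TESTGROUPS} -shop ${SHOP} \
  -url "http://${TARGET_DOMAIN}/epages" --csv-report log/esf-report.csv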

#!/bin/bash

# Exit immediately if any command fails.
set -e

############
# logstash #
############

# The LS_LOG and LS_CONFIG dirs are mandatory and mounted to DOCKER_LOG_VOL and DOCKER_CONFIG_VOL.
[[ -z "${LS_LOG}" ]] && { echo "ERROR: Logstash log directory LS_LOG is not set."; exit 1; };
[[ ! -d "${LS_LOG}" ]] && { echo "ERROR: Logstash log directory [LS_LOG=${LS_LOG}] does not exist."; exit 1; };
HOST_LOG_DIR="${LS_LOG}";
[[ -z "${LS_CONFIG}" ]] && { echo "ERROR: Logstash config directory LS_CONFIG is not set."; exit 1; };
[[ ! -d "${LS_CONFIG}" ]] && { echo "ERROR: Logstash config directory [LS_CONFIG=${LS_CONFIG}] does not exist."; exit 1; };
HOST_CONFIG_DIR="${LS_CONFIG}";

# The LS_INPUT type is mandatory and sets our logstash input.
[[ "${LS_INPUT}" =~ ^.*(conf)|(log)|(esf).*$ ]] || {
  echo "ERROR: Logstash input [LS_INPUT=${LS_INPUT}] is not set correctly. Possible values: [conf,log,esf]"; exit 1; }

# The LS_OUTPUT types are mandatory and set our logstash output.
[[ "${LS_OUTPUT}" =~ ^.*(conf)|(verbose)|(console)|(log)|(info)|(error)|(elasticsearch)|(document)|(template).*$ ]] || {
  echo "ERROR: Logstash output [LS_OUTPUT=${LS_OUTPUT}] is not set correctly. Possible values: [conf,verbose,console,log,info,error,elasticsearch,document,template]"; }

# The LS_INFO and LS_ERROR log files have default names.
[[ -z "${LS_INFO}" ]] && export LS_INFO="logstash-info.json";
[[ -z "${LS_ERROR}" ]] && export LS_ERROR="logstash-error.json";

##############
# esf config #
##############

# Set esf defaults.
[[ "${LS_INPUT}" =~ ^.*(esf).*$ ]] && {
  [[ -z "${LS_ENV}" ]] && export LS_ENV="env-esf.list";
  [[ -z "${LS_CONF}" ]] && export LS_CONF="logstash-esf.conf";
  [[ -z "${LS_PATTERN}" ]] && export LS_PATTERN="*esf*.json";
  [[ -z "${ES_TEMPLATE}" ]] && export ES_TEMPLATE="template-esf.json";
  [[ -z "${ES_DOCUMENT_TYPE}" ]] && export ES_DOCUMENT_TYPE=${EPAGES_VERSION};
  [[ "${ES_DOCUMENT_TYPE}" == "" && "${LS_OUTPUT}" != "template" ]] && { echo "ERROR: EPAGES_VERSION is not set."; exit 1; }
}

# Abort if no configuration could be set.
[[ -z "${LS_ENV}" ]] &&  { echo "ERROR: LS_ENV is not set."; exit 1; }
[[ -z "${LS_CONF}" ]] &&  { echo "ERROR: LS_CONF is not set."; exit 1; }

#################
# elasticsearch #
#################

# The elasticsearch configuration.
[[ "${LS_OUTPUT}" =~ ^.*(elasticsearch)|(document)|(template).*$ ]] && {
  [[ -z "${ES_HOSTS}" ]] && { echo "ERROR: ES_HOSTS is not set."; exit 1; }
  [[ -z "${ES_USER}" ]] && { echo -e "\nINFO: ES_USER is optional and not set. Check if hosts ${ES_HOSTS} use auth."; }
  [[ -z "${ES_PASSWORD}" ]] && { echo -e "\nINFO: ES_PASSWORD is optional and not set. Check if hosts ${ES_HOSTS} use auth."; }
  [[ -z "${ES_TEMPLATE}" && "${LS_OUTPUT}" =~ ^.*(elasticsearch)|(template).*$ ]] && { echo "ERROR: ES_TEMPLATE is not set."; exit 1; }
  [[ -z "${ES_INDEX_ALIAS}" && "${LS_OUTPUT}" =~ ^.*(elasticsearch)|(template).*$ ]] && { echo -e "\nINFO: ES_INDEX_ALIAS is is optional and not set. Check if template [${ES_TEMPLATE}] needs an alias."; }
  [[ -z "${ES_INDEX}" ]] && { echo "ERROR: ES_INDEX is not set."; exit 1; }
  [[ -z "${ES_DOCUMENT_TYPE}" && "${LS_OUTPUT}" != "template" ]] && { echo "ERROR: ES_DOCUMENT_TYPE is not set."; exit 1; }
}

##########
# docker #
##########

# The docker configuration has defaults.
DOCKER_LOG_VOL="/usr/share/logstash/log" # do not change - derived from dockerfile LS_LOG_VOL!
DOCKER_CONFIG_VOL="/usr/share/logstash/config" # do not change - derived from dockerfile LS_CONFIG_VOL!
DOCKER_IMAGE_NAME="epages/to-logstash"; [[ -n "${LS_DOCKER_REPO}" ]] && DOCKER_IMAGE_NAME="${LS_DOCKER_REPO}"
DOCKER_IMAGE_TAG="wip"; [[ -n "${LS_DOCKER_TAG}" ]] && DOCKER_IMAGE_TAG="${LS_DOCKER_TAG}"
DOCKER_CONTAINER_NAME="to-logstash-indexer"; [[ -n "${LS_DOCKER_CONTAINER}" ]] && DOCKER_CONTAINER_NAME="${LS_DOCKER_CONTAINER}"

# Special flags for docker run are always used and can only be omitted by actively disabling them.
DOCKER_RUN_REMOVE=''; [[ $LS_DOCKER_REMOVE == true ]] && DOCKER_RUN_REMOVE='-it --rm'
DOCKER_RUN_DETACH='--detach=true'; [[ "${LS_OUTPUT}" =~ ^.*(verbose)|(console).*$ ]] || [[ $LS_DOCKER_DETACH == false ]] && DOCKER_RUN_DETACH='--detach=false'
DOCKER_RUN_NETWORK='--net="host"'; [[ $LS_DOCKER_NETWORK == false ]] && DOCKER_RUN_NETWORK='' 

# Print info.
printf "\n%s\n\n" "=== Start docker container [${DOCKER_CONTAINER_NAME}] from image [${DOCKER_IMAGE_NAME}:${DOCKER_IMAGE_TAG}] ==="
[[ "${LS_PATTERN}" ]] && {
printf "%s\n" "Process logs with pattern:          ${LS_PATTERN}"; }
printf "%s\n" "Mount log directory:                ${LS_LOG}"
printf "%s\n" "Mount config directory:             ${LS_CONFIG}"
printf "%s\n" "Set logstash input types:           ${LS_INPUT}"
printf "%s\n" "Set logstash output types:          ${LS_OUTPUT}"
printf "%s\n" "Use logstash env file:              ${LS_ENV}"
printf "%s\n" "Use logstash conf file:             ${LS_CONF}"
[[ $LS_OUTPUT =~ ^.*(log)|(info).*$ ]] && {
  printf "%s\n" "Use info log file:                  ${LS_INFO}"; }
[[ $LS_OUTPUT =~ ^.*(log)|(error).*$ ]] && {
  printf "%s\n" "Use error log file:                 ${LS_ERROR}"; }
[[ $LS_OUTPUT =~ ^.*(elasticsearch)|(template).*$ ]] && {
  printf "%s\n" "Use elasticsearch template file:    ${ES_TEMPLATE}"; }
[[ $LS_OUTPUT =~ ^.*(elasticsearch)|(document)|(template).*$ ]] && {
  printf "%s\n" "Set elasticsearch hosts:            ${ES_HOSTS}"; }
[[ $LS_OUTPUT =~ ^.*(elasticsearch)|(template).*$ && "${ES_INDEX_ALIAS}" ]] && {
  printf "%s\n" "Set elasticsearch index alias:      ${ES_INDEX_ALIAS}"; }
[[ $LS_OUTPUT =~ ^.*(elasticsearch)|(document)|(template).*$ ]] && {
  printf "%s\n" "Set elasticsearch index:            ${ES_INDEX}"; }
[[ $LS_OUTPUT =~ ^.*(elasticsearch)|(document).*$ ]] && {
  printf "%s\n" "Set elasticsearch document type:    ${ES_DOCUMENT_TYPE}"; }
printf "\n%s\n\n" "--- Start configuration is applied."

# Run docker container.
docker run ${DOCKER_RUN_REMOVE} ${DOCKER_RUN_DETACH} ${DOCKER_RUN_NETWORK} \
  --env-file "${LS_CONFIG}/${LS_ENV}" \
  -v ${HOST_LOG_DIR}:${DOCKER_LOG_VOL} \
  -v ${HOST_CONFIG_DIR}:${DOCKER_CONFIG_VOL} \
  --name ${DOCKER_CONTAINER_NAME} \
  ${DOCKER_IMAGE_NAME}:${DOCKER_IMAGE_TAG} \
  logstash -f "${DOCKER_CONFIG_VOL}/${LS_CONF}"

exit $?

to-logstash/start.sh

#!/bin/bash

# Exit immediately if any command fails.
set -e

#################
# elasticsearch #
#################

# The ES_CONFIG, ES_DATA and ES_LOGS dirs are mandatory and mounted to DOCKER_CONFIG_VOL, DOCKER_DATA_VOL and DOCKER_LOGS_VOL.
[[ -z "${ES_CONFIG}" ]] && { echo "ERROR: Elasticsearch config directory ES_CONFIG is not set."; exit 1; }
[[ ! -d "${ES_CONFIG}" ]] && { echo "ERROR: Elasticsearch config directory ${ES_CONFIG} does not exist."; exit 1; }
HOST_CONFIG_DIR="${ES_CONFIG}"
[[ -z "${ES_DATA}" ]] && { echo "ERROR: Elasticsearch data directory ES_DATA is not set."; exit 1; }
[[ ! -d "${ES_DATA}" ]] && { echo "ERROR: Elasticsearch data directory ${ES_DATA} does not exist."; exit 1; }
HOST_DATA_DIR="${ES_DATA}"
[[ -z "${ES_LOGS}" ]] && { echo "ERROR: Elasticsearch logs directory ES_LOGS is not set."; exit 1; }
[[ ! -d "${ES_LOGS}" ]] && { echo "ERROR: Elasticsearch logs directory ${ES_LOGS} does not exist."; exit 1; }
HOST_LOGS_DIR="${ES_LOGS}"

# The ES_ENV file is mandatory and defines our used settings. Some settings can be set at runtime.
[[ -z "${ES_ENV}" ]] && { echo "ERROR: ES_ENV is not set."; exit 1; }
[[ -z "${ES_HEAP_SIZE}" ]] && { export ES_HEAP_SIZE="1g"; echo -e "\nINFO: ES_HEAP_SIZE is optional and not set. Default value [ES_HEAP_SIZE=${ES_HEAP_SIZE}] will be used.\n"; }

# Map ES host ports.
HOST_HTTP_PORT="9200"; [[ -n "${ES_HOST_HTTP}" ]] && HOST_HTTP_PORT="${ES_HOST_HTTP}"
HOST_TCP_PORT="9300"; [[ -n "${ES_HOST_TCP}" ]] && HOST_TCP_PORT="${ES_HOST_TCP}"

##########
# docker #
##########

# These docker settings are derived from our elasticsearch.yml and should not be changed.
ES_CONF="${ES_CONFIG}/elasticsearch.yml.j2"
DOCKER_HTTP_PORT=$( grep 'http.port:' ${ES_CONF} | awk '{print $2}')
DOCKER_TCP_PORT=$( grep 'transport.tcp.port:' ${ES_CONF} | awk '{print $2}' )
DOCKER_CONFIG_VOL=$( grep 'path.config:' ${ES_CONF} | awk '{print $2}' )
DOCKER_DATA_VOL=$( grep 'path.data:' ${ES_CONF} | awk '{print $2}' )
DOCKER_LOGS_VOL=$( grep 'path.logs:' ${ES_CONF} | awk '{print $2}' )

# The Docker container name, image name, image tag and published host ports have default values.
DOCKER_IMAGE_NAME='docker.epages.com/to-elasticsearch'; [[ -n "${ES_IMAGE_NAME}" ]] && DOCKER_IMAGE_NAME="${ES_IMAGE_NAME}"
DOCKER_IMAGE_TAG='latest'; [[ -n "${ES_IMAGE_TAG}" ]] && DOCKER_IMAGE_TAG="${ES_IMAGE_TAG}"
DOCKER_CONTAINER_NAME="to-elasticsearch-service"; [[ -n "${ES_DOCKER_CONTAINER}" ]] && DOCKER_CONTAINER_NAME="${ES_DOCKER_CONTAINER}"

# Special flags for docker run.
DOCKER_RUN_REMOVE=''; [[ $LS_DOCKER_REMOVE == true ]] && DOCKER_RUN_REMOVE='-it --rm'
DOCKER_RUN_DETACH='--detach=true'; [[ $ES_DOCKER_DETACH == false ]] && DOCKER_RUN_DETACH='--detach=false'

# Run docker container.
docker run ${DOCKER_RUN_REMOVE} ${DOCKER_RUN_DETACH} \
  --env-file "${ES_CONFIG}/${ES_ENV}" \
  -p ${HOST_HTTP_PORT}:${DOCKER_HTTP_PORT} \
  -p ${HOST_TCP_PORT}:${DOCKER_TCP_PORT} \
  -v ${HOST_CONFIG_DIR}:${DOCKER_CONFIG_VOL} \
  -v ${HOST_DATA_DIR}:${DOCKER_DATA_VOL} \
  -v ${HOST_LOGS_DIR}:${DOCKER_LOGS_VOL} \
  --name ${DOCKER_CONTAINER_NAME} \
  ${DOCKER_IMAGE_NAME}:${DOCKER_IMAGE_TAG}

exit $?

to-elasticsearch/start.sh

#!/bin/bash

# Exit immediately if any command fails.
set -e

# Add logstash as command if needed
if [[ "${1:0:1}" = '-' ]]; then
    set -- logstash "$@"
fi

# If running logstash
if [[ "$1" = 'logstash' ]]; then

    # Change the ownership of the mounted volumes to user logstash at docker container runtime
    chown -R logstash:logstash ${LS_CONFIG_VOL} ${LS_LOG_VOL}
    
    # Get verbose and log settings
    [[ "${LS_OUTPUT}" =~ ^.*(verbose).*$ ]] && VERBOSE=true || VERBOSE=false
    [[ "${LS_OUTPUT}" =~ ^.*(log)|(info)|(error).*$ ]] && LOG=true || LOG=false

    # Get LS_ENV
    {
        # Find env vars in docker
        $VERBOSE && printf "\n%s\n" "=== Find env vars defined in docker container from list [${LS_ENV}] ==="
        LS_ENV_PATH=$( find "${LS_CONFIG_VOL}" -maxdepth 3 -iname "${LS_ENV}" )
        $VERBOSE && { 
            [[ -f "${LS_ENV_PATH}" ]] \
                && { printf "\n%s\n\n" "--- Following env list found: ${LS_ENV_PATH}"; cat ${LS_ENV_PATH}; } \
                || printf "\n%s\n" "--- No env list file found."
        }
        $VERBOSE && {
            printf "\n%s\n\n" "--- Following logstash [LS_*] env vars are set in docker container:"
            printenv | grep -E '^LS_.*=.*' | sort
            printf "\n%s\n\n" "--- Following elasticsearch [ES_*] env vars are set in docker container:"
            printenv | grep -E '^ES_.*=.*' | sort
        }
    }

    # Get LS_CONF
    {
        # Render logstash conf if exists as jinja template
        $VERBOSE && printf "\n%s\n" "=== Render logstash conf [${LS_CONF}.j2] in [${LS_CONFIG_VOL}] with [${LS_ENV}] ==="
        LS_CONF_JINJA=$( find "${LS_CONFIG_VOL}" -maxdepth 3 -iname "${LS_CONF}.j2" )
        $VERBOSE && {
            [[ -f "${LS_CONF_JINJA}" ]] \
                && printf "\n%s\n\n" "--- Following logstash conf is rendered: ${LS_CONF_JINJA}" \
                || printf "\n%s\n" "--- No logstash conf as jinja template found. Nothing needs to be rendered."
        }
        [[ ${LS_CONF_JINJA} ]] && { 
          $VERBOSE \
            && python ${JINJA_SCRIPT} --verbose -f "${LS_ENV_PATH}" -e "LS_CONFIG_VOL" "LS_LOG_VOL" -t "${LS_CONF_JINJA}" \
            || python ${JINJA_SCRIPT} -f "${LS_ENV_PATH}" -e "LS_CONFIG_VOL" "LS_LOG_VOL" -t "${LS_CONF_JINJA}"
        }

        # Find logstash conf
        $VERBOSE && printf "\n%s\n" "=== Find logstash conf [${LS_CONF}] in dir [${LS_CONFIG_VOL}] ==="
        LS_CONF_PATH=$( find "${LS_CONFIG_VOL}" -maxdepth 3 -iname "${LS_CONF}" )
        $VERBOSE && {
            [[ ${LS_CONF_PATH} ]] \
                && { printf "\n%s\n\n" "--- Following logstash conf found: ${LS_CONF_PATH}"; cat ${LS_CONF_PATH}; } \
                || printf "\n\n%s" "--- No logstash conf found."
        }
    }

    # Get ES_TEMPLATE if should be pushed
    [[ "${LS_OUTPUT}" =~ ^.*(elasticsearch)|(template).*$ ]] && {

        # Render elasticsearch template if exists as jinja template
        $VERBOSE && printf "\n\n%s\n" "=== Render elasticsearch template [${ES_TEMPLATE}.j2] in [${LS_CONFIG_VOL}] with [${LS_ENV}] ==="
        ES_TEMPLATE_JINJA=$( find "${LS_CONFIG_VOL}" -maxdepth 3 -iname "${ES_TEMPLATE}.j2" )
        $VERBOSE && {
            [[ ${ES_TEMPLATE_JINJA} ]] \
                && printf "\n%s\n\n" "--- Following elasticsearch template is rendered: ${ES_TEMPLATE_JINJA}" \
                || printf "\n%s\n" "--- No elasticsearch template as jinja template found. Nothing needs to be rendered."
        }
        [[ ${ES_TEMPLATE_JINJA} ]] && {
            $VERBOSE \
            && python ${JINJA_SCRIPT} --verbose -f "${LS_ENV_PATH}" -t "${ES_TEMPLATE_JINJA}" \
            || python ${JINJA_SCRIPT} -f "${LS_ENV_PATH}" -t "${ES_TEMPLATE_JINJA}"
        }

        # Find elasticsearch template
        $VERBOSE && printf "\n%s\n" "=== Find elasticsearch template [${ES_TEMPLATE}] in dir [${LS_CONFIG_VOL}] ==="
        ES_TEMPLATE_PATH=$( find "${LS_CONFIG_VOL}" -maxdepth 3 -iname "${ES_TEMPLATE}" )
        $VERBOSE && { 
            [[ ${ES_TEMPLATE_PATH} ]] \
                && { printf "\n%s\n\n" "--- Following elasticsearch template found: ${ES_TEMPLATE_PATH}"; cat ${ES_TEMPLATE_PATH}; } \
                || printf "\n\n%s" "--- No elasticsearch template found."
        }
    }

    # Test connection to elasticsearch hosts
    [[ "${LS_OUTPUT}" =~ ^.*(elasticsearch)|(document)|(template).*$ ]] && {

        # Print info
        $VERBOSE && printf "\n\n%s\n" "=== Test connection to elasticsearch hosts ${ES_HOSTS} ==="

        # Set ES_HOSTS as hosts array
        declare -a HOSTS=$( echo ${ES_HOSTS} | tr '[]' '()' | tr ',' ' ' )

        # Test each host
        NOT_AVAILABLE=false
        for host in "${HOSTS[@]}";
        do
            ES_CLUSTER=$( curl --silent --retry 5 -u ${ES_USER}:${ES_PASSWORD} -XGET "${host}?pretty" )
            if [[ $(echo "$ES_CLUSTER" | grep "name" | wc -l) -gt 0 ]]; then
                $VERBOSE && { printf "\n%s\n\n" "--- Following elasticsearch host reached: ${host}"; echo "$ES_CLUSTER"; }
            elif [[ $( echo "$ES_CLUSTER" | grep "OK" | wc -l ) -eq 1 ]]; then
                echo "ERROR: Following elasticsearch host requires correct auth credentials: ${host}" | ( $LOG && gosu logstash tee -a "${LS_LOG_VOL}/${LS_ERROR}" ) 
                NOT_AVAILABLE=true
            else
                echo "ERROR: Following elasticsearch host is currently not available: ${host}" | ( $LOG && gosu logstash tee -a "${LS_LOG_VOL}/${LS_ERROR}" ) 
                NOT_AVAILABLE=true
            fi
        done

        # Exit if any host cannot be reached
        $NOT_AVAILABLE && { echo "ERROR: Aborting start of logstash agent with logstash conf [${LS_CONF}]" | ( $LOG &&  gosu logstash tee -a "${LS_LOG_VOL}/${LS_ERROR}" ); exit 1; }
    }

    
    # Start logstash agent
    $VERBOSE && printf "\n\n%s\n\n" "=== Start logstash agent with logstash conf [${LS_CONF}] ==="
    set -- gosu logstash "$@"

fi # running logstash

# If the argument is not related to logstash,
# assume the user wants to run their own process,
# for example a `bash` shell to explore this image
exec "$@"

to-logstash/docker-entrypoint.sh

Common Mistakes

Sources

Related Articles

 

Docker

Logstash

 

Plugins

Elasticsearch

Dockerfile

Continuous Delivery Pipeline Case Study

By Benjamin Nothdurft
