Microservices Pipeline




Real World Example

Basic Design of a Microservices Pipeline

Pipeline Stages and Roles

Microservice Docker
Common Practices

  • Store the Dockerfile inside each project
  • Create your own base images in a separate image repository
  • Use build tool integration to generate and push Docker images to a registry (e.g. Gradle, Maven)
  • Use Docker Compose for testing locally on the dev machine (see the sketch below)
  • Use more powerful orchestration tools, such as Swarm or Kubernetes, for production
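
A minimal local-testing sketch for the Compose practice above; the service names and the docker-compose.yml are assumptions for illustration, not part of the original setup:

# Hypothetical local workflow: start the service and its dependencies
# from a docker-compose.yml, check their state, then tear everything down.
docker-compose up -d              # start e.g. my-service, rabbitmq, postgres
docker-compose ps                 # verify all containers are up
docker-compose logs my-service    # inspect service output
docker-compose down               # stop and remove containers and networks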

In-container Java Development

Build Tool Support

Examples:

Plugins:

Versioning of Microservices

The version is reflected in the Gradle task that builds the microservice.
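
One possible sketch of such a scheme (an assumption for illustration, not the project's actual setup): derive the version from Git and pass it through to the Gradle build and the resulting Docker image.

# Hypothetical: version each microservice build from git metadata, assuming
# the build script wires the 'version' project property through to its tasks.
VERSION="$(git describe --tags --always)"
./gradlew build -Pversion="${VERSION}"
docker build -t myrepo/my-service:"${VERSION}" .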

Handling failures

Pipeline with updates to services 1 to 3

 

Handling failures

Fast Lane approach

 

Handling failures

Pipeline after fast lane

Wiremock

// Imports added for completeness; the Docker task types are assumed to
// come from the Bmuschko gradle-docker-plugin, matching the names used below.
import org.gradle.api.Plugin
import org.gradle.api.Project
import com.bmuschko.gradle.docker.DockerRemoteApiPlugin
import com.bmuschko.gradle.docker.tasks.image.DockerBuildImage
import com.bmuschko.gradle.docker.tasks.image.DockerPullImage
import com.bmuschko.gradle.docker.tasks.image.DockerPushImage
import com.bmuschko.gradle.docker.tasks.image.DockerRemoveImage

class WiremockConventionPlugin implements Plugin<Project> {

    private static final String STUB_DIRECTORY = 'build/wiremock-stubs'

    private static final String PULL_BASE_IMAGE_TASK = 'pullWiremockBaseDockerImage'
    private static final String CREATE_SERVICE_DOCKERFILE_TASK = 'createWiremockServiceDockerfile'
    private static final String BUILD_SERVICE_IMAGE_TASK = 'buildWiremockServiceDockerImage'
    private static final String PUSH_SERVICE_IMAGE_TASK = 'pushWiremockServiceDockerImage'
    private static final String REMOVE_SERVICE_IMAGE_TASK = 'removeWiremockServiceDockerImage'
    private static final String REMOVE_BASE_IMAGE_TASK = 'removeWiremockBaseDockerImage'
    private static final String PUBLISH_WIREMOCK_TASK = 'publishWiremock'

    private static final String DEFAULT_TASK_GROUP = 'Wiremock'

    @Override
    void apply(Project project) {

        project.apply(plugin: DockerRemoteApiPlugin)

        def dockerImageName = "epages/ng-wiremock"
        def dockerImageTagLatest = "latest"
        def dockerImageTagService = project.name
        def gitCommit = System.env.GIT_COMMIT

        project.extensions.docker.with {
            if (System.env.DOCKER_HOST) {
                url = "$System.env.DOCKER_HOST".replace("tcp", "https")
                if (System.env.DOCKER_CERT_PATH) {
                    certPath = new File(System.env.DOCKER_CERT_PATH)
                }
            } else {
                url = 'unix:///var/run/docker.sock'
            }
            registryCredentials {
                if (System.env.DOCKER_REGISTRY_URL) {
                    url = System.env.DOCKER_REGISTRY_URL
                }
                if (System.env.DOCKER_REGISTRY_USERNAME) {
                    username = System.env.DOCKER_REGISTRY_USERNAME
                }
                if (System.env.DOCKER_REGISTRY_PASSWORD) {
                    password = System.env.DOCKER_REGISTRY_PASSWORD
                }
                if (System.env.DOCKER_REGISTRY_EMAIL) {
                    email = System.env.DOCKER_REGISTRY_EMAIL
                }
            }
        }

        project.task(PULL_BASE_IMAGE_TASK, type: DockerPullImage) {
            description = 'Pulls our wiremock base docker image'
            group = DEFAULT_TASK_GROUP
            repository = "$dockerImageName"
            tag = "$dockerImageTagLatest"
        }

        project.task(CREATE_SERVICE_DOCKERFILE_TASK) {
            description = 'Creates a new Dockerfile for the generation of the layer with the wiremock stubs.'
            group = DEFAULT_TASK_GROUP

            // Configure description/group at configuration time; do the file
            // work only at execution time.
            doLast {
                new File(STUB_DIRECTORY).mkdirs()

                def dockerfile = new File("$STUB_DIRECTORY/Dockerfile")
                dockerfile.createNewFile()
                dockerfile.text = "FROM epages/ng-wiremock\nCOPY . /home/wiremock"
            }
        }

        project.task(BUILD_SERVICE_IMAGE_TASK, type: DockerBuildImage) {
            description = 'Build a new wiremock service image.'
            group = DEFAULT_TASK_GROUP
            dependsOn project.tasks."$CREATE_SERVICE_DOCKERFILE_TASK"
            dependsOn project.tasks."$PULL_BASE_IMAGE_TASK"
            inputDir = project.file(STUB_DIRECTORY)
            if (gitCommit) {
                labels = ["git-commit": gitCommit]
            }
            tag = "$dockerImageName:$dockerImageTagService"
        }

        project.task(PUSH_SERVICE_IMAGE_TASK, type: DockerPushImage) {
            description = 'Push the docker image to your docker repository. All tags are included.'
            group = DEFAULT_TASK_GROUP
            dependsOn project.tasks."$BUILD_SERVICE_IMAGE_TASK"
            imageName = "$dockerImageName"
            tag = "$dockerImageTagService"
        }

        project.task(REMOVE_SERVICE_IMAGE_TASK, type:DockerRemoveImage) {
            description = 'Remove the wiremock service image from the local filesystem.'
            group = DEFAULT_TASK_GROUP
            imageId = "$dockerImageName:$dockerImageTagService"
        }

        project.task(REMOVE_BASE_IMAGE_TASK, type:DockerRemoveImage) {
            description = 'Remove the wiremock base image from the local filesystem.'
            group = DEFAULT_TASK_GROUP
            imageId = "$dockerImageName:$dockerImageTagLatest"
        }

        project.task(PUBLISH_WIREMOCK_TASK) {
            description = 'Push and remove wiremock images.'
            group = DEFAULT_TASK_GROUP
            dependsOn project.tasks."$PUSH_SERVICE_IMAGE_TASK"
            dependsOn project.tasks."$REMOVE_SERVICE_IMAGE_TASK"
            dependsOn project.tasks."$REMOVE_BASE_IMAGE_TASK"
        }

        project.tasks."$PULL_BASE_IMAGE_TASK".mustRunAfter(project.tasks.intTest)

        project.tasks."$REMOVE_SERVICE_IMAGE_TASK".mustRunAfter(project.tasks."$PUSH_SERVICE_IMAGE_TASK")
        project.tasks."$REMOVE_BASE_IMAGE_TASK".mustRunAfter(project.tasks."$PUSH_SERVICE_IMAGE_TASK")

    }
}
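
A sketch of how the plugin above might be invoked from CI; the registry values are placeholders, while the env var names and task names come from the plugin itself:

# Hypothetical CI invocation: run integration tests (which produce the stubs),
# then build, push, and clean up the per-service Wiremock image.
export DOCKER_REGISTRY_URL="https://registry.example.com"   # placeholder
export DOCKER_REGISTRY_USERNAME="ci-user"                   # placeholder
export DOCKER_REGISTRY_PASSWORD="secret"                    # placeholder
export GIT_COMMIT="$(git rev-parse HEAD)"

./gradlew intTest publishWiremock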

Other Docker Scenarios

  • Docker for Developer Tooling:
    • Provision your laptop with Ansible in Docker
    • Start presentation from within a Container
  • Docker for IT Infrastructure Tasks:
    • CI-Servers (Build & Continuous Delivery, e.g. Jenkins, CruiseControl)
    • Mockserver (e.g. Wiremock)
    • Testing (e.g. Selenium-UI-TestSuite)
  • Docker for Applications/Core Services:
    • Microservices (e.g. Spring Boot / .Net App)
    • Message Broker (e.g. RabbitMQ)
    • Database (SQL)

Discussion

Outtakes

Use Cases:

@Rule
public GenericContainer dslContainer = new GenericContainer(
        new ImageFromDockerfile()
                .withFileFromString("folder/someFile.txt", "hello")
                .withFileFromClasspath("test.txt", "mappable-resource/test-resource.txt")
                .withFileFromClasspath("Dockerfile", "mappable-dockerfile/Dockerfile"));

new GenericContainer(
        new ImageFromDockerfile()
                .withDockerfileFromBuilder(builder ->
                        builder.from("alpine:3.2")
                                .run("apk add --update nginx")
                                .cmd("nginx", "-g", "daemon off;")
                                .build()))
        .withExposedPorts(80);

Testcontainers Examples


package org.testcontainers.junit;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import org.junit.Test;
import org.testcontainers.containers.JdbcDatabaseContainer;
import org.testcontainers.containers.PostgreSQLContainer;

import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

import static org.rnorth.visibleassertions.VisibleAssertions.assertEquals;

/**
 * @author richardnorth
 */
public class SimplePostgreSQLTest {

    @Test
    public void testSimple() throws SQLException {
        try (PostgreSQLContainer postgres = new PostgreSQLContainer<>()) {
            postgres.start();

            ResultSet resultSet = performQuery(postgres, "SELECT 1");

            int resultSetInt = resultSet.getInt(1);
            assertEquals("A basic SELECT query succeeds", 1, resultSetInt);
        }
    }

    @Test
    public void testExplicitInitScript() throws SQLException {
        try (PostgreSQLContainer postgres = new PostgreSQLContainer<>()
                .withInitScript("somepath/init_postgresql.sql")) {
            postgres.start();

            ResultSet resultSet = performQuery(postgres, "SELECT foo FROM bar");

            String firstColumnValue = resultSet.getString(1);
            assertEquals("Value from init script should equal real value", "hello world", firstColumnValue);
        }
    }

    private ResultSet performQuery(JdbcDatabaseContainer container, String sql) throws SQLException {
        HikariConfig hikariConfig = new HikariConfig();
        hikariConfig.setJdbcUrl(container.getJdbcUrl());
        hikariConfig.setUsername(container.getUsername());
        hikariConfig.setPassword(container.getPassword());

        HikariDataSource ds = new HikariDataSource(hikariConfig);
        Statement statement = ds.getConnection().createStatement();
        statement.execute(sql);
        ResultSet resultSet = statement.getResultSet();
        resultSet.next();
        return resultSet;
    }
}

import org.junit.Rule;
import org.junit.Test;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.testcontainers.containers.BrowserWebDriverContainer;

import java.io.File;

import static org.rnorth.visibleassertions.VisibleAssertions.assertTrue;
import static org.testcontainers.containers.BrowserWebDriverContainer.VncRecordingMode.RECORD_ALL;

/**
 * Simple example of plain Selenium usage.
 */
public class SeleniumContainerTest {

    @Rule
    public BrowserWebDriverContainer chrome = new BrowserWebDriverContainer()
                                                    .withDesiredCapabilities(DesiredCapabilities.chrome())
                                                    .withRecordingMode(RECORD_ALL, new File("target"));

    @Test
    public void simplePlainSeleniumTest() {
        RemoteWebDriver driver = chrome.getWebDriver();

        driver.get("https://wikipedia.org");
        WebElement searchInput = driver.findElementByName("search");

        searchInput.sendKeys("Rick Astley");
        searchInput.submit();

        WebElement otherPage = driver.findElementByLinkText("Rickrolling");
        otherPage.click();

        boolean expectedTextFound = driver.findElementsByCssSelector("p")
                .stream()
                .anyMatch(element -> element.getText().contains("meme"));

        assertTrue("The word 'meme' is found on a page about rickrolling", expectedTextFound);
    }
}

import com.mycompany.cache.Cache;
import com.mycompany.cache.RedisBackedCache;
import org.junit.Before;
import org.junit.Rule;
import org.junit.Test;
import org.testcontainers.containers.GenericContainer;
import redis.clients.jedis.Jedis;

import java.util.Optional;

import static org.rnorth.visibleassertions.VisibleAssertions.*;

/**
 * Integration test for Redis-backed cache implementation.
 */
public class RedisBackedCacheTest {

    @Rule
    public GenericContainer redis = new GenericContainer("redis:3.0.6")
                                            .withExposedPorts(6379);
    private Cache cache;

    @Before
    public void setUp() throws Exception {
        Jedis jedis = new Jedis(redis.getContainerIpAddress(), redis.getMappedPort(6379));

        cache = new RedisBackedCache(jedis, "test");
    }

    @Test
    public void testFindingAnInsertedValue() {
        cache.put("foo", "FOO");
        Optional<String> foundObject = cache.get("foo", String.class);

        assertTrue("When an object in the cache is retrieved, it can be found",
                        foundObject.isPresent());
        assertEquals("When we put a String in to the cache and retrieve it, the value is the same",
                        "FOO",
                        foundObject.get());
    }

    @Test
    public void testNotFindingAValueThatWasNotInserted() {
        Optional<String> foundObject = cache.get("bar", String.class);

        assertFalse("When an object that's not in the cache is retrieved, nothing is found",
                foundObject.isPresent());
    }
}

Running Spring Boot Apps

on Docker Windows Containers with Ansible

Docker Engine


A Client-Server Application

A Docker Daemon runs on a Docker Host and manages Docker objects (images, containers etc.).

# "Hello World" run in an Alpine container
docker run -it --name my-first-container alpine:3.4 \
    /bin/echo "Hello World"

docker Client

# help
docker
docker --help
docker COMMAND --help

# version and info
docker version
docker info
docker inspect my-first-container

Tips

on Docker Commands

# search for images
docker search $IMAGE_NAME
docker search --filter stars=3 busybox
docker search --filter is-automated busybox
docker search --filter is-official busybox
docker search --filter "is-official=true" --filter "stars=3" --no-trunc busybox

# pull or push an image
docker pull $IMAGE_NAME[:$TAG]
docker pull $IMAGE_NAME@$DIGEST
docker push $IMAGE_NAME:$TAG
docker push $REPO/$IMAGE_NAME:$TAG
docker push $REGISTRY:$PORT/$REPO/$IMAGE_NAME:$TAG

# login and logout to Docker Hub or another registry
docker login [$REGISTRY_HOSTNAME]
docker login -u $USERNAME -p $PASSWORD -e $EMAIL
docker logout [$REGISTRY_HOSTNAME]

# list images
docker images
docker images --no-trunc
docker images --digests
docker images --filter "dangling=true"
docker images --filter "before=image1"
docker images --filter "since=image3"
docker images --format "table {{.ID}}\t{{.Repository}}\t{{.Tag}}"

# build an image from a Dockerfile
docker build -t $IMAGE_NAME:$TAG $DIR
docker build -t $IMAGE_NAME:$TAG -f $DOCKERFILE $DIR
docker build -t $IMAGE_NAME:$TAG1 -t $IMAGE_NAME:$TAG2 .

# tag images
docker tag $ID          $REPO/$NAME:$TAG
docker tag $NAME        $REPO/$NAME:$TAG
docker tag $NAME:$TAG   $REPO/$NAME:$TAG
docker tag $ID          $REGISTRY_HOSTNAME:$PORT/$REPO/$NAME:$TAG

# rmi = remove images
docker rmi $ID
docker rmi $NAME:$TAG
docker rmi $(docker images -f "dangling=true" -q)

# history of an image
docker history $IMAGE:$TAG

# export filesystem as tar (without volumes)
docker export CONTAINER > latest.tar
docker export --output="latest.tar" CONTAINER

# import filesystem from tar (without volumes)
docker import http://example.com/exampleimage.tgz
docker import /path/to/exampleimage.tgz
cat exampleimage.tgz | docker import --message "New image imported from tarball" \
  - exampleimagelocal:new

# commit = create a new image from container changes
docker commit $CONTAINER_ID $REPO/$IMAGE:$TAG
docker commit --change "ENV key=value" $ID $REPO/$IMAGE:$TAG
docker inspect -f "{{ .Config.Env }}" $ID    # to check
docker commit --change='CMD ["apachectl", "-DFOREGROUND"]' \
    -c "EXPOSE 80" $ID $REPO/$IMAGE:$TAG

# save / load an image as/from tar
docker save busybox > busybox.tar
docker save --output ubuntu.tar ubuntu:lucid ubuntu:saucy
docker load < busybox.tar
docker load --input ubuntu.tar

# run a container (= pull/create/start)
docker run --rm \
    --env-file "$PATH_TO_ENV_FILE" \
    -e "key=value" \
    -p $HOST_HTTP_PORT:$DOCKER_HTTP_PORT \
    -p $HOST_TCP_PORT:$DOCKER_TCP_PORT \
    -v $HOST_DATA_DIR:$DOCKER_DATA_VOL \
    -v $HOST_CONFIG_DIR:$DOCKER_CONFIG_VOL \
    --name $CONTAINER_NAME \
    $IMAGE_NAME:$TAG [$COMMAND]

# create a container from an image (options are the same as for docker run)
docker create --name $CONTAINER_NAME $IMAGE_NAME:$TAG

# start and stop a container
docker stop $ID
docker stop $CONTAINER_NAME
docker start $CONTAINER_NAME
docker start -a -i $ID
    # -a attach STDOUT/STDERR
    # -i attach STDIN

# ps = show containers
docker ps
docker ps -a
docker ps --filter status=paused
docker ps --filter "label=color=blue"
docker ps --filter "name=ubun"
docker ps --filter ancestor=ubuntu
docker ps --filter volume=/data --format "table {{.ID}}\t{{.Mounts}}"
docker ps --filter network=net1

# top = display processes in a container
docker top CONTAINER [ps OPTIONS]

# kill one or more containers
docker kill CONTAINER [CONTAINER...]
docker kill -s SIGTERM CONTAINER

# rm = remove containers
docker rm $ID
docker rm /$LINK
docker rm $(docker ps -a -q)
docker rm -v redis

# logs of a running container
docker logs CONTAINER
docker logs --follow CONTAINER
docker logs --tail 10 CONTAINER
docker logs --since 1h30m CONTAINER

# exec = run a command in a running container
docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
docker exec -it CONTAINER bash
docker exec -d CONTAINER touch /tmp/file

# update config of a running container
docker update --cpu-shares 512 -m 300M CONTAINER1 CONTAINER2

# attach to a running container shell (PID 1)
docker attach CONTAINER             # CTRL-c for SIGKILL or CTRL-p CTRL-q to leave
docker attach --no-stdin CONTAINER

# list port mappings of a running container
docker port CONTAINER

Dockerfile


Instructions

FROM <IMAGE_NAME>:<IMAGE_TAG>

MAINTAINER <NAME> <SURNAME> "name.surname@company.com"

LABEL <KEY>=<VALUE>

ARG <NAME>[=<DEFAULT VALUE>]

ENV <VAR>=<VALUE> \
    <VAR>=<VALUE>

RUN <SHELL_COMMAND> \
    <SHELL_COMMAND>

WORKDIR <PATH>

USER <NAME>

ADD ["<DIR>", "<FILE>", "<URL>", "<TAR>", "<CONTAINER_DEST_DIR>"]

COPY <HOST_DIR/FILE> <CONTAINER_DEST_DIR>  # supports * and ? wildcards; relative/absolute dest path

VOLUME ["<MOUNT_PATH_DIR>", "<MOUNT_PATH_DIR>"]

EXPOSE <PORT>

ONBUILD [INSTRUCTION]

ENTRYPOINT ["<PROGRAM>"]

CMD ["<PARAM1>", "<PARAM2>"]

FROM java:8

VOLUME /tmp

ADD spring-boot-docker-0.1.0.jar app.jar

RUN bash -c 'touch /app.jar'

ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
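
A hedged build-and-run sketch for the Dockerfile above; the image name is made up, and port 8080 assumes the Spring Boot default:

# Build the image from the directory containing the Dockerfile and jar,
# then run it with the app's HTTP port mapped to the host.
docker build -t myrepo/spring-boot-docker:0.1.0 .
docker run -p 8080:8080 --name spring-boot-app myrepo/spring-boot-docker:0.1.0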

Other Dockerfiles within a CI/CD Pipeline Context

- Case Study -

FROM logstash:2.1.0

# Install packages

RUN apt-get update && apt-get install -y \
    curl build-essential \
    python-setuptools && \
    easy_install Jinja2 && \
    apt-get -y clean && \
    rm -rf /var/lib/apt/lists/*

ENV JINJA_SCRIPT="render_jinja_template.py"
COPY util/${JINJA_SCRIPT} /
RUN chown logstash:logstash /${JINJA_SCRIPT} && \
    chmod +x /${JINJA_SCRIPT}

# Configure Logstash

ENV LS_CONFIG_VOL="/usr/share/logstash/config" \
    LS_LOG_VOL="/usr/share/logstash/log"

RUN mkdir -p ${LS_CONFIG_VOL} ${LS_LOG_VOL}
COPY config/ ${LS_CONFIG_VOL}/
RUN chown -R logstash:logstash ${LS_CONFIG_VOL} ${LS_LOG_VOL}
VOLUME ["${LS_CONFIG_VOL}", "${LS_LOG_VOL}"]

RUN rm /docker-entrypoint.sh
COPY docker-entrypoint.sh /
RUN chown logstash:logstash /docker-entrypoint.sh && \
    chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]

CMD ["logstash", "agent"]

docker-entrypoint.sh

#!/bin/bash

# Exit immediately if any command fails.
set -e

# Add logstash as command if needed
if [[ "${1:0:1}" = '-' ]]; then
    set -- logstash "$@"
fi

# If running logstash
if [[ "$1" = 'logstash' ]]; then

    # Change the ownership of the mounted volumes to user logstash at docker container runtime
    chown -R logstash:logstash ${LS_CONFIG_VOL} ${LS_LOG_VOL}

    # Get LS_ENV
    LS_ENV_PATH=$( find "${LS_CONFIG_VOL}" -maxdepth 3 -iname "${LS_ENV}" )

    # Get LS_CONF
    {
        # Render logstash conf if exists as jinja template
        printf "\n%s\n" "=== Render logstash conf [${LS_CONF}.j2] in [${LS_CONFIG_VOL}] with [${LS_ENV}] ==="
        LS_CONF_JINJA=$( find "${LS_CONFIG_VOL}" -maxdepth 3 -iname "${LS_CONF}.j2" )
        [[ ${LS_CONF_JINJA} ]] && python ${JINJA_SCRIPT} --verbose -f "${LS_ENV_PATH}" -e "LS_CONFIG_VOL" "LS_LOG_VOL" -t "${LS_CONF_JINJA}"
    }

    # Get ES_TEMPLATE if pushed
    [[ "${LS_OUTPUT}" =~ ^.*(elasticsearch)|(template).*$ ]] && {

        # Render elasticsearch template if exists as jinja template
        printf "\n\n%s\n" "=== Render elasticsearch template [${ES_TEMPLATE}.j2] in [${LS_CONFIG_VOL}] with [${LS_ENV}] ==="
        ES_TEMPLATE_JINJA=$( find "${LS_CONFIG_VOL}" -maxdepth 3 -iname "${ES_TEMPLATE}.j2" )
        [[ ${ES_TEMPLATE_JINJA} ]] && python ${JINJA_SCRIPT} --verbose -f "${LS_ENV_PATH}" -t "${ES_TEMPLATE_JINJA}"

    }

    # Test connection to elasticsearch hosts
    [[ "${LS_OUTPUT}" =~ ^.*(elasticsearch)|(document)|(template).*$ ]] && {

        # Print info
        printf "\n\n%s\n" "=== Test connection to elasticsearch hosts ${ES_HOSTS} ==="

        # Get log settings
        [[ "${LS_OUTPUT}" =~ ^.*(log)|(info)|(error).*$ ]] && LOG=true || LOG=false

        # Set ES_HOSTS as hosts array
        declare -a HOSTS=$( echo ${ES_HOSTS} | tr '[]' '()' | tr ',' ' ' )

        # Test each host
        NOT_AVAILABLE=false
        for host in "${HOSTS[@]}";
        do
            ES_CLUSTER=$( curl --silent --retry 5 -u ${ES_USER}:${ES_PASSWORD} -XGET "${host}?pretty" )
            if [[ $(echo "$ES_CLUSTER" | grep "name" | wc -l) -gt 0 ]]; then
                printf "\n%s\n\n" "--- Following elasticsearch host reached: ${host}"; echo "$ES_CLUSTER";
            elif [[ $( echo "$ES_CLUSTER" | grep "OK" | wc -l ) -eq 1 ]]; then
                echo "ERROR: Following elasticsearch host requires correct auth credentials: ${host}" | ( $LOG && gosu logstash tee -a "${LS_LOG_VOL}/${LS_ERROR}" )
                NOT_AVAILABLE=true
            else
                echo "ERROR: Following elasticsearch host is currently not available: ${host}" | ( $LOG && gosu logstash tee -a "${LS_LOG_VOL}/${LS_ERROR}" )
                NOT_AVAILABLE=true
            fi
        done

        # Exit if any host cannot be reached
        $NOT_AVAILABLE && { echo "ERROR: Aborting start of logstash agent with logstash conf [${LS_CONF}]" | ( $LOG &&  gosu logstash tee -a "${LS_LOG_VOL}/${LS_ERROR}" ); exit 1; }
    }


    # Start logstash agent
    printf "\n\n%s\n\n" "=== Start logstash agent with logstash conf [${LS_CONF}] ==="
    set -- gosu logstash "$@"

fi # running logstash

# As argument is not related to logstash,
# then assume that user wants to run his own process,
# for example a `bash` shell to explore this image
exec "$@"

start.sh

#!/bin/bash

# Exit immediately if any command fails.
set -e

############
# logstash #
############

HOST_LOG_DIR="${LS_LOG}";
HOST_CONFIG_DIR="${LS_CONFIG}";

###################
# logstash config #
###################

# Set defaults.
export LS_ENV="env.list";
export LS_CONF="logstash-demo.conf";
export LS_PATTERN="*demo*.json";
export LS_INFO="logstash-info.json";
export LS_ERROR="logstash-error.json";
export ES_TEMPLATE="template-demo.json";
export ES_DOCUMENT_TYPE=${VERSION};
export ES_INDEX_ALIAS="log-demo";

#################
# elasticsearch #
#################

# The elasticsearch configuration.
[[ "${LS_OUTPUT}" =~ ^.*(elasticsearch)|(document)|(template).*$ ]] && {
  [[ -z "${ES_HOSTS}" ]] && { echo "ERROR: ES_HOSTS is not set."; exit 1; }
  [[ -z "${ES_USER}" ]] && { echo -e "\nINFO: ES_USER is optional and not set. Check if hosts ${ES_HOSTS} use auth."; }
  [[ -z "${ES_PASSWORD}" ]] && { echo -e "\nINFO: ES_PASSWORD is optional and not set. Check if hosts ${ES_HOSTS} use auth."; }
  [[ -z "${ES_TEMPLATE}" && "${LS_OUTPUT}" =~ ^.*(elasticsearch)|(template).*$ ]] && { echo "ERROR: ES_TEMPLATE is not set."; exit 1; }
  [[ -z "${ES_INDEX_ALIAS}" && "${LS_OUTPUT}" =~ ^.*(elasticsearch)|(template).*$ ]] && { echo -e "\nINFO: ES_INDEX_ALIAS is optional and not set. Check if template [${ES_TEMPLATE}] needs an alias."; }
  [[ -z "${ES_INDEX}" ]] && { echo "ERROR: ES_INDEX is not set."; exit 1; }
  [[ -z "${ES_DOCUMENT_TYPE}" && "${LS_OUTPUT}" != "template" ]] && { echo "ERROR: ES_DOCUMENT_TYPE is not set."; exit 1; }
}

##########
# docker #
##########

# The docker configuration has defaults.
DOCKER_LOG_VOL="/usr/share/logstash/log" # do not change - derived from dockerfile LS_LOG_VOL!
DOCKER_CONFIG_VOL="/usr/share/logstash/config" # do not change - derived from dockerfile LS_CONFIG_VOL!
DOCKER_IMAGE_NAME="dataduke/logstash-demo"; [[ -n "${LS_DOCKER_REPO}" ]] && DOCKER_IMAGE_NAME="${LS_DOCKER_REPO}"
DOCKER_IMAGE_TAG="latest"; [[ -n "${LS_DOCKER_TAG}" ]] && DOCKER_IMAGE_TAG="${LS_DOCKER_TAG}"
DOCKER_CONTAINER_NAME="logstash-indexer"; [[ -n "${LS_DOCKER_CONTAINER}" ]] && DOCKER_CONTAINER_NAME="${LS_DOCKER_CONTAINER}"

# Special flags for docker run are always used and can only be omitted by actively disabling them.
DOCKER_RUN_REMOVE=''; [[ $LS_DOCKER_REMOVE == true ]] && DOCKER_RUN_REMOVE='-it --rm'
DOCKER_RUN_DETACH='--detach=true'; [[ "${LS_OUTPUT}" =~ ^.*(verbose)|(console).*$ ]] || [[ $LS_DOCKER_DETACH == false ]] && DOCKER_RUN_DETACH='--detach=false'
DOCKER_RUN_NETWORK='--net="host"'; [[ $LS_DOCKER_NETWORK == false ]] && DOCKER_RUN_NETWORK=''

# Print info.
printf "\n%s\n\n" "=== Start docker container [${DOCKER_CONTAINER_NAME}] from image [${DOCKER_IMAGE_NAME}:${DOCKER_IMAGE_TAG}] ==="
printf "%s\n" "Process logs with pattern:          ${LS_PATTERN}"
printf "%s\n" "Mount log directory:                ${LS_LOG}"
printf "%s\n" "Mount config directory:             ${LS_CONFIG}"
printf "%s\n" "Set logstash input types:           ${LS_INPUT}"
printf "%s\n" "Set logstash output types:          ${LS_OUTPUT}"
printf "%s\n" "Use logstash env file:              ${LS_ENV}"
printf "%s\n" "Use logstash conf file:             ${LS_CONF}"
printf "%s\n" "Use info log file:                  ${LS_INFO}"
printf "%s\n" "Use error log file:                 ${LS_ERROR}"
printf "%s\n" "Use elasticsearch template file:    ${ES_TEMPLATE}"
printf "%s\n" "Set elasticsearch hosts:            ${ES_HOSTS}"
printf "%s\n" "Set elasticsearch index alias:      ${ES_INDEX_ALIAS}"
printf "%s\n" "Set elasticsearch index:            ${ES_INDEX}"
printf "%s\n" "Set elasticsearch document type:    ${ES_DOCUMENT_TYPE}"
printf "\n%s\n\n" "--- Start configuration is applied."

# Run docker container.
docker run ${DOCKER_RUN_REMOVE} ${DOCKER_RUN_DETACH} ${DOCKER_RUN_NETWORK} \
  --env-file "${LS_CONFIG}/${LS_ENV}" \
  -v ${HOST_LOG_DIR}:${DOCKER_LOG_VOL} \
  -v ${HOST_CONFIG_DIR}:${DOCKER_CONFIG_VOL} \
  --name ${DOCKER_CONTAINER_NAME} \
  ${DOCKER_IMAGE_NAME}:${DOCKER_IMAGE_TAG} \
  logstash -f "${DOCKER_CONFIG_VOL}/${LS_CONF}"

exit $?

circle.yml

machine:
  pre:
    # Configure elasticsearch circle service.
    - sudo cp -v "/home/ubuntu/logstash-demo/test/service-elasticsearch.yml" "/etc/elasticsearch/elasticsearch.yml"; cat $_
  hosts:
    elasticsearch.circleci.com: 127.0.0.1
  services:
    - elasticsearch
    - docker
  environment:
    # Circle run tests with parallelism.
    CIRCLE_PARALLEL: true
    # Tests use dedicated docker containers, log directories and elasticsearch indexes.
    TEST_SAMPLE: "logstash-test-process-sample"
    TEST_PRODUCTION: "logstash-test-deploy-production"
    # Tests use same demo log file.
    TEST_LOG: "log-demo.json"
    # Docker run options are set to detach to background and share network addresses from host to container.
    LS_DOCKER_REMOVE: false
    LS_DOCKER_DETACH: true
    LS_DOCKER_NETWORK: true
    # Docker build image.
    LS_DOCKER_REPO: "dataduke/logstash-demo"
    LS_DOCKER_TAG: ${CIRCLE_BRANCH//\//-}
    # Logstash input and output types, info and error log files.
    LS_INPUT: "log"
    LS_OUTPUT: "log,elasticsearch"
    LS_INFO: "logstash-info.json"
    LS_ERROR: "logstash-error.json"
    # Logstash event metadata used for ES_DOCUMENT_TYPE.
    VERSION: "1.23"
    # Elasticsearch is needed for integration test.
    ES_TEMPLATE: "template-demo.json"
    ES_INDEX_ALIAS: "log-demo"
    ES_HOSTS: "[ 'elasticsearch.circleci.com:9200' ]"
    ES_CONF: "/etc/elasticsearch/elasticsearch.yml"
    ES_LOG: "/var/log/elasticsearch/elasticsearch.log"
    # Git merge script is needed for auto-merging dev to master branch.
    MERGE_SCRIPT: merge.sh
    GIT_UPSTREAM_URL: "git@github.com:dataduke/logstash-demo.git"
    GIT_UPSTREAM_BRANCH_MASTER: "master"

dependencies:
  cache_directories:
    - "~/docker"
  override:
    # Docker environment used.
    - docker info
    # Load cached images, if available.
    - if [[ -e ~/docker/image.tar ]]; then docker load --input ~/docker/image.tar; fi
    # Build our image.
    - ./build.sh
    # Save built image into cache.
    - mkdir -v -p ~/docker; docker save "${LS_DOCKER_REPO}:${LS_DOCKER_TAG}" > ~/docker/image.tar
    # Make sure circle project parallelism is set to at least 2 nodes.
    - |
        if [[ "${CIRCLE_NODE_TOTAL}" -eq "1" ]]; then {
          echo "Parallelism [${CIRCLE_NODE_TOTAL}x] needs to be 2x to fasten execution time."
          echo "You also need to set our circle env CIRCLE_PARALLEL [${CIRCLE_PARALLEL}] to true."
        }; fi


test:
  override:
    - ? >
        case $CIRCLE_NODE_INDEX in
        0)
          printf "\n%s\n" "+++ Begin test of docker container [${TEST_SAMPLE}] +++"
          printf "\n%s\n\n" "=== Prepare test and setup config and log dirs on host ==="
          export LS_DOCKER_CONTAINER="${TEST_SAMPLE}"
          export LS_LOG="/tmp/${TEST_SAMPLE}/log"
          export LS_CONFIG="/tmp/${TEST_SAMPLE}/config"
          export ES_INDEX="${TEST_SAMPLE}"
          mkdir -v -p ${LS_LOG} ${LS_CONFIG}
          cp -v -r config/* ${LS_CONFIG}/
          cp -v test/${TEST_LOG} ${LS_LOG}/
          printf "\n%s\n" "--- Prepare test completed."
          # Fire up the container
          ./start.sh; [[ $? -eq 1 ]] && exit 1
          # Sleep is currently needed as file input is handled as a data stream
          # see: https://github.com/logstash-plugins/logstash-input-file/issues/52
          sleep 50;
          # Stop the container.
          ./stop.sh; [[ $? -eq 1 ]] && exit 1
          # Test metrics from files including input, output and errors.
          ./test/test-metrics-from-files.sh; [[ $? -eq 1 ]] && exit 1
          # Test metrics from elasticsearch including input, template and documents.
          ./test/test-metrics-from-elasticsearch.sh; [[ $? -eq 1 ]] && exit 1
          printf "\n%s\n" "+++ End test of docker container [${TEST_SAMPLE}] +++"
          # Exit case statement if run in parallel else proceed to next case.
          $CIRCLE_PARALLEL && exit 0
          ;&
        1)
          printf "\n%s\n" "+++ Begin test of [${TEST_PRODUCTION}] +++"
          printf "\n%s\n\n" "=== Prepare test and setup config and log dirs on host ==="
          export LS_DOCKER_CONTAINER="${TEST_PRODUCTION}"
          export LS_LOG="/tmp/${TEST_PRODUCTION}/log"
          export LS_CONFIG="/tmp/${TEST_PRODUCTION}/config"
          export ES_INDEX="${TEST_PRODUCTION}"
          mkdir -v -p ${LS_LOG} ${LS_CONFIG}
          cp -v -r config/* ${LS_CONFIG}/
          cp -v test/${TEST_LOG} ${LS_LOG}/
          printf "\n%s\n" "--- Prepare test completed."
          # Run the full deploy script as used in jenkins.
          ./deploy.sh; [[ $? -eq 1 ]] && exit 1
          # Test metrics from files including input, output and errors.
          ./test/test-metrics-from-files.sh; [[ $? -eq 1 ]] && exit 1
          # Test metrics from elasticsearch including input, template and documents.
          ./test/test-metrics-from-elasticsearch.sh; [[ $? -eq 1 ]] && exit 1
          printf "\n%s\n" "+++ End test of [${TEST_PRODUCTION}] +++"
          # Exit case statement if run in parallel else proceed to next case.
          $CIRCLE_PARALLEL && exit 0
          ;&
        esac
      : parallel: true
  post:
    - ? >
        case $CIRCLE_NODE_INDEX in
        0)
          printf "\n%s\n\n" "=== Archive artifacts of [${TEST_SAMPLE}] ==="
          sudo mv -v -f "/tmp/${TEST_SAMPLE}" "${CIRCLE_ARTIFACTS}/"
          mkdir -v -p "${CIRCLE_ARTIFACTS}/${TEST_SAMPLE}/services"
          sudo cp -v "${ES_CONF}" "${ES_LOG}" $_
          # Exit case statement if run in parallel else proceed to next case.
          $CIRCLE_PARALLEL && exit 0
          ;&
        1)
          printf "\n%s\n\n" "=== Archive artifacts of [${TEST_PRODUCTION}] ==="
          sudo mv -v -f "/tmp/${TEST_PRODUCTION}" "${CIRCLE_ARTIFACTS}/"
          mkdir -v -p "${CIRCLE_ARTIFACTS}/${TEST_PRODUCTION}/services"
          sudo cp -v "${ES_CONF}" "${ES_LOG}" $_
          # Exit case statement if run in parallel else proceed to next case.
          $CIRCLE_PARALLEL && exit 0
          ;&
        esac
      : parallel: true


deployment:
  dev_actions:
    branch: dev
    commands:
      # Push image to Docker Hub.
      - docker login -u "${DOCKER_LOGIN_USERNAME}" -p "${DOCKER_LOGIN_PASSWORD}" -e "${DOCKER_LOGIN_EMAIL}"
      - docker push "${LS_DOCKER_REPO}:${LS_DOCKER_TAG}"
      # Merge tested commit into master.
      - ./util/${MERGE_SCRIPT} -c "${CIRCLE_SHA1}" -e "${CIRCLE_BRANCH}" -t "${GIT_UPSTREAM_BRANCH_MASTER}" -r "${GIT_UPSTREAM_URL}"
  master_actions:
    branch: master
    commands:
      # Push image to Docker Hub.
      - docker login -u "${DOCKER_LOGIN_USERNAME}" -p "${DOCKER_LOGIN_PASSWORD}" -e "${DOCKER_LOGIN_EMAIL}"
      - docker push "${LS_DOCKER_REPO}:${LS_DOCKER_TAG}"
      - docker tag "${LS_DOCKER_REPO}:${LS_DOCKER_TAG}" "${LS_DOCKER_REPO}:latest"
      - docker push "${LS_DOCKER_REPO}:latest"

general:
  artifacts:
    - "${CIRCLE_ARTIFACTS}/${LS_DOCKER_TEST_SAMPLE}"
    - "${CIRCLE_ARTIFACTS}/${LS_DOCKER_TEST_PRODUCTION}"

24 Tips

for Writing Dockerfiles

1. Containers should be ephemeral

2. Use a .dockerignore file (see the example after this list)

3. Use small base images

4. Use tags

5. Group common operations

6. Avoid installing unnecessary packages

7. Run only one process per container

8. Minimize the number of layers

9. Sort multi-line arguments and indent 4 spaces:

RUN apt-get update && apt-get install --yes \
    cvs \
    git \
    mercurial \
    subversion
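
A possible .dockerignore for tip 2; the entries are placeholders for whatever your own build context doesn't need:

# .dockerignore: exclude files the image build never needs; this keeps
# context uploads small and COPY/ADD cache checksums stable.
.git
build/
*.log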

Best Practices

Dockerfile: Guidelines


10. Build Cache

  • CACHING: Use whenever possible. Saves time.
  • DISABLE: docker build --no-cache=true -t NAME:TAG .

  • CHECKSUMS: For ADD and COPY, the contents of the file(s) in the image are examined and a checksum is calculated for each file. During the cache lookup, the checksum is compared against the checksums in the existing images. The cache is invalidated if anything has changed (besides file access dates)!
  • NO CACHE LOOKUP: All other commands are not evaluated on a file level to determine a cache hit. Just the command string itself is used to find a match, even when the command updates files in the container, e.g. RUN apt-get -y update.
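
A quick way to observe these rules (the image name is a placeholder; assumes a Dockerfile that COPYs requirements.txt, as in the COPY example later on):

# First build populates the cache; an identical re-run hits it on every step.
docker build -t demo:cache-test .
docker build -t demo:cache-test .          # prints "Using cache" per step
# Changing a COPY'd file invalidates that step and every step after it.
echo "# tweak" >> requirements.txt
docker build -t demo:cache-test .
# Bypass the cache entirely:
docker build --no-cache=true -t demo:cache-test .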

Best Practices

Dockerfile: Instructions

1. FROM: use current official repositories,

    e.g. Debian is tightly controlled and kept minimal: ~150 MB.

2. RUN: split long or complex RUN statements onto multiple lines separated by backslashes:

RUN command-1 \
    command-2 \
    command-3

3. Avoid RUN apt-get upgrade or dist-upgrade,

     as many of the “essential” packages from the base images

     won’t upgrade inside an unprivileged container.

Best Practices

Dockerfile: Instructions

4. RUN apt-get update

  • CACHE BUSTING: Always combine RUN apt-get update && apt-get install -y .... Using apt-get update alone in a RUN statement causes caching issues, and subsequent apt-get install instructions can fail:

FROM ubuntu:14.04
RUN apt-get update
RUN apt-get install -y curl

FROM ubuntu:14.04
RUN apt-get update
RUN apt-get install -y curl nginx

  • VERSION PINNING: forces the build to retrieve a particular version regardless of what’s in the cache. New versions cause a cache bust of apt-get update:

RUN apt-get update && apt-get install -y \
    package-foo=1.3.* \
    s3cmd=1.1.*

Best Practices

Dockerfile: Instructions

5. CMD

  • always use this format:

CMD ["executable", "param1", "param2"…]
CMD ["apache2","-DFOREGROUND"]
CMD ["perl", "-de0"]
CMD ["python"]
CMD ["php", "-a"]

  • only rarely use CMD ["param", "param"] in conjunction with ENTRYPOINT, unless you and your users are familiar with ENTRYPOINT

Best Practices

Dockerfile: Instructions

6. EXPOSE

  • use the common, traditional port for your application, e.g.

EXPOSE 80     # Apache
EXPOSE 27017  # MongoDB

  • For container linking, Docker provides environment variables for the path from the recipient container back to the source (i.e. MYSQL_PORT_3306_TCP)
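
A small illustration of those link variables (legacy --link on the default bridge network; container names and images here are made up):

# Link a source container into a recipient: the recipient's environment
# then contains MYSQL_PORT_3306_TCP-style variables pointing back to it.
docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret mysql:5.7
docker run --rm --link db:mysql alpine:3.4 env | grep MYSQL_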

Best Practices

Dockerfile: Instructions

7. ENV

  • Update PATH to ensure commands work:

ENV PATH /usr/local/nginx/bin:$PATH
CMD ["nginx"]

  • Provide needed env vars for services, e.g. Postgres PGDATA.
  • Use for version numbers and paths (like constants):

ENV PG_MAJOR 9.3
ENV PG_VERSION 9.3.4
RUN curl -SL http://example.com/postgres-$PG_VERSION.tar.xz \
    | tar -xJC /usr/src/postgres && …

Best Practices

Dockerfile: Instructions

8. COPY

  • Prefer COPY; it is more transparent than ADD.
  • COPY only supports the basic copying of local files into the container, while ADD has some features (like local-only tar extraction and remote URL support) that are not immediately obvious.

  • FEWER CACHE INVALIDATIONS: Use multiple individual COPY steps. This ensures that each step’s build cache is only invalidated (forcing the step to be re-run) if its files change.

COPY requirements.txt /tmp/
RUN pip install --requirement /tmp/requirements.txt
COPY . /tmp/

Best Practices

Dockerfile: Instructions

9. ADD

  • TAR AUTO-EXTRACTION:

ADD rootfs.tar.xz /

  • Because image size matters, using ADD to fetch packages from remote URLs is strongly discouraged; use curl or wget instead:

# Bad
ADD http://example.com/big.tar.xz /usr/src/things/
RUN tar -xJf /usr/src/things/big.tar.xz -C /usr/src/things
RUN make -C /usr/src/things all

# Good
RUN mkdir -p /usr/src/things \
    && curl -SL http://example.com/big.tar.xz \
    | tar -xJC /usr/src/things \
    && make -C /usr/src/things all

Best Practices

Dockerfile: Own experience

1. Use a fixed version for the base image

2. Prefer official base images

3. Use gosu for an easy step-down from root (see the sketch after this list)

4. Define your own entrypoint if needed

5. Write integration tests, e.g. with CircleCI
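
A minimal sketch of the gosu step-down from tip 3, assuming an appuser account exists in the image (the docker-entrypoint.sh above applies the same pattern with the logstash user):

#!/bin/bash
set -e

# Do root-only setup first, e.g. fix ownership of mounted volumes ...
chown -R appuser:appuser /data

# ... then drop privileges and hand PID 1 over to the real process.
exec gosu appuser "$@"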

 

More Tips: [1] [2] [3] [4]

Some Tips

on Docker Slogans

"Battery's included, but replaceable!"

> Docker as modular Framework with given parts that can be changed or extendend

> Many Plugins!
 

Microservices Pipeline

By Benjamin Nothdurft
