Running Azure DevOps container agents on OpenShift

Intro

Although many companies are moving to the cloud, on-premises data centers remain important for a lot of them, either as an isolated, secure environment for data and workloads that must not leave the intranet, or as part of a hybrid solution (for example, data on premises, processing in the cloud).

This does not prevent modern software development; it simply means that in most cases new software is written for containers. Data centers run a container management layer like Kubernetes or OpenShift, which enables not just running containers but also maintaining the entire virtual infrastructure, including load balancing and network provisioning.

For in-house development, this may also mean you will want to run your own Azure DevOps pipeline agents. On Kubernetes this is simple, but on OpenShift there are a couple of things you need to take into account.

Disclaimer: I am not in any way, shape or form an OpenShift specialist. The following observations came about because I needed a build agent on an OpenShift (4) cluster, and these were the issues I had to resolve. They are quite possibly not the best solutions, so feel free to contribute to the GitHub repo.

User permissions

Unlike plain Docker, OpenShift does not run images as root. This causes a couple of issues, because some of the tools installed with an Azure DevOps agent assume there is a home directory with some configuration, while OpenShift normally assigns a random, unprivileged user ID.

This is where the uid_entrypoint script comes in. When the container starts, it checks who the current user is and provides a profile by adding that user to /etc/passwd.

#!/bin/sh
# If the current (random) UID has no passwd entry, create one so tools
# that need a resolvable user and home directory keep working.
if ! whoami > /dev/null 2>&1; then
  if [ -w /etc/passwd ]; then
    echo "${USER_NAME:-default}:x:$(id -u):0:${USER_NAME:-default} user:${HOME}:/sbin/nologin" >> /etc/passwd
  fi
fi
exec "$@"
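For this script to work, the image has to make /etc/passwd group-writable and register the script as the entrypoint. A minimal Dockerfile sketch of that wiring, assuming the script is copied to /uid_entrypoint and the agent user is called agent (both names are assumptions, adjust to your image):

```dockerfile
ENV USER_NAME=agent \
    HOME=/home/agent

# Let an arbitrary UID (which OpenShift puts in group 0) append
# its own entry to /etc/passwd, and make the home dir writable.
RUN chmod g=u /etc/passwd && \
    chmod +x /uid_entrypoint && \
    mkdir -p /home/agent && \
    chgrp -R 0 /home/agent && \
    chmod -R g=u /home/agent

USER 1001
ENTRYPOINT ["/uid_entrypoint"]
CMD ["./start.sh"]
```

The chmod g=u trick copies the owner permissions to group 0, which is the standard way to support arbitrary user IDs on OpenShift.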

Run once

One of the advantages of running builds in a container, compared to on a server, is that an image can run once and then be cleaned up before the next request comes in. For this we need so-called ephemeral agents, which can be achieved by running AgentService.js with the --once option and doing some cleanup afterwards.
For this, change the start.sh to include the following:

### rest of the start.sh file ###
 
cleanup() {
  if [ -e config.sh ]; then
    print_header "Cleanup. Removing Azure Pipelines agent..."

    ./config.sh remove --unattended \
      --auth PAT \
      --token "$(cat "$AZP_TOKEN_FILE")"
  fi
}

# `exec` the node runtime so it's aware of TERM and INT signals
# AgentService.js understands how to handle agent self-update and restart
exec ./externals/node/bin/node ./bin/AgentService.js interactive --once & wait $!

# We expect the above process to exit when it runs once,
# so we now run a cleanup process to remove this agent
# from the pool
cleanup

Proxy and tooling

Because an empty Ubuntu container will not let you build much, a set of tools is deployed on the agent. An isolated agent like this usually also runs behind a corporate firewall, possibly with an internal certificate authority.

Proxy, root CA and Java

To handle this, first the proxy is set:
 
# If your company uses a proxy:
ENV http_proxy=http://company.proxy.url:8080/
ENV https_proxy=http://company.proxy.url:8080/
ENV no_proxy=.company.internal.net 
 
Next, the company root CA needs to be added. For this the ca-certificates package is needed, so an apt-get install is done first (and Java is installed along the way), then the certificates are added. Certificates can be converted to PEM if they are in a different format.
 
# If your company uses its own Root Certificate Authority
WORKDIR /certtemp
RUN for CERT in "http://company.pki.url/ROOTCA.crt" \
                "http://company.pki.url/ADDITIONAL-ROOTCA.crt" \
                "http://company.pki.url/ETC.crt"; \
        do curl ${CERT} --output /certtemp/$(basename ${CERT}); \
            openssl x509 -in /certtemp/$(basename ${CERT}) -inform DER -out /usr/local/share/ca-certificates/$(basename ${CERT}); \
        done
RUN update-ca-certificates

.NET Core

The different versions of the .NET Core SDK are added by registering the Microsoft package repository, then installing through the apt-get mechanism.

# Add Microsoft debian repo
RUN curl https://packages.microsoft.com/config/ubuntu/18.04/packages-microsoft-prod.deb --output packages-microsoft-prod.deb
RUN dpkg -i packages-microsoft-prod.deb

# Install PowerShell and the .NET Core 2.1, 3.1 and 5.0 SDKs
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
        powershell \
        apt-transport-https \
        dotnet-sdk-5.0 \
        dotnet-sdk-3.1 \
        dotnet-sdk-2.1

Node.js & TypeScript

Node.js basically follows the same pattern as .NET Core: a reference to the repository is added, then the packages are installed through apt-get.
 
# Add Node.js 14.x LTS
RUN curl -sL https://deb.nodesource.com/setup_lts.x | bash -

# Install nodejs, npm and typescript
RUN apt-get update \
&& apt-get install -y nodejs \
        node-typescript

Building and deploying to OpenShift

With the Docker image and scripts needed to run the agent container, now all that is needed is to create a pipeline that calls OpenShift and creates a build and deployment pod.
For this, a YAML pipeline can be used along with manifest files and pipeline variables. The manifest files tell OpenShift how to build and deploy the container; the YAML pipeline does the orchestration within an Azure DevOps agent.
The pipeline basically merges each manifest with the relevant variable values from the pipeline, then tells OpenShift to execute these, creating a build and deployment.
For access to OpenShift from the Azure DevOps pipeline agent, use an OpenShift service account access token.
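The merge-and-apply steps above can be sketched in shell. The variable names, placeholder tokens and manifest content below are assumptions for illustration; the actual manifests and variables live in the repository and pipeline:

```shell
#!/bin/sh
# Hypothetical pipeline variables, normally supplied by Azure DevOps.
AGENT_IMAGE="image-registry.openshift-image-registry.svc:5000/build/azp-agent:latest"
AZP_POOL="openshift-pool"

# Hypothetical manifest template with placeholder tokens.
cat > deployment.template.yaml <<'EOF'
# ... rest of the Deployment manifest ...
        image: __AGENT_IMAGE__
        env:
        - name: AZP_POOL
          value: __AZP_POOL__
EOF

# Merge the pipeline variable values into the manifest.
sed -e "s|__AGENT_IMAGE__|${AGENT_IMAGE}|" \
    -e "s|__AZP_POOL__|${AZP_POOL}|" \
    deployment.template.yaml > deployment.yaml

# Then tell OpenShift to execute it, authenticating with the
# service account token (commented out here, cluster-specific):
# oc login "$OPENSHIFT_URL" --token="$SA_TOKEN"
# oc apply -f deployment.yaml
```

Substituting with sed keeps the agent image free of extra tooling; the same result can be achieved with oc process and OpenShift templates.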

Example

A full example of the above can be found on GitHub: https://github.com/fgiele/openshift-azure-devops-agents
 
