We’ve discussed why continuous integration is important, what makes a good CI tool and we’ve learned how to use TeamCity to build, test and deploy a “dockerized” .NET Core Application.
As Jenkins is one of the most popular CI tools on the market with over a thousand plugins, in this article, we are going to set up a CI pipeline for the same .NET Core Application.
So, before we get right into it, here are a few highlights of what's to come.
We are going to use the Pipeline-as-Code plugin to create our Jenkins job. The cool part of using this plugin is that our entire Jenkins job configuration can be created, updated, and version-controlled along with the rest of our source code. Along with that, we will be using the magical powers of docker and docker-compose to set up our entire CI infrastructure out of thin air!
Exciting, right?
We recommend that you follow along and get your hands dirty, as it's much more fun!
Go ahead and fork the docker-series repo and switch over to the docker-series-continuous-integration-jenkins-end branch.
Here’s what we are going to learn this time:
- The High-Level Flowchart of CI Pipeline Using Jenkins
- Setting up CI Infrastructure-As-Code Using Docker
- Creating a Pipeline-As-A-Code Job in Jenkins
- Conclusion
NOTE: Unlike the other parts of this series, this part uses the .NET Core 2.0 SDK. Make sure to change the base images according to your needs.
Let’s dive right into it.
The High-Level Flowchart of CI Pipeline Using Jenkins
Let’s see what a “very” high-level view of our Continuous Integration (CI) Pipeline using Jenkins looks like:
The flow diagram is easy to understand; however, there are a few things to note:
- You may wonder where the code compilation and unit-test steps are. Well, the docker run step performs both the code compilation and the application publish!
- The dotted line marks the boundary of our CI tool, Jenkins, in this example.
- Having the docker push after a successful run of the "Integration Tests" ensures that only a tested application is promoted to the next region, which is another very important rule of Continuous Integration (CI).
Setting up CI Infrastructure-As-Code Using Docker
Let’s get into the fun stuff now: setting up our Jenkins infrastructure!
Although there are many things to love about the Jenkins Docker image, there is something that is… let’s say “a little inconvenient.”
Let’s see what that little thing is.
When we run the following docker command:
docker run -d -p 8080:8080 -p 50000:50000 jenkins/jenkins
Jenkins welcomes us with the “Setup Wizard” screen, which requires us to enter the initialAdminPassword located under the Jenkins home directory (defaulting to /var/jenkins_home/secrets/initialAdminPassword). The password is displayed in the start-up logs as well:
It’s highly recommended to follow these steps to secure the Jenkins server; however, in this example, we will skip this step by disabling the wizard and creating a new user at runtime.
Let’s get to it, shall we?
Setting up a Custom Docker Image for Our Jenkins Master
Let’s look at the below Dockerfile:
# Starting off with the Jenkins base Image
FROM jenkins/jenkins:latest

# Installing the plugins we need using the in-built install-plugins.sh script
RUN /usr/local/bin/install-plugins.sh git matrix-auth workflow-aggregator docker-workflow blueocean credentials-binding

# Setting up environment variables for Jenkins admin user
ENV JENKINS_USER admin
ENV JENKINS_PASS admin

# Skip the initial setup wizard
ENV JAVA_OPTS -Djenkins.install.runSetupWizard=false

# Start-up scripts to set number of executors and creating the admin user
COPY executors.groovy /usr/share/jenkins/ref/init.groovy.d/
COPY default-user.groovy /usr/share/jenkins/ref/init.groovy.d/

VOLUME /var/jenkins_home
We start with the jenkins/jenkins base image and install the plugins that we require.
Line# 12 is the run-time JVM parameter that needs to be passed in to disable the “Setup Wizard.”
Lines# 15 and 16 provide the container with initial start-up scripts that set the number of Jenkins executors and create the Jenkins admin user.
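The contents of these two init scripts aren’t shown here; they live next to the Dockerfile in the repo. As a rough orientation, here is a minimal sketch of what such scripts typically contain, assuming the standard Jenkins Groovy API (the executor count of 0 is an illustrative choice that pushes all builds onto the agent):

// default-user.groovy -- a minimal sketch, not the repo's exact code:
// creates the admin account from the JENKINS_USER / JENKINS_PASS environment variables
import jenkins.model.Jenkins
import hudson.security.HudsonPrivateSecurityRealm
import hudson.security.FullControlOnceLoggedInAuthorizationStrategy

def env = System.getenv()
def jenkins = Jenkins.getInstance()

def realm = new HudsonPrivateSecurityRealm(false)
realm.createAccount(env['JENKINS_USER'], env['JENKINS_PASS'])
jenkins.setSecurityRealm(realm)

jenkins.setAuthorizationStrategy(new FullControlOnceLoggedInAuthorizationStrategy())
jenkins.save()

// executors.groovy -- a separate one-line script that sets the executor count on the master
jenkins.model.Jenkins.getInstance().setNumExecutors(0)

The base image copies everything under /usr/share/jenkins/ref/ into the Jenkins home on first start, and Jenkins automatically runs any script it finds under init.groovy.d/ at start-up, which is why the two COPY steps above are all it takes.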
Let’s build this image and keep it ready. We will get back to it right after we build our agent!
docker build -t jenkins-master .
Configuring a “Dockerized” Build Agent for Compiling Our Code
As for the Jenkins build agent, we will make it “auto-attach” to the Jenkins master using JNLP.
Here is what the agent’s Dockerfile looks like:
FROM ubuntu:16.04

# Install Docker CLI in the agent
RUN apt-get update && apt-get install -y apt-transport-https ca-certificates
RUN apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
RUN echo "deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable" > /etc/apt/sources.list.d/docker.list
RUN apt-get update && apt-get install -y docker-ce --allow-unauthenticated

RUN apt-get update && apt-get install -y openjdk-8-jre curl python python-pip git
RUN easy_install jenkins-webapi

# Get docker-compose in the agent container
RUN curl -L https://github.com/docker/compose/releases/download/1.21.2/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose && chmod +x /usr/local/bin/docker-compose

RUN mkdir -p /home/jenkins
RUN mkdir -p /var/lib/jenkins
# Start-up script to attach the slave to the master
ADD slave.py /var/lib/jenkins/slave.py

WORKDIR /home/jenkins

ENV JENKINS_URL "http://jenkins"
ENV JENKINS_SLAVE_ADDRESS ""
ENV JENKINS_USER "admin"
ENV JENKINS_PASS "admin"
ENV SLAVE_NAME ""
ENV SLAVE_SECRET ""
ENV SLAVE_EXECUTORS "1"
ENV SLAVE_LABELS "docker"
ENV SLAVE_WORING_DIR ""
ENV CLEAN_WORKING_DIR "true"

CMD [ "python", "-u", "/var/lib/jenkins/slave.py" ]
Let’s look at some important steps in the above file:
Line# 13 takes care of downloading and installing docker-compose in our build agent.
Line# 18 adds the magical start-up Python script responsible for attaching the container to our master as a build agent!
Now, who likes to go through a 90-line Python script? Not me! To make things easier, let’s look at this simple flowchart to understand it:
The “wait for the master” logic is going to come in very handy when we wrap the master and slave into a docker-compose file. The depends_on tag in docker-compose doesn’t serve the purpose well here, because the Jenkins master takes more time to be fully up and running than docker-compose accounts for. Thus, the wait logic adds that intelligence to our slave.
Let’s build this container and wrap our Jenkins infrastructure into a docker-compose file!
Building an Image and Exposing the Docker Daemon to the Agent
docker build -t jenkins-slave .
version: '3.1'
services:
  jenkins:
    container_name: jenkins
    ports:
      - '8080:8080'
      - '50000:50000'
    image: localhost:5000/jenkins
  jenkins-slave:
    container_name: jenkins-slave
    restart: always
    environment:
      - 'JENKINS_URL=http://jenkins:8080'
    image: localhost:5000/jenkins-slave
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock # Expose the docker daemon in the container
      - /home/jenkins:/home/jenkins # Avoid mysql volume mount issue
    depends_on:
      - jenkins
Line# 17 is significant here, as the mounted volume helps fix the Docker-in-Docker volume mount issue. It’s covered well in our previous article of this series.
Great!
Now it’s time to launch our Jenkins infrastructure with the docker-compose up command:
docker-compose -f .\docker-compose.ci.yml up
Here are the docker-compose logs for the container start-up:
Label 1 indicates that the Jenkins container has started; however, it’s not fully up and running yet. This is where our wait logic comes in handy.
And finally, our agent has successfully connected to our Jenkins master. We should be able to access the Jenkins web UI at port 8080 of our localhost!
And guess what, no initial setup wizard either!
Here, we log in with the initial admin user credentials, which are admin/admin. After that, Jenkins takes us to the dashboard, where we can see our Docker build agent ready to take on some build tasks!
Creating a Pipeline-As-A-Code Job in Jenkins
Let’s start by creating a “New Item” and saving it as a pipeline job, as shown below:
The next step is to update the job configuration with the job description, the SCM URL, and the branch, as shown below:
Writing a Jenkinsfile
The Jenkinsfile is a Groovy-based script that lists the different stages and steps of the Jenkins build. The benefits of this approach over a freestyle job revolve mainly around flexibility and the ability to version-control the job configuration.
Let’s discuss these two points a little more.
Flexibility:
Usually, a freestyle job is created to accomplish a specific task in a CI pipeline: it could compile our code, run integration tests, or deploy our application.
However, true CI includes all three steps and chains them together sequentially or in parallel. This is what we call a “pipeline.”
It is possible to achieve this chaining with freestyle jobs, but at the end of the day, it’s not very convenient for a single application. The pipeline would consist of a bunch of freestyle jobs connected in an upstream-downstream fashion, and communicating among these jobs (e.g., sharing variables or custom statuses) can be a nightmare.
All these problems go away with Pipeline Jobs!
Version Control your job configurations:
As previously mentioned, the Jenkinsfile is just a Groovy script; thus, it can be stored, edited, and version-controlled along with the rest of the application code!
Before we go ahead and start writing our Jenkinsfile, let’s visualize the steps we need to build and publish this application:
The Jenkins pipeline syntax generator helps us a lot in building our Jenkinsfile line by line. Here are some of the examples:
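For instance (illustrative output; the exact snippet depends on what you enter in the generator), the “sh: Shell Script” and “step: General Build Step” generators produce lines like these, both of which appear in the Jenkinsfiles later in this article:

// Generated for an "sh: Shell Script" step:
sh "docker build -t accountownerapp:B${BUILD_NUMBER} -f Dockerfile ."

// Generated for a "step: General Build Step" using the MSTest report publisher:
step([$class: 'MSTestPublisher', failOnError: false, testResultsFile: 'test_results.xml'])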
Here is what our Jenkinsfile looks like:
node('docker') {
    stage 'Checkout'
    checkout scm

    stage 'Build & UnitTest'
    sh "docker build -t accountownerapp:B${BUILD_NUMBER} -f Dockerfile ."
    sh "docker build -t accountownerapp:test-B${BUILD_NUMBER} -f Dockerfile.Integration ."

    stage 'Integration Test'
    sh "docker-compose -f docker-compose.integration.yml up --force-recreate --abort-on-container-exit"
    sh "docker-compose -f docker-compose.integration.yml down -v"
}
Let’s break it down here:
- Line# 1: The node keyword is used to select the build agent (by its docker label in our case).
- Lines# 2, 5, and 9: The stage keyword is used to define the stages in our build.
Dynamic Build Versions
One of the things that we have fast-forwarded is “tokenizing” the image versions with the Jenkins build number. Jenkins exposes BUILD_NUMBER as an environment variable, amongst others.
Each new build auto-increments the version. To support this, the docker-compose.integration.yml file is also “tokenized” in the same fashion:
version: '3.1'
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_RANDOM_ROOT_PASSWORD: 1
      MYSQL_DATABASE: accountowner
      MYSQL_USER: dbuser
      MYSQL_PASSWORD: dbuserpassword
      DEBUG: 1
    volumes:
      - dbdata:/var/lib/mysql
      - ./_MySQL_Init_Script:/docker-entrypoint-initdb.d
    restart: always
  accountownerapp:
    depends_on:
      - db
    image: "accountownerapp:B${BUILD_NUMBER}"
    build:
      context: .
  integration:
    depends_on:
      - accountownerapp
    image: "accountownerapp:test-B${BUILD_NUMBER}"
    build:
      context: .
      dockerfile: Dockerfile.Integration
    environment:
      - TEAMCITY_PROJECT_NAME
volumes:
  dbdata:
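A quick aside on why this works (an illustrative snippet, not part of the project’s Jenkinsfile): Jenkins puts BUILD_NUMBER into the process environment of every sh step, and docker-compose substitutes ${BUILD_NUMBER} in the YAML from that same environment, so the image tags produced in the build stage and the ones consumed in the integration-test stage always match:

// Illustration only: the same BUILD_NUMBER is visible to the Groovy script (via env)
// and to the shell that docker-compose runs in, which is where the ${BUILD_NUMBER}
// placeholders in docker-compose.integration.yml get their value from.
node('docker') {
    echo "Groovy sees build number: ${env.BUILD_NUMBER}"
    sh 'echo "The shell (and docker-compose) see: $BUILD_NUMBER"'
}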
Jenkins lists all the available environment variables on the /env-vars.html page of the Jenkins server (in our setup, http://localhost:8080/env-vars.html).
The rest of the steps are just shell scripts.
Let’s build our Jenkins job!
Building the Application
The “Build Now” link triggers a new build:
And here we have our first successful job! The following two snapshots show the successful job and the logs of the Build & UnitTest stage:
Running Tests and Publishing Reports in Jenkins
We have successfully executed tests in the previous build. One of the benefits of using Jenkins is its post-build report publishing feature, which helps us collate our test results and publish them as HTML reports!
We will be using the MSTest plugin to publish the reports but, before that, there are two problems we need to address.
Problem 1: The container stores the results of the tests it executes inside its own filesystem.
Problem 2: Even if we generate the report, how are we going to make it available outside the application container so the Jenkins MSTest plugin can read it?
Let’s tackle them one by one.
Solution 1: Let’s update the Dockerfile to publish the test results and store them in a folder within the container:
FROM microsoft/aspnetcore-build as build-image

WORKDIR /home/app

COPY ./*.sln ./
COPY ./*/*.csproj ./
RUN for file in $(ls *.csproj); do mkdir -p ./${file%.*}/ && mv $file ./${file%.*}/; done

RUN dotnet restore

COPY . .

RUN dotnet test --verbosity=normal --results-directory /TestResults/ --logger "trx;LogFileName=test_results.xml" ./Tests/Tests.csproj

RUN dotnet publish ./AccountOwnerServer/AccountOwnerServer.csproj -o /publish/

FROM microsoft/aspnetcore

WORKDIR /publish

COPY --from=build-image /publish .
COPY --from=build-image /TestResults /TestResults

ENV TEAMCITY_PROJECT_NAME = ${TEAMCITY_PROJECT_NAME}

ENTRYPOINT ["dotnet", "AccountOwnerServer.dll"]
Line# 13: Additional command-line parameters are added to publish the test results and store them under the /TestResults folder.
Note that the test results folder is still within the container.
Solution 2: Here, we will use some Docker magic to copy the results out of the container.
We can effectively use the docker cp command to copy content out of the container; however, it requires a running container. Not a big deal, we can use some shell scripting to tackle that.
Here is the updated Jenkinsfile with a dedicated stage to Publish test results:
node('docker') {
    stage 'Checkout'
    checkout scm

    stage 'Build & UnitTest'
    sh "docker build -t accountownerapp:B${BUILD_NUMBER} -f Dockerfile ."
    sh "docker build -t accountownerapp:test-B${BUILD_NUMBER} -f Dockerfile.Integration ."

    stage 'Publish UT Reports'
    containerID = sh (
        script: "docker run -d accountownerapp:B${BUILD_NUMBER}",
        returnStdout: true
    ).trim()
    echo "Container ID is ==> ${containerID}"
    sh "docker cp ${containerID}:/TestResults/test_results.xml test_results.xml"
    sh "docker stop ${containerID}"
    sh "docker rm ${containerID}"
    step([$class: 'MSTestPublisher', failOnError: false, testResultsFile: 'test_results.xml'])

    stage 'Integration Test'
    sh "docker-compose -f docker-compose.integration.yml up --force-recreate --abort-on-container-exit"
    sh "docker-compose -f docker-compose.integration.yml down -v"
}
The new stage consists of shell steps to run the container, copy the test results back to the build agent and publish the report. Let’s go ahead and execute this!
The Jenkins build page shows the published test results:
Conclusion
Wow!
Here we are at the conclusion of our article. Integrating Jenkins into any existing project lifecycle is almost seamless, thanks to the abundant library of plugins and the free documentation all over the internet. What we’ve seen here is just a small portion of what Jenkins has to offer as a CI tool.
In this article, we focused mostly on Jenkins as a CI tool; we haven’t changed our application code much, except for the Dockerfile update to accommodate the test results. The Continuous Integration with TeamCity and Docker article covers the addition of integration tests in greater detail, so do read it to get the complete picture.
The entire project is available in the docker-series GitHub repository, under the docker-series-continuous-integration-jenkins-end branch. Feel free to go through it and ask for any help in the comments section.