README,doc: added next-level-CI and NWL-CI documentation

adapted README accordingly to build the documentation.

Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>
Marc Mattmüller 2023-04-18 16:04:00 +02:00
parent 34eae1d78d
commit 0e39db0e35
9 changed files with 1436 additions and 9 deletions

1
.gitignore vendored Normal file

@@ -0,0 +1 @@
doc/out


@@ -1,23 +1,53 @@
# NetModule Wireless Linux CI/CD Repository
This repository contains all necessary jobs for the CI/CD environment of the
NetModule Wireless Linux (NWL).
## Content
This repository holds the documentation for the CI environment and the jobs for
the NWL as declarative pipelines (multibranch):

* doc
  - the documentation of the work for the NWL CI environment
* jobs
  - Jenkinsfile_Build
    + a pipeline building a NWL yocto target
  - Jenkinsfile_Common
    + a collection of commonly used functions, so that duplicated code can be
      avoided
## Marginal Notes
This repository does NOT cover the setup of the Jenkins instance.
## Building the Documentation
The documentation is based on Sphinx and is written in reStructuredText format. To
build the documentation you need to install Sphinx first:
```bash
sudo apt install python3-sphinx
sudo pip3 install cloud-sptheme
```
Within the directory ``doc`` you can use make as follows:
```bash
# entering doc:
cd doc
# clean and build the documentation:
make clean
make html
# open the generated documentation in the browser:
xdg-open out/html/index.html
cd ..
```

225
doc/Makefile Normal file

@@ -0,0 +1,225 @@
# Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = out
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) src
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) src
.PHONY: help
help:
@echo "Please use \`make <target>' where <target> is one of"
@echo " html to make standalone HTML files"
@echo " dirhtml to make HTML files named index.html in directories"
@echo " singlehtml to make a single large HTML file"
@echo " pickle to make pickle files"
@echo " json to make JSON files"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " qthelp to make HTML files and a qthelp project"
@echo " applehelp to make an Apple Help Book"
@echo " devhelp to make HTML files and a Devhelp project"
@echo " epub to make an epub"
@echo " epub3 to make an epub3"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " latexpdf to make LaTeX files and run them through pdflatex"
@echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
@echo " text to make text files"
@echo " man to make manual pages"
@echo " texinfo to make Texinfo files"
@echo " info to make Texinfo files and run them through makeinfo"
@echo " gettext to make PO message catalogs"
@echo " changes to make an overview of all changed/added/deprecated items"
@echo " xml to make Docutils-native XML files"
@echo " pseudoxml to make pseudoxml-XML files for display purposes"
@echo " linkcheck to check all external links for integrity"
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
@echo " coverage to run coverage check of the documentation (if enabled)"
@echo " dummy to check syntax errors of document sources"
.PHONY: clean
clean:
rm -rf $(BUILDDIR)/*
.PHONY: html
html:
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
.PHONY: dirhtml
dirhtml:
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
.PHONY: singlehtml
singlehtml:
$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
@echo
@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
.PHONY: pickle
pickle:
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
@echo
@echo "Build finished; now you can process the pickle files."
.PHONY: json
json:
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
@echo
@echo "Build finished; now you can process the JSON files."
.PHONY: htmlhelp
htmlhelp:
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in $(BUILDDIR)/htmlhelp."
.PHONY: qthelp
qthelp:
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
@echo
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/NetModuleBeldenCoreOS.qhcp"
@echo "To view the help file:"
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/NetModuleBeldenCoreOS.qhc"
.PHONY: applehelp
applehelp:
$(SPHINXBUILD) -b applehelp $(ALLSPHINXOPTS) $(BUILDDIR)/applehelp
@echo
@echo "Build finished. The help book is in $(BUILDDIR)/applehelp."
@echo "N.B. You won't be able to view it unless you put it in" \
"~/Library/Documentation/Help or install it in your application" \
"bundle."
.PHONY: devhelp
devhelp:
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
@echo
@echo "Build finished."
@echo "To view the help file:"
@echo "# mkdir -p $$HOME/.local/share/devhelp/NetModuleBeldenCoreOS"
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/NetModuleBeldenCoreOS"
@echo "# devhelp"
.PHONY: epub
epub:
$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
@echo
@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
.PHONY: epub3
epub3:
$(SPHINXBUILD) -b epub3 $(ALLSPHINXOPTS) $(BUILDDIR)/epub3
@echo
@echo "Build finished. The epub3 file is in $(BUILDDIR)/epub3."
.PHONY: latex
latex:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
@echo "Run \`make' in that directory to run these through (pdf)latex" \
"(use \`make latexpdf' here to do that automatically)."
.PHONY: latexpdf
latexpdf:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through pdflatex..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
.PHONY: latexpdfja
latexpdfja:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through platex and dvipdfmx..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
.PHONY: text
text:
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
@echo
@echo "Build finished. The text files are in $(BUILDDIR)/text."
.PHONY: man
man:
$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
@echo
@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
.PHONY: texinfo
texinfo:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo
@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
@echo "Run \`make' in that directory to run these through makeinfo" \
"(use \`make info' here to do that automatically)."
.PHONY: info
info:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo "Running Texinfo files through makeinfo..."
make -C $(BUILDDIR)/texinfo info
@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
.PHONY: gettext
gettext:
$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
@echo
@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
.PHONY: changes
changes:
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
@echo
@echo "The overview file is in $(BUILDDIR)/changes."
.PHONY: linkcheck
linkcheck:
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in $(BUILDDIR)/linkcheck/output.txt."
.PHONY: doctest
doctest:
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."
.PHONY: coverage
coverage:
$(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage
@echo "Testing of coverage in the sources finished, look at the " \
"results in $(BUILDDIR)/coverage/python.txt."
.PHONY: xml
xml:
$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
@echo
@echo "Build finished. The XML files are in $(BUILDDIR)/xml."
.PHONY: pseudoxml
pseudoxml:
$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
@echo
@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."
.PHONY: dummy
dummy:
$(SPHINXBUILD) -b dummy $(ALLSPHINXOPTS) $(BUILDDIR)/dummy
@echo
@echo "Build finished. Dummy builder generates no files."


@@ -0,0 +1,15 @@
div.sphinxsidebar {
width: 3.5in;
}
div.bodywrapper {
margin: 0 0 0 3.5in;
}
div.document {
max-width: 18in;
}
div.related {
max-width: 18in;
}

29
doc/src/conf.py Normal file

@@ -0,0 +1,29 @@
# Configuration file for the Sphinx documentation builder.
#
# For the full list of built-in configuration values, see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Project information -----------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information
project = 'NetModule Wireless Linux CI/CD'
copyright = '2023, Marc Mattmüller'
author = 'Marc Mattmüller'
release = '0.1'
# -- General configuration ---------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration
extensions = ['sphinx.ext.autodoc','sphinx.ext.viewcode','sphinx.ext.todo']
templates_path = ['_templates']
exclude_patterns = []
# -- Options for HTML output -------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output
html_theme = 'cloud'
html_static_path = ['_static']
html_css_files = ["theme_overwrites.css"]

38
doc/src/index.rst Normal file

@@ -0,0 +1,38 @@
.. NetModule Wireless Linux CI/CD documentation master file, created by
sphinx-quickstart on Tue Apr 18 13:04:26 2023.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to NetModule Wireless Linux CI/CD's documentation!
==========================================================
This documentation provides an overview of the CI/CD work for the
NetModule Wireless Linux.
Content
*******
.. toctree::
:maxdepth: 2
:caption: Next-Level CI
:glob:
nextlevel-ci/*
.. toctree::
:maxdepth: 2
:caption: NWL CI Setup
:glob:
setup/*
Indices and tables
******************
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`


@@ -0,0 +1,611 @@
.. _nextLevelCiCd:
**************************
NetModule Next-Level CI/CD
**************************
Foreword
########
The past half year of collaboration with HAC, the NRSW and OEM teams showed that some parts of the CI might be outdated
for the needs coming towards us. For example, the single Jenkins controller might be replaced with multiple Jenkins
controllers using built-in nodes for each area, like NRSW, OEM, etc.
Using a dockerized environment supports this approach. The first steps done for the CoreOS showed that there should be
an automated way to set up a new instance. Therefore a first brainstorming meeting was held on the 7th of March 2023. The
output of it and its continuation is reflected in the sections below.
Rough Overview of the Discussion
################################
Development Areas
*****************
The way development takes place differs and can be split into two groups:
* Yocto
- to build an image (a minimal build sketch follows after this list)
- to build a SDK (software development kit)
- to build a package out of the image
- one repository including several submodules distributed over several SCMs
- recommended to build:
+ the base operating system, like CoreOS
+ the final product software image
- blackbox testing of the output
* Application
- to build an application running on the base operating system (in user space)
- needs:
+ the SDK
+ a unit test framework
+ rootfs set up (e.g. for library development)
- one repository, possibly including submodules, but on the same SCM
- white- and blackbox testing possible using the unit test framework
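To make the Yocto group above more concrete, here is a minimal sketch of what such a build step typically looks like;
the setup script, the image name ``nwl-image`` and the package name are placeholders and not the actual NWL targets.

.. code-block:: bash

    # minimal sketch of a Yocto build step (image/package names are placeholders)
    # initialize the build environment (poky-style setup script)
    source oe-init-build-env build

    bitbake nwl-image                   # build the target image
    bitbake nwl-image -c populate_sdk   # build the matching SDK
    bitbake some-package                # build an individual package/recipe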
There are several ways for bringing an application into a release image:
* yocto recipe populating a (pre-)built binary (or multiple binaries) into the rootfs
- advantages:
+ less effort maintaining the recipe
+ one binary can easily be picked and replaced on a target hardware (e.g. development/debugging)
+ feedback on the CI is much faster as the unit tests can be used as pass/fail indicator
+ a CI job builds the binary (leads to better overview where a failure is coming from)
+ merge requests are made on application level using its unit tests (only mergeable when passing)
* with this approach a lot of issues can be eliminated as only passing unit tests are merged and brought
into the continuous integration path
+ with this approach there is most likely a workflow for test driven development (TDD) available; this makes
debugging much faster, and the design is more robust and better testable
* a cool benefit of TDD: you can easily mock some layers and build an exhibition version to demonstrate
new features without installing a full-blown infrastructure
.. note::
**Be aware of a conscious polarization**
We count the year 2022/2023, and therefore it is a must to develop applications for an embedded Linux in a
test-driven way. If you try to find arguments against TDD, ask yourself what you would expect
when buying a high-end product like the one we are selling.
- disadvantages:
+ it is highly recommended that the SDK version matches with the target release
+ the yocto recipe must pick the binary from somewhere on the infrastructure
+ you need to take care about permission settings when picking and replacing a binary
+ there are much more CI jobs to maintain
- additional information:
+ when using NetModule's internal GitLab instance, the GitLab CI can be used for the unit tests and merging.
With it, no further Jenkins CI job is necessary.
* yocto recipe (as currently used) building the application and putting it into the rootfs
- advantages:
+ the application is built with the environment set up by yocto (e.g. versions); no need of a SDK
+ the meta-layer, where the application recipe is in, is much more flexible to share (especially when outside
of the company/infrastructure)
- disadvantages:
+ more effort maintaining the recipe (source code hashes, etc.)
+ additional step necessary to indicate the unit test results of the application
+ yocto must be launched to build an application (CI perspective)
+ longer latency at merge request time until the mergeable/not-mergeable flag is available
.. important::
Do not forget the CI environment when thinking of reproducible builds.
When you build a release on your CI, the CI itself has a specific state, such as the Jenkins version, plug-in
versions, a certain set of security patches, etc. Over time those versions and the entire environment change.
Thus, the CI environment needs to be tagged as well, just as you are tagging your release sources.
Open questions
**************
Released Base Operating System, how further?
============================================
Let's assume the CoreOS acts as the base operating system and is now released as version X.Y.Z. How do we proceed?
* Do we use eSDK to develop the application for the base operating system and to build the product images?
* Do we continue without Yocto, e.g. by just using the SDK?
These questions are important as the sstate-cache, the downloads etc. can be shared for further usage.
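As a hint on how such sharing is typically done, the build configuration can point to common cache directories; a
minimal sketch, assuming a standard Yocto ``local.conf`` (the paths are examples, not the actual NWL locations):

.. code-block:: bash

    # share the sstate-cache and the download directory between builds
    # (paths are examples, not the actual NWL locations)
    cat >> conf/local.conf <<'EOF'
    SSTATE_DIR ?= "/srv/yocto/sstate-cache"
    DL_DIR ?= "/srv/yocto/downloads"
    EOF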
What about Visualization?
=========================
For the OEM Linux we used to use Grafana for the visualization. This is another instance in a CI/CD environment. There
are also some questions about what happens with the logs during the tests of a product. Shall those be
visualized, e.g. using something like the ELK stack? Good to know: GitLab provides visualization support.
Which SCM Provider shall be used?
=================================
Currently it is unclear if Belden is following the Atlassian approach to use cloud based services, i.e. keeping
bitbucket but within the cloud. Right at the moment we have the following SCM providers:
* gitea
* gitlab internal (NetModule)
* gitlab public
* bitbucket
* SVN (NetModule NRSW)
Opinions differ a lot regarding SCM. Nevertheless, the OEM Linux team in CHBE decided as well to move to bitbucket.
What if Atlassian stops the support for non-cloud instances? This is an open question which also influences
the CI infrastructure. Why that? Well, actually I have not seen if the current bitbucket version provides a
built-in CI service. GitLab does, and NetModule has an instance which is maintained. This built-in CI might be used in
the application development, where unit tests can be run on the built-in CI for merge requests. This ensures that on the
continuous integration path the unit tests are at least passing. You see, the SCM provider influences
the CI environment.
Releasing a Software
********************
When it comes to a software release, you must also consider tagging your CI environment. If you need to
reproduce a released version, you must make sure that you use the same CI environment as when it was released. Until now this
was not the case. Just think about all the Jenkins updates, the plugin updates, server updates, etc. In the past we have
faced such an issue where a plugin was updated/changed: a former pipeline could not be built anymore because the used
command was removed.
So when it comes to a next-level CI environment using docker, we can tag the environment as well and just re-start it in
the tagged version to rebuild an image.
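A minimal sketch of what such environment tagging could look like, assuming the CI definition lives in a git
repository and the images are pushed to the internal registry (registry path and version are examples):

.. code-block:: bash

    # tag the CI environment together with the software release (example names/versions)
    CI_TAG=nwl-1.2.3

    # tag the repository holding the CI definition (dockerfiles, compose files, JCasC)
    git tag "${CI_TAG}" && git push origin "${CI_TAG}"

    # tag and push the docker images that make up the environment
    docker tag nwl-env-ci:latest repo.netmodule.com/nwl/nwl-env-ci:"${CI_TAG}"
    docker push repo.netmodule.com/nwl/nwl-env-ci:"${CI_TAG}"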
Setup Using Docker
******************
Each Jenkins controller instance might be set up as a docker image stack as shown below:
.. code-block::

    ------------------------
    | Jenkins Controller   |<-----------------------------------------------------------
    ------------------------                                                            |
    | Git                  |<-- mounting a volume to clone the used repositories to -----
    ------------------------
    | Artifactory          |<-- mounting a volume for content
    ------------------------
    | Webserver            |<-- mounting a volume for content (webserver (nginx) as reverse proxy)
    ------------------------
    | Base OS              |
    ------------------------
By using an Ansible playbook this stack can be set up, connected to the Active Directory and with all needed credentials
to fulfill the build jobs.
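A minimal sketch of how such a playbook run could look; the playbook and inventory names are assumptions, not an
existing repository layout:

.. code-block:: bash

    # bring up a Jenkins controller stack on the target server (file names are assumptions)
    ansible-playbook -i inventory/ci-servers.ini jenkins-stack.yml \
        --limit nwl-ci-server \
        --extra-vars "instance_name=nwl"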
With this container stack the access is clearly defined and each container is independent of the others. Each
container stack contains its own webserver and artifactory, meaning specifically defined URLs. Additionally there is no
interference between the different teams. E.g. let's assume the NRSW team needs to fix a security-relevant bug and
needs to reproduce a specific version. In this case the NRSW team needs to bring the CI environment into the state it was
in when the software of concern was released. With a single Jenkins controller setup this would affect the OEM Linux team
as well.
From NetModule's point of view there would finally be two Jenkins container stacks available, one for the NRSW and one
for the OEM Linux team.
Setup of First Prototype
########################
This section holds everything about the setup of the first prototype.
Intended Setup Process
**********************
The following simplified diagram shows the intended process of setting up a Jenkins instance:

.. code-block::

        +------------------------------+     +-----------------------------------+
    o-->| Start the Ansible Playbook   |---->| Copy necessary Content to Server  |
        +------------------------------+     +-----------------------------------+
                                                              |
                                                              v
                                             +--------------------------------------------+
                                             | Setup Volumes, directories & environment   |
                                             +--------------------------------------------+
                                                              |
                                                              v
                                             +--------------------------------------------+
                                             | Connect to the Server                      |
                                             +--------------------------------------------+
                                                              |
                                                              v
                                             +--------------------------------------------+
                                             | Start using docker-compose                 |
                                             +--------------------------------------------+
                                                              |
                                                              o
.. note::
The diagram above assumes that a server is already set up.
Intended docker-composition
***************************
The following pseudo-code shows how the jenkins docker stack is composed:
.. code-block::
version: '3.8'
services:
jenkins:
image: repo.netmodule.com/core-os/ci-cd/jenkins-coreos:latest
container_name: jenkins
hostname: jenkins
extra_hosts:
- "host.docker.internal:192.168.1.70"
healthcheck:
test: ["CMD","bash","-c","curl --head http://localhost:8080 && exit 0 || exit 1"]
interval: 5s
timeout: 3s
retries: 3
start_period: 2m
restart: unless-stopped
ports:
- 8080:8080
- 50000:50000
networks:
- jenkins_net
environment:
- TZ=Europe/Zurich
- COMPOSE_PROJECT_NAME=jenkins_controller
- CASC_JENKINS_CONFIG=/var/jenkins_conf/cicd.yaml
- A_SSH_PRIVATE_FILE_PATH=/var/jenkins_home/.ssh/ed25519-secrets
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- $PWD/jenkins_home:/var/jenkins_home
- $PWD/jcasc:/var/jenkins_conf
- $PWD/secrets/pw:/run/secrets
- $PWD/secrets/.ssh:/var/jenkins_home/.ssh
- $PWD/secrets/.cacerts:/var/jenkins_home/.cacerts
- $PWD/data:/var/jenkins_home/data
nginx:
image: nginx:stable-alpine
container_name: nginx
hostname: nginx
extra_hosts:
- "host.docker.internal:192.168.1.70"
restart: unless-stopped
environment:
- TZ=Europe/Zurich
ports:
- 80:80
- 443:443
networks:
- jenkins_net
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- $PWD/nginx_html:/var/www/nginx/html
- $PWD/nginx_config/default.conf:/etc/nginx/conf.d/default.conf
# https mode: so far it does not work for self-signed cert
#- $PWD/nginx_config/nginx_example_local.conf:/etc/nginx/conf.d/default.conf
#- $PWD/certs:/etc/nginx/certs
#- $PWD/dhparams.pem:/etc/nginx/dhparams.pem
nexus3:
image: sonatype/nexus3:3.49.0
container_name: nexus3
extra_hosts:
- "host.docker.internal:192.168.1.70"
restart: unless-stopped
ports:
- 8081:8081
networks:
- jenkins_net
environment:
- NEXUS_CONTEXT=nexus
- TZ=Europe/Zurich
volumes:
- $PWD/nexus-data/:/nexus-data/
secrets:
bindadpw:
file: $PWD/secrets/pw/bindadpw
sshkeypw:
file: $PWD/secrets/pw/sshkeypw
networks:
jenkins_net:
driver: bridge
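Assuming the composition above is stored as ``docker-compose.yml`` in the working directory, bringing the stack up
and checking it could look like this (a sketch, not the final procedure):

.. code-block:: bash

    # start the stack in the background and check the state of the containers
    docker-compose up -d
    docker-compose ps

    # follow the Jenkins controller log until it is up
    docker-compose logs -f jenkins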
And we rely on the jenkins service setup as done for the CoreOS:
.. code-block::
jenkins:
systemMessage: "Jenkins Controller"
scmCheckoutRetryCount: 3
mode: NORMAL
labelString: "jenkins-controller"
numExecutors: 8
securityRealm:
activeDirectory:
domains:
- name: "netmodule.intranet"
servers: "netmodule.intranet:3268"
site: "NTMCloudGIA"
bindName: "cn=svc-ldap-ci,ou=Service,ou=Users,ou=NetModule,dc=netmodule,dc=intranet"
bindPassword: "${bindadpw}"
tlsConfiguration: JDK_TRUSTSTORE
groupLookupStrategy: "AUTOMATIC"
removeIrrelevantGroups: false
customDomain: true
cache:
size: 500
ttl: 600
startTls: true
internalUsersDatabase:
jenkinsInternalUser: "jenkins"
# local:
# allowsSignup: false
# users:
# - id: admin
# password: ${adminpw:-passw0rd}
# securityRealm:
# local:
# allowsSignup: false
# users:
# - id: admin
# password: ${adminpw:-passw0rd}
# - id: developer
# password: ${developerpw:-builder}
authorizationStrategy:
globalMatrix:
permissions:
- "USER:Overall/Administer:admin"
- "GROUP:Overall/Read:authenticated"
- "GROUP:Agent/Build:authenticated"
- "GROUP:Job/Read:authenticated"
- "GROUP:Job/Build:authenticated"
- "GROUP:Job/Cancel:authenticated"
- "GROUP:Job/Workspace:authenticated"
- "GROUP:Run/Replay:authenticated"
- "GROUP:Run/Delete:authenticated"
crumbIssuer: "standard"
security:
GlobalJobDslSecurityConfiguration:
useScriptSecurity: true
queueItemAuthenticator:
authenticators:
- global:
strategy:
specificUsersAuthorizationStrategy:
userid: build_user
credentials:
system:
domainCredentials:
- credentials:
- basicSSHUserPrivateKey:
scope: GLOBAL
id: git_credentials
# need to keep this username for the first run
username: build_user
usernameSecret: true
passphrase: "${sshkeypw}"
description: "SSH passphrase with private key file for git access"
privateKeySource:
directEntry:
privateKey: "${readFile:${A_SSH_PRIVATE_FILE_PATH}}"
- usernamePassword:
scope: GLOBAL
id: nexus_credentials
username: build_user
usernameSecret: true
password: "${somepw}"
description: "Username/Password Credentials for Nexus artifactory"
unclassified:
location:
url: http://<server-hostname>:8080
adminAddress: Mr Jenkins <no-reply@netmodule.com>
tool:
git:
installations:
- name: Default
home: "git"
jobs:
- script: >
multibranchPipelineJob('doc') {
displayName('10. Build Documentation')
description('Builds the Documentation of the CI/CD')
factory {
workflowBranchProjectFactory {
scriptPath('pipelines/Jenkinsfile_Documentation')
}
}
orphanedItemStrategy {
discardOldItems {
numToKeep(5)
}
}
branchSources {
git {
id('build-doc')
remote('git@gitlab.com:netmodule/core-os/cicd.git')
credentialsId('git_credentials')
includes('develop release*')
}
}
}
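Once the controller is up, the effect of the configuration-as-code file and the seeded job can be checked from the
command line, for example via the Jenkins REST API (host name and credentials are placeholders):

.. code-block:: bash

    # check that the controller answers and that the seeded multibranch job 'doc' exists
    curl -s -o /dev/null -w "%{http_code}\n" http://<server-hostname>:8080/login
    curl -s -u <user>:<api-token> http://<server-hostname>:8080/job/doc/api/json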
Comparison to the HAC CI
#########################
This section describes the differences between the concept above and the one at HAC, after a sync meeting with the
Guardians.
Situation at HAC
****************
As already known, the CI at HAC is constructed with docker containers. But how do they handle the infrastructure when it
comes to a use case where they need to reproduce an already released version? The situation at HAC is as follows:
* the CI infrastructure relies on the help of the IT department
- new infrastructure like new physical machines and their setup is done by the IT department
- they restore parts from backups if necessary
* dockerfiles describe the docker containers used for building software releases
- AFAIK, the images are pushed to the HAC docker registry
* some infrastructure parts refer directly to the images on docker hub without pushing them to the HAC docker registry
* they use self-created scripts to orchestrate build instances, e.g. creating and starting new instances
* depending on the age of the release to reproduce, a bunch of manual steps are needed to rebuild it
* there is already a good state of tracking the versions of a software release and CI infrastructure
- tickets are already open to optimize this version breakdown
* no Ansible playbook is used
Differences between HAC CI and the Proposal
********************************************
The following list shows the biggest differences between the proposal and the current HAC CI.
* Bring-up of new instances: ansible playbook versus self-created scripts
* General usage of an ansible playbook
- with a playbook the setup of the infrastructure is versioned as well (a git repository which can be tagged)
- fewer dependencies on the IT department
* one dedicated infrastructure part, e.g. web server, artifactory
- all the different CI chains depend on this dedicated infrastructure part
- the effort of tracing all the dependencies when releasing a software increases a lot over time
- replication on another network is more difficult as the dedicated infrastructure part needs to be realized too
- better encapsulation if the web server and artifactory are part of the instance compound
* docker images pushed to company internal docker registry
- for a proper tracking of versions and reproduction of an already released version, the sources need to be in the
company's network, i.e. all used docker images need to be available in the company-internal docker registry
- with the availability of the images in the company docker registry, the versioning is guaranteed as the docker
files refer to an image residing in an accessible registry and do not depend on the docker hub.
.. note::
Maybe there are some more differences, but at this point these are the most important ones.
Conclusion
***********
After the discussion about the differences, and because versioning is already in the focus of the HAC CI, we
decided not to build a docker compound as stated in the section `Setup of First Prototype`_. We try to bring up an
instance on the HAC CI, but with an interface so that the CI jobs can be managed by the teams themselves, in order not
to disturb the heavily loaded HAC CI team too much.
The further documentation is found in :ref:`nwlCiCd`.
Sandbox Section ;-D
###################
some links:
* https://www.howtoforge.com/how-to-setup-nginx-as-a-reverse-proxy-for-apache-on-debian-11/
* https://www.supereasy.com/how-to-configure-nginx-as-a-https-reverse-proxy-easily/
* https://xavier-pestel.medium.com/how-to-manage-docker-compose-with-ansible-c08933ba88a8
* https://stackoverflow.com/questions/62452039/how-to-run-docker-compose-commands-with-ansible/62452959#62452959
* https://plugins.jenkins.io/ansible/
* https://www.ansible.com/blog
* https://k3s.io/
* https://www.dev-insider.de/kubernetes-cluster-mit-einem-klick-einrichten-a-1069489/
* https://adamtheautomator.com/ansible-kubernetes/
* http://web.archive.org/web/20190723112236/https://wiki.jenkins.io/display/JENKINS/Jenkins+behind+an+NGinX+reverse+proxy
a docker-compose file as look-up example:
.. code-block::
services:
postgres:
image: 'postgres:latest'
redis:
image: 'redis:latest'
nginx:
restart: always
build:
dockerfile: Dockerfile.dev
context: ./nginx
ports:
- '3050:80'
api:
build:
dockerfile: Dockerfile.dev
context: ./server
volumes:
- /app/node_modules
- ./server:/app
environment:
- REDIS_HOST=redis
- REDIS_PORT=6379
- PGUSER=postgres
- PGHOST=postgres
- PGDATABASE=postgres
- PGPASSWORD=postgres_password
- PGPORT=5432
client:
build:
dockerfile: Dockerfile.dev
context: ./client
volumes:
- /app/node_modules
- ./client:/app
worker:
build:
dockerfile: Dockerfile.dev
context: ./worker
environment:
- REDIS_HOST=redis
- REDIS_PORT=6379
volumes:
- /app/node_modules
- ./worker:/app
FROM nginx
COPY ./default.conf /etc/nginx/conf.d/default.conf
upstream client {
server client:3000;
}
upstream api {
server api:5000;
}
server {
listen 80;
location / {
proxy_pass http://client;
}
location /sockjs-node {
proxy_pass http://client;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
}
location /api {
rewrite /api/(.*) /$1 break;
proxy_pass http://api;
}
}

Binary file not shown (new image, 90 KiB).

478
doc/src/setup/nwl-ci.rst Normal file

@@ -0,0 +1,478 @@
.. _nwlCiCd:
************************************
NetModule Wireless Linux (NWL) CI/CD
************************************
Foreword
########
Time is running and thus it was decided to use the currently available infrastructure means, see :ref:`nextLevelCiCd` for
more details.
Please note that a Next-Level CI/CD as stated in :ref:`nextLevelCiCd` is neither cancelled nor off the table. It
is currently just not the right time to force this approach, which makes this a rational decision. For the future there
is still potential in using the best of the current CI solutions of Belden and NetModule together with this Next-Level
CI/CD idea.
For now the NetModule Wireless Linux CI/CD shall be started on the infrastructure provided by HAC. The following
sections describe the work on this CI/CD instance.
Getting Started
###############
Tobias Hess set up a new VM as a server where this CI/CD instance can be set up. He prepared the VM manually by installing
the necessary tools, setting up a mail relay using EXIM4 and adding the SSH keys for accessing the bitbucket repositories.
.. note::
This manual work might be reflected in an Ansible playbook so that it can be automated.
Mailing from the command line works as follows (NOTE: apparently you need to be root for this):
.. code-block:: bash
# general email address
echo "Test" | mail <email-address>
# using the created aliases
echo "Test" | mail root
The VM acting as CI/CD server is accessible as follows:
* IP = ``10.115.101.98``
* Users (ask me or Tobias Hess for the passwords):
- root
- user
Overview
********
There are several repositories involved in this CI infrastructure; the most important ones are:
* `build-admin <https://bitbucket.gad.local/projects/INET-CI/repos/build-admin>`_
- contains configuration items in xml format
- keys and certificates for signing and service access
- scripts like the manage script for creating/starting/stopping an instance:
.. code-block:: bash
# example for the hilcos platform:
./manage.sh --image=ci.gad.local:5000/env-ci-hilcos:latest --branch=release/hilcos/10/12-exotec \
--name=hilcos_10_12 --platform=hilcos \
--config=/home/administrator/work/ci/instances/hilcos/release/hilcos/10/12-exotec/config/config.xml \
--revision=10.12.5000 --maintainer=TeamFlamingo create
* `build-docker <https://bitbucket.gad.local/projects/INET-CI/repos/build-docker>`_
- **NOTE:** This repository is over 12GB because some toolchain tarballs are included
- contains the files for the docker images
- scripts to build the docker images
- holds Jenkins scripts for seeding jobs
* `build-pipeline <https://bitbucket.gad.local/projects/INET-CI/repos/build-pipeline>`_
- contains the build pipeline in the HAC CI system
* `build-env <https://bitbucket.gad.local/projects/INET-CI/repos/build-env>`_
- contains class objects for the scripted pipelines
- sets up a Jenkins Library
Workflow Understood by mma
==========================
As far as I understood the dependencies in the CI of HAC, the workflow looks as follows:
#. Manually set up a physical or virtual server
- installing basic tools like docker as the most important one
- adding the needed users and their credentials for the CI environment
- preparing the server itself in a manner to be ready for hooking it into the HAC environment
#. Manual preparation of the CI instance
- preparing the server with the structure needed for hooking the instance into the HAC CI system, e.g.
+ credentials for the docker instance(s)
+ etc.
- preparing the CI instance properties
+ checking out the right branches (build-docker, possibly build-admin)
+ database and its environment
+ etc.
#. Unless already available, build the docker compound in the desired composition
#. Calling the manage script with create (repository: build-admin)
#. Bringing the instance up with the manage script
With this workflow and the observed dependencies I see two potential ways to step in for the NWL:
#. within *build-admin* in the configuration file
- create a branch for NWL
- adding another repository than build-pipeline
#. within *build-docker* in the jenkins seed job
- create a branch from *feature/ci/core_os* for NWL, e.g. *feature/ci/nwl*
- adapt the seed.groovy script with the needed multibranch pipelines
The next section describes the proposed and chosen way.
Proposed Hook to Step In
************************
Branching the *build-admin* project to bring declarative pipelines into the HAC CI infrastructure (which would then be
maintained by the developer team) seems **not** to be the best way, because this repository is the common base for all
CI projects. In addition, thinking of a future where potentially all CI projects might be integrated into one Belden CI
infrastructure, this way does not seem right either. Thus, I propose the second way:
#. Implement a hook in the Jenkins CI seed job residing in the repository **build-docker**
#) branching from *feature/ci/core_os* for the NWL, e.g. *feature/ci/nwl*
#) adapt the *seed.groovy* script with the needed multibranch pipelines
#. The seed job points to a multibranch pipeline in a NWL CI repository:
- `Repository <https://bitbucket.gad.local/projects/NM-NSP/repos/nwl-ci>`_
- Clone with Git: ``git clone ssh://git@bitbucket.gad.local:7999/nm-nsp/nwl-ci.git``
Implementation of Proposed Hook
*******************************
An adaptation in the Jenkins CI seed job points to a multibranch pipeline for the NWL instead of the usual build pipeline.
Additionally a draft pipeline for the NWL is committed.
There was an open question whether there is a docker image for the CoreOS instance in the HAC docker registry. The sync
meeting between HAC and NM regarding CoreOS clarified this question with the statement "no, currently there is not, as it
is still not final and the docker registry is being moved to another instance". This means we need to create a docker
image locally on the server and start with this one.
Well, let's create and start a NWL Jenkins instance...
On `build-docker <https://bitbucket.gad.local/projects/INET-CI/repos/build-docker>`_ we fork the branch
*feature/ci/core_os* and name the new branch **feature/ci/nwl**. All changes for the NWL CI are made on this branch. See
the following sections.
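The branching itself is plain git work; a sketch of the commands, assuming the *build-docker* clone is up to date:

.. code-block:: bash

    # create the NWL branch from the CoreOS branch and publish it
    cd ~/belden/build-docker
    git fetch origin
    git checkout -b feature/ci/nwl origin/feature/ci/core_os
    git push -u origin feature/ci/nwl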
Bringing Up the Instance
========================
Before we can start the instance we need to create a docker image so that it is available locally on the physical
server.
.. note::
The docker registry is currently being moved to another location and thus it is not recommended to push this first trial
to the registry.
Building the CI Docker Image
----------------------------
The Guardians recommend using a user **without** root privileges. Thus, we need to add our user to the docker group:
.. code-block:: bash
# Log into the server as root
ssh root@10.115.101.98
usermod -a -G docker user
# Log out
.. note::
Several questions could be clarified after a short meeting with Arne Kaufmann.
We build the docker image with user *user*. But as the jenkins-ci image layer uses the *nofmauth* repository it was
not possible to build it on the server itself. The reason seems to be that the SSH key of user@netmodule-03 has no
access to the *nofmauth* repo but is somehow added to bitbucket. Arne is trying to find the location where the
user's SSH key is added.
Arne recommended to build the docker image locally on the developer machine and then transfer it to the server.
Building locally did not work out of the box. There was an issue with the DNS when building the docker image:
.. code-block:: bash
=> ERROR [stage-0 6/10] RUN --mount=type=ssh mkdir -p /var/lib/ci/libs && cd /var/lib/ci/libs && mkdir -p -m 0700 ~/.ssh && ss 0.3s
------
> [stage-0 6/10] RUN --mount=type=ssh mkdir -p /var/lib/ci/libs &&
cd /var/lib/ci/libs && mkdir -p -m 0700 ~/.ssh &&
ssh-keyscan -p 7999 bitbucket.gad.local >> ~/.ssh/known_hosts &&
git clone ssh://git@bitbucket.gad.local:7999/inet-ci/nofmauth.git &&
rm ~/.ssh/known_hosts && nofmauth/lib && ./buildLib.sh && mkdir -p WEB-INF/lib && mv nofmauth.jar WEB-INF/lib &&
jar --update --file /usr/share/jenkins/jenkins.war WEB-INF/lib/nofmauth.jar && cd /var/lib/ci && rm -rf libs:
#0 0.240 getaddrinfo bitbucket.gad.local: Name or service not known
#0 0.244 getaddrinfo bitbucket.gad.local: Name or service not known
#0 0.247 getaddrinfo bitbucket.gad.local: Name or service not known
#0 0.251 getaddrinfo bitbucket.gad.local: Name or service not known
#0 0.255 getaddrinfo bitbucket.gad.local: Name or service not known
Thus, the build.sh got a new docker build argument according to commit
`fb26e9 <https://bitbucket.gad.local/projects/INET-CI/repos/build-docker/commits/fb26e99023ecad0d212711940f7c8c0105b28d8c>`_.
.. note::
The master or stable branch holds the newly changed Jenkins plugin installation solution. If there are any issues,
the branch needs to be rebased onto master/stable.
Let's build the NWL docker images locally on our machine, assuming that the repository *build-docker* is cloned and the
branch *feature/ci/nwl* is checked out:
.. code-block:: bash
# Enter the build-docker directory:
cd ~/belden/build-docker
# Enable the ssh-agent with your private SSH key which should only be used to clone the nofmauth repo and
# should not go into the docker image. Finally check if the key is loaded:
eval `ssh-agent`
ssh-add ~/.ssh/id_rsa
ssh-add -L
ssh-rsa ********************************
# Build the docker image:
DOCKER_BUILDKIT=1 ./build.sh nwl 0.1.0
**********************************************************
** Building basic-os image
**********************************************************
[+] Building 1.4s (18/18) FINISHED
...
=> => naming to docker.io/library/basic-os
**********************************************************
** Building nwl image
**********************************************************
[+] Building 0.0s (6/6) FINISHED
...
=> => naming to docker.io/library/nwl
**********************************************************
** Building jenkins image
**********************************************************
...
[+] Building 0.1s (19/19) FINISHED
...
=> => naming to docker.io/library/nwl-jenkins
**********************************************************
** Building jenkins-ci image
**********************************************************
[+] Building 1.3s (17/17) FINISHED
...
=> => naming to docker.io/library/nwl-jenkins-ci
**********************************************************
** Building env-ci image
**********************************************************
[+] Building 0.0s (6/6) FINISHED
...
=> => naming to docker.io/library/nwl-env-ci
**********************************************************
** Building klocwork image
**********************************************************
[+] Building 0.0s (18/18) FINISHED
...
=> => naming to docker.io/library/nwl-klocwork
Done!
# Overview of the created images:
docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
nwl-klocwork latest 17e59d7f36fd 2 hours ago 7.18GB
nwl-env-ci latest 0d053988863b 2 hours ago 1.99GB
nwl-jenkins-ci latest c4298f02759e 2 hours ago 1.99GB
nwl-jenkins latest d6d06c06c790 3 hours ago 1.72GB
nwl latest 924de047f0bf 3 hours ago 1.6GB
basic-os latest d20d08843c00 3 hours ago 739MB
For completeness, we transfer all of those images to the server; there are two potential ways:
#. Using pipes directly:
.. code-block:: bash
docker save <image>:<tag> | bzip2 | pv | ssh <user>@<host> docker load
#. Using multiple steps when there are issues with the VPN:
.. code-block:: bash
# on the local machine: save the image and copy it to the server
docker save -o <path for generated tar file> <image>:<tag>
rsync -avzP -e ssh <path for generated tar file> <user>@10.115.101.98:/home/user/
# on the server: load the transferred image
docker load -i /home/user/<file>
After transferring the docker images to the server we check if they are listed:
.. code-block:: bash
user@netmodule-03:~/build-docker$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
nwl-klocwork latest 17e59d7f36fd 2 hours ago 7.18GB
nwl-env-ci latest 0d053988863b 2 hours ago 1.99GB
nwl-jenkins-ci latest c4298f02759e 2 hours ago 1.99GB
nwl-jenkins latest d6d06c06c790 3 hours ago 1.72GB
nwl latest 924de047f0bf 3 hours ago 1.6GB
basic-os latest d20d08843c00 3 hours ago 739MB
.. _basicoswarning:
Switching to Debian as Basic OS
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. note::
After running a first pipeline in this instance (after the work documented in `Starting the CI Instance`_), I detected
the yocto warning ``WARNING: Host distribution "ubuntu-20.04" ...``. As the servers and developer
machines at NetModule are based on Debian, I created new images based on Debian 11. Additionally I tagged all the images
accordingly to have a proper overview:
.. code-block:: bash
user@netmodule-03:~/work/ci$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
nwl-env-ci 0.1.1 e89036df18a3 58 minutes ago 2.19GB
nwl-env-ci latest e89036df18a3 58 minutes ago 2.19GB
nwl-jenkins-ci 0.1.1 9fbb8eeaa717 2 hours ago 2.19GB
nwl-jenkins-ci latest 9fbb8eeaa717 2 hours ago 2.19GB
nwl-jenkins 0.1.1 1f6b2c0d644a 2 hours ago 1.94GB
nwl-jenkins latest 1f6b2c0d644a 2 hours ago 1.94GB
nwl 0.1.1 a30655b9de0e 2 hours ago 1.82GB
nwl latest a30655b9de0e 2 hours ago 1.82GB
basic-os 0.1.1 fc2ea6009615 2 hours ago 823MB
basic-os latest fc2ea6009615 2 hours ago 823MB
nwl-klocwork 0.1.0 17e59d7f36fd 24 hours ago 7.18GB
nwl-klocwork latest 17e59d7f36fd 24 hours ago 7.18GB
nwl-env-ci 0.1.0 0d053988863b 24 hours ago 1.99GB
nwl-jenkins-ci 0.1.0 c4298f02759e 24 hours ago 1.99GB
nwl-jenkins 0.1.0 d6d06c06c790 25 hours ago 1.72GB
nwl 0.1.0 924de047f0bf 25 hours ago 1.6GB
basic-os 0.1.0 d20d08843c00 25 hours ago 739MB
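For reference, equivalent version tags next to ``latest`` can also be applied manually with ``docker tag``, shown here
for two of the images (whether build.sh already handles this is not covered here):

.. code-block:: bash

    # give the rebuilt images an explicit version tag in addition to 'latest'
    docker tag nwl-env-ci:latest nwl-env-ci:0.1.1
    docker tag nwl-jenkins-ci:latest nwl-jenkins-ci:0.1.1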
Starting the CI Instance
------------------------
.. note::
The *build-admin* repository neither has a branch for CoreOS nor holds a directory with the keys for it. The
directory ``~/work/ci`` on the server was prepared by the Guardians. This preparation holds keys for the CoreOS
residing in the subdirectory keys/coreos.
To have at least a separation on the server, we copy the keys of the CoreOS and use them for the NWL.
.. code-block:: bash
# Change the directory to
cd ~/work/ci
# Separating NWL from CoreOS by copying its keys (those keys are already set up):
cp -R keys/coreos keys/nwl
So far we have everything set up to start the instance using ``manage.sh``. The arguments are explained as follows:
* image
- the docker image to take
* branch
- the branch of the NWL repository to build
- the branch of the repository where the jenkins file is located
+ This one here can be omitted as we use the hook over *seed.groovy*
* name
- the name of the instance
* config
- the configuration XML to use --> currently we do not have changes as we use the hook over *seed.groovy*
* platform
- keep in mind that this argument defines as well the directory for the keys
* revision
- revision of the container (build version) - Note: for HiOs this parameter is read for the release version
* maintainer
- the team which is in charge for this instance
With all this information we now give it a try and launch the instance:
.. code-block:: bash
# create the instance:
./manage.sh --image=nwl-env-ci:latest --branch=main \
--name=nwl_0_1 --platform=nwl \
--config=/home/user/work/ci/config/config.xml \
--revision=0.1.0 --maintainer=TeamCHBE create
Creating new instance...
Done!
# check the entry:
./manage.sh -p
+---------+----------------------------+-------+---------+--------+----------+------------+----------+-------------------+--------------+---------+
| name | host | port | status | branch | revision | maintainer | platform | image | container | display |
+---------+----------------------------+-------+---------+--------+----------+------------+----------+-------------------+--------------+---------+
| nwl_0_1 | netmodule-03.tcn.gad.local | 32780 | running | main | 0.1.0 | TeamCHBE | nwl | nwl-env-ci:latest | 60726aec0ebc | NULL |
+---------+----------------------------+-------+---------+--------+----------+------------+----------+-------------------+--------------+---------+
.. note::
Currently the LDAP password is missing in the Jenkins configuration XML (jenkins.xml), thus Jenkins does not start
properly.
To continue testing the instance start-up, I disabled the LDAP configuration.
Let's enter the newly created instance in the `browser <https://10.115.101.98:32780/>`_.
|coreOsCiChain|
As mentioned in :ref:`basicoswarning`, all the images were rebuilt based on Debian 11. To keep the versioning sane
and to clean up the instances, the previous instance was destroyed with ``./manage.sh --name=nwl_0_1 destroy``. The newly
created images are tagged with version *0.1.1*.
Let's create the new instance and bring it up:
.. code-block:: bash
# create the instance:
./manage.sh --image=nwl-env-ci:0.1.1 --branch=main \
--name=nwl_0_1_1 --platform=nwl \
--config=/home/user/work/ci/config/config.xml \
--revision=0.1.1 --maintainer=TeamCHBE create
Creating new instance...
Done!
# check the entry:
./manage.sh -p
+-----------+----------------------------+-------+---------+--------+----------+------------+----------+------------------+--------------+---------+
| name | host | port | status | branch | revision | maintainer | platform | image | container | display |
+-----------+----------------------------+-------+---------+--------+----------+------------+----------+------------------+--------------+---------+
| nwl_0_1_1 | netmodule-03.tcn.gad.local | 32780 | running | main | 0.1.1 | TeamCHBE | nwl | nwl-env-ci:0.1.1 | 0eb450fc827a | NULL |
+-----------+----------------------------+-------+---------+--------+----------+------------+----------+------------------+--------------+---------+
mma Tasks
**********
These are the tasks:
* [x] build on the server locally the docker compound
* [x] start the instance
* [x] test the build pipelines
* [ ] implement LDAP connection?
.. |coreOsCiChain| image:: ./media/nwl-ci-jenkins-dashboard.png
:width: 700px