Merge: Documentation, NWL build pipeline, Ansible Research Results

This commit holds an intermediate state with the following key points:
*  An NWL build pipeline integrated into the infrastructure of HAC
   - NOTE: the pipelines can be triggered anonymously, but that is all;
           we do not have the permissions to do anything else on this
           Jenkins Controller
*  The results of the research about ansible playbooks and AWX
*  The documentation of all the work done, such as:
   - Thoughts about the next-level CI/CD at NetModule
   - CI use-cases and Recommendations
     + workflow of the different teams
   - CI pipelines within Bitbucket
     + similar to GitLab CI as used at NetModule
   - Research about Ansible Playbook
   - NWL CI setup on the HAC infrastructure
     + first setup

Squashed commit of the following:

commit 974b77234d
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Tue Jun 13 14:12:19 2023 +0200

    doc/researchAnsible: added sections build docker image and conclusion

    With this commit we can set up a machine up to the point of building a
    docker image on it. The next step would be to start the docker
    container or the docker compound.

    It is also possible to run the playbooks without having an AWX
    instance, but AWX gives you a better overview of what is available
    and what is happening. E.g. you can set up schedules for recurring
    jobs. For completeness, an example of launching a playbook from the
    command line is added in the conclusion section.

    Additionally, I added a section about combining several playbooks into
    one common playbook, which might be useful to set up a new machine
    with a single button press.

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit 72a8b96ea0
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Tue Jun 13 13:34:01 2023 +0200

    playbook/build-docker: do not gather facts and register build of docker

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit b25d6f32d3
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Tue Jun 13 13:20:51 2023 +0200

    playbook/build-docker: using shell as the docker module is not available
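
    A minimal sketch of what such a shell-based build task might look like
    (image name and checkout path are hypothetical):

        - name: Build the docker image via the shell module
          ansible.builtin.shell: docker build -t nwl-ci:latest .
          args:
            # hypothetical path where the repository was cloned to
            chdir: /home/user/nwl-ci
          register: docker_build_result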

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit 29e807db1b
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Tue Jun 13 13:00:44 2023 +0200

    playbook,docker: added playbook to build a docker image

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit 35e28eb1a3
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Tue Jun 13 12:37:48 2023 +0200

    playbooks/clone-repo: added condition when repo is already cloned
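
    A minimal sketch of such a condition (paths and repository URL are
    hypothetical):

        - name: Check whether the repository is already cloned
          ansible.builtin.stat:
            path: /home/user/nwl-ci/.git
          register: repo_dir

        - name: Clone the repository only if it is not present yet
          ansible.builtin.shell: >
            git clone git@bitbucket.example.com:proj/nwl-ci.git /home/user/nwl-ci
          when: not repo_dir.stat.exists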

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit 1960187318
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Tue Jun 13 11:52:58 2023 +0200

    doc/researchAnsible: added section to clone a git repository

    AWX uses separate SSH keys to access the host. The host itself uses
    its own SSH keys to access Bitbucket. The added section shows a way
    to handle such a situation.
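
    A minimal sketch of how the host's own Bitbucket key could be selected
    for the clone (key path and repository URL are hypothetical):

        - name: Clone via the host's own Bitbucket key (not the AWX key)
          ansible.builtin.shell: >
            git clone git@bitbucket.example.com:proj/nwl-ci.git /home/user/nwl-ci
          environment:
            GIT_SSH_COMMAND: "ssh -i /home/user/.ssh/id_ed25519_bitbucket -o IdentitiesOnly=yes"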

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit 262c560f38
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Tue Jun 13 11:51:22 2023 +0200

    doc/researchAnsible: added snippet reconfiguring docker network

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit 8b56069a38
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Tue Jun 13 11:30:00 2023 +0200

    playbooks/clone-repo: refactored the playbook

    removed the log output to improve security

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit 15732a2cf7
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Tue Jun 13 11:14:09 2023 +0200

    playbooks/clone-repo: fixed repo url (forgotten during refactoring)

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit 14b51efb5d
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Tue Jun 13 10:58:33 2023 +0200

    playbooks/clone-repo: 1 task for cloning, checking out and updating

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit 0b66f54f97
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Tue Jun 13 10:50:47 2023 +0200

    playbooks/clone-repo: changed creating auto ssh add file and its path

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit fcceaca96e
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Tue Jun 13 10:43:05 2023 +0200

    playbooks/clone-repo: make auto ssh add file executable

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit 2438809884
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Tue Jun 13 10:33:20 2023 +0200

    playbooks/clone-repo: using shell commands to clone repo

    By using shell commands we have more flexibility to clone the repo
    with specific SSH keys.
    Additionally, we provide the passphrase for the SSH key using the
    AWX vault.

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit 4d9f64f3dc
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Tue Jun 13 08:06:49 2023 +0200

    playbooks/clone-repo: replaces ip with name of bitbucket

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit 9920d1d9b4
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Mon Jun 12 17:52:19 2023 +0200

    inventory: switching the host to a server in the HAC network

    The server created for the conceptual work on the CoreOS CI is
    currently available but not used much. Thus, I switched the host
    in the inventory to this server. With it, it should be possible to
    clone a repository and to build and launch a docker image.

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit 3216c8d0f6
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Mon Jun 12 12:00:23 2023 +0200

    inventory,playbook/conf-docker-net: changed the VM ip and used sudo

    The IP of my VM changed because my smartphone acts as gateway
    and DHCP server. To make the work easier, I changed the IP in
    the inventory accordingly.
    For the docker network reconfiguration I used sudo so that we can
    use the privileged mode.

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit 3027f8e284
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Tue Jun 6 16:27:21 2023 +0200

    playbooks: added configure docker network playbook first version

    By default docker uses the subnet 172.17.0.0/16, which conflicts
    with the test network at NetModule. Hence, we reconfigure it to
    192.168.5.1/24.
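
    A minimal sketch of such a reconfiguration task; writing
    /etc/docker/daemon.json is one common way to do this, the actual
    playbook may use a different mechanism:

        - name: Set the docker default bridge to 192.168.5.1/24
          become: true
          ansible.builtin.copy:
            dest: /etc/docker/daemon.json
            content: '{ "bip": "192.168.5.1/24" }'
          # a corresponding "restart docker" handler is assumed to exist
          notify: restart docker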

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit ef3fd030ba
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Tue Jun 6 14:29:16 2023 +0200

    doc/ciWithinBitbucket: added link to keep in mind

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit 4a7633f845
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Tue Jun 6 14:28:39 2023 +0200

    doc/researchAnsible: added section for creating ssh keypairs

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit f691f5206c
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Tue Jun 6 13:48:22 2023 +0200

    playbooks;inventory: added variable to production.yaml and added create-ssh-key playbook

    The production inventory got a new variable called host_name, which we use in the
    newly added playbook that creates an SSH key.
    The playbook uses the no_log option to keep the execution output more secure.
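
    A minimal sketch of such a key-creation task (key type, path and naming
    scheme are hypothetical):

        - name: Create an SSH key pair named after host_name
          ansible.builtin.shell: >
            [ -f ~/.ssh/id_ed25519_{{ host_name }} ] ||
            ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519_{{ host_name }}
          # keep key material and parameters out of the job output
          no_log: true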

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit 70d033bde7
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Mon Jun 5 13:23:50 2023 +0200

    doc: added the installed Jenkins plugins on the NWL instance

    additionally, I added the ones which are installed on the
    NetModule Jenkins instance.

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit c304cf9954
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Mon Jun 5 13:18:49 2023 +0200

    doc: added new chapter with title CI within Bitbucket

    The topic of using the Git server (Bitbucket or GitLab) for
    user-space applications has driven me in that direction.
    Unfortunately, there is a lack of information and permissions to
    move on with this topic. Thus, I postponed it until I get some
    news :-)

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit a9a720a364
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Tue May 30 16:53:20 2023 +0200

    doc: added CI use-cases

    visualized the different use-cases for the CI environment
    including the TDD workflow.

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit 565493b9de
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Mon May 22 16:09:42 2023 +0200

    playbook/clone-repo: replaced bitbucket name with ip

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit 8837554aba
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Mon May 22 11:06:24 2023 +0200

    playbooks/clone-repo: removed prompting but added std output

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit bce9b6c45f
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Mon May 22 10:59:28 2023 +0200

    playbooks/clone-repo: fixed indentation

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit 470a77f787
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Mon May 22 10:53:34 2023 +0200

    playbooks/clone-repo: added missing syntax element for prompting

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit 850396ebc3
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Mon May 22 10:38:11 2023 +0200

    playbooks: added playbook to clone a repository

    The repository URL shall be prompted for, to keep the playbook more flexible.

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit ae29034593
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Tue May 16 14:22:12 2023 +0200

    doc/researchAnsible: added part about basic pkg installation with sudo

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit c37bd6f380
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Mon May 15 18:18:19 2023 +0200

    playbooks,inventory: added playbook installing basic tools on a VM

    As soon as a virtual machine is ready, we install docker as it gives
    the most flexibility for our CI and the docker images are versioned.
    Hence, we install docker as the main installation package (see the
    sketch after the user list below).
    We assume that the VM holds the following users:
    - root
    - superuser (added to sudoers)
    - user
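
    A minimal sketch of such an installation task, assuming a Debian-based
    VM; the exact package selection is illustrative:

        - name: Install basic CI tooling
          hosts: all
          become: true
          tasks:
            - name: Install docker and a few helper packages
              ansible.builtin.apt:
                name:
                  - docker.io
                  - docker-compose
                  - git
                state: present
                update_cache: true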

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit 9e6a6d7c8d
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Mon May 15 09:52:33 2023 +0200

    doc/researchAnsible: added job template for getting started playbook

    Documented the job template to get the information residing in
    the playbook getting_started/os-rls.yml

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit c22a7a2f38
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Tue May 9 17:02:50 2023 +0200

    doc/researchAnsible: added sections for project and inventories

    This commit holds the information about how to synchronize a git
    project containing ansible playbooks and hosts with AWX.

    NOTE: getting the hosts out of the inventory file is not that obvious,
          hence the section is worth reading.

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit 6a824507ed
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Tue May 9 15:19:43 2023 +0200

    ansible.cfg,os-rls.yml: added configuration for files and fixed var

    fixed variable in playbook os-rls.yml and added config file for
    the ansible file paths.

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit ea59c88fe0
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Tue May 9 13:30:31 2023 +0200

    inventory/production: add data type ending

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit 616c35f9b1
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Tue May 9 13:26:40 2023 +0200

    inventory/production: reformatted the content

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit 777d708883
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Tue May 9 13:22:00 2023 +0200

    inventory: removed subdirectory and renamed ci to production

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit cc1c338e09
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Tue May 9 12:25:16 2023 +0200

    researchAnsible: updated with setup of latest AWX version

    The latest version of AWX requires a Kubernetes cluster. This
    commit holds the update of the page and shows both installation
    methods (directly with docker and the latest version with a
    minikube).

    Additionally, I added a new section about setting up a virtual machine
    for test purposes, along with another section about accessing such
    machines over SSH.

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit 118b75408d
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Tue May 9 12:02:29 2023 +0200

    inventory,playbooks: arranged files in subdirectories

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit 8b9d480e26
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Mon May 8 17:14:21 2023 +0200

    inventory: added IP address of newly created VM

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit add58c392f
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Tue May 2 13:25:52 2023 +0200

    inventory: renamed inventory

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit 3dfabb5b3d
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Tue May 2 13:14:31 2023 +0200

    playbooks,collections: renamed to .yml and added collection requirements

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit 740a647460
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Tue May 2 12:33:30 2023 +0200

    doc,ansible: documented setup of AWX, added structure for a first playbook

    Documented the setup of an AWX instance using docker-compose.
    Added a first playbook including inventory

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit d4f00bf431
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Mon May 1 12:04:54 2023 +0200

    doc/nwl-ci: added parts about CI adaptions

    The nightly timer triggers the job without a target parameter. Thus,
    the job checks whether it is a nightly build and then takes a default
    target, which is the clearfog one.

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit a41f1b1148
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Mon May 1 09:11:19 2023 +0200

    jobs/build: use default target when job is triggered by timer

    In a nightly build the job parameter TARGET stays on "selected...".
    Thus, a check verifies whether the job was triggered by a timer and
    then takes the default target, which is the first one in the file
    'nwlTargets'.
    In addition, the build description gets the postfix 'nightly'.

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit 27c7777f79
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Tue Apr 25 16:12:31 2023 +0200

    jobs/common: removed settings for sstate- and equiv server

    Currently the environment is unclear, so these settings are marked as ToDo.

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit dd0c8c871c
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Tue Apr 25 15:15:04 2023 +0200

    doc: updated changes due to permission restriction

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit 3e52ea97ed
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Tue Apr 25 14:35:11 2023 +0200

    jobs/common: set nwl image for CI

    With the current merge request there are now two images for the NWL;
    one of them is specifically for the CI.

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit 0e39db0e35
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Tue Apr 18 16:04:00 2023 +0200

    README,doc: added next-level-CI and NWL-CI documentation

    Adapted the README accordingly with instructions on how to build the documentation.

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit 34eae1d78d
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Tue Apr 18 10:09:58 2023 +0200

    README: adapted readme with a bit more details

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit c3056f4cb5
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Tue Apr 18 09:14:23 2023 +0200

    jobs/common: removed residing parallel configuration for auto.conf

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit ca4b22b136
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Tue Apr 18 09:13:47 2023 +0200

    jobs/build: added missing removing of pre clone directory

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit eb19873605
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Tue Apr 18 08:47:12 2023 +0200

    jobs: adding git credentials and finalized pre-node step

    We need to clone the repository to load the list of the build
    targets in the pre-node step.

    Added a getter for the git credential ID to clone repositories.

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit 4251812565
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Tue Apr 18 08:18:25 2023 +0200

    jobs/Build: fixed apostrophes when setting the job description

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

commit 5b22271083
Author: Marc Mattmüller <marc.mattmueller@netmodule.com>
Date:   Tue Apr 11 16:33:29 2023 +0200

    jobs: added first draft of NWL build pipeline

    This commit additionally holds a common Jenkins file and a
    file containing the targets (machines) to build. The latter is
    used for the drop-down parameter in the build job.

    Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>

Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>
This commit is contained in:
Marc Mattmüller 2023-06-13 14:27:19 +02:00
parent e15a4abbf3
commit 2bfbbdb0e0
34 changed files with 3609 additions and 7 deletions

.gitignore (vendored, new file, +1 line)

@ -0,0 +1 @@
doc/out


@ -1,12 +1,61 @@
Removed lines:

# Netmodule Wireless Linux CI/CD Repository
This repository contains all necessary parts for the CI/CD environment.
Further information will follow after starting the work. The current
idea is to...
* create an jenkins (docker) instance on the CI infrastructure of HAC
* with it a seed job will then create the needed jobs for the NWL CI
So far, let's move on :-D

Added lines:

# NetModule Wireless Linux CI/CD Repository
This repository contains all necessary jobs for the CI/CD environment of the
NetModule Wireless Linux (NWL).

## Content
This repository holds the documentation for the CI environment and the jobs for
the NWL as declarative pipelines (multibranch):
* doc
  - the documentation of the work for the NWL CI environment
* jobs
  - Jenkinsfile_Build
    + a pipeline building a NWL yocto target
  - Jenkinsfile_Common
    + a collection of commonly used functions, so that duplicated code can be
      avoided
* inventory
  - Ansible inventory with all managed hosts/devices
* playbooks
  - Ansible playbooks

## Marginal Notes
This repository does NOT cover the setup of the Jenkins instance.
## Building the Documentation
The documentation is based on Sphinx and is written in reStructuredText format. To
build the documentation you need to install Sphinx first:
```bash
sudo apt install python3-sphinx
sudo pip3 install cloud-sptheme
```
Within the directory ``doc`` you can use make as follows:
```bash
# entering doc:
cd doc
# clean and build the documentation:
make clean
make html
# open the generated documentation in the browser:
xdg-open out/html/index.html
cd ..
```

ansible.cfg (new file, +4 lines)

@ -0,0 +1,4 @@
[defaults]
inventory = inventory/production.yaml
collections_paths = collections
roles_path = roles


@ -0,0 +1,4 @@
---
collections:
- ansible.posix
- community.general

doc/Makefile (new file, +225 lines)

@ -0,0 +1,225 @@
# Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = out
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) src
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) src
.PHONY: help
help:
@echo "Please use \`make <target>' where <target> is one of"
@echo " html to make standalone HTML files"
@echo " dirhtml to make HTML files named index.html in directories"
@echo " singlehtml to make a single large HTML file"
@echo " pickle to make pickle files"
@echo " json to make JSON files"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " qthelp to make HTML files and a qthelp project"
@echo " applehelp to make an Apple Help Book"
@echo " devhelp to make HTML files and a Devhelp project"
@echo " epub to make an epub"
@echo " epub3 to make an epub3"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " latexpdf to make LaTeX files and run them through pdflatex"
@echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
@echo " text to make text files"
@echo " man to make manual pages"
@echo " texinfo to make Texinfo files"
@echo " info to make Texinfo files and run them through makeinfo"
@echo " gettext to make PO message catalogs"
@echo " changes to make an overview of all changed/added/deprecated items"
@echo " xml to make Docutils-native XML files"
@echo " pseudoxml to make pseudoxml-XML files for display purposes"
@echo " linkcheck to check all external links for integrity"
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
@echo " coverage to run coverage check of the documentation (if enabled)"
@echo " dummy to check syntax errors of document sources"
.PHONY: clean
clean:
rm -rf $(BUILDDIR)/*
.PHONY: html
html:
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
.PHONY: dirhtml
dirhtml:
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
.PHONY: singlehtml
singlehtml:
$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
@echo
@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
.PHONY: pickle
pickle:
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
@echo
@echo "Build finished; now you can process the pickle files."
.PHONY: json
json:
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
@echo
@echo "Build finished; now you can process the JSON files."
.PHONY: htmlhelp
htmlhelp:
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in $(BUILDDIR)/htmlhelp."
.PHONY: qthelp
qthelp:
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
@echo
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/NetModuleBeldenCoreOS.qhcp"
@echo "To view the help file:"
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/NetModuleBeldenCoreOS.qhc"
.PHONY: applehelp
applehelp:
$(SPHINXBUILD) -b applehelp $(ALLSPHINXOPTS) $(BUILDDIR)/applehelp
@echo
@echo "Build finished. The help book is in $(BUILDDIR)/applehelp."
@echo "N.B. You won't be able to view it unless you put it in" \
"~/Library/Documentation/Help or install it in your application" \
"bundle."
.PHONY: devhelp
devhelp:
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
@echo
@echo "Build finished."
@echo "To view the help file:"
@echo "# mkdir -p $$HOME/.local/share/devhelp/NetModuleBeldenCoreOS"
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/NetModuleBeldenCoreOS"
@echo "# devhelp"
.PHONY: epub
epub:
$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
@echo
@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
.PHONY: epub3
epub3:
$(SPHINXBUILD) -b epub3 $(ALLSPHINXOPTS) $(BUILDDIR)/epub3
@echo
@echo "Build finished. The epub3 file is in $(BUILDDIR)/epub3."
.PHONY: latex
latex:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
@echo "Run \`make' in that directory to run these through (pdf)latex" \
"(use \`make latexpdf' here to do that automatically)."
.PHONY: latexpdf
latexpdf:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through pdflatex..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
.PHONY: latexpdfja
latexpdfja:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through platex and dvipdfmx..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
.PHONY: text
text:
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
@echo
@echo "Build finished. The text files are in $(BUILDDIR)/text."
.PHONY: man
man:
$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
@echo
@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
.PHONY: texinfo
texinfo:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo
@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
@echo "Run \`make' in that directory to run these through makeinfo" \
"(use \`make info' here to do that automatically)."
.PHONY: info
info:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo "Running Texinfo files through makeinfo..."
make -C $(BUILDDIR)/texinfo info
@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
.PHONY: gettext
gettext:
$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
@echo
@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
.PHONY: changes
changes:
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
@echo
@echo "The overview file is in $(BUILDDIR)/changes."
.PHONY: linkcheck
linkcheck:
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in $(BUILDDIR)/linkcheck/output.txt."
.PHONY: doctest
doctest:
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."
.PHONY: coverage
coverage:
$(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage
@echo "Testing of coverage in the sources finished, look at the " \
"results in $(BUILDDIR)/coverage/python.txt."
.PHONY: xml
xml:
$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
@echo
@echo "Build finished. The XML files are in $(BUILDDIR)/xml."
.PHONY: pseudoxml
pseudoxml:
$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
@echo
@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."
.PHONY: dummy
dummy:
$(SPHINXBUILD) -b dummy $(ALLSPHINXOPTS) $(BUILDDIR)/dummy
@echo
@echo "Build finished. Dummy builder generates no files."


@ -0,0 +1,15 @@
div.sphinxsidebar {
width: 3.5in;
}
div.bodywrapper {
margin: 0 0 0 3.5in;
}
div.document {
max-width: 18in;
}
div.related {
max-width: 18in;
}

doc/src/conf.py (new file, +29 lines)

@ -0,0 +1,29 @@
# Configuration file for the Sphinx documentation builder.
#
# For the full list of built-in configuration values, see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Project information -----------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information
project = 'NetModule Wireless Linux CI/CD'
copyright = '2023, Marc Mattmüller'
author = 'Marc Mattmüller'
release = '0.1'
# -- General configuration ---------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration
extensions = ['sphinx.ext.autodoc','sphinx.ext.viewcode','sphinx.ext.todo']
templates_path = ['_templates']
exclude_patterns = []
# -- Options for HTML output -------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output
html_theme = 'cloud'
html_static_path = ['_static']
html_css_files = ["theme_overwrites.css"]

doc/src/index.rst (new file, +38 lines)

@ -0,0 +1,38 @@
.. NetModule Wireless Linux CI/CD documentation master file, created by
sphinx-quickstart on Tue Apr 18 13:04:26 2023.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to NetModule Wireless Linux CI/CD's documentation!
==========================================================
This documentation provides an overview about the work of the CI/CD for the
NetModule Wireless Linux.
Content
*******
.. toctree::
   :maxdepth: 2
   :caption: Next-Level CI
   :glob:

   nextlevel-ci/*

.. toctree::
   :maxdepth: 2
   :caption: NWL CI Setup
   :glob:

   setup/*
Indices and tables
******************
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`


@ -0,0 +1,102 @@
.. _ciUseCases:
********************************
CI Use-Cases and Recommendations
********************************
Introduction
############
There was a request to visualize the workflows of different use-cases depending on the work items at Belden/HAC/NM. This
request was also driven by my recommendation to develop test-driven. But first of all, let us look at some areas
of development:
* the core of the operating system (kernel space)
* the user space level with well defined API
Even if I still think that test driven development in the kernel space is beneficial, I focus on two categories:
* development of the operating system Linux using Yocto
- most parts of the CoreOS
- some parts of BIL and/or NWL
* development of applications running in the user space of Linux (all applications having a well defined Hardware
abstraction layer like the Linux API or POSIX)
- I guess the bigger part of the features of the Belden/HAC/NM products
**As a matter of course it depends on the build environment you want to use.** Yocto is a powerful build framework and
provides the functionality to build the complete software image package with the operating system and all its user space
applications, and/or the operating system only. But it also provides a software development kit including a
toolchain and rootfs.
This means that as a user space application developer **you do not have to launch Yocto** to build your application. Why am I
pointing that out? In software development you want to speed up your development, and preferably you do not want to
switch your IDE to build the application. Thus you may want a toolchain and rootfs installed on your local development
environment and develop your application without waiting too long for feedback on your changes, right?
That's why the CI environment differs for these different use-cases.
CI using Yocto
##############
The selected build framework is Yocto, which is well known at NetModule due to the OEM Linux. It was decided to use
Yocto for CoreOS, BIL (Belden Industrial Linux) and NWL (NetModule Wireless Linux). Hence, a proposed CI chain looks like
this:
|ciYoctoBased|
The regression tests need to be elaborate enough not to break the defined features provided by CoreOS, BIL or NWL. This is
not only a test to see whether the operating system boots into the newly created version or not. There are a lot of
services that need to start successfully to fulfill the defined features for CoreOS, BIL and/or NWL. To understand this
better, let's look at the boot scenario where the operating system itself boots but a specific service like the
network manager crashes at start-up. This would be unwanted behaviour. If we only covered the boot-up and not the
essential functions, a pull/merge request verification would make no sense here.
The extended test block was introduced to trigger more elaborate tests on request and to not prolong a pull/merge
request. These extended tests I would trigger like the asynchronous ones known at HAC.
CI Creating a Release Package or Complete Software Package
##########################################################
.. note::
There are different possibilities/mechanisms to build a release- or complete software package.
The following recommendation uses Yocto to build a release- and/or a complete software package for a product.
|ciSwPkg|
This chain is pretty similar to the one in the previous section except that there are additional blocks like tagging and
deploying to ROW (rest of world).
CI User-Space Application
#########################
As mentioned in the `Introduction`_ a CI using Yocto for a user-space application may take too long to give feedback.
Additionally, with unit tests you may introduce very dedicated tests which are hardly achievable in real environments. In
the year 2023 it is unimaginable not to develop test-driven. Hence I added the workflow again here:
|userSpaceAppDev|
After having the feature fully implemented, the developer commits the code to the feature branch and creates a
pull/merge request. This triggers the CI environment based on the unit tests of the application as shown below. If you
want to enhance your CI for this application you may deploy the binary to a target and launch tests on the target.
|ciUserSpaceApp|
GitLab as well as Bitbucket are able to launch CI pipelines for a repository. I recommend using such pipelines for the
application CI.
When it comes to a software package for a product in which multiple applications like the example above are included, the CI
chain as described in `CI Creating a Release Package or Complete Software Package`_ is recommended.
.. |ciYoctoBased| image:: ./media/ci-yocto-based.png
:width: 800px
.. |ciSwPkg| image:: ./media/ci-completePkg-Releasing.png
:width: 800px
.. |ciUserSpaceApp| image:: ./media/ci-userspace-app.png
:width: 800px
.. |userSpaceAppDev| image:: ./media/userspace-app-development.png
:width: 800px


@ -0,0 +1,47 @@
.. _ciInBitbucket:
******************************
CI Pipelines within Bitbucket
******************************
Introduction
############
As mentioned in :ref:`ciUseCases` it makes more sense to develop user-space applications test-driven and to use these unit
tests as an indicator for merge/pull requests. NetModule already used the CI capabilities within GitLab for a library and
small user-space applications, and Bitbucket provides a similar feature. Thus, this chapter shows the work of using
the Bitbucket CI feature for a specific project.
Getting Started
###############
As far as I understood, we would need to link Bitbucket and Jenkins. For this, Jenkins needs an additional plugin:
* atlassian-bitbucket-server-integration
- `web docu <https://plugins.jenkins.io/atlassian-bitbucket-server-integration/>`_
.. note::
From the Jenkins plugin documentation I have found this:
Bitbucket Server 6.0 to 7.3 are also supported, but they're not recommended. This is because some plugin features
are not available when using these versions. Instead, we recommend using Bitbucket Server 7.4+. With 7.0+ you can
make use of pull request triggers for jobs. With 7.4+ you can set up an Application Link to have access to all
plugin features.
We would need to configure the plugin on Jenkins:
* Adding the Bitbucket instance details including a HTTP access token
At the moment I am not sure how to proceed. I have no admin permissions on the Jenkins instance I am using
on the HAC infrastructure, and on Bitbucket I cannot see the pipeline settings, possibly also due to a lack of permissions.
Thus, I postpone this topic until I get some news :-)
Links maybe to keep in mind
***************************
Bitbucket CI/CD pipelines:
* `Integrated CI/CD <https://confluence.atlassian.com/bitbucketserver0721/integrated-ci-cd-1115666724.html>`_

(8 binary image files added; content not shown in the diff)


@ -0,0 +1,611 @@
.. _nextLevelCiCd:
**************************
NetModule Next-Level CI/CD
**************************
Foreword
########
The past half year of collaborating with HAC, the NRSW team and the OEM team showed that some parts of the CI might be outdated for
the needs coming towards us. E.g. the single Jenkins Controller might be replaced with multiple Jenkins Controllers
using built-in nodes for each area like NRSW, OEM, etc.
Using a dockerized environment supports this approach. The first steps done for the CoreOS showed that there should be
automation to set up a new instance. Therefore, a first brainstorming meeting was held on the 7th of March 2023. Its
output and continuation are reflected in the sections below.
Rough Overview of the discussion
################################
Development Areas
*****************
The way development takes place differs and can be split into these two groups:
* Yocto
- to build an image
- to build a SDK (software development kit)
- to build a package out of the image
- one repository including several submodules distributed over several SCMs
- recommended to build:
+ the base operating system, like CoreOS
+ the final product software image
- blackbox testing of the output
* Application
- to build an application running on the base operating system (in user space)
- needs:
+ the SDK
+ a unit test framework
+ rootfs set up (e.g. for library development)
- one repository or including submodules but on same SCM
- white- and blackbox testing possible using the unit test framework
There are several ways for bringing an application into a release image:
* yocto recipe populating a (pre-)built binary (or multiple binaries) into the rootfs
- advantages:
+ less effort maintaining the recipe
+ one binary can easily be picked and replaced on a target hardware (e.g. development/debugging)
+ feedback on the CI is much faster as the unit tests can be used as pass/fail indicator
+ a CI job builds the binary (leads to better overview where a failure is coming from)
+ merge requests are made on application level using its unit tests (only mergeable when passing)
* with this approach a lot of issues can be eliminated as only passing unit tests are merged and brought
into the continuous integration path
+ with this approach there is most likely a workflow for test driven development (TDD) available, this makes
debugging much faster and the design is more robust and better testable
* a cool benefit with TDD, you can easily mock some layers and build an exhibition version to demonstrate
new features without installing a full blown infrastructure
.. note::
**Be aware of a conscious polarization**
We count the year 2022/2023 and therefore it is a must to develop applications for an embedded Linux
test-driven. If you try to find arguments against TDD, ask yourself what you would expect
when you are buying a high-end product like the ones we are selling.
- disadvantages:
+ it is highly recommended that the SDK version matches with the target release
+ the yocto recipe must pick the binary from somewhere on the infrastructure
+ you need to take care about permission settings when picking and replacing a binary
+ there are much more CI jobs to maintain
- additional information:
+ when using NetModule's internal GitLab instance, the GitLab CI can be used for the unit tests and mergings.
With it no further Jenkins CI job is necessary.
* yocto recipe (as currently used) building the application and puts it into the rootfs
- advantages:
+ the application is built with the environment set up by yocto (e.g. versions); no need of a SDK
+ the meta-layer, where the application recipe is in, is much more flexible to share (especially when outside
of the company/infrastructure)
- disadvantages:
+ more effort maintaining the recipe (source code hashes, etc.)
+ additional step necessary to indicate the unit test results of the application
+ yocto must be launched to build an application (CI perspective)
+ longer latency at merge request to get the flag mergeable or not
.. important::
Do not forget the CI environment when thinking of reproducible builds.
When you build a release on your CI, then the CI has as well a specific state such as Jenkins version, plug-in
versions, certain set of security patches, etc. Over time those versions and the entire environment are changing.
Thus, the CI environment needs to be tagged as well, just as you tag your release sources.
Open questions
**************
Released Base Operating System, how further?
============================================
Let's assume the CoreOS acts as base operating system and is now in a version X.Y.Z released. How do we go further?
* Do we use eSDK to develop the application for the base operating system and to build the product images?
* Do we continue without Yocto, e.g. by just using the SDK?
These questions are important as the sstate-cache, the downloads etc. can be shared for further usage.
What about Visualization?
=========================
For the OEM Linux we used to use Grafana for the visualization. This is another instance in a CI/CD environment. There
are as well some questions about what is going on with the logs during the tests of a product. Shall those be
visualized, e.g. like using something like ELK-Stack? Good to know: GitLab provides visualization support.
Which SCM Provider shall be used?
=================================
Currently it is unclear if Belden is following the Atlassian approach to use cloud based services, i.e. keeping
bitbucket but within the cloud. Right at the moment we have the following SCM providers:
* gitea
* gitlab internal (NetModule)
* gitlab public
* bitbucket
* SVN (NetModule NRSW)
Opinions differ a lot regarding the SCM. Nevertheless, the OEM Linux team in CHBE also decided to move to Bitbucket.
What if Atlassian stops the support for non-cloud-based instances? This is an open question which also influences
the CI infrastructure. Why? Well, actually I have not seen whether the current Bitbucket version provides a
built-in CI service. GitLab does, and NetModule has an instance which is maintained. This built-in CI might be used in
application development, where unit tests can be run on the built-in CI for merge requests. This ensures that on the
continuous integration path the unit tests are at least passing. You see, the SCM provider has an influence on
the CI environment.
Releasing a Software
********************
When it comes to a software release, you must consider tagging your CI environment as well. If you need to
reproduce a released version, you must make sure that you use the same CI environment as when it was released. Until now this
was not the case. Just think about all the Jenkins updates, the plugin updates, server updates, etc. In the past we have
faced such an issue where a plugin was updated/changed: a former pipeline could not be built anymore because the used
command had been removed.
So when it comes to a next-level CI environment using docker, we can tag the environment as well and simply restart it in
the tagged version to rebuild an image.
Setup Using Docker
******************
Each Jenkins controller instance might be set up as docker image stack like shown as follows:
.. code-block::
------------------------
| Jenkins Controller |<-----------------------------------------------------------
------------------------ |
| Git |<-- mounting a volume to clone the used repositories to ----
------------------------
| Artifactory |<-- mounting a volume for content
------------------------
| Webserver |<-- mounting a volume for content (webserver (ngninx) as reverse proxy)
------------------------
| Base OS |
------------------------
By using an Ansible Playbook this stack can be set up connected to the active directory and with all needed credentials
to fulfill the build jobs.
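
A minimal sketch of what such a bring-up playbook could look like (host group, paths and file names are hypothetical;
the Active Directory and credential wiring mentioned above is omitted):

.. code-block:: yaml

   - name: Bring up a Jenkins controller stack
     hosts: ci_servers
     become: true
     tasks:
       - name: Copy the docker-compose definition to the server
         ansible.builtin.copy:
           src: files/jenkins-stack/docker-compose.yml
           dest: /opt/jenkins-stack/docker-compose.yml

       - name: Start the stack
         ansible.builtin.shell: docker compose up -d
         args:
           chdir: /opt/jenkins-stack
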
With this container stack the access is clearly defined and each container is independent of the others. Each
container stack contains its own webserver and artifactory, meaning specifically defined URLs. Additionally, there is no
interference between the different teams; e.g. let's assume the NRSW team needs to fix a security-relevant bug and
needs to reproduce a specific version. In this case the NRSW team needs to bring the CI environment into the state it was in
when the software of concern was released. With a single Jenkins controller this would affect the OEM Linux team as
well.
From NetModule's point of view there would finally be two Jenkins container stacks available, one for the NRSW team and one
for the OEM Linux team.
Setup of First Prototype
########################
This section holds everything about the setup of the first prototype.
Intended Setup Process
**********************
The following simplified diagram shows the intended process of setting up a jenkins instance:
.. code-block::
+------------------------------+ +-----------------------------------+
o-->| Start the Ansible Playbook |---->| Copy necessary Conent to Server |
+------------------------------+ +-----------------------------------+
|
v
+--------------------------------------------+
| Setup Volumes, directories & environment |
+--------------------------------------------+
|
v
+--------------------------------------------+
| Connect to the Server |
+--------------------------------------------+
|
v
+--------------------------------------------+
| Start using docker-compose |
+--------------------------------------------+
|
o
.. note::
The diagram above assumes that a server is already set up.
Intended docker-composition
***************************
The following pseudo-code shows how the jenkins docker stack is composed:
.. code-block::
version: '3.8'
services:
jenkins:
image: repo.netmodule.com/core-os/ci-cd/jenkins-coreos:latest
container_name: jenkins
hostname: jenkins
extra_hosts:
- "host.docker.internal:192.168.1.70"
healthcheck:
test: ["CMD","bash","-c","curl --head http://localhost:8080 && exit 0 || exit 1"]
interval: 5s
timeout: 3s
retries: 3
start_period: 2m
restart: unless-stopped
ports:
- 8080:8080
- 50000:50000
networks:
- jenkins_net
environment:
- TZ=Europe/Zurich
- COMPOSE_PROJECT_NAME=jenkins_controller
- CASC_JENKINS_CONFIG=/var/jenkins_conf/cicd.yaml
- A_SSH_PRIVATE_FILE_PATH=/var/jenkins_home/.ssh/ed25519-secrets
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- $PWD/jenkins_home:/var/jenkins_home
- $PWD/jcasc:/var/jenkins_conf
- $PWD/secrets/pw:/run/secrets
- $PWD/secrets/.ssh:/var/jenkins_home/.ssh
- $PWD/secrets/.cacerts:/var/jenkins_home/.cacerts
- $PWD/data:/var/jenkins_home/data
nginx:
image: nginx:stable-alpine
container_name: nginx
hostname: nginx
extra_hosts:
- "host.docker.internal:192.168.1.70"
restart: unless-stopped
environment:
- TZ=Europe/Zurich
ports:
- 80:80
- 443:443
networks:
- jenkins_net
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- $PWD/nginx_html:/var/www/nginx/html
- $PWD/nginx_config/default.conf:/etc/nginx/conf.d/default.conf
# https mode: so far it does not work for self-signed cert
#- $PWD/nginx_config/nginx_example_local.conf:/etc/nginx/conf.d/default.conf
#- $PWD/certs:/etc/nginx/certs
#- $PWD/dhparams.pem:/etc/nginx/dhparams.pem
nexus3:
image: sonatype/nexus3:3.49.0
container_name: nexus3
extra_hosts:
- "host.docker.internal:192.168.1.70"
restart: unless-stopped
ports:
- 8081:8081
networks:
- jenkins_net
environment:
- NEXUS_CONTEXT=nexus
- TZ=Europe/Zurich
volumes:
- $PWD/nexus-data/:/nexus-data/
secrets:
bindadpw:
file: $PWD/secrets/pw/bindadpw
sshkeypw:
file: $PWD/secrets/pw/sshkeypw
networks:
jenkins_net:
driver: bridge
And we rely on the jenkins service setup as done for the CoreOS:
.. code-block::
jenkins:
systemMessage: "Jenkins Controller"
scmCheckoutRetryCount: 3
mode: NORMAL
labelString: "jenkins-controller"
numExecutors: 8
securityRealm:
activeDirectory:
domains:
- name: "netmodule.intranet"
servers: "netmodule.intranet:3268"
site: "NTMCloudGIA"
bindName: "cn=svc-ldap-ci,ou=Service,ou=Users,ou=NetModule,dc=netmodule,dc=intranet"
bindPassword: "${bindadpw}"
tlsConfiguration: JDK_TRUSTSTORE
groupLookupStrategy: "AUTOMATIC"
removeIrrelevantGroups: false
customDomain: true
cache:
size: 500
ttl: 600
startTls: true
internalUsersDatabase:
jenkinsInternalUser: "jenkins"
# local:
# allowsSignup: false
# users:
# - id: admin
# password: ${adminpw:-passw0rd}
# securityRealm:
# local:
# allowsSignup: false
# users:
# - id: admin
# password: ${adminpw:-passw0rd}
# - id: developer
# password: ${developerpw:-builder}
authorizationStrategy:
globalMatrix:
permissions:
- "USER:Overall/Administer:admin"
- "GROUP:Overall/Read:authenticated"
- "GROUP:Agent/Build:authenticated"
- "GROUP:Job/Read:authenticated"
- "GROUP:Job/Build:authenticated"
- "GROUP:Job/Cancel:authenticated"
- "GROUP:Job/Workspace:authenticated"
- "GROUP:Run/Replay:authenticated"
- "GROUP:Run/Delete:authenticated"
crumbIssuer: "standard"
security:
GlobalJobDslSecurityConfiguration:
useScriptSecurity: true
queueItemAuthenticator:
authenticators:
- global:
strategy:
specificUsersAuthorizationStrategy:
userid: build_user
credentials:
system:
domainCredentials:
- credentials:
- basicSSHUserPrivateKey:
scope: GLOBAL
id: git_credentials
# need to keep this username for the first run
username: build_user
usernameSecret: true
passphrase: "${sshkeypw}"
description: "SSH passphrase with private key file for git access"
privateKeySource:
directEntry:
privateKey: "${readFile:${A_SSH_PRIVATE_FILE_PATH}}"
- usernamePassword:
scope: GLOBAL
id: nexus_credentials
username: build_user
usernameSecret: true
password: "${somepw}"
description: "Username/Password Credentials for Nexus artifactory"
unclassified:
location:
url: http://<server-hostname>:8080
adminAddress: Mr Jenkins <no-reply@netmodule.com>
tool:
git:
installations:
- name: Default
home: "git"
jobs:
- script: >
multibranchPipelineJob('doc') {
displayName('10. Build Documentation')
description('Builds the Documentation of the CI/CD')
factory {
workflowBranchProjectFactory {
scriptPath('pipelines/Jenkinsfile_Documentation')
}
}
orphanedItemStrategy {
discardOldItems {
numToKeep(5)
}
}
branchSources {
git {
id('build-doc')
remote('git@gitlab.com:netmodule/core-os/cicd.git')
credentialsId('git_credentials')
includes('develop release*')
}
}
}
Comparison to the HAC CI
#########################
This section describes the differences between the concept above and the one at HAC, after a sync meeting with the
guardians.
Situation at HAC
****************
As already known, the CI at HAC is constructed with docker containers. But how do they handle the infrastructure when it
comes to a use case where they need to reproduce an already released version? The situation at HAC is as follows:
* the CI infrastructure bases on the help of the IT department
- new infrastructure like new physical machines and their setup is done by the IT department
- they restore parts from backups if necessary
* dockerfiles describe the docker containers used for building software releases
- AFAIK, the images are pushed to the HAC docker registry
* some infrastructure parts refer directly to the images on docker hub without pushing them to the HAC docker registry
* they use self-created scripts to orchestrate build instances, e.g. creating and starting new instances
* depending on the age of the release to reproduce a bunch of manual steps are needed to rebuild it
* there is already a good state of tracking the versions of a software release and CI infrastructure
- tickets are already open to optimize this version breakdown
* no ansible playbook used
Differences between HAC CI and the Proposal
********************************************
The following list shows the biggest differences between the proposal and the current HAC CI.
* Bring-up of new instances: ansible playbook versus self-created scripts
* General usage of an ansible playbook
- with a playbook the setup of the infrastructure is versioned as well (git repository which can be tagged)
- less dependencies to the IT department
* one dedicated infrastructure part, e.g. web server, artifactory
- all the different CI chains depend on this dedicated infrastructure part
- the effort of tracing all the dependencies when releasing a software increases very much over time
- replication on another network is more difficult as the dedicated infrastructure part needs to be realized too
- better encapsulation if the web server and artifactory is part of the instance compound
* docker images pushed to company internal docker registry
- for a proper tracking of versions and reproduction of an already released version, the sources need to be in the
company's network, i.e. all used docker images need to be available on the company-internal docker registry
- with the availability of the images in the company docker registry the versioning is guaranteed, as the docker
files refer to an image residing in an accessible registry and do not depend on the docker hub.
.. note::
Maybe there are some more differences but at the current point these are the most important ones.
Conclusion
***********
After the discussion about the differences, and because versioning is already in the focus of the HAC CI, we
decided not to build a docker compound as stated in the section `Setup of First Prototype`_. We try to bring up an
instance on the HAC CI, but with an interface so that the CI jobs can be managed by the teams themselves, in order not to
disturb the heavily loaded HAC CI team too much.
The further documentation is therefore continued in :ref:`nwlCiCd`.
Sandbox Section ;-D
###################
some links:
* https://www.howtoforge.com/how-to-setup-nginx-as-a-reverse-proxy-for-apache-on-debian-11/
* https://www.supereasy.com/how-to-configure-nginx-as-a-https-reverse-proxy-easily/
* https://xavier-pestel.medium.com/how-to-manage-docker-compose-with-ansible-c08933ba88a8
* https://stackoverflow.com/questions/62452039/how-to-run-docker-compose-commands-with-ansible/62452959#62452959
* https://plugins.jenkins.io/ansible/
* https://www.ansible.com/blog
* https://k3s.io/
* https://www.dev-insider.de/kubernetes-cluster-mit-einem-klick-einrichten-a-1069489/
* https://adamtheautomator.com/ansible-kubernetes/
* http://web.archive.org/web/20190723112236/https://wiki.jenkins.io/display/JENKINS/Jenkins+behind+an+NGinX+reverse+proxy
a docker-compose file as look-up example:
.. code-block::
services:
postgres:
image: 'postgres:latest'
redis:
image: 'redis:latest'
nginx:
restart: always
build:
dockerfile: Dockerfile.dev
context: ./nginx
ports:
- '3050:80'
api:
build:
dockerfile: Dockerfile.dev
context: ./server
volumes:
- /app/node_modules
- ./server:/app
environment:
- REDIS_HOST=redis
- REDIS_PORT=6379
- PGUSER=postgres
- PGHOST=postgres
- PGDATABASE=postgres
- PGPASSWORD=postgres_password
- PGPORT=5432
client:
build:
dockerfile: Dockerfile.dev
context: ./client
volumes:
- /app/node_modules
- ./client:/app
worker:
build:
dockerfile: Dockerfile.dev
context: ./worker
environment:
- REDIS_HOST=redis
- REDIS_PORT=6379
volumes:
- /app/node_modules
- ./worker:/app
The corresponding nginx Dockerfile:
.. code-block:: docker
FROM nginx
COPY ./default.conf /etc/nginx/conf.d/default.conf
The referenced ``default.conf`` of the reverse proxy:
.. code-block:: nginx
upstream client {
server client:3000;
}
upstream api {
server api:5000;
}
server {
listen 80;
location / {
proxy_pass http://client;
}
location /sockjs-node {
proxy_pass http://client;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
}
location /api {
rewrite /api/(.*) /$1 break;
proxy_pass http://api;
}
}

doc/src/setup/nwl-ci.rst (new file)
.. _nwlCiCd:
************************************
NetModule Wireless Linux (NWL) CI/CD
************************************
Foreword
########
Time is pressing and thus it was decided to use the currently available infrastructure means, see :ref:`nextLevelCiCd`
for more details.
Please note that a Next-Level CI/CD as stated in :ref:`nextLevelCiCd` is neither cancelled nor off the table. It
is currently just not the right time to push this approach, so this is a rational decision. For the future there is
still potential in combining the best of the current CI solutions of Belden and NetModule with this Next-Level CI/CD
idea.
For now the NetModule Wireless Linux CI/CD shall be started on the infrastructure provided by HAC. The following
sections describe the work on this CI/CD instance.
Getting Started
###############
Tobias Hess set up a new VM as the server on which this CI/CD instance can run. He prepared the VM manually by
installing the necessary tools, setting up a mail relay using EXIM4 and adding the SSH keys for accessing the Bitbucket
repositories.
.. note::
This manual work might be reflected in an Ansible playbook so that it can be automated.
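As a rough sketch (the file names are hypothetical; the inventory and the host-setup playbook added later in this
commit could serve as a starting point), such a provisioning run could look like:
.. code-block:: bash
# hypothetical names: "inventory" and "setup-debian-host.yml" stand in for the inventory
# and the host-setup playbook shipped with this commit
ansible-playbook -i inventory setup-debian-host.yml --ask-become-pass --ask-vault-pass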
Mailing from command line works as follows (NOTE: apparently you need to be root for this):
.. code-block:: bash
# general email address
echo "Test" | mail <email-address>
# using the created aliases
echo "Test" | mail root
The VM acting as CI/CD server is accessible as follows:
* IP = ``10.115.101.98``
* Users (ask me or Tobias Hess for the passwords):
- root
- user
Overview
********
There are several repositories involved in this CI infrastructure, please find here the most important ones:
* `build-admin <https://bitbucket.gad.local/projects/INET-CI/repos/build-admin>`_
- contains configuration items in xml format
- keys and certificates for signing and service access
- scripts like the manage script for creating/starting/stopping an instance:
.. code-block:: bash
# example for the hilcos platform:
./manage.sh --image=ci.gad.local:5000/env-ci-hilcos:latest --branch=release/hilcos/10/12-exotec \
--name=hilcos_10_12 --platform=hilcos \
--config=/home/administrator/work/ci/instances/hilcos/release/hilcos/10/12-exotec/config/config.xml \
--revision=10.12.5000 --maintainer=TeamFlamingo create
* `build-docker <https://bitbucket.gad.local/projects/INET-CI/repos/build-docker>`_
- **NOTE:** This repository is over 12GB because some toolchain tarballs are included
- contains the files for the docker images
- scripts to build the docker images
- holds the Jenkins seed job scripts
* `build-pipeline <https://bitbucket.gad.local/projects/INET-CI/repos/build-pipeline>`_
- contains the build pipeline in the HAC CI system
* `build-env <https://bitbucket.gad.local/projects/INET-CI/repos/build-env>`_
- contains class objects for the scripted pipelines
- sets up a Jenkins Library
For completeness and as information, the installed Jenkins plugins are as follows:
.. code-block:: bash
cloudbees-folder
antisamy-markup-formatter
credentials-binding
timestamper
ws-cleanup
workflow-aggregator
pipeline-stage-view
git
ssh-slaves
matrix-auth
pam-auth
ldap
email-ext
mailer
credentials
durable-task
git-client
git-server
ace-editor
handlebars
jquery-detached
momentjs
junit
xunit
workflow-basic-steps
pipeline-build-step
workflow-cps
pipeline-input-step
workflow-job
workflow-durable-task-step
workflow-scm-step
pipeline-groovy-lib
workflow-step-api
workflow-support
plain-credentials
scm-api
script-security
ssh-credentials
structs
workflow-api
branch-api
display-url-api
token-macro
pipeline-graph-analysis
pipeline-milestone-step
workflow-multibranch
pipeline-utility-steps
ssh-agent
job-dsl
cvs
config-file-provider
ant
matrix-project
pipeline-maven
maven-plugin
permissive-script-security
uno-choice
jdk-tool
throttle-concurrents
sidebar-link
generic-webhook-trigger
publish-over-cifs
metrics
# the following plugins were used within NetModule which
# are not yet reflected above:
authorize-project:latest
build-timeout:latest
configuration-as-code:latest
copyartifact:latest
docker-workflow:latest
envinject:latest
github-branch-source:latest
htmlpublisher:latest
parameterized-trigger:latest
pretested-integration:latest
nexus-artifact-uploader:latest
blueocean:latest
Workflow Understood by mma
==========================
As far as I understood the dependencies in the CI of HAC, the workflow looks as follows:
#. Manually set up a physical or virtual server
- installing basic tools like docker as the most important one
- add the needed users and their credentials for the CI environment
- preparing the server itself so that it is ready to be hooked into the HAC environment
#. Manual preparation of the CI instance
- preparing the server with the structure needed for hooking the instance in the HAC CI system, e.g.
+ credentials for the docker instance(s)
+ etc.
- preparing the CI instance properties
+ checking out the right branches (build-docker, possibly build-admin)
+ database and its environment
+ etc.
#. If not yet available, build the docker compound in the desired composition
#. Calling the manage script with create (repository: build-admin)
#. Bringing the instance up with the manage script (see the sketch after this list)
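A minimal sketch of the last two steps, using the manage script from *build-admin* (the arguments are explained in
more detail in `Starting the CI Instance`_):
.. code-block:: bash
# create the instance (placeholders instead of concrete values)
./manage.sh --image=<image>:<tag> --branch=<branch> --name=<instance-name> --platform=<platform> \
--config=<path-to-config.xml> --revision=<version> --maintainer=<team> create
# bring it up
./manage.sh --name=<instance-name> start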
With this workflow and the observed dependencies I see two potential ways to step in for the NWL:
#. within *build-admin* in the configuration file
- create a branch for NWL
- adding another repository than build-pipeline
#. within *build-docker* in the jenkins seed job
- create a branch from *feature/ci/core_os* for NWL, e.g. *feature/ci/nwl*
- adapt the seed.groovy script with the needed multibranch pipelines
The next section describes the proposed and chosen way.
Proposed Hook to Step In
************************
Branching the *build-admin* project to bring declarative pipelines into the HAC CI infrastructure (which then are
maintained by the developer team) seems **not** to be the best way, because this repository is the common base for all
CI projects. In addition, thinking about a future where potentially all CI projects are integrated into one Belden CI
infrastructure, this way does not seem right either. Thus, I propose the second way:
#. Implement a hook in the Jenkins CI seed job residing in the repository **build-docker**
#) branching from *feature/ci/core_os* for the NWL, e.g. *feature/ci/nwl*
#) adapt the *seed.groovy* script with the needed multibranch pipelines
#. The seed job points to a multibranch pipeline in a NWL CI repository:
- `Repository <https://bitbucket.gad.local/projects/NM-NSP/repos/nwl-ci>`_
- Clone with Git: ``git clone ssh://git@bitbucket.gad.local:7999/nm-nsp/nwl-ci.git``
Implementation of Proposed Hook
*******************************
An adaptation in the Jenkins CI seed job points to a multibranch pipeline for the NWL instead of the usual build pipeline.
Additionally a draft pipeline for the NWL is committed.
There was an open question if there is a docker image for the CoreOS instance in the HAC docker registry. The sync
meeting between HAC and NM regarding CoreOS clarified this question with the statement "no currently there is not as it
is still not final and the docker registry is moved to another instance". This means we need to create a docker image
locally on the server and start with this one.
Well, let's create and start a NWL Jenkins instance...
On `build-docker <https://bitbucket.gad.local/projects/INET-CI/repos/build-docker>`_ we fork the branch
*feature/ci/core_os* and name the new branch **feature/ci/nwl**. All changes for the NWL CI are made on this branch. See
the following sections.
Bringing Up the Instance
=========================
Before we can start the instance we need to create a docker image so that it is available locally on the physical
server.
.. note::
The docker registry is currently being moved to another location and thus it is not recommended to push this first
trial to the registry.
Building the CI Docker Image
----------------------------
The Guardians recommend using a user **without** root privileges. Thus, we need to add our user to the docker group:
.. code-block:: bash
# Log into the server as root
ssh root@10.115.101.98
usermod -a -G docker user
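# NOTE: the new group membership only takes effect after the user logs in again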
# Log out
.. note::
Several questions could be clarified after a short meeting with Arne Kaufmann.
We build the docker image as user *user*. But as the jenkins-ci image layer uses the *nofmauth* repository, it was
not possible to build it on the server itself. The reason seems to be that the SSH key of user@netmodule-03 has no
access to the *nofmauth* repo but is somehow added to Bitbucket. Arne is trying to find the location where the
user's SSH key is added.
Arne recommended building the docker image locally on the developer machine and then transferring it to the server.
Building locally did not work out of the box. There was an issue with the DNS when building the docker image:
.. code-block:: bash
=> ERROR [stage-0 6/10] RUN --mount=type=ssh mkdir -p /var/lib/ci/libs && cd /var/lib/ci/libs && mkdir -p -m 0700 ~/.ssh && ss 0.3s
------
> [stage-0 6/10] RUN --mount=type=ssh mkdir -p /var/lib/ci/libs &&
cd /var/lib/ci/libs && mkdir -p -m 0700 ~/.ssh &&
ssh-keyscan -p 7999 bitbucket.gad.local >> ~/.ssh/known_hosts &&
git clone ssh://git@bitbucket.gad.local:7999/inet-ci/nofmauth.git &&
rm ~/.ssh/known_hosts && nofmauth/lib && ./buildLib.sh && mkdir -p WEB-INF/lib && mv nofmauth.jar WEB-INF/lib &&
jar --update --file /usr/share/jenkins/jenkins.war WEB-INF/lib/nofmauth.jar && cd /var/lib/ci && rm -rf libs:
#0 0.240 getaddrinfo bitbucket.gad.local: Name or service not known
#0 0.244 getaddrinfo bitbucket.gad.local: Name or service not known
#0 0.247 getaddrinfo bitbucket.gad.local: Name or service not known
#0 0.251 getaddrinfo bitbucket.gad.local: Name or service not known
#0 0.255 getaddrinfo bitbucket.gad.local: Name or service not known
Thus, the build.sh script got a new docker build argument according to commit
`fb26e9 <https://bitbucket.gad.local/projects/INET-CI/repos/build-docker/commits/fb26e99023ecad0d212711940f7c8c0105b28d8c>`_.
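The commit itself is not reproduced here. As an illustration only (it is an assumption that fb26e9 uses exactly this
mechanism), one way to make an internal hostname resolvable inside a BuildKit build is an explicit host mapping:
.. code-block:: bash
# illustration only: map the internal hostname to its IP address for the build containers
DOCKER_BUILDKIT=1 docker build --add-host bitbucket.gad.local:<ip-of-bitbucket> ...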
.. note::
The master or stable branch holds the newly changed Jenkins plugin installation solution. If there are any issues,
the branch needs to be rebased onto master/stable.
Let's build the NWL docker images locally on our machine, assuming that the repository *build-docker* is cloned and
the branch *feature/ci/nwl* is checked out:
.. code-block:: bash
# Enter the build-docker directory:
cd ~/belden/build-docker
# Enable the ssh-agent with your private SSH key which should only be used to clone the nofmauth repo and
# should not go into the docker image. Finally check if the key is loaded:
eval `ssh-agent`
ssh-add ~/.ssh/id_rsa
ssh-add -L
ssh-rsa ********************************
# Build the docker image:
DOCKER_BUILDKIT=1 ./build.sh nwl 0.1.0
**********************************************************
** Building basic-os image
**********************************************************
[+] Building 1.4s (18/18) FINISHED
...
=> => naming to docker.io/library/basic-os
**********************************************************
** Building nwl image
**********************************************************
[+] Building 0.0s (6/6) FINISHED
...
=> => naming to docker.io/library/nwl
**********************************************************
** Building jenkins image
**********************************************************
...
[+] Building 0.1s (19/19) FINISHED
...
=> => naming to docker.io/library/nwl-jenkins
**********************************************************
** Building jenkins-ci image
**********************************************************
[+] Building 1.3s (17/17) FINISHED
...
=> => naming to docker.io/library/nwl-jenkins-ci
**********************************************************
** Building env-ci image
**********************************************************
[+] Building 0.0s (6/6) FINISHED
...
=> => naming to docker.io/library/nwl-env-ci
**********************************************************
** Building klocwork image
**********************************************************
[+] Building 0.0s (18/18) FINISHED
...
=> => naming to docker.io/library/nwl-klocwork
Done!
# Overview of the created images:
docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
nwl-klocwork latest 17e59d7f36fd 2 hours ago 7.18GB
nwl-env-ci latest 0d053988863b 2 hours ago 1.99GB
nwl-jenkins-ci latest c4298f02759e 2 hours ago 1.99GB
nwl-jenkins latest d6d06c06c790 3 hours ago 1.72GB
nwl latest 924de047f0bf 3 hours ago 1.6GB
basic-os latest d20d08843c00 3 hours ago 739MB
We transfer all those images to the server. There are two potential ways to do so:
#. Using pipes directly:
.. code-block:: bash
docker save <image>:<tag> | bzip2 | pv | ssh <user>@<host> docker load
#. Using multiple steps when there are issues with the VPN:
.. code-block:: bash
docker save -o <path for generated tar file> <image>:<tag>
rsync -avzP -e ssh <path for generated tar file> <user>@<10.115.101.98>:/home/user/
docker load -i /home/user/<file>
After transferring the docker images to the server we check if they are listed:
.. code-block:: bash
user@netmodule-03:~/build-docker$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
nwl-klocwork latest 17e59d7f36fd 2 hours ago 7.18GB
nwl-env-ci latest 0d053988863b 2 hours ago 1.99GB
nwl-jenkins-ci latest c4298f02759e 2 hours ago 1.99GB
nwl-jenkins latest d6d06c06c790 3 hours ago 1.72GB
nwl latest 924de047f0bf 3 hours ago 1.6GB
basic-os latest d20d08843c00 3 hours ago 739MB
.. _basicoswarning:
Switching to Debian as Basic OS
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. note::
After running a first pipeline in this instance (after the work documented in `Starting the CI Instance`_) I detected
the Yocto warning ``WARNING: Host distribution "ubuntu-20.04" ...``. As the servers and developer machines at
NetModule are based on Debian, I created new images based on Debian 11. Additionally I tagged all the images
accordingly to keep a proper overview:
.. code-block:: bash
user@netmodule-03:~/work/ci$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
nwl-env-ci 0.1.1 e89036df18a3 58 minutes ago 2.19GB
nwl-env-ci latest e89036df18a3 58 minutes ago 2.19GB
nwl-jenkins-ci 0.1.1 9fbb8eeaa717 2 hours ago 2.19GB
nwl-jenkins-ci latest 9fbb8eeaa717 2 hours ago 2.19GB
nwl-jenkins 0.1.1 1f6b2c0d644a 2 hours ago 1.94GB
nwl-jenkins latest 1f6b2c0d644a 2 hours ago 1.94GB
nwl 0.1.1 a30655b9de0e 2 hours ago 1.82GB
nwl latest a30655b9de0e 2 hours ago 1.82GB
basic-os 0.1.1 fc2ea6009615 2 hours ago 823MB
basic-os latest fc2ea6009615 2 hours ago 823MB
nwl-klocwork 0.1.0 17e59d7f36fd 24 hours ago 7.18GB
nwl-klocwork latest 17e59d7f36fd 24 hours ago 7.18GB
nwl-env-ci 0.1.0 0d053988863b 24 hours ago 1.99GB
nwl-jenkins-ci 0.1.0 c4298f02759e 24 hours ago 1.99GB
nwl-jenkins 0.1.0 d6d06c06c790 25 hours ago 1.72GB
nwl 0.1.0 924de047f0bf 25 hours ago 1.6GB
basic-os 0.1.0 d20d08843c00 25 hours ago 739MB
Starting the CI Instance
------------------------
.. note::
The *build-admin* repository neither has a branch for CoreOS nor a directory with the keys for it. The
directory ``~/work/ci`` on the server was prepared by the Guardians. This preparation holds keys for the CoreOS,
residing in the subdirectory keys/coreos.
To have at least a separation on the server, we copy the CoreOS keys and use them for NWL.
.. code-block:: bash
# Change the directory to
cd ~/work/ci
# Separating NWL from CoreOS by copying its keys (those keys are already set up):
cp -R keys/coreos keys/nwl
So far we have everything set up to start the instance using ``manage.sh``. The arguments are explained as follows:
* image
- the docker image to take
* branch
- the branch of the NWL repository to build
- the branch of the repository where the jenkins file is located
+ This one here can be omitted as we use the hook over *seed.groovy*
* name
- the name of the instance
* config
- the configuration XML to use --> currently we do not have changes as we use the hook over *seed.groovy*
* platform
- keep in mind that this argument defines as well the directory for the keys
* revision
- revision of the container (build version) - Note: for HiOs this parameter is read for the release version
* maintainer
- the team which is in charge for this instance
With all this information we now try to launch the instance:
.. code-block:: bash
# create the instance:
./manage.sh --image=nwl-env-ci:latest --branch=main \
--name=nwl_0_1 --platform=nwl \
--config=/home/user/work/ci/config/config.xml \
--revision=0.1.0 --maintainer=TeamCHBE create
Creating new instance...
Done!
# check the entry:
./manage.sh -p
+---------+----------------------------+-------+---------+--------+----------+------------+----------+-------------------+--------------+---------+
| name | host | port | status | branch | revision | maintainer | platform | image | container | display |
+---------+----------------------------+-------+---------+--------+----------+------------+----------+-------------------+--------------+---------+
| nwl_0_1 | netmodule-03.tcn.gad.local | 32780 | running | main | 0.1.0 | TeamCHBE | nwl | nwl-env-ci:latest | 60726aec0ebc | NULL |
+---------+----------------------------+-------+---------+--------+----------+------------+----------+-------------------+--------------+---------+
.. note::
Currently the LDAP password is missing in the Jenkins configuration XML (jenkins.xml), thus Jenkins does not start
properly.
To continue testing the instance start-up, I disabled the LDAP configuration.
Let's enter the newly created instance in the `browser <https://10.115.101.98:32780/>`_.
|coreOsCiChain|
As mentioned in :ref:`basicoswarning` all the images were rebuilt based on Debian 11. To be consistent with the
versioning and to clean up the instances, the previous instance was destroyed with ``./manage.sh --name=nwl_0_1 destroy``. The newly
created images are tagged with version *0.1.1*.
Let's create the new instance and bring it up:
.. code-block:: bash
# create the instance:
./manage.sh --image=nwl-env-ci:0.1.1 --branch=main \
--name=nwl_0_1_1 --platform=nwl \
--config=/home/user/work/ci/config/config.xml \
--revision=0.1.1 --maintainer=TeamCHBE create
Creating new instance...
Done!
# check the entry:
./manage.sh -p
+-----------+----------------------------+-------+---------+--------+----------+------------+----------+------------------+--------------+---------+
| name | host | port | status | branch | revision | maintainer | platform | image | container | display |
+-----------+----------------------------+-------+---------+--------+----------+------------+----------+------------------+--------------+---------+
| nwl_0_1_1 | netmodule-03.tcn.gad.local | 32780 | running | main | 0.1.1 | TeamCHBE | nwl | nwl-env-ci:0.1.1 | 0eb450fc827a | NULL |
+-----------+----------------------------+-------+---------+--------+----------+------------+----------+------------------+--------------+---------+
.. note::
The setup above ran for about a week and today, when I entered Jenkins with the browser, I got an error page with the
error ``An LDAP connection URL must be supplied``. I have no clue why it worked the week before.
So far I did not get support from the Guardians regarding LDAP and this password story. Thus I needed to help myself
somehow...
I entered the running Jenkins docker with ``docker exec -it 0eb450fc827a /bin/bash`` and verified the config.xml
of Jenkins (*/var/jenkins_home/config.xml*). In there I saw that the server config was empty, hence I added
*ldaps://denec1adc003p.gad.local:3269* to this config item, stopped Jenkins with
``./manage.sh --name=nwl_0_1_1 stop``, checked that no container was running anymore and started it again with
``./manage.sh --name=nwl_0_1_1 start``.
Now Jenkins is back but without the rights to build a job...
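Collected as a command sketch (container ID and instance name as in the note above):
.. code-block:: bash
# enter the running Jenkins container
docker exec -it 0eb450fc827a /bin/bash
# inside the container: add ldaps://denec1adc003p.gad.local:3269 to the empty server entry
# in /var/jenkins_home/config.xml, then leave the container again
exit
# restart the instance and verify in between that its container is gone
./manage.sh --name=nwl_0_1_1 stop
docker ps
./manage.sh --name=nwl_0_1_1 start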
Needed Security Adaptations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In the note before you read about the issue of LDAP and read-only permissions. I got the information that the IT did
some work in the Active Directories. Currently Belden and NetModule users are still not in the same directory. Hence
the LDAP password mentioned above would not solve my issue. It took some trials to find the right way, but finally it
worked to bring the job back so that an anonymous user can build and configure it. The following steps were necessary
so that jobs can be launched without authentication.
* Clean-up and changes on the server ``10.115.101.98``
- Adaptations because of the LDAP URL in ``~/work/ci/config/jenkins.xml``:
.. code-block:: bash
# log into the server and enter the ci directory:
cd ~/work/ci
# perform the changes according this git difference:
git diff
diff --git a/config/jenkins.xml b/config/jenkins.xml
index 83cd9b8..fa0eeed 100644
--- a/config/jenkins.xml
+++ b/config/jenkins.xml
@@ -3,7 +3,7 @@
<jenkins>
<admin name="GA_ContinuousIntegration" user="GA_ContinousIntegrat" email="GA_ContinuousIntegration@belden.com"/>
- <ldap managerPw="<<password>>" managerDn="GA_ContinousIntegration@eu.GAD.local" server="ldaps://denec1adc003p.gad.local:3269"/>
+ <ldap managerDn="GA_ContinousIntegration@eu.GAD.local" server="ldaps://denec1adc003p.gad.local:3269"/>
<smtp server="host.docker.internal" suffix="@belden.com"/>
- Stop and destroy the current instance:
.. code-block:: bash
# assuming we are still logged in the server
# stop and destroy the current running instance
./manage.sh --name=nwl_0_1_1 destroy
# remove the residing file system content
rm -rf instances/nwl/main
* Switching to our local machine:
- Configuration changes in the build-docker repository according to these commits:
+ `job config <https://bitbucket.gad.local/projects/INET-CI/repos/build-docker/commits/8bb9276ebde54f7fcf413bd676c87b0c2e3869c3>`_
+ `jenkins config <https://bitbucket.gad.local/projects/INET-CI/repos/build-docker/commits/eed1ad7dcdac7937397c9fac2fbcf9a324b17076>`_
- Rebuild the docker images on my local machine with tag 0.1.2 (we focus only on *nwl-env-ci* and *nwl-jenkins-ci*):
.. code-block:: bash
DOCKER_BUILDKIT=1 ./build.sh nwl 0.1.2
docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
nwl-env-ci latest 11ea232de20e 48 minutes ago 2.18GB
nwl-jenkins-ci latest 990a0aebd49f 48 minutes ago 2.18GB
- Upload the essential images to the server (*nwl-env-ci* and *nwl-jenkins-ci*):
.. code-block:: bash
docker save nwl-env-ci:latest | bzip2 | pv | ssh user@10.115.101.98 docker load
The image nwl-env-ci:latest already exists, renaming the old one with ID sha256:e28a607cbbfb19dddf766e9404572811475fe8fc533a1737b2dc325ecbc06e6e to empty string
Loaded image: nwl-env-ci:latest
docker save nwl-jenkins-ci:latest | bzip2 | pv | ssh user@10.115.101.98 docker load
The image nwl-jenkins-ci:latest already exists, renaming the old one with ID sha256:c7666cf7a03e5e1096325f26e83f8bde1cbe102cdce3fbb5242e6ab9e08eb89f to empty string
Loaded image: nwl-jenkins-ci:latest
* Switching back to the server
- Tag the new images to differentiate them from the others:
.. code-block:: bash
docker image tag nwl-env-ci:latest nwl-env-ci:0.1.2
docker image tag nwl-jenkins-ci:latest nwl-jenkins-ci:0.1.2
- Create and start the new instance:
.. code-block:: bash
./manage.sh --image=nwl-env-ci:0.1.2 --branch=main \
--name=nwl_0_1_2 --platform=nwl \
--config=/home/user/work/ci/config/config.xml \
--revision=0.1.2 --maintainer=TeamCHBE create
Creating new instance...
Done!
# check the entry:
./manage.sh -p
+-----------+----------------------------+-------+---------+--------+----------+------------+----------+------------------+--------------+---------+
| name | host | port | status | branch | revision | maintainer | platform | image | container | display |
+-----------+----------------------------+-------+---------+--------+----------+------------+----------+------------------+--------------+---------+
| nwl_0_1_2 | netmodule-03.tcn.gad.local | 32780 | running | main | 0.1.2 | TeamCHBE | nwl | nwl-env-ci:0.1.2 | 59675d5b0142 | NULL |
+-----------+----------------------------+-------+---------+--------+----------+------------+----------+------------------+--------------+---------+
* Entering Jenkins in the `browser <https://10.115.101.98:32780/>`_ now shows the desired effect and we have the
button to build a NWL image.
Build Pipeline
==============
Starting Point
--------------
After having the CI instance up and running, we faced the issue regarding the host operating system as mentioned in
the sections above. Within section :ref:`basicoswarning` this issue was solved. So from the CI infrastructure point of
view everything is ready to build a NWL image.
After triggering the first job we saw that the **NWL Yocto repository** was not yet ready to build a NWL image.
Thus, I checked with Samuel Dolt to bring it to a base like we had with the CoreOS:
* added base class for NWL image
- `commit 0d804aa <https://bitbucket.gad.local/projects/NM-NSP/repos/netmodule-wireless-linux/commits/0d804aa79a528dc0a2559884e3b42de0e8cd8d0a>`_
* added first images for NWL
- `commit 7e44f31 <https://bitbucket.gad.local/projects/NM-NSP/repos/netmodule-wireless-linux/commits/7e44f31bb9c073e736067e2499de7272803fcf6b>`_
* changed default machine
- `commit c21e3fc <https://bitbucket.gad.local/projects/NM-NSP/repos/netmodule-wireless-linux/commits/c21e3fce57f3bd8af6a1846674ed0db6ebdbca61>`_
Image Build Job
----------------
With the NWL repository set up as needed, we prepare the build pipeline so that it is ready to build the first real
NWL image :-)
As a CI look-up base we took the CoreOS pipeline that once ran on lxbuild5 (= replicated CoreOS CI instance).
After having the changes on the NWL Yocto repository as stated in the section above, the following adaptations were made:
* First we set the correct image in the Jenkins files
- `commit 3e52ea9 <https://bitbucket.gad.local/projects/NM-NSP/repos/nwl-ci/commits/3e52ea97ed84ebbc509f529f99e74714ac0c1383>`_
* Removed the sstate and equiv server for now (servers not yet ready)
- `commit 27c7777 <https://bitbucket.gad.local/projects/NM-NSP/repos/nwl-ci/commits/27c7777f7912dcbbb2b5b1b44c523ecef179ee10>`_
* Use default target in nightly builds
- `commit a41f1b1 <https://bitbucket.gad.local/projects/NM-NSP/repos/nwl-ci/commits/a41f1b1148d809adcc69132129f3d67b0e83f391>`_
With these commits we are able to build a NWL image that can be used for further testing.
.. |coreOsCiChain| image:: ./media/nwl-ci-jenkins-dashboard.png
:width: 700px

docker/Dockerfile (new file)
FROM jenkins/jenkins:2.387.3-lts-jdk11
USER root
RUN apt-get -y update && apt-get -y install \
gcc build-essential make git tree unzip xz-utils zip vim tcpdump htop rsync file \
chrpath diffstat gawk debianutils libegl1-mesa mesa-common-dev libsdl1.2-dev cpio \
lz4 liblz4-tool zstd libffi-dev net-tools iproute2 iputils-ping procps less wget \
python3-pip python3-pexpect python3-git python3-jinja2 python3-subunit pylint3 \
bmap-tools efitools openssl sbsigntool pandoc texinfo socat cppcheck complexity \
locales locales-all
RUN pip3 install sphinx sphinx-rtd-theme recommonmark
RUN pip3 install robotframework && \
pip3 install --upgrade robotframework-sshlibrary && \
pip3 install --upgrade robotframework-jsonlibrary
USER jenkins
LABEL maintainer="marc.mattmueller@netmodule.com"
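# JAVA_OPTS below skips the interactive setup wizard and points the JVM at a CA trust store inside JENKINS_HOME;
# CASC_JENKINS_CONFIG tells the configuration-as-code plugin where to find its YAML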
ENV JAVA_OPTS -Djenkins.install.runSetupWizard=false -Dhudson.slaves.WorkspaceList=- -Djavax.net.ssl.trustStore=/var/jenkins_home/.cacerts/cacerts -Djavax.net.ssl.trustStorePassword=changeit
ENV CASC_JENKINS_CONFIG /var/jenkins_home/casc.yaml
ENV JENKINS_HOME /var/jenkins_home
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN jenkins-plugin-cli --plugin-file /usr/share/jenkins/ref/plugins.txt

docker/plugins.txt (new file)
git:latest
authorize-project:latest
build-timeout:latest
cloudbees-folder:latest
configuration-as-code:latest
copyartifact:latest
credentials:latest
credentials-binding:latest
docker-workflow:latest
email-ext:latest
envinject:latest
github-branch-source:latest
htmlpublisher:latest
ldap:latest
mailer:latest
matrix-auth:latest
pam-auth:latest
parameterized-trigger:latest
pretested-integration:latest
pipeline-github-lib:latest
pipeline-groovy-lib:latest
pipeline-stage-view:latest
pipeline-utility-steps:latest
job-dsl:latest
ssh-slaves:latest
ssh-agent:latest
text-finder:latest
timestamper:latest
workflow-aggregator:latest
workflow-cps:latest
ws-cleanup:latest
nexus-artifact-uploader:latest
blueocean:latest

(new file)
linux:
hosts:
10.115.101.101:
rls_info_path: /etc/os-release
user_name: user
host_name: testvm
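# rls_info_path, user_name and host_name are host variables consumed by the playbooks below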

jobs/Jenkinsfile_Build (new file)
// Loading code requires a NODE context
// But we want the code accessible outside the node Context
// So declare common (object created by the LOAD operation) outside the Node block.
def common
// Preloaded the list of supported targets used for the choice parameters TARGET:
def targetList
// This step is necessary to get the files directly from the git repo, as for the
// first build it is not yet cloned into the workspace. Normally we use an agent
// of a certain label (like core-os_buildagent) to have defined credentials to get
// the files. In the current case we use any agent.
node() {
def repoPreCloneDir = "repo-preclone"
dir(repoPreCloneDir) {
// clone the repository to get the file with the target list
sshagent (credentials: ['admin_credentials']) {
sh "git clone --depth 1 --branch ${env.BRANCH_NAME} ssh://git@bitbucket.gad.local:7999/nm-nsp/nwl-ci.git ."
}
// get the list of the targets:
targetList = sh(returnStdout: true, script: "cat ./jobs/nwlTargets").trim()
// load common file
common = load "./jobs/Jenkinsfile_Common"
}
sh("rm -rf ${repoPreCloneDir}")
}
// declarative pipeline
pipeline {
agent any
parameters {
choice(name: 'TARGET', choices: "${targetList}", description: 'choose the build target')
string(name: 'BUILD_BRANCH', defaultValue: 'main', description: 'Enter the branch of the NWL to build (default = main), will skip deployment if not main')
booleanParam(name: 'CLEAN_BUILD', defaultValue: false, description: 'do a clean build, i.e. remove the yocto directory and start from scratch')
booleanParam(name: 'DEBUGGING', defaultValue: false, description: 'debugging mode, removes quiet mode for bitbake')
}
options {
timeout(time: 5, unit: 'HOURS')
disableConcurrentBuilds()
buildDiscarder(
logRotator(numToKeepStr: '20', daysToKeepStr: '7')
)
}
triggers {
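// nightly trigger: run once at a hashed time between 05:00 and 06:59, Monday to Friday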
cron('H H(5-6) * * 1-5')
}
stages {
stage('Check Parameters') {
steps {
script {
printJobParameters()
checkJobParameters()
setDisplayName()
}
}
}
stage('Prepare') {
steps {
script {
if(params.CLEAN_BUILD) {
println "CLEAN BUILD REQUESTED, cleaning..."
common.cleaningClonedRepoDir()
}
setupEnvironment(common)
}
}
}
stage('Build') {
steps {
script {
dir("${env.YOCTO_REPO_DIR}") {
common.buildTheYoctoPackage()
def artifactName = "NWL-${env.MACHINE}.zip"
common.collectingPackageArtifacts("${env.MACHINE}")
common.packAndArchiveArtifacts("${env.MACHINE}", artifactName)
}
println "TODO: sync sstate-cache to the server"
}
}
}
stage('Deploy') {
when { expression { return common.isCurrentJobSuccess() } }
steps {
script {
println "TODO: Deploy artifacts"
}
}
}
} // stages
}
//-----------------------------------------------------------------------------
def printJobParameters() {
println "----------------------------------\n\
Job Parameters:\n\
----------------------------------\n\
TARGET = ${params.TARGET}\n\
BUILD_BRANCH = ${params.BUILD_BRANCH}\n\
CLEAN_BUILD = ${params.CLEAN_BUILD}\n\
DEBUGGING = ${params.DEBUGGING}\n\
----------------------------------\n"
}
//---------------------------------------------------------------------------------------------------------------------
def isJobTriggeredByTimer() {
// The following check is not allowed without having an Administrator approved the script signature
// return (currentBuild.rawBuild.getCause(hudson.triggers.TimerTrigger$TimerTriggerCause) != null)
// Thus we need to find another way round with:
// CAUSE = "${currentBuild.getBuildCauses()[0].shortDescription}"
def jobCause = currentBuild.getBuildCauses()
println "jobCause as information:\n" + jobCause
def jobDescriptionString = "${jobCause[0].shortDescription}"
return jobDescriptionString.contains("timer")
}
//---------------------------------------------------------------------------------------------------------------------
def getDefaultTarget() {
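// jobs/nwlTargets: the first line is the 'select...' placeholder, the second line holds the default target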
def defaultTarget = sh(returnStdout: true, script: "head -n2 ./jobs/nwlTargets | tail -n1").trim()
return "${defaultTarget}"
}
//---------------------------------------------------------------------------------------------------------------------
def checkJobParameters() {
// Check the selected target and overwrite it with a default one when triggered by a timer
def selectedTarget = "${params.TARGET}"
if("${params.TARGET}" == "select...") {
selectedTarget = ""
if(isJobTriggeredByTimer()) {
selectedTarget = getDefaultTarget()
println "Triggered by Timer --> taking default target = ${selectedTarget}"
}
else {
currentBuild.result = 'ABORTED'
error("Missing build target --> select parameter TARGET for a proper build")
}
}
env.TARGET = "${selectedTarget}"
}
//---------------------------------------------------------------------------------------------------------------------
def setDisplayName() {
def buildName = "#${env.BUILD_NUMBER}"
def postfix = isJobTriggeredByTimer() ? "-nightly" : ""
currentBuild.displayName = "${buildName}-${env.TARGET}${postfix}"
}
//---------------------------------------------------------------------------------------------------------------------
def setupEnvironment(commonHelpers) {
def machine = "${env.TARGET}"
def nwlBranch = "${params.BUILD_BRANCH}"
def nwlRepoDir = "${env.YOCTO_REPO_DIR}"
commonHelpers.setupBuildEnvironment(machine, nwlBranch, nwlRepoDir, params.DEBUGGING)
commonHelpers.printEnvironmentParameters()
}

jobs/Jenkinsfile_Common (new file)
//=============================================
// NetModule Wireless Linux CI commons
//=============================================
echo "loading NWL CI common module..."
// URLs
//----------------------------
env.BITBUCKET_LOCAL = "bitbucket.gad.local"
env.BITBUCKET_URL = "https://${env.BITBUCKET_LOCAL}"
env.YOCTO_REPO_URL = "ssh://git@${env.BITBUCKET_LOCAL}:7999/nm-nsp/netmodule-wireless-linux.git"
env.STORAGE_URL = "http://nmrepo.netmodule.intranet"
env.SSTATE_STORAGE_URL = "${env.STORAGE_URL}/core-os-sstate"
env.HASHSERVER = "172.16.70.254:8686"
// Yocto build definitions
//----------------------------
env.YOCTO_REPO_DIR = "nwl"
env.YOCTO_RELEASE = 'kirkstone'
env.CI_IMAGE = "nwl-image-testable"
// Methods declared in external code are accessible:
//  - directly from other code in the external file
//  - indirectly via the object created by the load operation, e.g. extcode.build(...)
//-----------------------------------------------------------------------------
def isCurrentJobSuccess() {
return (currentBuild.currentResult == 'SUCCESS')
}
//-----------------------------------------------------------------------------
def cleaningClonedRepoDir() {
println "cleaning the entire repository..."
sh("git clean -ffdx")
}
//-----------------------------------------------------------------------------
def getGitCredentialID() {
return 'admin_credentials'
}
//-----------------------------------------------------------------------------
def setupGlobalEnvironmentVariables(repoDir, machine) {
env.MACHINE = "${machine}"
env.WORK_DIR = "${WORKSPACE}/${repoDir}"
env.SHARED_BUILD = "${env.WORK_DIR}/build"
env.BUILD_DEPLOY_DIR = "${env.SHARED_BUILD}/tmp/deploy"
env.IMG_DEPLOY_DIR = "${env.BUILD_DEPLOY_DIR}/images"
env.LICENSE_DEPLOY_DIR = "${env.BUILD_DEPLOY_DIR}/licenses"
env.SDK_DEPLOY_DIR = "${env.BUILD_DEPLOY_DIR}/sdk"
env.BUILD_HISTORY_DIR = "${env.SHARED_BUILD}/buildhistory"
env.SSTATE_CACHE = "${env.SHARED_BUILD}/sstate-cache"
env.PKG_CONTENT_DIR = "${env.WORK_DIR}/tmp/build-output"
env.DEPLOY_CONTENT_DIR = "${env.WORK_DIR}/toDeploy"
env.DOWNLOAD_DIR = "${JENKINS_HOME}/downloads"
}
//-----------------------------------------------------------------------------
def getBitbakePackage(machine) {
// ToDo: handle here bitbake packages if they differ and depend on the machine
return "${env.CI_IMAGE}"
}
//-----------------------------------------------------------------------------
def removePreExistingYoctoConfigs(confPath) {
if(fileExists("${env.YOCTO_REPO_DIR}/${confPath}")) {
println "Removing the bitbake config to integrate new meta layers..."
sh(script: "rm -rf ${env.YOCTO_REPO_DIR}/${confPath}")
}
}
//-----------------------------------------------------------------------------
def gitCheckout(gitUrl, branchTag, repoDir, hasSubmodules) {
println "checking out ${gitUrl} to ${repoDir}..."
def gitCredentials = getGitCredentialID()
if(!fileExists("./${repoDir}")) {
sshagent (credentials: [gitCredentials]) {
def inclSubmodulesOpt = hasSubmodules ? "--recurse-submodules" : ""
sh(script: "git clone ${inclSubmodulesOpt} ${gitUrl} ${repoDir}")
}
}
dir("${repoDir}") {
def updateSubmodulesCmd = hasSubmodules ? " && git submodule update --init --recursive" : ""
sshagent (credentials: [gitCredentials]) {
sh(script: "git fetch -ap && git fetch -t")
sh(script: "git checkout ${branchTag} && git pull --rebase ${updateSubmodulesCmd}")
}
if(hasSubmodules) {
submoduleStatus = sh(script: "git submodule status", returnStdout: true)
println "${submoduleStatus}"
}
gitHistory = sh(returnStdout: true, script: "git log --pretty=oneline -3")
println "Last 3 git commits:\n-----------------------------\n${gitHistory}"
}
}
//-----------------------------------------------------------------------------
def getMachineNameConfig() {
return "MACHINE ?= \"${env.MACHINE}\""
}
//-----------------------------------------------------------------------------
def getDownloadDirConfig() {
return "DL_DIR = \"${env.DOWNLOAD_DIR}\""
}
//-----------------------------------------------------------------------------
def getSstateMirrorConfig() {
// ToDo: set the sstate-cache mirror and the hash equivalence server
def mirrorCfg = "SSTATE_MIRRORS = \"file://.* ${env.SSTATE_STORAGE_URL}/PATH\""
def signatureHdl = "BB_SIGNATURE_HANDLER = \"OEEquivHash\""
def hashSrv = "BB_HASHSERVE = \"${env.HASHSERVER}\""
//return "${signatureHdl}\n${hashSrv}\n${mirrorCfg}"
return ""
}
//-----------------------------------------------------------------------------
def getArtifactConfig() {
return "NWL_IMAGE_EXTRACLASSES += \"nwl-image-ci\""
}
//-----------------------------------------------------------------------------
def setupConfigFile(confPath, confFile) {
// Keep in mind: order of configurations: site.conf, auto.conf, local.conf
dir("${env.YOCTO_REPO_DIR}") {
def machineCfg = getMachineNameConfig()
def downloadCfg = getDownloadDirConfig()
def sstateCfg = getSstateMirrorConfig()
def artifactCfg = getArtifactConfig()
def autoCfg = "${machineCfg}\n${downloadCfg}\n${sstateCfg}\n${artifactCfg}\n"
if(!fileExists("./${confPath}")) {
def sourceCmd = "source ${env.YOCTO_ENV}"
println "Initial build detected, sourcing environment to create structures and files..."
def srcEnvStatus = sh(returnStatus: true, script: "bash -c '${sourceCmd} > /dev/null 2>&1'")
println " -> status sourcing the yocto env = ${srcEnvStatus}"
}
writeFile(file: "${confFile}", text: "${autoCfg}")
}
}
//-----------------------------------------------------------------------------
def setupEnvironmentForArtifacts(machine) {
// NOTE: this part depends on
// - the path defined in env.YOCTO_DEPLOYS
// - the target as defined in env.BITBAKE_PKG
// - the yocto config preparation as done in setupConfigFile()
// - the specific configuration as stated in getArtifactConfig()
// and affects the function getPackageArtifacts()
env.YOCTO_ARTIFACTS = "${env.YOCTO_DEPLOYS}/${env.BITBAKE_PKG}-${machine}.ci-artifacts"
}
//-----------------------------------------------------------------------------
def setupBuildEnvironment(machine, branchTag, cloneDir, isDebug) {
// with the machine parameter it will be possible to set up different
// environment variables in here. Currently we use the SolidRun board
setupGlobalEnvironmentVariables(cloneDir, machine)
def confPath = "build/conf"
env.RELATIVE_AUTOCONF_FILE = "${confPath}/auto.conf"
env.YOCTO_DEPLOYS = "${env.IMG_DEPLOY_DIR}/${machine}"
env.YOCTO_ENV = "nwl-init-build-env"
env.BITBAKE_PKG = getBitbakePackage(machine)
env.ISQUIET = isDebug.toBoolean() ? "" : "-q"
env.BITBAKE_CMD = "${env.ISQUIET} ${env.BITBAKE_PKG}"
removePreExistingYoctoConfigs(confPath)
gitCheckout("${env.YOCTO_REPO_URL}", branchTag, cloneDir, true)
env.PKG_NAME = "${env.BITBAKE_PKG}-${machine}"
sh("mkdir -p ${env.DEPLOY_CONTENT_DIR}")
setupConfigFile(confPath, "${env.RELATIVE_AUTOCONF_FILE}")
setupEnvironmentForArtifacts(machine)
}
//-----------------------------------------------------------------------------
def printEnvironmentParameters() {
println "----------------------------------\n\
Environment Parameters:\n\
\n\
--> machine = ${env.MACHINE}\n\
--> git URL = ${env.YOCTO_REPO_URL}\n\
--> yocto dir = ${env.YOCTO_REPO_DIR}\n\
--> shared build dir = ${env.SHARED_BUILD}\n\
--> autoconf file = ${env.RELATIVE_AUTOCONF_FILE}\n\
--> yocto deploys = ${env.YOCTO_DEPLOYS}\n\
--> yocto environment = ${env.YOCTO_ENV}\n\
--> bitbake package   = ${env.BITBAKE_PKG}\n\
--> package name      = ${env.PKG_NAME}\n\
--> download dir      = ${env.DOWNLOAD_DIR}\n\
--> artifacts file = ${env.YOCTO_ARTIFACTS}\n\
----------------------------------\n"
}
//-----------------------------------------------------------------------------
// check Yocto output file for warnings and print them
def checkAndHintOnWarnings(yoctoOutputFile) {
def warnFindCmd = "cat \"${yoctoOutputFile}\" | grep \"WARNING:\" || true"
def foundWarnings = sh(returnStdout: true, script: "${warnFindCmd}")
if("${foundWarnings}" != "") {
println "----------=< WARNINGS FOUND >=-----------\n${foundWarnings}\n-----------------------------------------\n"
}
}
//-----------------------------------------------------------------------------
// returns true if there is a fetch error (the piped command exits with 1 when "FetchError" is found)
def hasFetchError(yoctoOutputFile) {
def hasFetchErrorCmd = "cat \"${yoctoOutputFile}\" | grep \"FetchError\" > /dev/null 2>&1 && exit 1 || exit 0"
return (sh(returnStatus: true, script: "bash -c '${hasFetchErrorCmd}'") != 0).toBoolean()
}
//-----------------------------------------------------------------------------
// returns true if bitbake is unable to connect
def isBitbakeUnableToConnect(yoctoOutputFile) {
def errMsg= "ERROR: Unable to connect to bitbake server"
def isUnableCmd = "cat \"${yoctoOutputFile}\" | grep \"${errMsg}\" > /dev/null 2>&1 && exit 1 || exit 0"
return (sh(returnStatus: true, script: "bash -c '${isUnableCmd}'") != 0).toBoolean()
}
//-----------------------------------------------------------------------------
// kill any residing bitbake processes on errors
def killResidingBitbakeProcessesAtError(yoctoOutputFile) {
if(hasFetchError(yoctoOutputFile) || isBitbakeUnableToConnect(yoctoOutputFile)) {
println "Fetch- or connection error detected, killing residing bitbake processes..."
def getBbPidCmd = "ps -ax | grep bitbake | grep -v grep | head -n 1 | sed -e 's/^[ \t]*//' | cut -d' ' -f1"
def bitbakePid = sh(returnStdout: true, script: "${getBbPidCmd}")
if("${bitbakePid}" != "") {
println "Residing process found: ${bitbakePid}"
sh("kill -9 ${bitbakePid}")
}
}
}
//-----------------------------------------------------------------------------
def buildTheYoctoPackage() {
def yoctoOutFile = "yocto.out"
def sourceCall = "source ${env.YOCTO_ENV} > ${env.SHARED_BUILD}/${yoctoOutFile} 2>&1"
def buildCall = "bitbake ${env.BITBAKE_CMD} >> ${env.SHARED_BUILD}/${yoctoOutFile} 2>&1"
def buildCmd = "${sourceCall}; ${buildCall}"
def bitbakeStatus = 0;
def gitCredentials = getGitCredentialID()
sshagent (credentials: [gitCredentials]) {
bitbakeStatus = sh(returnStatus: true, script: "bash -c '${buildCmd}'")
}
println "bitbakeStatus=${bitbakeStatus}"
if(fileExists("${env.SHARED_BUILD}/${yoctoOutFile}")) {
if((bitbakeStatus != 0) || ("${env.ISQUIET}" == "")) {
println "Yoco Build Output: ----------------"
sh "cat ${env.SHARED_BUILD}/${yoctoOutFile}"
println "-----------------------------------"
}
checkAndHintOnWarnings("${env.SHARED_BUILD}/${yoctoOutFile}")
killResidingBitbakeProcessesAtError("${env.SHARED_BUILD}/${yoctoOutFile}")
}
// Do clean-up
sh "rm -f ${env.SHARED_BUILD}/${yoctoOutFile}"
sh(script: "git clean -f ${env.RELATIVE_AUTOCONF_FILE}")
if(bitbakeStatus != 0) {
error("Build error, check yocto build output")
}
}
//-----------------------------------------------------------------------------
// copy the package- and license manifest into the current directory
def getManifests(machine, artifactPath, targetBaseName) {
def pkgArtifactName = "${env.BITBAKE_PKG}-${machine}"
def pkgManifestFile = "${artifactPath}/${pkgArtifactName}.manifest"
println "Copying Manifests...\n\
--> artifactPath = ${artifactPath}\n\
--> pkgArtifactName = ${pkgArtifactName}\n\
--> pkgManifestFile = ${pkgManifestFile}\n\
--> targetBaseName = ${targetBaseName}"
sh(label: "Copy Package Manifest", script: "cp ${pkgManifestFile} ${targetBaseName}.manifest")
sh(label: "Copy License Manifest", script: """
LATEST_LICENSE_DIR=\$(ls -Artd ${env.LICENSE_DEPLOY_DIR}/${pkgArtifactName}* | tail -n 1)
cp \$LATEST_LICENSE_DIR/license.manifest ${targetBaseName}_license.manifest
""")
}
//-----------------------------------------------------------------------------
// copy the yocto artifacts into the current directory
def getPackageArtifacts(machine, artifactPath, artifactListFile) {
println "Getting package artifacts and copy them to current directory...\n\
--> artifactPath = ${artifactPath}\n\
--> artifactListFile = ${artifactListFile}"
sh(label: "Copy ${machine} Package Artifacts", script: """
cat ${artifactListFile}
cat ${artifactListFile} | xargs -I % sh -c 'cp ${artifactPath}/% .'
""")
}
//-----------------------------------------------------------------------------
// copy the package artifacts to destination directory for packing
def collectingPackageArtifacts(machine) {
dir("${env.PKG_CONTENT_DIR}") {
println "Collecting yocto package artifacts (machine = ${machine})..."
def artifactPath = "${env.IMG_DEPLOY_DIR}/${machine}"
getManifests(machine, "${artifactPath}", "./${machine}")
getPackageArtifacts(machine, artifactPath, "${env.YOCTO_ARTIFACTS}")
}
}
//-----------------------------------------------------------------------------
// pack and archive the artifacts
def packAndArchiveArtifacts(machine, pkgArchiveName) {
println "archiving the yocto package artifacts (machine = ${machine})..."
dir ('tmp/artifacts') {
zip archive: true, dir: "${env.PKG_CONTENT_DIR}", glob: "*", zipFile: "${pkgArchiveName}"
sh("cp ${pkgArchiveName} ${env.DEPLOY_CONTENT_DIR}/")
}
sh("rm -rf ${env.PKG_CONTENT_DIR}/*")
sh("rm -rf tmp/artifacts")
}
// !!Important Boilerplate!!
// The external code must return its contents as an object
return this;

jobs/nwlTargets (new file)
select...
cn9130-cf-pro

(new file)
- name: Setup Basic Debian Host Machine
hosts: linux
become: yes
tasks:
- name: Update Apt Cache and Upgrade all Packages
register: updatesys
apt:
name: "*"
state: latest
update_cache: yes
cache_valid_time: 3600 #1 hour
become: yes
- name: Display the last line of the previous task to check the stats
debug:
msg: "{{updatesys.stdout_lines|last}}"
- name: Install First Basic Packages
apt:
name:
- git
- tree
- vim
- rsync
- ca-certificates
- curl
- gnupg
- python3-pip
become: yes
- name: Adding Docker's GPG key and Setup the apt repository for Debian
ansible.builtin.shell: |
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
chmod a+r /etc/apt/keyrings/docker.gpg
echo \
"deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian \
"$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
tee /etc/apt/sources.list.d/docker.list > /dev/null
become: yes
- name: Update the apt Cache with the new Docker repository
apt:
update_cache: yes
become: yes
- name: Install Docker Engine
apt:
name:
- docker-ce
- docker-ce-cli
- containerd.io
- docker-compose-plugin
become: yes
- name: Install docker-compose with pip3
ansible.builtin.pip:
name: docker-compose
state: latest
become: yes
- name: Adding user {{user_name}} to the docker group
ansible.builtin.shell: |
groupadd -f docker   # -f: do not fail if the group already exists (the docker-ce package creates it)
usermod -aG docker {{user_name}}
become: yes

(new file)
- name: Build Jenkins Docker Image
hosts: linux
gather_facts: false
vars:
root_path: "~/nwl-ci"
tasks:
- name: Build the Jenkins Image
register: buildDocker
ansible.builtin.shell: |
docker build -t jenkins:nwl-0.0.1 .
args:
chdir: "{{root_path}}/docker"
executable: /bin/bash

(new file)
- name: Clone a Git Repository
hosts: linux
gather_facts: false
vars:
git_repo_path: "~/nwl-ci"
ssh_auto_sshadd_file: "./auto-sshadd"
ssh_keyfile: "~/.ssh/testvm_ed25519"
ssh_passphrase: !vault |
$ANSIBLE_VAULT;1.1;AES256
61323235356163363166663139613464303262333231656236313335313133373330316431333139
3135643639363966653938663666653831393132633765340a306665393864343466376637386661
39353535616366393631333161613065356666626266396138633866346462316365663339613263
6564643935326565630a386266376230613230336564363066373730363239303763663666363462
35353634626464656436633165316336323839616463333064633363306337353534
tasks:
- name: Check if auto-sshadd file exists
stat:
path: "{{ ssh_auto_sshadd_file }}"
register: auto_sshadd_stat
- name: Prepare auto ssh-add file
ansible.builtin.shell: |
echo '#!/bin/bash' > {{ ssh_auto_sshadd_file }}
echo 'echo $SSH_PASS' >> {{ ssh_auto_sshadd_file }}
chmod +x {{ ssh_auto_sshadd_file }}
no_log: true
when: not auto_sshadd_stat.stat.exists
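# The task below loads the SSH key via the SSH_ASKPASS mechanism: with DISPLAY set and stdin redirected
# from /dev/null, ssh-add runs the helper script (which echoes $SSH_PASS) instead of prompting for the passphrase.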
- name: Clone and Update Repository
ansible.builtin.shell: |
eval `ssh-agent -s`
SSH_PASS={{ssh_passphrase}} DISPLAY=1 SSH_ASKPASS="{{ssh_auto_sshadd_file}}" ssh-add {{ssh_keyfile}} < /dev/null
if [[ ! -d {{git_repo_path}} ]]; then
git clone ssh://git@bitbucket.gad.local:7999/nm-nsp/nwl-ci.git {{git_repo_path}}
fi
cd {{git_repo_path}}
git checkout develop
git fetch -ap
git pull
args:
executable: /bin/bash
no_log: true

(new file)
- name: Configure Docker Network Adapter
hosts: linux
gather_facts: false
become: yes
tasks:
- name: Bring docker network down and remove routes
ansible.builtin.shell: |
sudo systemctl stop docker
sudo systemctl stop docker.socket
sudo iptables -t nat -F POSTROUTING
sudo ip link set dev docker0 down
sudo ip addr del 172.17.0.1/16 dev docker0
become: yes
- name: Configure docker network
ansible.builtin.shell: |
echo "{ \"bip\": \"192.168.5.1/24\" }" > /run/daemon.json
sudo mv /run/daemon.json /etc/docker/daemon.json
sudo ip addr add 192.168.5.1/24 dev docker0
sudo ip link set dev docker0 up
become: yes
- name: Verify docker IP address
register: verifyIp
ansible.builtin.shell: |
ip addr show docker0
- name: Display IP verification output
debug:
msg: "{{verifyIp.stdout_lines}}"
- name: Bring docker up again
register: bringUp
ansible.builtin.shell: |
sudo systemctl start docker
sudo iptables -t nat -L -n
sudo ip route
become: yes
- name: Display Bring-up output
debug:
msg: "{{bringUp.stdout_lines}}"

(new file)
- name: Create SSH Keypair
hosts: linux
gather_facts: false
vars:
ssh_passphrase: !vault |
$ANSIBLE_VAULT;1.1;AES256
61323235356163363166663139613464303262333231656236313335313133373330316431333139
3135643639363966653938663666653831393132633765340a306665393864343466376637386661
39353535616366393631333161613065356666626266396138633866346462316365663339613263
6564643935326565630a386266376230613230336564363066373730363239303763663666363462
35353634626464656436633165316336323839616463333064633363306337353534
tasks:
- name: Generate ssh keypair on host
ansible.builtin.shell: |
ssh-keygen -t ed25519 -f "{{host_name}}_ed25519" -N "{{ssh_passphrase}}" -C "{{host_name}}@testenv"
args:
chdir: ~/.ssh
executable: /bin/bash
no_log: True

(new file)
- name: The First Playbook
hosts: linux
tags:
- rls
gather_facts: False
tasks:
- name: Get OS release Information
command: cat {{rls_info_path}}
- name: Get Host Name
command: hostname