doc/nwl-ci: documented adaptions for sstate-cache mirror and job split

Signed-off-by: Marc Mattmüller <marc.mattmueller@netmodule.com>
Marc Mattmüller 2023-07-31 18:07:36 +02:00
parent ea82dedaa1
commit a734db2932
1 changed file with 292 additions and 0 deletions

Integrating SSTATE-CACHE Mirror and Additional Pipeline for NWL
###############################################################
Preliminary Information
***********************
There were two requests for the NWL CI tracked in Target Process:
* setup a NWL `sstate mirror <https://tp.gad.local/restui/board.aspx?#page=userstory/414480>`_
- the NWL CI shall be adapted to use the sstate-cache mirror
- the NWL CI shall upload the sstate-cache to the mirror
* build `legacy NetModule machines <https://tp.gad.local/restui/board.aspx?#page=userstory/414705>`_
- a nightly build shall build all machines
For the latter request it makes sense to add a new job and adapt the current setup as follows:
* first job building a yocto target (machine as parameter)
* second job - a nightly build - building all machines
.. note::
For both of these requests the NWL Jenkins instance needs to be adapted. Hence, it makes sense to combine them and
adapt the instance only once.
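The intended job split can be sketched in shell terms as follows. This is a conceptual sketch only: the machine names and the image name are placeholders, since the authoritative machine list lives in *nwlTargets*:

```shell
#!/bin/sh
# Conceptual sketch: the per-target job builds exactly one machine, while
# the nightly job loops over all machines. All names are placeholders.
build_target() {
    machine="$1"
    echo "building MACHINE=${machine}"
    # MACHINE="${machine}" bitbake nwl-image   # assumed image name
}

# nightly "build all" behaviour:
for machine in machine-a machine-b machine-c; do
    build_target "${machine}"
done
```

In Jenkins terms, the loop corresponds to the *BuildAll* pipeline triggering the parameterized per-target job once per machine.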
SSTATE-CACHE Mirror Information
*******************************
The guardians use one sstate-cache mirror for all projects, e.g. BIL and CoreOS. Since the team currently wants to move
NWL forward in intermediate steps regarding kernel and CoreOS integration, I first decided to sync the sstate-cache to
NM's internal webserver *nmrepo.netmodule.intranet*, so as not to corrupt the guardians' sstate-cache. Unfortunately,
the network for the NetModule servers is down again: according to IT there is an issue with a switch which needs to be
replaced, and as a workaround the switch needs a reboot. This is now the second consecutive week in which the network
was unreachable on Monday. On top of that, port 22 (SSH) to the NetModule servers is closed.
Hence, I was forced to choose another location for the sstate-cache mirror. The server *netmodule-01* is already set up
with a webserver and acts as a playground, so we integrate the sstate-cache mirror on this server.
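On the consuming side, pointing the NWL builds at this mirror is then a matter of an ``SSTATE_MIRRORS`` entry in the Yocto configuration. A minimal sketch, assuming the webserver serves the mirror directory as *http://10.115.101.100/nwl-sstate*:

```
# Yocto local.conf / site.conf sketch (URL layout on netmodule-01 assumed):
# fetch shared-state objects from the mirror before rebuilding them locally
SSTATE_MIRRORS = "file://.* http://10.115.101.100/nwl-sstate/PATH;downloadfilename=PATH"
```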
Integration of the Requests
***************************
To build and set up the new NWL instance in one step, we first adapt the Jenkins files for the requested jobs:
* Renaming *Jenkinsfile_Build* to *Jenkinsfile_BuildAll* and adapting accordingly
- Acts as the nightly / overall pipeline
* Adding a new Jenkinsfile called *Jenkinsfile_BuildTarget* taking the build parts from the old *Jenkinsfile_Build*
- The build job for one Yocto target
* Adapting *Jenkinsfile_Common*
- Needs adaptations and new functions
* Adding new targets to nwlTargets
We have now prepared the repository for the setup of the CI environment. Let's integrate the changes:
* Adapt the seed script with the additional job in *build-docker* (branch *feature/ci/nwl*):
.. code-block:: bash
diff --git a/jenkins-ci/jobs/seed.groovy b/jenkins-ci/jobs/seed.groovy
index f47c358..79d3781 100644
--- a/jenkins-ci/jobs/seed.groovy
+++ b/jenkins-ci/jobs/seed.groovy
@@ -45,14 +45,14 @@ if (config.build.types.buildType.size() != 0) {
multibranchPipelineJob('build-pipeline') {
- displayName('NWL Build Pipeline')
- description('Builds the NWL distro')
+ displayName('NWL Pipeline')
+ description('Builds all NWL targets')
authorization {
permissionAll('anonymous')
}
factory {
workflowBranchProjectFactory {
- scriptPath('jobs/Jenkinsfile_Build')
+ scriptPath('jobs/Jenkinsfile_BuildAll')
}
}
orphanedItemStrategy {
@@ -70,3 +70,29 @@ multibranchPipelineJob('build-pipeline') {
}
}
+multibranchPipelineJob('build-yocto-target-pipeline') {
+ displayName('Build Yocto Target')
+ description('Builds a NWL target')
+ authorization {
+ permissionAll('anonymous')
+ }
+ factory {
+ workflowBranchProjectFactory {
+ scriptPath('jobs/Jenkinsfile_BuildTarget')
+ }
+ }
+ orphanedItemStrategy {
+ discardOldItems {
+ numToKeep(5)
+ }
+ }
+ branchSources {
+ git {
+ id('nwl-target-ci')
+ remote('ssh://git@bitbucket.gad.local:7999/nm-nsp/nwl-ci.git')
+ credentialsId("admin_credentials")
+ includes('develop release*')
+ }
+ }
+}
+
* Prepare the sstate-cache mirror on the webserver (10.115.101.100):
.. code-block:: bash
# assuming you are logged in on 10.115.101.100
mkdir /var/www/html/nwl-sstate
# log out from the webserver
* Make upload possible over SSH
- Create a new SSH keypair on the build server (10.115.101.98):
.. code-block:: bash
# log into the build server unless already done --> ssh user@10.115.101.98
# enter the ssh directory and create a new keypair:
cd .ssh
ssh-keygen -t ed25519 -f nginx-nwl -C "nginx@nwl"
- Copy the public SSH key to the webserver and test connection:
.. code-block:: bash
# copy the public key to the nginx server:
ssh-copy-id -i ~/.ssh/nginx-nwl.pub user@10.115.101.100
# test the connection - accept the fingerprint if requested:
ssh -i ~/.ssh/nginx-nwl user@10.115.101.100 "hostname;exit"
Enter passphrase for key '/home/user/.ssh/nginx-nwl':
netmodule-01
* Adapt the new NWL docker instance *nwl-env-ci*:
- Add the webserver credentials to Jenkins in *build-docker* (branch *feature/ci/nwl*):
.. code-block:: bash
diff --git a/jenkins-ci/scripts/credentials.groovy b/jenkins-ci/scripts/credentials.groovy
index 046b309..e03ffe2 100644
--- a/jenkins-ci/scripts/credentials.groovy
+++ b/jenkins-ci/scripts/credentials.groovy
@@ -8,6 +8,7 @@ import com.cloudbees.plugins.credentials.impl.*;
def config = new XmlSlurper().parse(new File('/var/lib/ci/config/jenkins.xml'));
def keyFilePath = "/var/lib/ci/keys/id_ed25519";
+def nginxKeyFilePath = "/var/lib/ci/keys/nginx-nwl";
def managerSecret = config.jenkins.ldap.@managerPw.text();
def domain = Domain.global();
def store = Jenkins.instance.getExtensionList('com.cloudbees.plugins.credentials.SystemCredentialsProvider')[0].getStore();
@@ -16,19 +17,22 @@ def credentials = new BasicSSHUserPrivateKey(CredentialsScope.GLOBAL, "admin_cre
config.jenkins.admin.@user.text(),
new BasicSSHUserPrivateKey.FileOnMasterPrivateKeySource(keyFilePath),
"", "");
-
store.addCredentials(domain, credentials);
+def nginxcredentials = new BasicSSHUserPrivateKey(CredentialsScope.GLOBAL, "nginx_credentials",
+ config.jenkins.nginx.@user.text(),
+ new BasicSSHUserPrivateKey.FileOnMasterPrivateKeySource(nginxKeyFilePath),
+ config.jenkins.nginx.@syncPw.text(), "");
+store.addCredentials(domain, nginxcredentials);
+
Credentials c = (Credentials) new UsernamePasswordCredentialsImpl(CredentialsScope.GLOBAL,
"admin_ldap_credentials", "",
config.jenkins.admin.@user.text(), managerSecret);
- Build a new image of the docker instance:
.. code-block:: bash
# on your local machine, enter the cloned build-docker repository
# (branch feature/ci/nwl) and run:
DOCKER_BUILDKIT=1 ./build.sh nwl 0.3.0
# upload the newly created images to the build server:
docker save nwl-env-ci:latest | bzip2 | pv | ssh user@10.115.101.98 docker load
docker save nwl-jenkins-ci:latest | bzip2 | pv | ssh user@10.115.101.98 docker load
- Tag the new image on the build server:
.. code-block:: bash
# log into the build server unless already done --> ssh user@10.115.101.98
# tag the uploaded images:
docker image tag nwl-env-ci:latest nwl-env-ci:0.3.0
docker image tag nwl-jenkins-ci:latest nwl-jenkins-ci:0.3.0
- On the build server:
+ Add the user and password of the webserver credentials to the Jenkins configuration:
.. code-block:: bash
# log into the build server unless already done --> ssh user@10.115.101.98
# enter the ci directory:
cd ~/work/ci
# adapt config/jenkins.xml as follows:
diff --git a/config/jenkins.xml b/config/jenkins.xml
index f852dea..c5a3ad4 100644
--- a/config/jenkins.xml
+++ b/config/jenkins.xml
@@ -6,6 +6,7 @@
<ldap managerDn="GA_ContinousIntegration@eu.GAD.local" server="ldaps://denec1adc003p.gad.local:3269"/>
<nexus name="CI_NexusArtifacts" user="ci-build-user" uploaderPw="4ciArtifacts" email="GA_ContinuousIntegration@belden.com"/>
+ <nginx name="CI_Nginx" user="user" syncPw="nginx4NWL!" email="GA_ContinuousIntegration@belden.com"/>
<smtp server="host.docker.internal" suffix="@belden.com"/>
+ Add the previously created SSH key to the Jenkins environment:
.. code-block:: bash
# enter the ci directory:
cd ~/work/ci
# copy the ssh key to the environment:
cp ~/.ssh/nginx-nwl keys/nwl/
+ Add the nginx webserver to known_hosts file of the jenkins environment:
.. code-block:: bash
# add the nginx webserver to known_hosts:
ssh-keyscan 10.115.101.100 >> ~/work/ci/config/known_hosts
* Setup the new instance on the build server:
.. code-block:: bash
# assuming you are still on the server in the directory ~/work/ci
# stop and destroy the current running instance
./manage.sh --name=nwl_0_2_0 destroy
# remove the residing file system content
rm -rf instances/nwl/main
# create and launch the new instance:
./manage.sh --image=nwl-env-ci:0.3.0 --branch=main \
--name=nwl_0_3_0 --platform=nwl \
--config=/home/user/work/ci/config/config.xml \
--revision=0.3.0 --maintainer=TeamCHBE create
Creating new instance...
Done!
# check the entry:
./manage.sh -p
+-----------+----------------------------+-------+---------+--------+----------+------------+----------+------------------+--------------+---------+
| name | host | port | status | branch | revision | maintainer | platform | image | container | display |
+-----------+----------------------------+-------+---------+--------+----------+------------+----------+------------------+--------------+---------+
| nwl_0_3_0 | netmodule-03.tcn.gad.local | 32780 | running | main | 0.3.0 | TeamCHBE | nwl | nwl-env-ci:0.3.0 | 93298bc13038 | NULL |
+-----------+----------------------------+-------+---------+--------+----------+------------+----------+------------------+--------------+---------+
* Test and fix the pipelines
- There were some issues because we do not have administration rights on this Jenkins instance and because the builds
are triggered anonymously
.. note::
In case of using *nmrepo.netmodule.intranet* as sstate-cache mirror, IT first needs to open port 22. You would then
need to add credentials for it, adapt the configuration accordingly, and add the server to known_hosts, similar to
what we did with the nginx keypair and connection.
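Should port 22 to *nmrepo.netmodule.intranet* be opened later, the steps would mirror the nginx setup above. A sketch with assumed key and user names (``/tmp`` is used here only for the sketch; the real key belongs in ``~/.ssh``):

```shell
# create a dedicated keypair for the nmrepo mirror (key name is an assumption)
rm -f /tmp/nginx-nmrepo /tmp/nginx-nmrepo.pub
ssh-keygen -t ed25519 -f /tmp/nginx-nmrepo -C "nginx@nmrepo" -N ""

# then, analogous to the nginx connection above:
# ssh-copy-id -i /tmp/nginx-nmrepo.pub user@nmrepo.netmodule.intranet
# ssh-keyscan nmrepo.netmodule.intranet >> ~/work/ci/config/known_hosts
```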
.. warning::
With all the changes for the sstate-cache mirror and the pipeline adaptations, the Ansible Jenkins instance might
fail. So far, no tests or adaptations have been made.
Standardization of Project-Specific Jenkins Instances
######################################################