...
- We have a self-hosted runner image built: pnispero/gh-runner-image - Docker Image | Docker Hub (temporary location)
- That image can be used to deploy cbs to kubernetes on the ad-build-dev cluster
- TODO: create formal instructions from the notes below on how to deploy the image
- TODO: create a yaml that creates the runner containers (with options to create multiple) and deletes them with a grace period.
This is good for now; we don't need to dynamically scale since the runner(s) take minimal resources while idle (waiting for a request)
- kubectl create configmap env-config-map --from-file=env-config-map.yaml
- last step: kubectl apply -f deployment.yaml
- Solution: a vault operator will be in place, so we won't need these secrets
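A minimal sketch of what that runner deployment yaml could look like, generated from a shell script so the replica count is a parameter. All names here (gh-runner, runner-secrets) are assumptions for illustration, and the inline secret reference would go away once the vault operator is in place:

```shell
#!/bin/bash
# Sketch: generate a Deployment manifest for N runner pods with a
# termination grace period so each runner can deregister on shutdown.
# All names (gh-runner, runner-secrets) are hypothetical.
REPLICAS="${1:-2}"

cat > runner-deployment.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gh-runner
spec:
  replicas: ${REPLICAS}
  selector:
    matchLabels:
      app: gh-runner
  template:
    metadata:
      labels:
        app: gh-runner
    spec:
      # give start.sh's cleanup trap time to remove the runner
      terminationGracePeriodSeconds: 30
      containers:
      - name: gh-runner
        image: pnispero/gh-runner-image:latest
        envFrom:
        - secretRef:
            name: runner-secrets   # ORGANIZATION + ACCESS_TOKEN
EOF

echo "wrote runner-deployment.yaml with ${REPLICAS} replicas"
# then: kubectl apply -f runner-deployment.yaml
```

Scaling up is then just editing replicas and re-applying, which matches the "no dynamic scaling needed" note above.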
Current Tasks
- work on cli - refer to CLI Tool
- blocked waiting for s3df/db update: Create a simple hello world project
- DONE Add to Jira - we can use this as the test project, upload it to github, add it to the component db
- try this with the mongodb for us to view - https://www.mongodb.com/products/tools/compass
- create a basic build environment with a 'build.sh' script copied over. Push this image and add its url to the hello world component. This build environment can be used by both developers and the build system.
- DONE - use the basic.yml workflow from BuildSystem/ (which will be the workflow all 400-something repos under ad will have)
- See if we can move the workflow to the BuildSystem repo so all other repos can call it from there; like 'actions/checkout', we can do something like 'adbuild/build'. This way any update we roll out can easily be received by any repo
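A sketch of what the caller workflow in each repo could look like, using GitHub Actions' reusable-workflow syntax. The org/repo path and ref are placeholders, and basic.yml would need an 'on: workflow_call' trigger for this to work:

```yaml
# .github/workflows/build.yml in each component repo (sketch;
# the <org> placeholder and the @main ref are assumptions)
name: build
on: [push]
jobs:
  build:
    # calls the shared workflow kept in the BuildSystem repo,
    # similar in spirit to how 'actions/checkout' is referenced
    uses: <org>/BuildSystem/.github/workflows/basic.yml@main
```

With this shape, updating basic.yml in BuildSystem updates every repo that references it.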
- the workflow should eventually do a GET request to the db to get the build environment image,
- then spin up that new build environment image. 3 options to get the actual repo onto the new image:
- (ideal) the runner does an actions/checkout onto a certain directory on s3df which will be accessible to all pods
- the runner does an actions/checkout and passes in the actual repo through a 'kubectl cp',
- or pass in the url to the repo, which it can then clone - issue with git authorization
- Then signal the new environment container to do a 'make'. 3 options here:
- have the environment container copy over a 'build.sh' script which does not get invoked when the container is spun up, but only when it is explicitly called with 'kubectl exec'
- when you do 'kubectl exec' it can wait until the make is finished, then report back to actions that the build finished
- or it can signal the make, but report back to actions that the build continues on a certain container
- CAVEAT: we can assume that if there are no additional build instructions on the component entry in the db, then we just do a vanilla make. (Which is what most apps here do to build; at least for the iocs it's true)
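The kubectl cp / kubectl exec route above could look roughly like the following. This is a dry-run sketch (it only prints the commands it would run); the pod name and paths are made up:

```shell
#!/bin/bash
# Dry-run sketch of the 'kubectl cp' option: copy the checked-out repo
# into the build container, then kick off the make and wait for it.
# Pod/path names are hypothetical; swap "echo kubectl" for "kubectl"
# to actually run it against a cluster.
KUBECTL="echo kubectl"   # dry run: print instead of execute

BUILD_POD="build-env-hello-world"   # assumed build container pod name
REPO_DIR="./hello-world"            # where actions/checkout put the repo

# 1. copy the repo into the build container
$KUBECTL cp "${REPO_DIR}" "${BUILD_POD}:/workspace/hello-world"

# 2. run the build; kubectl exec blocks until make finishes, so the
#    workflow step reports success/failure when this returns
$KUBECTL exec "${BUILD_POD}" -- /bin/sh -c "cd /workspace/hello-world && make"
```

The blocking behavior of kubectl exec is what makes the "wait until the make is finished, then report back" option simple.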
- Then we also want the simple project packed up as a package (src code and executable).
- Make sure we get the build output available to users, either to github actions, or point them to the container output (we may make a cli command for it?)
- Figure out the authentication automation for the runners. (At the moment I get the blob of config from https://k8s.slac.stanford.edu/ad-build-dev)
- Eventually we will need the runner to have access to s3df, so we can build on /scratch. Then check kaniko?
- Try to get a dockerfile for buildroot going. Most of the building will be in the dockerfile (copying over files, then running make); the gh workflow will just call docker build to start the process, and can push the image anywhere (local registry, docker registry, github registry)
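A rough sketch of what that buildroot dockerfile could look like, assuming a vanilla make; the base image, package list, and paths are placeholders to be adjusted:

```dockerfile
# Sketch only: base image, packages, and paths are assumptions
FROM ubuntu:22.04

# toolchain needed for a vanilla make
RUN apt-get update -y && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
    build-essential git

# copy the checked-out repo into the image and build it
COPY . /build
WORKDIR /build
RUN make

# the gh workflow then just runs something like:
#   docker build -t <registry>/<component>:<tag> .
#   docker push <registry>/<component>:<tag>
```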
- Get the build system container running on the cluster: Deploying Self-Hosted GitHub Actions Runners with Docker | TestDriven.io (altered to fit our situation)
- Let's do it vanilla first (running the build system container)
- Create the image using base image: Package actions-runner (github.com)
- push the docker image to a registry so anyone can pull it
- From the directory where the dockerfile is:
- 'docker build --tag pnispero/gh-runner-image:latest .'
- This step may change (make a docker account, then create an access token, which will allow you to log in on your shell)
- 'docker push pnispero/gh-runner-image:latest'
- Output: pnispero/gh-runner-image - Docker Image | Docker Hub
Dockerfile (here temporarily; these are the only 2 files you need to get this to work)
Code Block
# base
FROM ubuntu:22.04
# set the github runner version
ARG RUNNER_VERSION="2.316.0"
# update the base packages and add a non-sudo user
RUN apt-get update -y && apt-get upgrade -y && useradd -m docker
# install python and the packages your code depends on, along with jq so we can parse JSON
# add additional packages as necessary
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
curl jq build-essential libssl-dev libffi-dev python3 python3-venv python3-dev python3-pip
# cd into the user directory, download and unzip the github actions runner
RUN cd /home/docker && mkdir actions-runner && cd actions-runner \
&& curl -O -L https://github.com/actions/runner/releases/download/v${RUNNER_VERSION}/actions-runner-linux-x64-${RUNNER_VERSION}.tar.gz \
&& tar xzf ./actions-runner-linux-x64-${RUNNER_VERSION}.tar.gz
# install some additional dependencies
RUN chown -R docker ~docker && /home/docker/actions-runner/bin/installdependencies.sh
# copy over the start.sh script
COPY start.sh start.sh
# make the script executable
RUN chmod +x start.sh
# since the config and run script for actions are not allowed to be run by root,
# set the user to "docker" so all subsequent commands are run as the docker user
USER docker
# set the entrypoint to the start.sh script
ENTRYPOINT ["./start.sh"]
start.sh
Code Block
#!/bin/bash
ORGANIZATION=$ORGANIZATION
ACCESS_TOKEN=$ACCESS_TOKEN
# Generate organization registration token
REG_TOKEN=$(curl -L \
-X POST \
-H "Accept: application/vnd.github+json" \
-H "Authorization: Bearer ${ACCESS_TOKEN}" \
-H "X-GitHub-Api-Version: 2022-11-28" \
https://api.github.com/orgs/${ORGANIZATION}/actions/runners/registration-token | jq .token --raw-output)
cd /home/docker/actions-runner
./config.sh --url https://github.com/${ORGANIZATION} --token ${REG_TOKEN}
cleanup() {
echo "Removing runner..."
./config.sh remove --unattended --token ${REG_TOKEN}
}
trap 'cleanup; exit 130' INT
trap 'cleanup; exit 143' TERM
./run.sh & wait $!
- do 'docker image ls' to ensure it's there
- Then you must be an organization administrator; make a personal access token with the "admin:org" and "repo" scopes to create a registration token for an organization (REST API endpoints for self-hosted runners - GitHub Docs)
- Copy the token and use it in the next step
Run the docker image
Code Block
docker run \
--env ORGANIZATION=<ORG> \
--env ACCESS_TOKEN=<PERSONAL-TOKEN> \
--name runner1 \
runner-image
Replace <ORG> with the organization name
Replace <PERSONAL-TOKEN> with the token you created above
- And now your runner should be registered and running
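To confirm registration, the same GitHub REST API that start.sh uses can list the org's runners. Dry-run sketch below (prints the curl it would run); ORGANIZATION and the token are placeholders:

```shell
#!/bin/bash
# Sketch: verify the runner registered, via GET /orgs/{org}/actions/runners.
# Dry run by default; swap "echo curl" for "curl" to really call the API.
CURL="echo curl"
ORGANIZATION="my-org"       # placeholder org name
ACCESS_TOKEN="ghp_xxx"      # placeholder PAT with admin:org scope

# lists registered runners; pipe the real response through
# jq '.runners[] | {name, status}' to see name + online/offline
$CURL -s \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer ${ACCESS_TOKEN}" \
  -H "X-GitHub-Api-Version: 2022-11-28" \
  "https://api.github.com/orgs/${ORGANIZATION}/actions/runners"
```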
- When done testing, make sure to 'ctrl+c', then 'docker stop' and 'docker rm' the container
- Start the image using kubectl on the ad-build kubernetes cluster you created above
Code Block
# Start the image with environment variables
kubectl run gh-runner1 --image=pnispero/gh-runner-image --env="ORGANIZATION=<ORG>" --env="ACCESS_TOKEN=<PERSONAL-TOKEN>"
Replace <ORG> with the organization name
Replace <PERSONAL-TOKEN> with the token you created above
REMEMBER: if stopping the container, give it a grace period so it has some time to deregister itself from the organization
Code Block
kubectl delete --grace-period=15 pod gh-runner1
Sample request (but refer to the api docs: https://accel-webapp-dev.slac.stanford.edu/api-doc/?urls.primaryName=Core%20Build%20System)
Code Block
# gets component list
curl -X 'GET' \
'https://accel-webapp-dev.slac.stanford.edu/api/cbs/v1/component' \
-H 'accept: application/json'
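Once the component list comes back, the workflow's GET step just needs to pull the build environment image out of the response. The field names below (name, buildImageUrl) are guesses at the schema; check the api docs linked above for the real shape. Sample JSON is embedded here so the sketch is self-contained:

```shell
#!/bin/bash
# Sketch: extract a component's build environment image from the
# component list. The JSON shape is a guess (see the api docs);
# normally the input would come from the curl above, not a file.
cat > components.json <<'EOF'
[
  {"name": "hello-world", "buildImageUrl": "pnispero/hello-world-env:latest"},
  {"name": "other-app",   "buildImageUrl": "pnispero/other-app-env:latest"}
]
EOF

# pick the image for one component by name
IMAGE=$(jq -r '.[] | select(.name == "hello-world") | .buildImageUrl' components.json)
echo "build environment image: ${IMAGE}"
```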
- Then we can use that for building buildroot. One of the workflows will check out onto /scratch/ in s3df, then build, and output results there.
- backend proposal?
- Figure out what we need to pass in from the runner to the backend to start the build container, and how this system can handle a large number of requests
- Possible things we need to start the build container:
- repo name
- organization
- branch
- user
- filepath to where the repo is checked out
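Those items could travel to the backend as one JSON payload. A sketch below; the field names and the /build endpoint are made up, not the real API:

```shell
#!/bin/bash
# Sketch: assemble the build-request payload from the items listed
# above. Endpoint and field names are hypothetical.
REPO="hello-world"
ORGANIZATION="ad"
BRANCH="main"
BUILD_USER="user1"
CHECKOUT_PATH="/sdf/group/ad/${BUILD_USER}/${REPO}/${BRANCH}"

PAYLOAD=$(cat <<EOF
{
  "repo": "${REPO}",
  "organization": "${ORGANIZATION}",
  "branch": "${BRANCH}",
  "user": "${BUILD_USER}",
  "checkoutPath": "${CHECKOUT_PATH}"
}
EOF
)
echo "${PAYLOAD}"
# then something like:
#   curl -X POST -H 'Content-Type: application/json' -d "${PAYLOAD}" <backend-url>/build
```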
- Think about where we are going to store the builds and where the build container does the builds; we may want to separate by user, like /sdf/group/ad/user1/component_name/branch
- make a script that the runners can invoke (to pass in the vars)
- Then we also want to think about keeping the build 'scripts' somewhere on s3df, so we don't have to bake them into the images. Maybe /sdf/group/ad/eed/ad-build/build-scripts/
- Possible scripts we need (this can be one or multiple scripts):
- script to start the make or buildInstructions and enter the right filepath
- script to log which part of the build system workflow we are in (used by build system developers to debug builds that failed due to the build system)
- Create a deployment file for the build containers (may be minimal)
- needs the volume mounted
- the image
- the name of the container (repo name + branch + unique id (automatically generated))
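The container name and per-user build directory proposed above can be derived mechanically from the build-request vars. A sketch with example values:

```shell
#!/bin/bash
# Sketch: derive the build container name (repo + branch + unique id)
# and the per-user build directory proposed above. Values are examples.
REPO="hello-world"
BRANCH="main"
BUILD_USER="user1"

# unique suffix so two builds of the same branch don't collide
UNIQUE_ID=$(date +%s)
CONTAINER_NAME="${REPO}-${BRANCH}-${UNIQUE_ID}"

BUILD_DIR="/sdf/group/ad/${BUILD_USER}/${REPO}/${BRANCH}"

echo "container: ${CONTAINER_NAME}"
echo "build dir: ${BUILD_DIR}"
```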
- Jerry is working on getting x86/amd64 architecture to build (not arm). He saw hundreds of packages installed on s3df; we may or may not want to install a good number of them onto our containers.
- Build Workflow example
- Build System Backend Flow - LCLSControls - SLAC Confluence (stanford.edu)
- Figure out the authentication automation for the runners. (At the moment I get the blob of config from https://k8s.slac.stanford.edu/ad-build-dev.) New solution: we are going to have only a couple of 'orchestrator' containers that will do the kubectl commands, so only those need to be authenticated
Other Basic
Deployment of an image (running container) ex: Using kubectl to Create a Deployment | Kubernetes
Code Block
pnispero@PC100942:~$ kubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1
deployment.apps/kubernetes-bootcamp created
pnispero@PC100942:~$ kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
kubernetes-bootcamp 1/1 1 1 6s
pnispero@PC100942:~$ kubectl delete deployment kubernetes-bootcamp
deployment.apps "kubernetes-bootcamp" deleted
pnispero@PC100942:~$ kubectl get deployments
No resources found in default namespace.
pnispero@PC100942:~$
Deploy core build system backend to ad-build cluster (TODO)
- Apply crd for mongodb percona server (one time)
https://github.com/eed-web-application/eed-accel-webapp-clusters-wide-setup.git
Code Block
pnispero@PC100942:~/eed-accel-webapp-clusters-wide-setup/test/mongodb-operator$ kubectl apply --server-side -f resource-1.15.0.yaml
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbbackups.psmdb.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbrestores.psmdb.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbs.psmdb.percona.com serverside-applied
pnispero@PC100942:~/eed-accel-webapp-clusters-wide-setup/test/mongodb-operator$
- Apply the entire test/ folder of https://github.com/eed-web-application/core-build-system-deployment/tree/main/test using 'kubectl apply -k test/'
Code Block
pnispero@PC100942:~/core-build-system-deployment$ kubectl apply -k test/
serviceaccount/core-build-system-sa created
serviceaccount/percona-server-mongodb-operator created
role.rbac.authorization.k8s.io/percona-server-mongodb-operator created
rolebinding.rbac.authorization.k8s.io/core-build-system-rb created
rolebinding.rbac.authorization.k8s.io/service-account-percona-server-mongodb-operator created
configmap/env-config-map created
service/core-build-system-service created
persistentvolumeclaim/core-build-system-s3df-ad-group created
deployment.apps/core-build-system created
deployment.apps/percona-server-mongodb-operator created
Warning: path /api/cbs(/|$)(.*) cannot be used with pathType Prefix
ingress.networking.k8s.io/core-build-system-ingress created
ingress.networking.k8s.io/core-build-system-webhook-ingress created
ingress.networking.k8s.io/elog-plus-backend-public-doc-ingress created
perconaservermongodb.psmdb.percona.com/cbs-cluster created
vaultsecret.ricoberger.de/application-secrets created
vaultsecret.ricoberger.de/github-secret created
vaultsecret.ricoberger.de/mongodb-secret created
Didn't work; got an error:
Code Block
Warning  Failed  7m2s (x8 over 8m29s)  kubelet  Error: secret "mongodb-secret-x-default-x-vcluster--ad-build-dev" not found
- TODO: to fix error, add the secrets to the vault secrets operator
- https://vault.slac.stanford.edu/ui/vault/secrets/secret/list/ad/ad-build-dev/
- To get the current secrets: kubectl get secret mongodb-secret -o jsonpath="{.data}"
- ACTUALLY - check you saved the tokens in your
- Debug: you can also mass-delete resources using 'kubectl delete -k <folder>'
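For reference, the values in .data from that jsonpath command are base64-encoded, so each one needs a 'base64 -d' before comparing it with what is in vault. Sketch below with a dummy value standing in for kubectl output:

```shell
#!/bin/bash
# The secret values from 'kubectl get secret ... -o jsonpath={.data}'
# are base64-encoded; decode each before comparing with vault.
# Dummy value here; normally it would come from the kubectl output.
ENCODED="cGFzcw=="                 # base64 for "pass"
DECODED=$(echo "${ENCODED}" | base64 -d)
echo "decoded secret: ${DECODED}"
```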