
  1. We have a self-hosted runner image built: pnispero/gh-runner-image - Docker Image | Docker Hub (temporary location)
  2. Can use that image to deploy CBS to Kubernetes on the ad-build-dev cluster
    1. TODO: create formal instructions from the notes below on how to deploy the image
    2. TODO: create a yaml that creates the runner containers (with an option to create multiple) and deletes them with a grace period.
    This is good for now; we don't need to scale dynamically, since the runner(s) take minimal resources while idle (waiting for a request)
    1. kubectl create configmap env-config-map --from-file=env-config-map.yaml
    2. last step: kubectl apply -f deployment.yaml
    3. Solution: a vault operator will be in place and we won't need these secrets
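    A minimal sketch of the deployment yaml from the TODO above, assuming the pnispero/gh-runner-image image and the env-config-map created in the steps above; the replica count, grace period, and the assumption that the config map holds ORGANIZATION/ACCESS_TOKEN as literal keys are all placeholders, not decided values:

```yaml
# Sketch only - names and values here are assumptions, not a finalized deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gh-runner
spec:
  replicas: 2                             # "option to create multiple" runners
  selector:
    matchLabels:
      app: gh-runner
  template:
    metadata:
      labels:
        app: gh-runner
    spec:
      terminationGracePeriodSeconds: 15   # time for start.sh to deregister the runner
      containers:
        - name: gh-runner
          image: pnispero/gh-runner-image:latest
          envFrom:
            - configMapRef:
                name: env-config-map      # ORGANIZATION / ACCESS_TOKEN until the vault operator lands
```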

Current Tasks

  1. Work on the CLI - refer to CLI Tool
  2. Backend proposal? While waiting for Claudio tomorrow to figure out build backend orchestration
    1. Figure out what we need to pass in from the runner to the backend to start the build container,
      and how this system can handle a large number of requests
    2. Possible things we need to start the build container
      1. repo name
      2. organization
      3. branch
      4. user
      5. filepath to where the repo is checked out
    3. Think about where we are going to store the builds and where the build container does the builds; we may want to separate by user, like /sdf/group/ad/user1/component_name/branch
      1. make script that maybe the runners can invoke (to pass in the vars)
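    As a sketch of the script the runners could invoke, assuming the /sdf/group/ad layout proposed above - the function names, argument order, and BUILD_ROOT variable are all hypothetical:

```shell
#!/bin/bash
# Hypothetical helper the runners could invoke; the path layout and
# argument names are assumptions from the notes above, not a decided interface.

# Compose the per-user build directory: <root>/<user>/<component>/<branch>
build_path() {
    local user="$1" component="$2" branch="$3"
    local root="${BUILD_ROOT:-/sdf/group/ad}"
    printf '%s/%s/%s/%s\n' "$root" "$user" "$component" "$branch"
}

# Create the directory and print it, e.g. for the build container to mount
prepare_build_dir() {
    local dir
    dir="$(build_path "$@")"
    mkdir -p "$dir"
    printf '%s\n' "$dir"
}
```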
    4. Then we also want to think about where to keep the 'scripts' for building somewhere on s3df, so we don't have to bake the scripts into the images. Maybe /sdf/group/ad/eed/ad-build/build-scripts/
      1. Possible scripts we need (this can be one or multiple scripts):
      2. script to start the make or buildInstructions and enter the right filepath
      3. script to log what part of the build system workflow we are in (used by build system developers to debug builds that failed due to the build system)
    5. Create a deployment file for the build containers (may be minimal)
      1. needs the volume mounted
      2. the image
      3. the name of the container (repo name + branch + unique id (automatically made))
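    The naming convention above (repo name + branch + unique id) could be sketched as follows - the separator and the source of the unique id are assumptions:

```shell
#!/bin/bash
# Sketch of the container-name rule from the notes:
# <repo name>-<branch>-<unique id>. Separator and id scheme are assumptions.

container_name() {
    local repo="$1" branch="$2"
    # slashes in branch names (e.g. feature/x) are not valid in container names
    local safe_branch="${branch//\//-}"
    # short random suffix as the "automatically made" unique id
    printf '%s-%s-%s\n' "$repo" "$safe_branch" "$RANDOM"
}
```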
    6. Jerry is working on getting the image to build for x86/amd64 architecture instead of arm; he also saw that s3df has hundreds of packages installed, and we may or may not want to install a good number of them onto our containers.
  3. Build Workflow example
      1. Build System Backend Flow - LCLSControls - SLAC Confluence (stanford.edu)
  4. Create a simple hello world project,
    1. DONE Add to Jira - we can use this as the test project, upload it to github, add it to the component db
    2. Try MongoDB Compass for viewing the db - https://www.mongodb.com/products/tools/compass
    3. create a basic build environment with a 'build.sh' script copied over. Push this image, and add its url to hello world component. This build environment can be used by both developers and the build system.
    4. DONE - use the basic.yml workflow from the BuildSystem/ repo (which will be the workflow all 400-something repos under AD will have)
      1. See if we can move the workflow to the BuildSystem repo so all other repos can call it from there; like 'actions/checkout', we could do something like 'adbuild/build'. This way any update we roll out, any repo can easily receive
    5. The workflow should eventually do a GET request to the db to get the build environment image,
    6. then spin up that new build environment image. 3 options to get the actual repo itself onto the new image:
      1. (ideal) Runner does an actions/checkout onto a certain directory on s3df which will be accessible to all pods 
      2. Runner does an actions/checkout and passes in the actual repo through a 'kubectl cp',
      3. Or pass in the URL to the repo, which it can then clone - issue with git authorization
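      Option 2 above could look like this from the runner's side - a sketch only; the pod name and target path are hypothetical:

```shell
# Sketch of option 2: the runner has already done actions/checkout,
# so $GITHUB_WORKSPACE holds the repo. Pod name and path are hypothetical.

# copy the checked-out repo into the build environment pod
kubectl cp "$GITHUB_WORKSPACE" build-env-pod:/build/repo

# verify it landed
kubectl exec build-env-pod -- ls /build/repo
```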
    7. Then signal the new environment container to do a 'make'. Then we have 3 options:
      1. Have the environment container copy over a 'build.sh' script which does not get invoked when the container is spun up, but only when it is explicitly called with 'kubectl exec'
      2. When you do 'kubectl exec', it can wait until the make is finished, then report back to Actions that the build finished
      3. Or it can kick off the make but report back to Actions that the build continues in a certain container.
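      The 'wait until make finishes' option could be sketched as follows - the pod and script names are assumptions:

```shell
# Sketch of the blocking option: wait on the build and propagate its result.
# build-env-pod and /build/build.sh are hypothetical names.

# kubectl exec returns the script's exit code, so the Actions step
# fails automatically when the build fails
if kubectl exec build-env-pod -- /build/build.sh; then
    echo "build finished"
else
    echo "build failed" >&2
    exit 1
fi
```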
    8. CAVEATS: we can assume that if there are no additional instructions on the component entry in the db, then we just do a vanilla make (which is how most apps here build; at least for the IOCs it's true).
    9. Then we also want the simple project packed up as a package (src code and executable).
    10. Make sure we get the build output available to users, either through GitHub Actions or by pointing them to the container output (we may make a CLI command for it?)
  5. Figure out the authentication automation for the runners. (At the moment I get the blob of config from https://k8s.slac.stanford.edu/ad-build-dev. New solution: we are going to have only a couple of 'orchestrator' containers that will do the kubectl commands, so only those need to be authenticated.)
  6. Eventually we will need the runner to have access to s3df, so we can build on /scratch. Then check kaniko?
  7. Try to get a Dockerfile for buildroot going. Most of the building will happen in the Dockerfile (copying over files, then running make), so the GH workflow will just call docker build to start the process, and can push the image anywhere (local registry, Docker registry, GitHub registry).
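  A starting point for that buildroot Dockerfile might be the following sketch - the base image, defconfig name, and paths are placeholders:

```dockerfile
# Sketch only - base image, paths, and defconfig are placeholders.
FROM ubuntu:22.04

# toolchain and utilities buildroot needs
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
    build-essential git wget cpio unzip rsync bc file libncurses-dev ca-certificates

# copy the buildroot tree in (or clone a pinned release instead)
COPY . /buildroot
WORKDIR /buildroot

# most of the build happens here, so 'docker build' drives the whole process
RUN make qemu_x86_64_defconfig && make
```

  Then the workflow only needs to run 'docker build' and 'docker push' to whichever registry we pick.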
  8. Get the build system container running on the cluster: Deploying Self-Hosted GitHub Actions Runners with Docker | TestDriven.io (altered to fit our situation)
    1. Let's do it vanilla first (running the build system container)
      1. Create the image using base image: Package actions-runner (github.com)
        1. push the docker image to a registry so anyone can pull it
          1. From where the dockerfile is 
          2. 'docker build --tag pnispero/gh-runner-image:latest .'
          3. This step may change (make a Docker account, then create an access token, which will allow you to log in on your shell)
          4. 'docker push pnispero/gh-runner-image:latest'
          5. Output: pnispero/gh-runner-image - Docker Image | Docker Hub
        2. Dockerfile (here temporarily; these are the only 2 files you need to get this to work)

          Code Block
          # base
          FROM ubuntu:22.04
          
          # set the github runner version
          ARG RUNNER_VERSION="2.316.0"
          
          # update the base packages and add a non-sudo user
          RUN apt-get update -y && apt-get upgrade -y && useradd -m docker
          
          # install python and the packages your code depends on, along with jq so we can parse JSON
          # add additional packages as necessary
          RUN DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
              curl jq build-essential libssl-dev libffi-dev python3 python3-venv python3-dev python3-pip
          
          # cd into the user directory, download and unzip the github actions runner
          RUN cd /home/docker && mkdir actions-runner && cd actions-runner \
              && curl -O -L https://github.com/actions/runner/releases/download/v${RUNNER_VERSION}/actions-runner-linux-x64-${RUNNER_VERSION}.tar.gz \
              && tar xzf ./actions-runner-linux-x64-${RUNNER_VERSION}.tar.gz
          
          # install some additional dependencies
          RUN chown -R docker ~docker && /home/docker/actions-runner/bin/installdependencies.sh
          
          # copy over the start.sh script
          COPY start.sh start.sh
          
          # make the script executable
          RUN chmod +x start.sh
          
          # since the config and run script for actions are not allowed to be run by root,
          # set the user to "docker" so all subsequent commands are run as the docker user
          USER docker
          
          # set the entrypoint to the start.sh script
          ENTRYPOINT ["./start.sh"]

          start.sh

          Code Block
          #!/bin/bash
          
          ORGANIZATION=$ORGANIZATION
          ACCESS_TOKEN=$ACCESS_TOKEN
          
          # Generate organization registration token
          REG_TOKEN=$(curl -L \
            -X POST \
            -H "Accept: application/vnd.github+json" \
            -H "Authorization: Bearer ${ACCESS_TOKEN}" \
            -H "X-GitHub-Api-Version: 2022-11-28" \
            https://api.github.com/orgs/${ORGANIZATION}/actions/runners/registration-token | jq .token --raw-output)
          
          cd /home/docker/actions-runner
          
          ./config.sh --url https://github.com/${ORGANIZATION} --token ${REG_TOKEN}
          
          cleanup() {
              echo "Removing runner..."
              ./config.sh remove --unattended --token ${REG_TOKEN}
          }
          
          trap 'cleanup; exit 130' INT
          trap 'cleanup; exit 143' TERM
          
          ./run.sh & wait $!
      2. Do 'docker image ls' to ensure it's there
      3. Then you must be an organization administrator; make a personal access token with the "admin:org" and "repo" scopes to create a registration token for the organization (REST API endpoints for self-hosted runners - GitHub Docs)
      4. Copy the token, and use it in the next step
      5. Run the docker image

        Code Block
        docker run \
          --env ORGANIZATION=<ORG> \
          --env ACCESS_TOKEN=<PERSONAL-TOKEN> \
          --name runner1 \
          pnispero/gh-runner-image:latest

        Replace <ORG> with the organization name
        Replace <PERSONAL-TOKEN> with the token you created above

      6. And now your runner should be registered and running
      7. When done testing, make sure to 'ctrl+c', then 'stop' and 'remove' the container
    2. Start the image using kubectl on the ad-build Kubernetes cluster you created above
      1. Code Block
        # Start the image with environment variables
        kubectl run gh-runner1 --image=pnispero/gh-runner-image --env="ORGANIZATION=<ORG>" --env="ACCESS_TOKEN=<PERSONAL-TOKEN>"

        Replace <ORG> with the organization name
        Replace <PERSONAL-TOKEN> with the token you created above

    3. REMEMBER: IF STOPPING THE CONTAINER, give it a grace period so it has time to remove itself and deregister from the organization

      Code Block
      kubectl delete --grace-period=15 pod gh-runner1

      Sample request - but refer to the api docs (https://accel-webapp-dev.slac.stanford.edu/api-doc/?urls.primaryName=Core%20Build%20System)

      Code Block
      # gets component list
      curl -X 'GET' \
        'https://accel-webapp-dev.slac.stanford.edu/api/cbs/v1/component' \
        -H 'accept: application/json'
  9. Then we can use that for building buildroot. One of the workflows will check the source out onto /scratch/ on s3df, then build, and output results there.

Other Basic


Deployment of an image (running container), ex: Using kubectl to Create a Deployment | Kubernetes

Code Block
pnispero@PC100942:~$ kubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1
deployment.apps/kubernetes-bootcamp created
pnispero@PC100942:~$ kubectl get deployments
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
kubernetes-bootcamp   1/1     1            1           6s
pnispero@PC100942:~$ kubectl delete deployment kubernetes-bootcamp
deployment.apps "kubernetes-bootcamp" deleted
pnispero@PC100942:~$ kubectl get deployments
No resources found in default namespace.
pnispero@PC100942:~$


Deploy core build system backend to ad-build cluster (TODO)

  1. Apply crd for mongodb percona server (one time)
    1. https://github.com/eed-web-application/eed-accel-webapp-clusters-wide-setup.git

      Code Block
      pnispero@PC100942:~/eed-accel-webapp-clusters-wide-setup/test/mongodb-operator$ kubectl apply --server-side -f resource-1.15.0.yaml
      customresourcedefinition.apiextensions.k8s.io/perconaservermongodbbackups.psmdb.percona.com serverside-applied
      customresourcedefinition.apiextensions.k8s.io/perconaservermongodbrestores.psmdb.percona.com serverside-applied
      customresourcedefinition.apiextensions.k8s.io/perconaservermongodbs.psmdb.percona.com serverside-applied
      pnispero@PC100942:~/eed-accel-webapp-clusters-wide-setup/test/mongodb-operator$
  2. Apply the entire folder of https://github.com/eed-web-application/core-build-system-deployment/tree/main/test using 'kubectl apply -k test/'
    1. Code Block
      pnispero@PC100942:~/core-build-system-deployment$ kubectl apply -k test/
      serviceaccount/core-build-system-sa created
      serviceaccount/percona-server-mongodb-operator created
      role.rbac.authorization.k8s.io/percona-server-mongodb-operator created
      rolebinding.rbac.authorization.k8s.io/core-build-system-rb created
      rolebinding.rbac.authorization.k8s.io/service-account-percona-server-mongodb-operator created
      configmap/env-config-map created
      service/core-build-system-service created
      persistentvolumeclaim/core-build-system-s3df-ad-group created
      deployment.apps/core-build-system created
      deployment.apps/percona-server-mongodb-operator created
      Warning: path /api/cbs(/|$)(.*) cannot be used with pathType Prefix
      ingress.networking.k8s.io/core-build-system-ingress created
      ingress.networking.k8s.io/core-build-system-webhook-ingress created
      ingress.networking.k8s.io/elog-plus-backend-public-doc-ingress created
      perconaservermongodb.psmdb.percona.com/cbs-cluster created
      vaultsecret.ricoberger.de/application-secrets created
      vaultsecret.ricoberger.de/github-secret created
      vaultsecret.ricoberger.de/mongodb-secret created
    2. Didn't work; got an error:

      Code Block
      │   Warning  Failed           7m2s (x8 over 8m29s)    kubelet            Error: secret "mongodb-secret-x-default-x-vcluster--ad-build-dev" not found
    3. TODO: to fix the error, add the secrets to the vault secrets operator
      1. https://vault.slac.stanford.edu/ui/vault/secrets/secret/list/ad/ad-build-dev/
      2. To get the current secrets: 'kubectl get secret mongodb-secret -o jsonpath='{.data}''
      3. ACTUALLY - check you saved the tokens in your 
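      The fix in step 3 could go along these lines - a sketch only; the vault path follows the UI link above, and the key name is an assumption:

```shell
# Sketch: copy the existing in-cluster secret into vault so the
# vault secrets operator can materialize it. The key name is an assumption.

# 1. dump the current secret and decode a key (repeat per key)
kubectl get secret mongodb-secret -o jsonpath='{.data.MONGODB_PASSWORD}' | base64 -d

# 2. write the decoded values to the path the vaultsecret objects point at
vault kv put secret/ad/ad-build-dev/mongodb-secret \
    MONGODB_PASSWORD='<decoded value>'
```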
  3. Debug - you can also mass-delete resources using 'kubectl delete -k <folder>'