
Location: BuildSystem/artifact_storage/api at main · ad-build-test/BuildSystem (github.com)

TBD: linux_username and github_username may not be needed for the purposes of this API.

(POST) Build Image:

'headers': {
    "linux_username": "string",
    "github_username": "string"
},
'body': {
    "dockerfile": file,
    "component": "string",
    "branch": "string",
	"architecture": "string" // OS environment
}

Behavior/Flow

  1. Given the input information, the backend builds the given Dockerfile, naming the image <component>-<branch>-image:<tag>
    1. TODO: Finalize the image naming convention
  2. The image is then stored in the artifact storage registry, available for use in the test/deploy/dev environments
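The naming step above can be sketched in Python. Since the convention is still a TODO, this helper is only an illustration of the scheme as currently written down:

```python
def image_name(component: str, branch: str, tag: str) -> str:
    """Build the image name as <component>-<branch>-image:<tag>.

    NOTE: the naming convention is still TBD (see the TODO above);
    this just illustrates the current proposal.
    """
    return f"{component}-{branch}-image:{tag}"

print(image_name("test-ioc", "main", "latest"))  # test-ioc-main-image:latest
```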

Return

{
    "status": "string"
    "errorMessage": "string"
}

(GET) Get Image:

'headers': {
    "linux_username": "string",
    "github_username": "string"
},
'body': {
    "component": "string",
    "branch": "string",
	"architecture": "string" // OS environment
}

Behavior/Flow

  1. Given the input information, the backend checks whether an image exists for the given component.
    1. If it does not exist: return none.
    2. If it exists: return the name of the image, and possibly a command showing how to pull it.
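A minimal sketch of that lookup, assuming the registry contents can be stood in for by a simple mapping (the real registry query is not specified here, and the keys/values are hypothetical):

```python
from typing import Optional

# Hypothetical in-memory stand-in for the registry contents,
# keyed by (component, branch, architecture).
REGISTRY = {
    ("test-ioc", "main", "rhel9"): "test-ioc-main-image:latest",
}

def get_image(component: str, branch: str, architecture: str) -> Optional[str]:
    """Return the image name if it exists in the registry, else None."""
    return REGISTRY.get((component, branch, architecture))

print(get_image("test-ioc", "main", "rhel9"))  # test-ioc-main-image:latest
print(get_image("asyn", "main", "rhel9"))      # None
```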

Return

{
    "status": "string"
    "errorMessage": "string"
	"imageName": "string"
}

(POST) Build Component:

'headers': {
    "linux_username": "string",
    "github_username": "string"
},
'body': {
    "url": "string",
    "component": "string",
    "tag": "string" ,
	"architecture": "string" // OS environment
}

Behavior/Flow

  1. Given the input information, the backend clones the component and builds it based on instructions in the repo itself
  2. The component is then stored in artifact storage, where it can be used as a dependency for other components
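The clone-and-build step might look like the following sketch. The clone directory, build entry point, and storage path are all assumptions (the real build instructions live in each repo), and the commands are only composed here, not executed:

```python
def build_component_commands(url: str, component: str, tag: str) -> list[str]:
    """Compose the shell commands a backend worker might run to clone
    and build a component. The /build workdir, ./build.sh entry point,
    and /artifact-storage destination are hypothetical.
    """
    workdir = f"/build/{component}-{tag}"
    return [
        f"git clone --branch {tag} {url} {workdir}",
        f"cd {workdir} && ./build.sh",  # assumed repo-provided build script
        f"cp -r {workdir}/output /artifact-storage/{component}/{tag}",
    ]

for cmd in build_component_commands("https://github.com/org/asyn.git", "asyn", "R4-44"):
    print(cmd)
```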

Return

{
    "status": "string"
    "errorMessage": "string"
}

(GET) Get Component:

'headers': {
    "linux_username": "string",
    "github_username": "string"
},
'body': {
    "component": "string",
    "tag": "string",
	"architecture": "string" // OS environment
}

Behavior/Flow

  1. Given the input information, the backend checks whether the component exists in artifact storage.
    1. If it does not exist: clone and build the component in artifact storage.
    2. If it exists: (TBD) either return the component itself by zipping up the folder, or return the filepath to the component so the client can just copy it over.
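Under the filepath option (one of the two TBD choices above), the existence check could look like this sketch. The directory layout <root>/<component>/<tag>/<architecture> is an assumption for illustration:

```python
import os
import tempfile
from typing import Optional

def find_component(component: str, tag: str, architecture: str,
                   storage_root: str = "/artifact-storage") -> Optional[str]:
    """Return the filepath of a built component if it is already in
    artifact storage, else None (meaning it must be cloned and built).
    The layout <root>/<component>/<tag>/<architecture> is assumed.
    """
    path = os.path.join(storage_root, component, tag, architecture)
    return path if os.path.isdir(path) else None

# Demo against a temporary directory standing in for artifact storage.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "asyn", "R4-44", "rhel9"))
print(find_component("asyn", "R4-44", "rhel9", root) is not None)  # True
print(find_component("epics-base", "7.0", "rhel9", root))          # None
```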

Return

{
    "status": "string"
    "errorMessage": "string" // Optional
	"component": "string" // TBD (return filepath or component itself)
}

Other Information

  1. TODO: Currently trying to see if we can get podman to build without user ID 1000.
    It seems we can't: setting UID 46487 for podman doesn't work, and setting it above
    its subuid range hits the same problem.
    It may be fine to run as user 1000 for now; try mounting the s3df volume and see if
    we can build the Dockerfile on the registry.
    If that isn't acceptable, we may have to use an alternative like buildah.
    Then work on getting the testing structure done: unit tests and integration tests, add to the BOM, parse it, and make a basic unit test for the test-ioc.
  2. Motivation for an API to artifact storage: we don't want to repeat the same logic three times, once each for building, testing, and deployment. Instead, the logic lives once in the artifact storage itself and is called from each stage of the build system. For example: when building, "build this image I give you"; when testing, "give me the built image to run my app"; when deploying, "give me the built image to run my app". Also, the build containers will run different OSes, including old ones like RHEL5, and may have trouble building images inside them, so moving the logic to a container with a single consistent OS and podman version makes errors less likely. Finally, podman can only run ROOTLESS inside a container if we use a podman image, so we can't do it in the build containers unless they're root, which we want to avoid.
  3. Also trying to see if we can run podman in a container, because that may have more hope of building images within an image than docker. Try kubectl exec -it podman-priv -- sh, then try to build an image.
  4. For this to work, we need an image with podman installed, running as the root user, with security context privileged: true.

    [root@rocky9-testd /]# cd build/
    [root@rocky9-testd build]# ls
    __pycache__  asyn  epics-base  start_build.py  start_test.py
    [root@rocky9-testd build]# vim Dockerfile
    [root@rocky9-testd build]# podman build -t docker.io/pnispero/rocky9-env:podman -f Dockerfile .
    Successfully tagged docker.io/pnispero/rocky9-env:podman
    Successfully tagged localhost/pnispero/rocky9-env:podman
    6dea88dccb6a6b4ff9116c7215a089f7c865613d4932fc03eeae4b25baad5996
    [root@rocky9-testd build]# podman images
    REPOSITORY                     TAG         IMAGE ID      CREATED             SIZE
    docker.io/pnispero/rocky9-env  podman      6dea88dccb6a  About a minute ago  984 MB
    localhost/pnispero/rocky9-env  podman      6dea88dccb6a  About a minute ago  984 MB
    # The following is needed for me to push on pnispero/ on dockerhub
    [root@rocky9-testd build]# podman login docker.io
    Username: pnispero
    Password:
    Login Succeeded!
    [root@rocky9-testd build]# podman push docker.io/pnispero/rocky9-env:podman
    Getting image source signatures
    Copying blob 7c554e5c0228 done   |
    Copying blob 9e3fa8fc4839 done   |
    Copying blob 22514acd460a done   |
    Copying blob d3c9bab34657 done   |
    Copying blob e489bb4f45f2 done   |
    Copying blob 446f83f14b23 skipped: already exists
    Copying blob 9142ea245948 done   |
    Copying blob a9ebe5aa7e2b done   |
    Copying blob c776803672c2 done   |
    Copying blob f2f869ceb9a5 done   |
    Copying blob 7f312795052b done   |
    Copying config 6dea88dccb done   |
    Writing manifest to image destination

    Since this confirmed it works, we can have the build script generate the Dockerfile, send it over to the artifact storage, then start another container on ad-build that is root/privileged so it can build the image from the Dockerfile and push it to the registry.

  5. Update: Found a way to use podman to build an image WITHOUT a root user or privileged mode. See podman-test.yaml.
    Possible workflow: build script generates the Dockerfile → API request to artifact storage to build → artifact storage starts a container to build the Dockerfile.
    TODO: We can make the REST API ourselves (Django/Flask/FastAPI framework, and Swagger UI for docs?).
    This artifact storage process/container should have the logic to build Dockerfile images and the components themselves. It'll be a middleman accepting client requests and starting up containers to do the work.
    The artifact storage container can then just return the filepath to copy the built components from.
    Come up with the API definitions and what we need, then go over them with Jerry and decide whether to use Django or Flask.
    Authenticate the REST API with an API key passed to the build containers.

    1. Resource: How to use Podman inside of Kubernetes | Enable Sysadmin (redhat.com)
      How to run systemd in a container | Red Hat Developer
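The API-key check mentioned in item 5 could be as simple as the sketch below. The X-Api-Key header name and the env-var key source are assumptions; a real service would load the key from a Kubernetes secret:

```python
import hmac
import os

# Assumed: the expected key is provided to the service via an env var.
EXPECTED_KEY = os.environ.get("ARTIFACT_API_KEY", "dev-only-key")

def is_authorized(headers: dict) -> bool:
    """Check the (hypothetical) X-Api-Key header against the expected
    key, using a constant-time comparison to avoid timing leaks."""
    supplied = headers.get("X-Api-Key", "")
    return hmac.compare_digest(supplied, EXPECTED_KEY)

print(is_authorized({"X-Api-Key": EXPECTED_KEY}))  # True
print(is_authorized({}))                           # False
```

Whichever framework is chosen (Django/Flask/FastAPI), this check would sit in a request middleware or dependency so every endpoint gets it for free.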

How to authorize the API service on the Kubernetes cluster

  1. Create a service account, a role binding, and a service account token secret. See BuildSystem/artifact_storage/api/artifact_service_account.yaml at main · ad-build-test/BuildSystem (github.com)
  2. Apply this manifest with
    1. kubectl apply -f artifact_service_account.yaml 
  3. Then look at the secret
    1. kubectl describe -n artifact secret/myexample-sa-token
  4. Add that token to the environment variables passed into artifact_api_deployment.yaml: BuildSystem/artifact_storage/api/artifact_api_deployment.yaml at main · ad-build-test/BuildSystem (github.com)
  5. Apply this manifest with
    1. kubectl apply -f artifact_api_deployment.yaml
    2. This starts the API deployment; it runs the script that adds the kube config at $HOME/.kube/config, then starts the API process
  6. (Done) TODO: Create the script that initializes the kube configuration, then starts the API process (probably bash)
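The init step from 5.2 can be sketched as follows. The note above says the real script will probably be bash, so this Python version is only illustrative; the server URL, token value, and minimal kubeconfig layout are assumptions:

```python
import os
import tempfile

# Minimal kubeconfig authenticating with a service account token.
# Cluster/user/context names are hypothetical.
KUBECONFIG_TEMPLATE = """apiVersion: v1
kind: Config
clusters:
- cluster:
    server: {server}
  name: cluster
contexts:
- context:
    cluster: cluster
    user: artifact-sa
  name: default
current-context: default
users:
- name: artifact-sa
  user:
    token: {token}
"""

def write_kubeconfig(server: str, token: str, path: str) -> str:
    """Render the kubeconfig with the service account token and write
    it to `path` (normally $HOME/.kube/config in the deployment)."""
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        f.write(KUBECONFIG_TEMPLATE.format(server=server, token=token))
    return path

# Demo: write to a temporary directory instead of $HOME/.kube/config.
cfg_path = write_kubeconfig("https://k8s.example:6443", "sa-token-123",
                            os.path.join(tempfile.mkdtemp(), ".kube", "config"))
print("sa-token-123" in open(cfg_path).read())  # True
```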

How to test service is accessible to other build containers

  1. TODO: Apparently the artifact API needs to be in the same namespace as the build containers for them to access it. In that case, we may just put the build containers in the artifact namespace, or put the artifact API in the namespace of the build containers.