Location: BuildSystem/artifact_storage/api at main · ad-build-test/BuildSystem (github.com)
TBD - May not need linux_username and github_username for the purposes of this api.
(POST) Build Image:
'headers': {
"linux_username": "string",
"github_username": "string"
},
'body': {
"dockerfile": file,
"component": "string",
"branch": "string",
"architecture": "string" // OS environment
}
Behavior/Flow
- Given the input information, the backend builds the given Dockerfile, naming the image <component>-<branch>-image:<tag>
- TODO: Come up with image naming convention
- The image is then stored in the artifact storage registry, available for use by the test/deploy/dev environments
Return
{
"status": "string",
"errorMessage": "string"
}
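The naming convention from the flow above can be sketched as a small helper. This is illustrative only; the tag scheme is still TBD per the TODO, so the "latest" default here is an assumption.

```python
# Sketch of the image-naming convention <component>-<branch>-image:<tag>.
# The default tag "latest" is an assumption; the real scheme is TBD.
def image_name(component: str, branch: str, tag: str = "latest") -> str:
    """Compose the registry name for a built component image."""
    return f"{component}-{branch}-image:{tag}"

print(image_name("test-ioc", "main"))  # test-ioc-main-image:latest
```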
(GET) Get Image:
'headers': {
"linux_username": "string",
"github_username": "string"
},
'body': {
"component": "string",
"branch": "string",
"architecture": "string" // OS environment
}
Behavior/Flow
- Given the input information, the backend checks whether an image exists for the given component.
- if it does not exist: return none
- if it exists: return the name of the image, and possibly a command showing how to pull it
Return
{
"status": "string",
"errorMessage": "string",
"imageName": "string"
}
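The Get Image flow above could look like this hypothetical in-memory lookup; the registry contents, key layout, and status strings are illustrative assumptions, not the real backend.

```python
# Hypothetical registry lookup mirroring the Get Image flow: return the image
# name plus a suggested pull command if it exists, otherwise a "none" result.
REGISTRY = {("test-ioc", "main", "rocky9"): "test-ioc-main-image:latest"}  # example data

def get_image(component: str, branch: str, architecture: str) -> dict:
    name = REGISTRY.get((component, branch, architecture))
    if name is None:
        return {"status": "error", "errorMessage": "image does not exist", "imageName": None}
    return {"status": "success", "errorMessage": None, "imageName": name,
            "pullCommand": f"podman pull {name}"}  # suggested pull command
```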
(POST) Build Component:
'headers': {
"linux_username": "string",
"github_username": "string"
},
'body': {
"url": "string",
"component": "string",
"tag": "string" ,
"architecture": "string" // OS environment
}
Behavior/Flow
- Given the input information, the backend clones the component and builds it based on instructions in the repo itself
- The component is then stored in artifact storage, where it can be used as a dependency for other components
Return
{
"status": "string",
"errorMessage": "string"
}
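A minimal sketch of the clone-and-build step: composing the commands the backend might run. The `make` entry point is an assumption for illustration; the notes say the actual build instructions live in the repo itself.

```python
# Compose the commands for the Build Component flow. The "make" step is an
# assumed stand-in for whatever build instructions the repo carries.
def build_component_commands(url: str, component: str, tag: str) -> list:
    return [
        ["git", "clone", "--branch", tag, url, component],  # fetch the tagged source
        ["make", "-C", component],                          # assumed in-repo build entry point
    ]
```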
(GET) Get component:
'headers': {
"linux_username": "string",
"github_username": "string"
},
'body': {
"component": "string",
"tag": "string",
"architecture": "string" // OS environment
}
Behavior/Flow
- Given the input information, the backend checks whether the component exists in artifact storage,
- if it does not exist: clone and build the component into artifact storage
- if it exists: (TBD) either return the component itself, by zipping up the folder, or return the filepath to the component so the client can just copy it over
Return
{
"status": "string",
"errorMessage": "string", // Optional
"component": "string" // TBD (return filepath or component itself)
}
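The get-or-build flow above can be sketched as follows. The path layout and the injected build callback are assumptions, and whether to return a filepath or a zipped copy is still TBD per the notes.

```python
import os

# Sketch of the Get Component flow: check artifact storage for the component
# and trigger a build if missing. Path layout <root>/<component>/<tag>/<arch>
# is an assumption; the real build step is stubbed via the build callback.
def get_component(storage_root: str, component: str, tag: str,
                  architecture: str, build=os.makedirs) -> dict:
    path = os.path.join(storage_root, component, tag, architecture)
    if not os.path.isdir(path):
        build(path)  # stand-in for the real clone-and-build step
    # TBD: return the filepath (client copies it) vs. a zipped archive
    return {"status": "success", "component": path}
```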
- TODO: Right now you're trying to see if podman can build without user ID 1000.
It seems like you can't: tried setting 46487 as the podman user, which doesn't work, then tried
setting it above its subuid range, same problem.
Maybe it's fine to run as user 1000 for now; try mounting the S3DF volume and see if we
can build the Dockerfile on the registry.
If that's not acceptable, we may have to use an alternative like Buildah.
Then work on getting the testing structure done: unit tests and integration tests; add them to the BOM, parse it, and make a basic unit test for the test-ioc.
- Motivation for an API to artifact storage: we don't want to repeat the logic three times, once each for building, testing, and deployment. Instead, keep the logic once in the artifact storage itself and call it from each stage of the build system. Ex: when building, "build this image I give you"; when testing, "give me the built image to run my app"; when deploying, "give me the built image to run my app". Also, the build containers will run different OSes, including old ones like RHEL5, which may have trouble building images, so moving the logic into a container with a single consistent OS environment and podman version makes errors less likely. Finally, podman can only run ROOTLESS in a container if we use a podman image, so we can't do it in the build containers unless they're root, which we want to avoid.
- Also trying to see if podman can run in a container, because that may have more hope of building images within an image than docker. Try kubectl exec -it into the privileged podman pod with sh, then try to build an image.
For this to work, you need an image with podman installed, the root user, and a security context with privileged: true.
[root@rocky9-testd /]# cd build/
[root@rocky9-testd build]# ls
__pycache__ asyn epics-base start_build.py start_test.py
[root@rocky9-testd build]# vim Dockerfile
[root@rocky9-testd build]# podman build -t docker.io/pnispero/rocky9-env:podman -f Dockerfile .
Successfully tagged docker.io/pnispero/rocky9-env:podman
Successfully tagged localhost/pnispero/rocky9-env:podman
6dea88dccb6a6b4ff9116c7215a089f7c865613d4932fc03eeae4b25baad5996
[root@rocky9-testd build]# podman images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/pnispero/rocky9-env podman 6dea88dccb6a About a minute ago 984 MB
localhost/pnispero/rocky9-env podman 6dea88dccb6a About a minute ago 984 MB
# The following is needed for me to push on pnispero/ on dockerhub
[root@rocky9-testd build]# podman login docker.io
Username: pnispero
Password:
Login Succeeded!
[root@rocky9-testd build]# podman push docker.io/pnispero/rocky9-env:podman
Getting image source signatures
Copying blob 7c554e5c0228 done |
Copying blob 9e3fa8fc4839 done |
Copying blob 22514acd460a done |
Copying blob d3c9bab34657 done |
Copying blob e489bb4f45f2 done |
Copying blob 446f83f14b23 skipped: already exists
Copying blob 9142ea245948 done |
Copying blob a9ebe5aa7e2b done |
Copying blob c776803672c2 done |
Copying blob f2f869ceb9a5 done |
Copying blob 7f312795052b done |
Copying config 6dea88dccb done |
Writing manifest to image destination
Since this is confirmed to work, we can have the build script generate the Dockerfile, send it over to the artifact storage, then start another container on ad-build that is root/privileged so it can build the image from the Dockerfile and push it to the registry.
Update: Found a way to use podman to build an image WITHOUT the root user or privileged mode. See podman-test.yaml
Possible workflow: buildscript generates the Dockerfile → API request to artifact storage to build → artifact storage starts a container to build the Dockerfile.
TODO: We can make the REST API ourselves (Django/Flask/FastAPI framework, and Swagger UI for docs?)
This artifact storage process/container should have the logic to build Dockerfile images and the components themselves. It'll be a middleman accepting client requests and starting up containers to do the work.
Then the artifact storage container can just return the filepath to copy the built components from.
Come up with the API definitions and what we need, then go over them with Jerry, and see if we should use Django or Flask.
Authenticate the REST API with an API key to pass to the build containers.
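The API-key check mentioned above might look like this minimal sketch. The header name `X-API-Key` and the function shape are assumptions, not a decided design.

```python
import hmac

# Sketch of an API-key check for the artifact storage REST API.
# Header name "X-API-Key" is an assumption; the real scheme is undecided.
def is_authorized(headers: dict, expected_key: str) -> bool:
    supplied = headers.get("X-API-Key", "")
    # compare_digest avoids leaking key content via timing differences
    return hmac.compare_digest(supplied, expected_key)
```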
- Resource: How to use Podman inside of Kubernetes | Enable Sysadmin (redhat.com)
How to run systemd in a container | Red Hat Developer
How to authorize the API service on the Kubernetes cluster
- Create a service account, a role binding, and a service account token secret. See BuildSystem/artifact_storage/api/artifact_service_account.yaml at main · ad-build-test/BuildSystem (github.com)
- Apply this manifest with
kubectl apply -f artifact_service_account.yaml
- Then look at the secret
kubectl describe -n artifact secret/myexample-sa-token
- Add that to the environment variable passed in to artifact_api_deployment.yaml: BuildSystem/artifact_storage/api/artifact_api_deployment.yaml at main · ad-build-test/BuildSystem (github.com)
- Apply this manifest with
kubectl apply -f artifact_api_deployment.yaml
- This starts the API deployment; it runs the script that adds the kube config at $HOME/.kube/config, then starts the API process
- done
How to test that the service is accessible to other build containers
- TODO: Apparently the artifact service needs to be in the same namespace as the build containers for them to access it; in that case, maybe put the build containers in the artifact namespace, or put the artifact API in the build containers' namespace
IMPORTANT
Jerry vacation 7-11 to 7-16
S3DF down 7-10 to 7-12 including the cluster
So come up with what work you're going to do on those days since the clusters are down
- documentation, diagrams, planning?
- mps prep?
- Look into ansible, then once done, maybe help out Lukas with the python conversions, at least until s3df is back up