Todo

The build process includes the following:

  1. CLI to initiate builds
  2. Build images for different operating systems, with baked-in build scripts, to be deployed as a container/pod in the ad-build cluster
  3. S3DF file space specifically for app/component repo builds: /sdf/group/ad/eed/ad-build/scratch/
  4. End-goal: 
    1. Preliminary:
      1. End-user creates a build system manifest for their repo (ex: test-ioc/configure/CONFIG.yaml at main · ad-build-test/test-ioc (github.com))
      2. End-user creates the correct RELEASE_SITE depending on the build type
    2. Have the user initiate the build using the CLI,
    3. the CLI calls the backend,
    4. the backend clones the app repo to ad-build/scratch/
    5. the backend starts the build containers (multiple if an app targets different operating systems) on the 'ad-build' Kubernetes cluster
    6. the build container parses the build manifest of the app repo
    7. the build container checks whether this is a regular or a container build (a sketch of this branching follows this list)
      1. if regular build - run the build script the repo provides and the user specified
      2. if container build - send requests to artifact storage to fetch the dependencies and copy them over to /build, THEN run the build script the repo provides and the user specified
    8. Once the build succeeds, report to the user, then place the build results (currently this just means copying the entire build tree) into artifact storage for test and deployment use.
    9. if container build - send a request to artifact storage to build an image of the app with all its dependencies baked in; this image can be used for dev/test/deployment.
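The following is a minimal sketch of the build-container branching described above, assuming the manifest is YAML with a user-provided 'build' field naming a script and that an ADBS_BUILD_TYPE environment variable distinguishes the two cases; the dependency-fetching helper is a placeholder, not an existing API.

import os
import subprocess
import yaml  # PyYAML


def fetch_dependencies_to(dest: str, manifest: dict) -> None:
    # Placeholder: in the real flow this would request each dependency from
    # artifact storage and copy it under `dest` (e.g. /build).
    pass


def run_build(repo_dir: str) -> None:
    # Parse the build manifest committed in the app repo.
    with open(os.path.join(repo_dir, "configure", "CONFIG.yaml")) as f:
        manifest = yaml.safe_load(f)

    # Container builds fetch dependencies into /build first; regular builds skip this.
    if os.environ.get("ADBS_BUILD_TYPE", "container") == "container":
        fetch_dependencies_to("/build", manifest)

    # Both build types end by running the build script the user specified.
    subprocess.run([os.path.join(repo_dir, manifest["build"])], check=True, cwd=repo_dir)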
  5. Current status:
    1. We have the container build mostly done; we don't have a regular build process yet.
    2. TODO: We need a regular build process because moving apps to containers will take a long time (apps will be integrated into containers gradually). So we need a way to build locally (ex: a user runs 'bs build -local' and the CLI just runs the script specified in the manifest). Then we need to figure out how we are going to test and deploy these 'regular' builds. Things to consider:
      1. How are the dependencies going to be accessed? The same as lcls-dev3, where dependencies sit scattered across the filesystem (like $EPICS_MODULES, $PACKAGE_TOP, $TOOLS)? Or, like the container builds, do we want artifact storage to hold all the dependencies and just install them into an image?
      2. If building locally, how is this any different/better than a user just building it themselves?
      3. How are the build results going to be put in the artifact storage? Should they be put in artifact storage? 
      4. How are we going to test built apps? Where will we get the build artifacts?
      5. For deployment, do we keep it the way it is?
  6. Detailed game plan for the build system (a lot of this logic already exists, but it is specified here to CLEARLY lay out what EXACTLY is needed):
    1. 3 options:
      1. DONE - build locally (for now we can assume the user has an environment with all dependencies; we may need to consider coordinated changes)
        1. have a 'bs build' 
          1. parse the manifest and run the 'build' field provided by the user.
        2. the initial tests should be run after the 'bs build', but have an option for the user to opt out of the tests
          1. we would want the test app to be deployed "next to" the official version, under username-branch names
          2. the user will need to specify their preferred test location,
          3. a test location is (host name, filepath to install the app: /opt/<sioc>/user-branch/) and there can be multiple entries (maybe multiple locations in prod / dev) - these will be entries in the deployment database (which will hold both official and test locations).
          4. have a 'bs test'
            1. Option 1 (deploy and test) - This will DEPLOY the build results to the test location specified by the user, and THEN RUN TESTS
            2. Option 2 (test only) - This will use the CURRENT existing deployment and then run tests
            3. To run tests, the CLI will try to find a UNIVERSAL 'run_tests.sh/py' in the test directories (initial_tests/, integration_tests/). (This is not set in stone; open to other options.)
        3. We might not need local build results to be placed in artifact storage
        4. Possibly we can have the user decide whether they want to store their build results in artifact storage in their own 'personal' space. (A sketch of the local build/test flow follows this option.)
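Below is a sketch of the local 'bs build' / 'bs test' flow from option 1. The manifest 'build' field, the test-location shape, and the run_tests.sh/py convention come from the notes above, but the function names and the deploy step are illustrative placeholders, not an existing implementation.

import os
import subprocess

TEST_DIRS = ["initial_tests", "integration_tests"]


def bs_build(repo_dir: str, manifest: dict) -> None:
    # Run the user-provided 'build' field from the manifest.
    subprocess.run([os.path.join(repo_dir, manifest["build"])], check=True, cwd=repo_dir)


def deploy_to(repo_dir: str, test_location: dict) -> None:
    # Placeholder: copy the build results to
    # test_location["host"]:test_location["path"] (e.g. /opt/<sioc>/user-branch/).
    pass


def bs_test(repo_dir: str, test_location: dict | None = None, deploy: bool = True) -> None:
    if deploy and test_location:
        # Option 1 (deploy and test): deploy first, then run the tests.
        deploy_to(repo_dir, test_location)
    # Option 2 (test only) skips the deploy and tests the current deployment.
    for test_dir in TEST_DIRS:
        for script in ("run_tests.sh", "run_tests.py"):
            candidate = os.path.join(repo_dir, test_dir, script)
            if os.path.exists(candidate):
                # Assumes the test scripts are executable.
                subprocess.run([candidate], check=True, cwd=os.path.join(repo_dir, test_dir))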
      2. develop and build in a container (a build environment container like ROCKY9, RHEL8)
        1. have a 'bs build'
          1. parse the manifest and run the 'build' field provided by the user.
        2. we may want these development environments to be full-featured development environments (although this may be resource-heavy; if so, we may use VMs instead)
      3. build remotely with no user interaction (like a nightly build)
        1. PATRICK right now -
          1. edit start_build.py to take an additional environment variable (ADBS_BUILD_TYPE) and test with your own Rocky9 dev containers first (you would have to use your local PC since it has Docker for now, but be sure to coordinate changes between the local PC and S3DF). If it is a NORMAL REMOTE BUILD, skip all the container pieces in the build_script. Once we get this to run, the building step is done for now. Ask Claudio to add that env variable as well.
        2. have a 'bs build --remote'
          1. parse the manifest and run the 'build' field provided by the user (this is the build script, run in the container)
        3. Note - this bs build remote is ESSENTIALLY A NORMAL BUILD but is done entirely through the Build System.
        4. Note - Claudio had mentioned building a container based on the build results for use in test/deployment (this is further down the line; we can test it with containerized archivers first)
          1. a lot of this logic is already in place at the moment, but it will be given lower priority for now
        5. Note - it is not clear how we would do coordinated development (i.e. developing 2 components at once)
          1. the issue is where to put the build results for one component so that the other component can see them (ex: a modified EPICS and a modified IOC)
        6. Process - At the moment we have a dedicated filepath (one volume mount into the container), where container path /mnt is currently mapped to S3DF path /sdf/group/ad/eed/ad-build/, holding all the dependencies a remote container needs to build your app.
        7. BUT WE SHOULD DO THIS INSTEAD
          1. Where the build occurs / the component is checked out - /sdf/group/ad/eed/ad-build/scratch/66aac28f08804932f1aec88d-RHEL8-test-ioc-dev-patrick
          2. the component is checked out into /sdf/group/ad/eed/ad-build/scratch/66aac28f08804932f1aec88d-RHEL8-test-ioc-dev-patrick/<component>
          3. Dependency1 is installed into /sdf/group/ad/eed/ad-build/scratch/66aac28f08804932f1aec88d-RHEL8-test-ioc-dev-patrick/<dependency1>, and the appropriate env variables are set to point to it (the ones in RELEASE_SITE_REMOTE, specifically $EPICS_SITE_TOP; the other variables are defined in terms of $EPICS_SITE_TOP)
          4. process -
            1. the build container asks the artifact API for an installer package of the dependency (only ONE mount path for the build container)
            2. the artifact API (only ONE mount path for the artifact API) checks whether it is available in artifact storage, otherwise clones and builds it on demand, then returns the installer package to the build container
            3. the build container then installs the package to /mnt/<dependency>, and sets the environment variable(s) in RELEASE_SITE to point to /mnt/<dependency> (ex: /mnt/epics)
            4. the component is checked out into /mnt/<component> (ex: /mnt/test-ioc)
            5. cd into /mnt/<component> and do the equivalent of 'bs build'
          5. Having only one mount per container is ideal; we don't want containers having access to more than what they need. (A sketch of this dependency-install flow follows this option.)
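The following sketches the one-mount dependency-install flow above. The artifact API endpoint, the installer-package format (a tarball), and the mapping of RELEASE_SITE variables to dependencies are assumptions for illustration only.

import os
import subprocess
import tarfile
import urllib.request

MOUNT = "/mnt"  # the single volume mount, mapped to the per-build scratch directory
ARTIFACT_API = "http://artifact-api.example/installer"  # hypothetical endpoint


def install_dependency(name: str, version: str) -> str:
    # 1. Ask the artifact API for an installer package of the dependency; the
    #    API builds it on demand if it is not already in artifact storage.
    pkg_path = os.path.join(MOUNT, f"{name}-{version}.tar.gz")
    urllib.request.urlretrieve(f"{ARTIFACT_API}/{name}/{version}", pkg_path)
    # 2. Install the package to /mnt/<dependency>.
    dest = os.path.join(MOUNT, name)
    with tarfile.open(pkg_path) as tar:
        tar.extractall(dest)
    return dest


def remote_build(component_repo_url: str, component: str, deps: dict) -> None:
    # deps maps a RELEASE_SITE_REMOTE variable to (dependency, version),
    # e.g. {"EPICS_SITE_TOP": ("epics", "7.0.8")} -- illustrative only.
    for env_var, (name, version) in deps.items():
        os.environ[env_var] = install_dependency(name, version)
    # 3. Check out the component into /mnt/<component> and do the equivalent of 'bs build'.
    comp_dir = os.path.join(MOUNT, component)
    subprocess.run(["git", "clone", component_repo_url, comp_dir], check=True)
    subprocess.run(["bs", "build"], check=True, cwd=comp_dir)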
    2. Build results
      1. TODO: need to figure out exactly what contents we need to 'install' an app so that it can run (one possible packaging approach is sketched after this list),
        1. for example, IOCs need more than just the binaries; they need the scripts, the right links, etc.
        2. modules like EPICS itself (what are the actual build results?), and how can we package those into an installer package
          1. symlinks and env variables as well; how are these handled?
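As one illustration of the open question above, an installer package could be a tarball that preserves symlinks plus a small metadata file recording the env variables the app expects. This is only a sketch of one possible format, not a decided design; the file names are made up.

import json
import tarfile


def make_installer_package(build_dir: str, out_path: str, env_vars: dict) -> None:
    # tarfile stores symlinks as symlinks by default, so links inside the
    # build tree survive packaging and unpacking.
    with tarfile.open(out_path, "w:gz") as tar:
        tar.add(build_dir, arcname=".")
    # Record the env variables the app expects (ex: EPICS_SITE_TOP) so the
    # installer can set them relative to wherever the package is unpacked.
    with open(out_path + ".env.json", "w") as f:
        json.dump(env_vars, f, indent=2)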

Types of Builds

Note - Since it will take some time for all apps/components to move to containers, we must have a build strategy for regular builds

Regular (Adapting the current build process to the Build System)

  1. Currently there is no strategy for this. I think that for IOCs a user wants to build but isn't ready to run in a container, the user should have the option to build them like they usually do.
    1. If this is the case, we can have 2 separate RELEASE_SITE files, one for a regular build, and one for a container build.
      1. Where the regular build RELEASE_SITE is the same as usual (assumes all dependencies exist on the filesystem),
      2. And where the container build RELEASE_SITE points all dependencies to /build (a sketch of selecting between the two follows this list)
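A minimal sketch of selecting between the two RELEASE_SITE files, keyed off the ADBS_BUILD_TYPE variable mentioned in the game plan above; the variant file names used here are hypothetical.

import os
import shutil


def select_release_site(repo_dir: str) -> None:
    build_type = os.environ.get("ADBS_BUILD_TYPE", "normal")
    variant = {
        "normal": "RELEASE_SITE.regular",       # dependencies on the filesystem as usual
        "container": "RELEASE_SITE.container",  # all dependencies point to /build
    }[build_type]
    # Put the chosen variant in place as configure/RELEASE_SITE before building.
    shutil.copyfile(
        os.path.join(repo_dir, "configure", variant),
        os.path.join(repo_dir, "configure", "RELEASE_SITE"),
    )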

Container (The end-goal for most apps)
