• A shared filesystem can be bad because someone can make changes without you knowing
  • if we have one IOC that has 3 different versions
  • Are component dependencies for building against src code or used for integration testing?
  • How do we deal with dependencies?
    • 1) Use a Docker image with all the dependencies baked into it
    • 2) Create a container that has the repo download/install all dependencies (with CMake, maybe)
    • 3) Have a configuration file where the user declares which dependencies they have, and ad-build will create the Docker image for you.
  • Have base containers (Ubuntu, RHEL, Rocky) with basic compilation tools
    • then we need to layer the component dependencies on top of those
  • What is the right way to automate building the development image?
  • S3DF Apptainer containers are read-only; we can't install dependencies at runtime, so we need an image with all dependencies already baked in
  • We can use a package manager to install prebuilt components, which addresses the time concern of installing dependencies into the image
  • Let's update the schema to include the 'image' of the component
  • Have a run image (the component dependencies) built from the build image (base OS with basic compilation)
  • We should create the Dockerfile for components dynamically (see the sketch after this list)
    • Say we have an IOC that depends on Boost and EPICS; if we start with a vanilla dev container like rocky9, what information do we need to have?
    • Once the image is built dynamically, push it to a registry, and developers can use it on S3DF with Apptainer since it has all dependencies
    • Who will build the image dynamically? Building the image on Kubernetes isn't possible directly, so we may need an alternative like Buildah, or just use GitHub?
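    • A minimal sketch of that dynamic Dockerfile generation; the registry URL, base image names, Component class, and the install-prebuilt step are illustrative assumptions, not an existing ad-build API:

```python
"""Sketch: dynamically generate a Dockerfile for a component from its dependencies."""
from dataclasses import dataclass, field

# Hypothetical build images: base OS plus basic compilation tools.
BASE_IMAGES = {
    "rocky9": "registry.example.org/ad-build/rocky9-build:latest",
    "rhel8": "registry.example.org/ad-build/rhel8-build:latest",
}

@dataclass
class Component:
    name: str
    dependencies: dict = field(default_factory=dict)  # e.g. {"epics-base": "7.0.8"}

def render_dockerfile(component: Component, environment: str) -> str:
    """Two-stage Dockerfile: the build stage compiles, the run stage keeps only the result."""
    lines = [f"FROM {BASE_IMAGES[environment]} AS build"]
    for dep, version in component.dependencies.items():
        # Assumption: prebuilt dependencies are installable from a package registry.
        lines.append(f"RUN install-prebuilt {dep} {version}")
    lines += [
        f"COPY . /src/{component.name}",
        f"RUN cd /src/{component.name} && make install",  # assumes install prefix /opt/<name>
        "",
        f"FROM {BASE_IMAGES[environment]} AS run",
        f"COPY --from=build /opt/{component.name} /opt/{component.name}",
    ]
    return "\n".join(lines)

if __name__ == "__main__":
    my_ioc = Component("my_ioc", {"epics-base": "7.0.8", "boost": "1.83"})
    print(render_dockerfile(my_ioc, "rocky9"))
```

    • Once pushed to a registry, the run image could then be pulled on S3DF with e.g. `apptainer pull docker://<registry>/<image>`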
  • Example of a component:
  • my_ioc component - can target any environment (OS)
    • boost component
    • epics component
  • The artifacts that we need to produce after the build is finished:
    • an image with everything installed, one for each environment
    • it has bin, lib, etc. - everything, with all the dependencies you chose installed
    • with the goal of having IOCs run in containers
  • The build system should have prebuilt components
    • We want to end up with an installer package
  • Where do we want to automate?
  • If I manage the dependencies myself, then building on different base images (OSes) uses the same dependencies, independent of the build system
  • C's proposal: users should use make, CMake, or Gradle to build their component.
  • The database doesn't need to know about the components if we already have the bill of materials (BOM) of the components in the repo.
  • If something goes wrong, the developer will fix it in their sources; the build system is not responsible for the dependencies - we look at what users define in their source code
    • we want to avoid users tweaking the database each time
    • we can have users specify in the BOM (example sketched after this list):
      • OS environments
      • startup sequence
      • components and tags
    • Then, after it builds, copy the built directory to the build-system registry, and the build image will be used as the developer image - the same image for build, run, and development
    • The start script (start_build.py) will read the BOM that the user specifies and go from there
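    • A minimal sketch of such a BOM and of start_build.py reading it; the file layout, field names, and versions are assumptions for illustration:

```python
"""Sketch: start_build.py reading a user-supplied BOM (shown inline as YAML)."""
import yaml

EXAMPLE_BOM = """
component: my_ioc
environments:        # OS environments to build for
  - rocky9
  - rhel8
dependencies:        # components and their tags
  epics-base: R7.0.8
  boost: "1.83"
startup_sequence:
  - st.cmd
"""

def load_bom(text: str) -> dict:
    bom = yaml.safe_load(text)
    # Fail early if the user forgot a required field.
    for key in ("component", "environments", "dependencies"):
        if key not in bom:
            raise ValueError(f"BOM is missing required field: {key}")
    return bom

if __name__ == "__main__":
    bom = load_bom(EXAMPLE_BOM)
    print(f"Building {bom['component']} for: {', '.join(bom['environments'])}")
```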
  • For deployment
    • If a machine is down, then we get an error on a component deployment; also consider the case where component A depends on component B
    • We can use Ansible to deploy the run binary (sketch below)
    • Deployment is not a problem
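    • A minimal sketch of that Ansible deployment, shelling out to ansible-playbook; the playbook and inventory names are assumptions:

```python
"""Sketch: deploy a built run binary by invoking ansible-playbook."""
import subprocess

def deploy(component: str, version: str, inventory: str = "inventory.yml") -> None:
    subprocess.run(
        [
            "ansible-playbook",
            "-i", inventory,
            "deploy_component.yml",  # hypothetical playbook
            "--extra-vars", f"component={component} version={version}",
        ],
        check=True,  # raise if any host fails so the error is visible
    )

if __name__ == "__main__":
    deploy("my_ioc", "1.0.0")
```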
  • Claudio doesn't like shared components like EPICS because now he HAS to be on S3DF to develop
    • if we run in a container, we don't have to worry about being tied to S3DF
  • The shared filesystem is bad practice, but the developers are already used to it,
    • but we will convince Ernest and Greg to move away from the shared filesystem, have everything in the container, and MODERNIZE
  • For now we manage the dependencies in the source code - proof of concept
  • One problem we can run into: for my high-level app I need a specific version of Python EPICS, but 5 years from now we want everything to run in containers, so focus on that end goal
  • From the CLI perspective, how do we take what we build and deploy it?
  • For each build environment, we have a run environment as well
  • For the CLI's create-component command, we prompt the user for which environments to use (sketch below)
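  • A minimal sketch of that prompt; the supported-environment list is an assumption, and the answers would seed the component's BOM:

```python
"""Sketch: prompting for OS environments during CLI component creation."""
SUPPORTED_ENVIRONMENTS = ["rocky9", "rhel8", "ubuntu22"]

def prompt_environments() -> list:
    print("Select build/run environments (comma-separated):")
    for env in SUPPORTED_ENVIRONMENTS:
        print(f"  - {env}")
    chosen = [e.strip() for e in input("> ").split(",") if e.strip()]
    unknown = [e for e in chosen if e not in SUPPORTED_ENVIRONMENTS]
    if unknown:
        raise SystemExit(f"Unsupported environments: {', '.join(unknown)}")
    return chosen

if __name__ == "__main__":
    print(f"Selected environments: {prompt_environments()}")
```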
  • For each build, the backend will start multiple pods, one for each OS environment (see the sketch after the next bullet)
  • For each build we can use S3DF storage, and move the artifact to the desired location
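  • A minimal sketch of the backend launching one build job per OS environment with the Kubernetes Python client; the namespace, image names, and start_build.py command are assumptions:

```python
"""Sketch: start one Kubernetes Job per OS environment for a component build."""
from kubernetes import client, config

def launch_build_jobs(component: str, environments: list) -> None:
    config.load_incluster_config()  # the backend itself runs inside the cluster
    batch = client.BatchV1Api()
    for env in environments:
        job = client.V1Job(
            metadata=client.V1ObjectMeta(name=f"build-{component}-{env}"),
            spec=client.V1JobSpec(
                template=client.V1PodTemplateSpec(
                    spec=client.V1PodSpec(
                        restart_policy="Never",
                        containers=[
                            client.V1Container(
                                name="build",
                                image=f"registry.example.org/ad-build/{env}-build:latest",
                                command=["python3", "start_build.py", "--component", component],
                            )
                        ],
                    )
                )
            ),
        )
        batch.create_namespaced_job(namespace="ad-build", body=job)

if __name__ == "__main__":
    launch_build_jobs("my_ioc", ["rocky9", "rhel8"])
```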
  • One pod for build, and one pod to take the artifact to the registry
    • So where are the build results going to go?
    • We can have a conventional/standard directory for build results (tarballs) to go into, and the backend starts a pod that knows to look in there
    • start_build.py should run the build and unit tests, then copy the results over to the build-results registry (sketch below)
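    • A minimal sketch of those start_build.py steps, assuming a /build_results convention directory, make as the example build tool, and an install/ output directory:

```python
"""Sketch: build, run unit tests, and drop a tarball where the artifact pod looks."""
import subprocess
import tarfile
from pathlib import Path

RESULTS_DIR = Path("/build_results")  # assumed convention directory

def build_and_test(src_dir: Path) -> None:
    # Users build with their own tool (make/CMake/Gradle); make is just the example here.
    subprocess.run(["make", "-C", str(src_dir)], check=True)
    subprocess.run(["make", "-C", str(src_dir), "test"], check=True)

def publish(src_dir: Path, component: str, environment: str) -> Path:
    RESULTS_DIR.mkdir(parents=True, exist_ok=True)
    tarball = RESULTS_DIR / f"{component}-{environment}.tar.gz"
    with tarfile.open(tarball, "w:gz") as tar:
        tar.add(src_dir / "install", arcname=component)  # assumed install output directory
    return tarball

if __name__ == "__main__":
    src = Path("/src/my_ioc")
    build_and_test(src)
    print(f"Build results written to {publish(src, 'my_ioc', 'rocky9')}")
```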
  • TODO: Work on a test project like my_ioc, with a BOM. Claudio can use this to test
  • Claudio starts the containers using the backend; the rest is our work in start_build.py, and Claudio will provide a REST API for start_build.py to log back to the database (sketch below)
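  • A minimal sketch of start_build.py reporting status over that REST API; the base URL, endpoint path, and payload fields are placeholders until the actual API is defined:

```python
"""Sketch: log build status back to the backend/database over REST."""
import requests

BACKEND_URL = "https://ad-build.example.org/api"  # placeholder

def report_status(build_id: str, status: str, log_url: str = "") -> None:
    resp = requests.post(
        f"{BACKEND_URL}/builds/{build_id}/status",
        json={"status": status, "log_url": log_url},
        timeout=10,
    )
    resp.raise_for_status()  # make backend errors visible in the build job

if __name__ == "__main__":
    report_status("12345", "success")
```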
  • Authorization for external projects
    • We should have the CLI use the gh CLI project to give read authorization to the backend - which is permanent until a user removes it
    • Not now, but when we get to it, the CLI will pass the installation ID to the backend; the user will run 'bs login'