...

          1. Proof it works:

            Code Block
            languagebash
            linenumberstrue
            collapsetrue
            [root@ad-build-container-rocky9-544f5787dc-gl4sq iocBoot]# cd iocGuardianTest/
            [root@ad-build-container-rocky9-544f5787dc-gl4sq iocGuardianTest]# ls
            Makefile  envPaths  st.cmd
            [root@ad-build-container-rocky9-544f5787dc-gl4sq iocGuardianTest]# vim st.cmd
            [root@ad-build-container-rocky9-544f5787dc-gl4sq iocGuardianTest]# vim st.cmd
            [root@ad-build-container-rocky9-544f5787dc-gl4sq iocGuardianTest]# ls
            Makefile  envPaths  st.cmd
            [root@ad-build-container-rocky9-544f5787dc-gl4sq iocGuardianTest]# vim st.cmd
            [root@ad-build-container-rocky9-544f5787dc-gl4sq iocGuardianTest]# chmod +x st.cmd
            [root@ad-build-container-rocky9-544f5787dc-gl4sq iocGuardianTest]# ./st.cmd
            #!../../bin/linux-x86_64/Guardian
            < envPaths
            epicsEnvSet("IOC","iocGuardianTest")
            epicsEnvSet("TOP","/build/Guardian")
            epicsEnvSet("ASYN","/build/support/asyn")
            epicsEnvSet("EPICS_BASE","/build/epics-base")
            cd "/build/Guardian"
            ## Register all support components
            dbLoadDatabase "dbd/Guardian.dbd"
            Guardian_registerRecordDeviceDriver pdbbase
            ## Load record instances
            #dbLoadRecords("db/xxx.db","user=GUARDIAN")
            dbLoadRecords("db/test.db") # PATRICK TODO: Temp here for testing
            dbLoadRecords("db/guardian_snapshot.db", "BASE=SIOC:B34:GD_PATRICK") # PATRICK TODO: Temp add patrick so its unique
            dbLoadRecords("db/guardian_device_data.db", "BASE=SIOC:B34:GD_PATRICK") # PATRICK TODO: Temp add patrick so its unique
            ## Configure Guardian driver
            # GuardianDriverConfigure(
            #    Port Name,                 # Name given to this port driver
            GuardianDriverConfigure("GUARDIAN")
            cd "/build/Guardian/iocBoot/iocGuardianTest"
            iocInit
            Starting iocInit
            ############################################################################
            ## EPICS R7.0.8
            ## Rev. R7.0.8
            ## Rev. Date Git: 2023-12-14 16:42:10 -0600
            ############################################################################
            iocRun: All initialization complete
            ## Start any sequence programs
            #seq sncxxx,"user=GUARDIAN"
            epics> status: 0
            curVal after get : 1
            status: 0
            curVal after get : 1
            status: 0
            curVal after set : 1
            status: 0
            curVal after get : 1
            
            epics> dbl
            SIOC:B34:GD_PATRICK:FEL_PULSE_E
            FBCK:FB02:GN01:S2DES_STORED_RBV
            FBCK:FB02:GN01:S2DES_TEST
            FBCK:FB02:GN01:S2DES_STORED
            SIOC:B34:GD_PATRICK:SNAPSHOT_TRG_RBV
            SIOC:B34:GD_PATRICK:SNAPSHOT_TRG_EN
            epics> sstriggerval: 0
            sstriggerval: 0
            sstriggerval: 0
            sstriggerval: 1
            1: 0
            curVal after get : 5
            2: 0
            Successfully triggered and resetted
        1. See if we can make our config.yaml (of the steps above) for epics/asyn on the registry, so start_build.py can build the IOC automatically. This way we can show this to Ernest and see if we can get everything, including EPICS, in a container.
          1. Get the prebuilt one from AFS, upload it to the ad-build-test GitHub, then download it from the registry and use that.
          2. Get test-ioc to include the basic IOC.
          3. Try to keep EPICS as unchanged as possible; maybe just change environment variables (like $EPICS) to point to EPICS on /build or the std directory.
          4. Steps to get a test-ioc with modules going:
          5. Convert the steps below to a config.yaml for test-ioc, then see if we can get it to build with start_build.py (a hedged sketch of such a config follows the code block below).

            Code Block
            languagebash
            cp -r /mnt/eed/ad-build/registry/epics-base/R7.0.8/epics-base/ /build/
            # If this works add to .bashrc because this'll be dev image
            export EPICS_BASE=/build/epics-base
            export EPICS_HOST_ARCH=$(${EPICS_BASE}/startup/EpicsHostArch)
            export PATH=${EPICS_BASE}/bin/${EPICS_HOST_ARCH}:${PATH}
            cp -r /mnt/eed/ad-build/registry/asyn/R4.39-1.0.1/asyn/ /build/
            # Testing ioc
            cd /build 
            git clone -b dev-patrick https://github.com/ad-build-test/test-ioc.git
            cd Guardian/
            # in configure/RELEASE
            # Update EPICS_BASE to point to /build/epics-base
            # Update ASYN to point to /build/asyn
            # Update configure/CONFIG_SITE CHECK_RELEASE to WARN (temporary fix - since asyn on registry points to epics on registry)
            make
            cd iocBoot/iocGuardianTest/
            # in st.cmd
            # Update the shebang to point to #!../../bin/linux-x86_64/Guardian
            chmod +x st.cmd
            ./st.cmd
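
            A hedged sketch of what that config.yaml might look like, written as a bash heredoc so it can live in the same setup script. The schema (field names like dependencies, build_dir, targets) is an assumption, not the confirmed start_build.py format:

            Code Block
            languagebash
            # Hypothetical config.yaml for test-ioc; the schema below is a guess, not the real start_build.py format
            cat > config.yaml <<'EOF'
            name: test-ioc
            # dependencies to pull from the ad-build registry (versions from the steps above)
            dependencies:
              - name: epics-base
                version: R7.0.8
              - name: asyn
                version: R4.39-1.0.1
            # where registry components get copied inside the container
            build_dir: /build
            # the IOC application and its boot directory
            targets:
              - app: Guardian
                iocBoot: iocBoot/iocGuardianTest
            EOF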
          6. Caveat - for now we are copying entire component dependencies, which is fine and allows for minimal changes to the existing IOC build structure. But this can be a problem if each component has various environments built (this takes up space in the container /build dir); we may want to separate them into their own folders.
          7. One problem: how will the IOCs in the container be accessed by any other server outside?
            1. Because on lcls-dev3 or prod, every IOC's host is the server itself, while a container's host is the container itself, not the server the container is running on.
            2. Potential solution - TODO: this may not be a problem since we are not deploying on Kubernetes clusters; we deploy the image with all the dependencies, but it will still be deployed on a production server. We could just use Docker on prod (may want to test running EPICS using just Docker, to see whether it takes the host of the server machine or still the container; a sketch of that test is below).
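
              A minimal sketch of that Docker test, assuming the dev image name from the podman log below and the IOC paths from the steps above. With --network host the container shares the server's network namespace, so Channel Access traffic should appear to come from the server itself:

              Code Block
              languagebash
              # Run the IOC container on the prod server's own network stack (image name is an assumption)
              docker run -it --rm \
                  --network host \
                  docker.io/pnispero/rocky9-env:podman \
                  /build/Guardian/iocBoot/iocGuardianTest/st.cmd
              # If CA searches from another host reach this IOC at the server's IP, host networking solves the problem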
        2. TODO: work on artifact storage, the testing phase (unit tests), and the onboard page. Blocked on discussing the deployment side with Claudio/Jerry. But once we get artifact storage going, we can start prototyping the deployment.
          1. Also trying to see if we can run podman in a container, because that may have more hope of building images within an image than Docker. Try kubectl exec -it podman-priv -- sh, then try to build an image.
          2. For this to work, we need an image with podman installed, need to be the root user, and need the security context privileged: true (a sketch of such a pod spec follows).
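
            A minimal sketch of a pod spec meeting those three requirements, written as a bash heredoc; the pod and image names are assumptions (any image with podman installed would do):

            Code Block
            languagebash
            # Hypothetical privileged pod for in-cluster podman builds
            cat <<'EOF' | kubectl apply -f -
            apiVersion: v1
            kind: Pod
            metadata:
              name: podman-priv
            spec:
              containers:
                - name: podman
                  image: quay.io/podman/stable   # any image with podman installed
                  command: ["sleep", "infinity"]
                  securityContext:
                    privileged: true             # required to build images inside the pod
                    runAsUser: 0                 # root user
            EOF
            kubectl exec -it podman-priv -- sh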

            Code Block
            languagebash
            linenumberstrue
            collapsetrue
            [root@rocky9-testd /]# cd build/
            [root@rocky9-testd build]# ls
            __pycache__  asyn  epics-base  start_build.py  start_test.py
            [root@rocky9-testd build]# vim Dockerfile
            [root@rocky9-testd build]# podman build -t docker.io/pnispero/rocky9-env:podman -f Dockerfile .
            Successfully tagged docker.io/pnispero/rocky9-env:podman
            Successfully tagged localhost/pnispero/rocky9-env:podman
            6dea88dccb6a6b4ff9116c7215a089f7c865613d4932fc03eeae4b25baad5996
            [root@rocky9-testd build]# podman images
            REPOSITORY                     TAG         IMAGE ID      CREATED             SIZE
            docker.io/pnispero/rocky9-env  podman      6dea88dccb6a  About a minute ago  984 MB
            localhost/pnispero/rocky9-env  podman      6dea88dccb6a  About a minute ago  984 MB
            # The following is needed for me to push on pnispero/ on dockerhub
            [root@rocky9-testd build]# podman login docker.io
            Username: pnispero
            Password:
            Login Succeeded!
            [root@rocky9-testd build]# podman push docker.io/pnispero/rocky9-env:podman
            Getting image source signatures
            Copying blob 7c554e5c0228 done   |
            Copying blob 9e3fa8fc4839 done   |
            Copying blob 22514acd460a done   |
            Copying blob d3c9bab34657 done   |
            Copying blob e489bb4f45f2 done   |
            Copying blob 446f83f14b23 skipped: already exists
            Copying blob 9142ea245948 done   |
            Copying blob a9ebe5aa7e2b done   |
            Copying blob c776803672c2 done   |
            Copying blob f2f869ceb9a5 done   |
            Copying blob 7f312795052b done   |
            Copying config 6dea88dccb done   |
            Writing manifest to image destination

            Since we confirmed it worked, we can have the buildscript generate the Dockerfile, send it over to the artifact storage, then start another container on ad-build that is root/privileged so it can build the image from the Dockerfile and push it to the registry.
            Possible workflow: buildscript generates Dockerfile → API request to artifact storage to build → artifact storage starts a container to build the Dockerfile.
            TODO: we can make the REST API ourselves (Django framework, and Swagger UI for docs?).
            This artifact storage process/container should have logic to build Dockerfile images, and the components themselves. It'll be a middleman accepting client requests and starting up containers to do its work.
            Then the artifact storage container can just return the filepath to copy the built components from.
            Come up with API definitions and what we need, then go over them with Jerry, and decide whether we should use Django or Flask.
            Authenticate the REST API with an API key passed to the build containers (a hedged sketch of the request is below).
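
            A minimal sketch of that workflow from the buildscript side; the endpoint, header, and field names are all assumptions to be pinned down in the API definitions:

            Code Block
            languagebash
            # Hypothetical: the buildscript has already generated ./Dockerfile
            # POST it to the artifact-storage build endpoint, authenticated with an API key
            curl -X POST "https://ad-build-artifact-storage/api/builds" \
                -H "Authorization: Api-Key ${AD_BUILD_API_KEY}" \
                -F "component=test-ioc" \
                -F "dockerfile=@Dockerfile"
            # Assumed response: the filepath to copy the built component from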

            1. Resource: How to use Podman inside of Kubernetes | Enable Sysadmin (redhat.com)
              How to run systemd in a container | Red Hat Developer
          3. Then you can start deployment of the build system and deployment of the apps themselves.
      1. How would Channel Access work if EPICS is in a container? Can it be accessed from the main server? (A hedged client-side check is sketched below.)
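
        A minimal sketch of checking that from the main server (or any other host), assuming the container runs with host networking as in the Docker test above; the PV name is taken from the dbl output earlier:

        Code Block
        languagebash
        # Point CA searches at the server running the IOC container
        export EPICS_CA_AUTO_ADDR_LIST=NO
        export EPICS_CA_ADDR_LIST=<server-ip>       # placeholder: IP of the host running the container
        caget SIOC:B34:GD_PATRICK:SNAPSHOT_TRG_RBV  # PV from the dbl output above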
    1. 2) Or try moving everything (/include (.h), /lib (.so/.a), and /bin (binaries)) into the corresponding directories (this will save more space); a sketch is below.
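
      A hedged sketch of that consolidation, assuming the /build layout from the steps above and standard system prefixes; the destination paths are illustrative only:

      Code Block
      languagebash
      # Hypothetical: merge each component's headers, libs, and binaries into shared directories
      for comp in /build/epics-base /build/asyn; do
          cp -r ${comp}/include/. /usr/local/include/
          cp -r ${comp}/lib/${EPICS_HOST_ARCH}/. /usr/local/lib/
          cp -r ${comp}/bin/${EPICS_HOST_ARCH}/. /usr/local/bin/
      done
      # The per-component copies under /build could then be removed to save space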

...