Debugging: trying to see if we can get EPICS installed in a container.
Installation on Linux / MacOS — EPICS Documentation documentation (epics-controls.org)
- You need /lib, /include, and /bin.
- 1) Keep everything in the epics/ folder: copy the folder over, then update $LD_LIBRARY_PATH (lib), $PATH (bin), and $CPATH (include)?
- How to build EPICS:
git clone --recursive https://github.com/epics-base/epics-base.git
- make
- cp -r /mnt/eed/ad-build/registry/epics-base/R7.0.8/epics-base/ /build/
# If this works, add to .bashrc, because this will be the dev image
export EPICS_BASE=/build/epics-base
export EPICS_HOST_ARCH=$(${EPICS_BASE}/startup/EpicsHostArch)
export PATH=${EPICS_BASE}/bin/${EPICS_HOST_ARCH}:${PATH}
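Since the note above says this will end up in the dev image, the same environment could be baked into the image at build time rather than via .bashrc. A sketch (the /build/epics-base path matches the copy step above; linux-x86_64 is an assumption, since a Dockerfile ENV line can't execute EpicsHostArch):

```dockerfile
# Sketch only: bake the EPICS environment into the dev image.
# /build/epics-base matches the cp destination above; linux-x86_64 is
# an assumption -- ENV can't run EpicsHostArch, so the arch is hard-coded.
ENV EPICS_BASE=/build/epics-base
ENV EPICS_HOST_ARCH=linux-x86_64
ENV PATH=${EPICS_BASE}/bin/${EPICS_HOST_ARCH}:${PATH}
```

Hard-coding the arch trades EpicsHostArch's portability for a static image, which seems acceptable for a single-arch (rocky9/x86_64) dev image.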
Works for binaries; now need to try building a simple IOC with it.
mkdir -p /build/testIoc
cd /build/testIoc
makeBaseApp.pl -t example testIoc
makeBaseApp.pl -i -t example testIoc
make
cd iocBoot/ioctestIoc
chmod u+x st.cmd
./st.cmd
Building works, and we can run the IOC.
[root@ad-build-container-rocky9-544f5787dc-gl4sq ioctestIoc]# pwd
/build/testIoc/iocBoot/ioctestIoc
[root@ad-build-container-rocky9-544f5787dc-gl4sq ioctestIoc]# ls
Makefile README envPaths st.cmd
[root@ad-build-container-rocky9-544f5787dc-gl4sq ioctestIoc]# ./st.cmd
#!../../bin/linux-x86_64/testIoc
< envPaths
epicsEnvSet("IOC","ioctestIoc")
epicsEnvSet("TOP","/build/testIoc")
epicsEnvSet("EPICS_BASE","/build/epics-base")
cd "/build/testIoc"
## Register all support components
dbLoadDatabase "dbd/testIoc.dbd"
testIoc_registerRecordDeviceDriver pdbbase
## Load record instances
dbLoadTemplate "db/user.substitutions"
dbLoadRecords "db/testIocVersion.db", "user=root"
dbLoadRecords "db/dbSubExample.db", "user=root"
cd "/build/testIoc/iocBoot/ioctestIoc"
iocInit
Starting iocInit
############################################################################
## EPICS R7.0.8
## Rev. R7.0.8
## Rev. Date Git: 2023-12-14 16:42:10 -0600
############################################################################
cas WARNING: Configured TCP port was unavailable.
cas WARNING: Using dynamically assigned TCP port 37587,
cas WARNING: but now two or more servers share the same UDP port.
cas WARNING: Depending on your IP kernel this server may not be
cas WARNING: reachable with UDP unicast (a host's IP in EPICS_CA_ADDR_LIST)
iocRun: All initialization complete
## Start any sequence programs
#seq sncExample, "user=root"
epics> ^C
[root@ad-build-container-rocky9-544f5787dc-gl4sq ioctestIoc]#
- Now need to try asyn, since that needs the 'include' headers from EPICS. After this we can conclude EPICS can be built/run in a container, although it's still unclear whether it can access S3DF PVs.
- For the registry: git clone --depth 1 --branch R4-39 https://github.com/epics-modules/asyn.git
Steps to build asyn (ignore; it should be prebuilt in the registry already):
mkdir /build/support/
cp -r /mnt/eed/ad-build/registry/asyn/R4.39-1.0.1/asyn/ /build/support/
export LD_LIBRARY_PATH=/build/epics-base/lib/linux-x86_64:${LD_LIBRARY_PATH}
# Update asyn/configure/RELEASE
# Change EPICS_BASE to point to your epics base =/build/epics-base
# Change SUPPORT to point to your support folder = /build/support
# Comment out IPAC and SNCSEQ
# If you get an xdr.* missing error, uncomment TIRPC in configure/CONFIG_SITE
dnf --enablerepo=crb install -y libtirpc-devel
# The -devel (developer) package is important; it provides the header files in /usr/include/rpc
yum install -y rpcgen
cd asyn/
make
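The configure/RELEASE edits above could be scripted rather than done by hand, which would help later when converting these steps into a config.yaml. A sketch, assuming GNU sed; the RELEASE body below is a minimal stand-in, not asyn's actual file:

```shell
# Sketch: script the configure/RELEASE edits described above instead of
# editing by hand. The RELEASE contents here are a made-up stand-in;
# the /build paths match the ones used in these notes.
release=$(mktemp)
cat > "$release" <<'EOF'
SUPPORT=/usr/local/epics/support
IPAC=$(SUPPORT)/ipac-2-14
SNCSEQ=$(SUPPORT)/seq-2-2-9
EPICS_BASE=/usr/local/epics/base
EOF

sed -i \
  -e 's|^SUPPORT=.*|SUPPORT=/build/support|' \
  -e 's|^EPICS_BASE=.*|EPICS_BASE=/build/epics-base|' \
  -e 's|^IPAC=|#IPAC=|' \
  -e 's|^SNCSEQ=|#SNCSEQ=|' \
  "$release"

cat "$release"
```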
- Now you have the libraries and binaries needed, in asyn/lib.
- Try a test IOC with asyn using your Guardian 'example' branch.
Steps to build Guardian ioc
cd /build
git clone -b example https://github.com/pnispero/Guardian.git
in configure/RELEASE
Update EPICS_BASE to point to /build/epics-base
Update ASYN to point to /build/asyn
make
cd iocBoot/iocGuardianTest/
in st.cmd
Update the shebang to point to #!../../bin/linux-x86_64/Guardian
chmod +x st.cmd
./st.cmd
Optional:
# Then in the shell you can 'dbl' to get pvs
# And in another terminal write to 'device' pv.
$ caput FBCK:FB02:GN01:S2DES_TEST 5
# Trigger the 'trigger' pv.
$ caput SIOC:B34:GD_PATRICK:SNAPSHOT_TRG_EN 1
# The code will then call takeSnapshot(), and your output should be 5
Proof it works:
[root@ad-build-container-rocky9-544f5787dc-gl4sq iocBoot]# cd iocGuardianTest/
[root@ad-build-container-rocky9-544f5787dc-gl4sq iocGuardianTest]# ls
Makefile envPaths st.cmd
[root@ad-build-container-rocky9-544f5787dc-gl4sq iocGuardianTest]# vim st.cmd
[root@ad-build-container-rocky9-544f5787dc-gl4sq iocGuardianTest]# vim st.cmd
[root@ad-build-container-rocky9-544f5787dc-gl4sq iocGuardianTest]# ls
Makefile envPaths st.cmd
[root@ad-build-container-rocky9-544f5787dc-gl4sq iocGuardianTest]# vim st.cmd
[root@ad-build-container-rocky9-544f5787dc-gl4sq iocGuardianTest]# chmod +x st.cmd
[root@ad-build-container-rocky9-544f5787dc-gl4sq iocGuardianTest]# ./st.cmd
#!../../bin/linux-x86_64/Guardian
< envPaths
epicsEnvSet("IOC","iocGuardianTest")
epicsEnvSet("TOP","/build/Guardian")
epicsEnvSet("ASYN","/build/support/asyn")
epicsEnvSet("EPICS_BASE","/build/epics-base")
cd "/build/Guardian"
## Register all support components
dbLoadDatabase "dbd/Guardian.dbd"
Guardian_registerRecordDeviceDriver pdbbase
## Load record instances
#dbLoadRecords("db/xxx.db","user=GUARDIAN")
dbLoadRecords("db/test.db") # PATRICK TODO: Temp here for testing
dbLoadRecords("db/guardian_snapshot.db", "BASE=SIOC:B34:GD_PATRICK") # PATRICK TODO: Temp add patrick so its unique
dbLoadRecords("db/guardian_device_data.db", "BASE=SIOC:B34:GD_PATRICK") # PATRICK TODO: Temp add patrick so its unique
## Configure Guardian driver
# GuardianDriverConfigure(
# Port Name, # Name given to this port driver
GuardianDriverConfigure("GUARDIAN")
cd "/build/Guardian/iocBoot/iocGuardianTest"
iocInit
Starting iocInit
############################################################################
## EPICS R7.0.8
## Rev. R7.0.8
## Rev. Date Git: 2023-12-14 16:42:10 -0600
############################################################################
iocRun: All initialization complete
## Start any sequence programs
#seq sncxxx,"user=GUARDIAN"
epics> status: 0
curVal after get : 1
status: 0
curVal after get : 1
status: 0
curVal after set : 1
status: 0
curVal after get : 1
epics> dbl
SIOC:B34:GD_PATRICK:FEL_PULSE_E
FBCK:FB02:GN01:S2DES_STORED_RBV
FBCK:FB02:GN01:S2DES_TEST
FBCK:FB02:GN01:S2DES_STORED
SIOC:B34:GD_PATRICK:SNAPSHOT_TRG_RBV
SIOC:B34:GD_PATRICK:SNAPSHOT_TRG_EN
epics> sstriggerval: 0
sstriggerval: 0
sstriggerval: 0
sstriggerval: 1
1: 0
curVal after get : 5
2: 0
Successfully triggered and resetted
- See if we can make our config.yaml (for the steps above) for epics/asyn on the registry, so we can get start_build.py to build the IOC automatically. This way we can show this to Ernest and see if we can get everything, including EPICS, in a container.
- Get the prebuilt one from AFS, upload it to the ad-build-test GitHub, then download it in the registry and use that.
- Get test-ioc to have the basic IOC in there.
- Try to keep EPICS the same as much as we can; maybe just change environment variables to point to EPICS on /build or the standard directory, like the $EPICS variable.
- steps to get a test-ioc with modules going:
Convert the steps below to a config.yaml for test-ioc, then see if we can get it to build with start_build.py
cp -r /mnt/eed/ad-build/registry/epics-base/R7.0.8/epics-base/ /build/
# If this works add to .bashrc because this'll be dev image
export EPICS_BASE=/build/epics-base
export EPICS_HOST_ARCH=$(${EPICS_BASE}/startup/EpicsHostArch)
export PATH=${EPICS_BASE}/bin/${EPICS_HOST_ARCH}:${PATH}
cp -r /mnt/eed/ad-build/registry/asyn/R4.39-1.0.1/asyn/ /build/
# Testing ioc
cd /build
git clone -b dev-patrick https://github.com/ad-build-test/test-ioc.git
cd Guardian/
# in configure/RELEASE
# Update EPICS_BASE to point to /build/epics-base
# Update ASYN to point to /build/asyn
# Update configure/CONFIG_SITE CHECK_RELEASE to WARN (temporary fix - since asyn on registry points to epics on registry)
make
cd iocBoot/iocGuardianTest/
# in st.cmd
# Update the shebang to point to #!../../bin/linux-x86_64/Guardian
chmod +x st.cmd
./st.cmd
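The steps above might translate into a config.yaml along these lines. This is purely a sketch: the schema start_build.py actually consumes isn't specified in these notes, so every key name here is an assumption:

```yaml
# Hypothetical config.yaml for test-ioc -- the schema is a guess, not
# what start_build.py actually consumes. Versions/paths come from the
# steps in these notes.
name: test-ioc
environments:
  - rocky9
dependencies:
  - name: epics-base
    version: R7.0.8
    copy_to: /build/epics-base
  - name: asyn
    version: R4.39-1.0.1
    copy_to: /build/asyn
build:
  release_overrides:
    EPICS_BASE: /build/epics-base
    ASYN: /build/asyn
  config_site:
    CHECK_RELEASE: WARN   # temporary fix, per the note above
  commands:
    - make
```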
- Caveat: for now we are copying entire component dependencies, which is fine and allows minimal changes to the existing IOC build structure. But this can be a problem if each component has various environments built (this takes up space in the container's /build dir); we may want to separate them into their own folders.
- One problem: how will the IOCs in the container be accessed by any other server outside?
- Because on lcls-dev3 or prod, every IOC's host is the server itself, while a container's host is the container itself, not the server the container is running on.
![](/download/thumbnails/472711624/image-2024-6-17_16-43-42.png?version=1&modificationDate=1718667822189&api=v2)
![](/download/thumbnails/472711624/image-2024-6-17_16-44-10.png?version=1&modificationDate=1718667850930&api=v2)
![](/download/thumbnails/472711624/image-2024-6-17_16-44-41.png?version=1&modificationDate=1718667881649&api=v2)
- Potential solution - TODO: this may not be a problem, since we are not deploying on Kubernetes clusters; we deploy the image with all the dependencies, but it will still be deployed on a production server. We could just use Docker on prod (may want to test running EPICS using just Docker, to see whether it takes on the host of the server machine or still the container's).
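One way to probe the host-vs-container question is to run the IOC container with host networking and point a client's Channel Access settings at the server. A sketch; the IP is a placeholder and the podman invocation is illustrative, not something tested in these notes:

```shell
# On the prod server: run the IOC container sharing the host's network
# namespace, so Channel Access binds to the server's IP.
# (Image name and command are placeholders.)
#   podman run --network=host my-ioc-image ./st.cmd
#
# On a client machine: point CA directly at that server.
export EPICS_CA_AUTO_ADDR_LIST=NO
export EPICS_CA_ADDR_LIST="134.79.0.10"   # placeholder: prod server IP
echo "CA will search: $EPICS_CA_ADDR_LIST"
```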
- TODO: work on artifact storage, the testing phase (unit tests), and the onboarding page. Blocked on discussing the deployment side with Claudio/Jerry. But once we get artifact storage going, we can start prototyping the deployment.
- Also trying to see if we can run podman in a container, because that may have more hope of building images within an image than docker. Try kubectl exec -it <privileged podman pod> -- sh, then try to build an image.
For this to work, you need an image with podman installed, you need to be the root user, and the security context needs privileged: true.
[root@rocky9-testd /]# cd build/
[root@rocky9-testd build]# ls
__pycache__ asyn epics-base start_build.py start_test.py
[root@rocky9-testd build]# vim Dockerfile
[root@rocky9-testd build]# podman build -t docker.io/pnispero/rocky9-env:podman -f Dockerfile .
Successfully tagged docker.io/pnispero/rocky9-env:podman
Successfully tagged localhost/pnispero/rocky9-env:podman
6dea88dccb6a6b4ff9116c7215a089f7c865613d4932fc03eeae4b25baad5996
[root@rocky9-testd build]# podman images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/pnispero/rocky9-env podman 6dea88dccb6a About a minute ago 984 MB
localhost/pnispero/rocky9-env podman 6dea88dccb6a About a minute ago 984 MB
# The following is needed for me to push on pnispero/ on dockerhub
[root@rocky9-testd build]# podman login docker.io
Username: pnispero
Password:
Login Succeeded!
[root@rocky9-testd build]# podman push docker.io/pnispero/rocky9-env:podman
Getting image source signatures
Copying blob 7c554e5c0228 done |
Copying blob 9e3fa8fc4839 done |
Copying blob 22514acd460a done |
Copying blob d3c9bab34657 done |
Copying blob e489bb4f45f2 done |
Copying blob 446f83f14b23 skipped: already exists
Copying blob 9142ea245948 done |
Copying blob a9ebe5aa7e2b done |
Copying blob c776803672c2 done |
Copying blob f2f869ceb9a5 done |
Copying blob 7f312795052b done |
Copying config 6dea88dccb done |
Writing manifest to image destination
Since we confirmed this works, we can have the buildscript generate the Dockerfile, send it over to the artifact storage, then start another container on ad-build that is root/privileged so it can build the image from the Dockerfile and push it to the registry.
Possible workflow: buildscript generates Dockerfile → API request to artifact storage to build → artifact storage starts a container to build the Dockerfile.
We can make the REST API ourselves (Django framework, and Swagger UI for docs?).
This artifact storage process/container should have logic to build Dockerfile images, and the components themselves. It'll be a middleman accepting client requests and starting up containers to do its work.
Then the artifact storage container can just return the filepath to copy the built components from.
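From the buildscript's side, the workflow above might reduce to a single API call. A sketch; the endpoint URL and JSON fields are invented for illustration only:

```shell
# Hypothetical request from the buildscript to the artifact-storage API,
# asking it to build an image from a generated Dockerfile. The endpoint
# and payload fields are made up for illustration.
payload='{"component": "test-ioc", "dockerfile": "Dockerfile.generated", "tag": "pnispero/rocky9-env:podman"}'
echo "$payload"
# curl -X POST https://artifact-storage.example/api/v1/image-build \
#      -H 'Content-Type: application/json' -d "$payload"
```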
- Resource: How to use Podman inside of Kubernetes | Enable Sysadmin (redhat.com)
How to run systemd in a container | Red Hat Developer
- Then you can start deployment of build system and deployment of apps themselves
- How would Channel Access work if EPICS is in a container? Can it be accessed from the main server?
- 2) Or try moving everything (/include (.h), /lib (.so/.a), and /bin (binaries)) into the corresponding system directories (this will save more space).
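Option 2 might look like the following. A sketch only: the copies are left commented and a staging directory stands in for the real filesystem root, since the exact destination layout hasn't been decided:

```shell
# Sketch of option 2: merge the EPICS artifacts into standard system
# directories instead of a self-contained /build/epics-base tree.
# A staging dir keeps this safe to run anywhere; the cp lines are the
# real intent and are left commented.
dest=$(mktemp -d)
mkdir -p "$dest/usr/local/include" "$dest/usr/local/lib" "$dest/usr/local/bin"
# cp -r /build/epics-base/include/.          "$dest/usr/local/include/"
# cp -r /build/epics-base/lib/linux-x86_64/. "$dest/usr/local/lib/"
# cp -r /build/epics-base/bin/linux-x86_64/. "$dest/usr/local/bin/"
ls "$dest/usr/local"
```

The upside is no env-variable changes ($LD_LIBRARY_PATH etc. already cover these paths); the downside is losing the clean per-component layout that makes the registry copies easy.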
Workflow from CLI (TODO: move to a different page)
- Add test-ioc repo to db
pnispero@PC100942:~/BuildSystem/bs_cli$ ./bs create repo -c test-ioc -u https://github.com/ad-build-test/test-ioc
[?] Specify organization name: ad-build-test
[?] Specify testing criteria: all
[?] Specify approval rule: all
[?] Specify component description: Test IOC used for BuildSystem testing
INFO-root:[create_commands.py:38 - repo() ] | {'linux_username': 'pnispero', 'github_username': 'test'}
INFO-root:[create_commands.py:39 - repo() ] | {'name': 'test-ioc', 'description': 'Test IOC used for BuildSystem testing', 'testingCriteria': 'all', 'approvalRule': 'all', 'organization': 'ad-build-test', 'url': 'https://github.com/ad-build-test/test-ioc'}
INFO-root:[create_commands.py:40 - repo() ] | 201
INFO-root:[create_commands.py:41 - repo() ] | {'errorCode': 0, 'payload': '66720253fd891a5aac14b3cf'}
INFO-root:[create_commands.py:42 - repo() ] | https://accel-webapp-dev.slac.stanford.edu/api/cbs/v1/component
INFO-root:[create_commands.py:43 - repo() ] | b'{"name": "test-ioc", "description": "Test IOC used for BuildSystem testing", "testingCriteria": "all", "approvalRule": "all", "organization": "ad-build-test", "url": "https://github.com/ad-build-test/test-ioc"}'
INFO-root:[create_commands.py:44 - repo() ] | {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive', 'linux_username': 'pnispero', 'github_username': 'test', 'Content-Length': '210', 'Content-Type': 'application/json'}
pnispero@PC100942:~/BuildSystem/bs_cli$
2. Add dev branch to db
pnispero@PC100942:~/test-ioc$ bs create branch -a
Checking current directory if a component...
[?] Specify what to branch from:
> branch
tag
commit
Specify name of branch: main
main
Specify name of branch: main
[?] Specify type of branch to create:
fix
feat
> dev
Specify name of issue number (or dev name): patrick
INFO-root:[create_commands.py:128 - branch() ] | 200
INFO-root:[create_commands.py:129 - branch() ] | {'errorCode': 0, 'payload': True}
INFO-root:[create_commands.py:130 - branch() ] | https://accel-webapp-dev.slac.stanford.edu/api/cbs/v1/component/test-ioc/branch
INFO-root:[create_commands.py:131 - branch() ] | b'{"type": "branch", "branchPoint": "main", "branchName": "dev-patrick"}'
INFO-root:[create_commands.py:132 - branch() ] | {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive', 'linux_username': 'pnispero', 'github_username': 'test', 'Content-Length': '70', 'Content-Type': 'application/json'}
pnispero@PC100942:~/test-ioc$
Start a build
pnispero@PC100942:~/test-ioc$ bs run build
Checking current directory if a component...
INFO-root:[run_commands.py:27 - build() ] | 201
INFO-root:[run_commands.py:28 - build() ] | {'errorCode': 0, 'payload': ['66721557fd891a5aac14b3d0']}
INFO-root:[run_commands.py:29 - build() ] | https://accel-webapp-dev.slac.stanford.edu/api/cbs/v1/build/component/test-ioc/branch/dev-patrick
INFO-root:[run_commands.py:30 - build() ] | b'{}'
INFO-root:[run_commands.py:31 - build() ] | {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive', 'linux_username': 'pnispero', 'github_username': 'test', 'Content-Length': '2', 'Content-Type': 'application/json'}
pnispero@PC100942:~/test-ioc$
Check build (didn't make a CLI for this yet)
pnispero@PC100942:~/BuildSystem$ curl -X 'GET' \
'https://accel-webapp-dev.slac.stanford.edu/api/cbs/v1/build/component/test-ioc/branch/dev-patrick' \
-H 'accept: application/json'
{"errorCode":0,"payload":[{"id":"66721557fd891a5aac14b3d0","buildOs":"ROCKY9","buildStatus":"PENDING"}]}
How I viewed the database
- Log into accel-webapp-dev, since the core build system is hosted there for now
- pnispero@PC100942:~/BuildSystem/other$ kubectl port-forward --namespace=core-build-system cbs-cluster-rs0-0 28015:27017
Forwarding from 127.0.0.1:28015 -> 27017
Forwarding from [::1]:28015 -> 27017
- Open a separate terminal
- mongosh --port 28015
- If this doesn't work, install mongosh by following this link: Install mongosh - MongoDB Shell
- Then enter this to see if it works (a healthy connection returns ok: 1):
db.runCommand( { ping: 1 } )