...

LinuxRT is installed on our system using the Preboot Execution Environment (PXE) method of network booting.

We enable the PXE/network-booting method in the BIOS.

In order to use PXE, we need to set up a boot server that will allow our client system to:
(a) Request an IP address (via DHCP)
(b) Download a kernel (via TFTP)

With both of these services in place, any system that supports PXE/network booting
should be able to obtain an IP address, fetch a kernel, and boot without an installed operating system.

PXE uses three distinct network protocols, which map to three server processes, to perform the installation.
In our case, all three processes run on lcls-dev1 (the LCLSDEV daemon).

(a) Dynamic Host Configuration Protocol (DHCP)

PXE uses DHCP to deliver initial network configuration options to client nodes.
The DHCP server supplies the PXE boot plug-in with:
(i) an IP address
(ii) the TFTP server address
(iii) the name of the Stage 1 boot-loader image to download and execute.

As the supplied PXE installation environments are non-interactive and will unconditionally reinstall a client machine,
we associate each client's MAC address with a specific OS installation before starting the PXE boot.

The configuration information, in our case, in addition to the IP/MAC address, includes a hostname and a pointer to the Master Startup script in AFS for our IOC.
It also has an optional root-path variable pointing to the AFS area that hosts the boot image served via TFTP.
This can be overridden, as will be seen later.

When the Linux server is rebooted or power-cycled, PXE will attempt the network booting method first
and as a first step it will contact the DHCP server to retrieve the network configuration information.

Hence, every new linuxRT ioc (host) needs to be added to the DHCP server configuration file in AFS.

This file is in /afs/slac/service/dhcp-pxe/dhcpd.conf

The IP/MAC address of the primary ethernet that will fetch the linuxRT boot image is defined here.
To add a new host to the DHCP configuration, contact Thuy.

Here is an example, for ioc-b34-bd32:

host ioc-b34-bd32 {
    # SuperMicro (INTELx86)
    #
    hardware ethernet 00:25:90:D1:95:1E;
    fixed-address 134.79.218.190;
    option host-name "ioc-b34-bd32";
    if ( substring( option vendor-class-identifier, 0, 5 ) = "udhcp" ) {
        filename "/afs/slac/g/lcls/epics/iocCommon/ioc-b34-bd32/startup.cmd";
        option root-path "afsnfs1:/afs/slac:/afs/slac";
    }
}
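Before a host entry like this can be written, the MAC address for the 'hardware ethernet' line has to be read off the client itself. A quick way to do that (the interface name 'eth0' is an assumption; check with 'ip -br link'):

# Print the MAC address of the client's primary Ethernet interface
ip link show eth0 | awk '/link\/ether/ {print $2}'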

(b) Trivial File Transfer Protocol (TFTP)

PXE uses TFTP, a simple UDP-based protocol for delivering files over a network.
PXE delivers kernels and initial bootstrap software to client nodes using TFTP.
In our case, we retrieve the linuxRT boot image from lcls-dev1 (the LCLSDEV TFTP server) at the following location:

/afs/slac/g/lcls/tftpboot/linuxRT/boot

In this location, there are several linuxRT-x86 boot images.
These were custom-built by T. Straumann for the various Linux servers/IPCs that we currently have set up to boot with the linuxRT OS.

Of these images, '3.14.12-rt9' is the latest, and it has built-in support for the
Broadcom Ethernet networking chipset used in our dev Dell PowerEdge servers.
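As a quick sanity check, the image can also be fetched from the TFTP server by hand. The exact image filename and the assumption that the TFTP root maps to /afs/slac/g/lcls/tftpboot are illustrative here, not confirmed:

# Pull the boot image over TFTP by hand (requires curl built with TFTP support)
curl -o /tmp/linuxRT-boot-test tftp://lcls-dev1/linuxRT/boot/3.14.12-rt9
ls -lh /tmp/linuxRT-boot-test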


(c) Network File System (NFS)

The NFS service is used by the installation kernel to read all of the packages necessary for the installation process.
The NFS server therefore needs to export the directory structure containing the PXE images.

This boot directory is available to all machines running NFS.
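A quick way to confirm that the export is actually visible from a client is to list the server's exports; the server name 'afsnfs1' is taken from the root-path option in the DHCP example above:

# List the NFS exports offered by the boot/NFS server
showmount -e afsnfs1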

8. How do I start my IOC? Where is my ioc's startup.cmd?

...

In the linux world, kernel modules are pieces of code that can be loaded and unloaded into the kernel upon demand.
They extend the functionality of the kernel without the need to reboot the system.
For example, one type of module is the device driver, which allows the kernel to access hardware connected to the system.
Without modules, we would have to build monolithic kernels and add new functionality directly into the kernel image.
Besides having larger kernels, this has the disadvantage of requiring us to rebuild and reboot the kernel every time we want new functionality.

linuxRT, too, lets you load and unload kernel modules dynamically.
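For reference, the standard Linux module utilities behave the same way under linuxRT; the module name below is only an illustration, using the EVR driver discussed later:

lsmod                    # list the modules currently loaded
modinfo pci_mrfevr.ko    # show metadata for a module file
insmod ./pci_mrfevr.ko   # load a module from an explicit path
rmmod pci_mrfevr         # unload a module by name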

Now we are ready to load some kernel modules essential to our ioc - like EVR.

In our 'startup.cmd' script, we have the following line which lets us customize and load our kernel modules:

/afs/slac/g/lcls/epics/iocCommon/ioc-b34-bd32/kernel-modules.cmd

The location for kernel modules is specified as an environment variable in linuxRT:

KERNEL_DRIVER_HOME=/afs/slac/g/lcls/package/linuxKernel_Modules

There are several linuxRT drivers in this directory.

The PCI EVR230 driver is here. The following driver version has been built for the latest linuxRT 3.14.12-rt9:

EVR230_PCI_EVR_DRIVER=$KERNEL_DRIVER_HOME/pci_mrfevr_linuxRT/buildroot-2014.08

The PCI Express EVR300 driver built for linuxRT 3.14.12-rt9, is here:

EVR300_PCI_EVR_DRIVER=$KERNEL_DRIVER_HOME/pci_mrfev300_linuxRT/buildroot-2014.08

The kernel drivers are installed (loaded) dynamically as follows:

# Load the MRF EVR230 Kernel Module for timing
insmod $EVR230_PCI_EVR_DRIVER/pci_mrfevr.ko
$EVR230_PCI_EVR_DRIVER/module_load

# Load the MRF EVR300 Kernel Module for timing
insmod $EVR300_PCI_EVR_DRIVER/pci_mrfevr300.ko
$EVR300_PCI_EVR_DRIVER/module_load
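The script above loads the modules unconditionally. If you are loading them by hand while debugging, a slightly more defensive sketch (not the actual contents of 'kernel-modules.cmd') is:

# Load the EVR230 module only if it is not already loaded, then check the kernel log
if ! lsmod | grep -q '^pci_mrfevr '; then
    insmod $EVR230_PCI_EVR_DRIVER/pci_mrfevr.ko || echo "pci_mrfevr failed to load"
fi
dmesg | tail -5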

There are a few things to note:

1. Currently, the EVR kernel module software has the restriction that if both a PMC EVR230 and a PCI-e EVR300 are present in a Linux box,
then the PMC EVR230 MUST be initialized as card 0 and loaded first, and the EVR300 must be initialized as card 1.

Additionally, due to hard-coded device names in the module, it is essential to set up the following links (a quick check of the resulting device nodes is sketched after these notes):

ln -s /dev/er3a0 /dev/erb0
ln -s /dev/er3a1 /dev/erb1
ln -s /dev/er3a2 /dev/erb2
ln -s /dev/er3a3 /dev/erb3

2. If only one EVR (either PMC EVR230 or PCI EVR300) is installed in your system, then the above restriction does not apply and soft links are not needed.

Take a look at the following script:
/afs/slac/g/lcls/package/linuxKernel_Modules/pci_mrfev300_linuxRT/buildroot-2014.08/module_load

Notice how the loaded kernel modules expose device nodes under /dev/ in linuxRT, much like in standard Linux.

3. The Broadcom Ethernet NIC driver used to be a separate kernel module, and it was loaded dynamically via 'modprobe tg3' in this script.
With the latest linuxRT version 3.14.12-rt9, this step has become unnecessary because the driver is now built into the linuxRT boot image.

4. The SIS digitizers for uTCA load their device driver in 'kernel-modules.cmd':
SIS8300_DRIVER=$KERNEL_DRIVER_HOME/sis8300drv/MAIN_TRUNK

modprobe uio
insmod $SIS8300_DRIVER/sis8300drv.ko

Please note that, as of this writing, the SIS8300 driver has NOT been rebuilt for the latest linuxRT 3.14.12-rt9.
It is currently unsupported on the Dell PowerEdge servers.
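After 'kernel-modules.cmd' has run, a quick hand check that the expected EVR device nodes (and soft links, where applicable) were created is simply:

# EVR device nodes created by the modules above, plus any soft links
ls -l /dev/er*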

11. How do I create a startup script for my ioc?

The third and final script is specifically to set up and start your EPICS ioc.

The 'startupConsole-laci-rt.cmd' script lets you start your EPICS-based 'virtual' ioc as a foreground process on your host.

Your EPICS ioc must run as a real-time process, and it must be able to lock its memory under linuxRT (to prevent paging).

The following command in your 'startupConsole-laci-rt.cmd' does that:

ulimit -l unlimited

The following line is also needed to run the ioc with real-time priorities:

ulimit -r unlimited

Finally, you will be running your 'virtual' ioc as a specific user called 'laci', who has permission to run this ioc.
Set the file-creation mask for 'laci' so that new files are group-writable:

umask 002
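Taken together, the relevant fragment of 'startupConsole-laci-rt.cmd' looks something like the sketch below (the real script contains more than this):

# Resource limits and file-creation mask for the real-time ioc
ulimit -l unlimited   # allow the ioc to lock its memory (no paging)
ulimit -r unlimited   # allow real-time scheduling priorities
umask 002             # files created by 'laci' are group-writable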

Now you are ready to start your IOC and have it run as a foreground process.

Create a directory called 'vioc-b34-my01' for your 'virtual' ioc process under the following directory:

$IOC/ioc-b34-my01

cd $IOC/ioc-b34-my01/vioc-b34-my01

Set up a soft link to the 'bin' directory of your IOC app that you created in step (6):

ln -s /afs/slac/g/lcls/epics/R3-14-12-3_1-0/iocTop/MyApp/bin/linuxRT-x86 bin
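Putting the directory and link setup together (the 'MyApp' path is just the example from step (6); adjust it for your own application):

cd $IOC/ioc-b34-my01
mkdir -p vioc-b34-my01
cd vioc-b34-my01
ln -s /afs/slac/g/lcls/epics/R3-14-12-3_1-0/iocTop/MyApp/bin/linuxRT-x86 bin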

In the same directory $IOC/ioc-b34-my01/vioc-b34-my01, add a startup script 'iocStartup.cmd' for vioc-b34-my01:

This script is very similar to the Soft IOC startup scripts that we are familiar with.
It sets up some environment variables used by all iocs, then changes to the ioc boot directory and runs the st.cmd file.
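A minimal sketch of what 'iocStartup.cmd' might contain is shown below; the boot-directory path is an assumed example and the common environment setup is site-specific, so copy an existing vioc's script rather than this one:

#!/bin/bash
# vioc-b34-my01 startup (sketch only)
# 1. set the common environment variables used by all iocs (site-specific, not shown here)
# 2. change to the ioc's boot directory (assumed example path)
cd /afs/slac/g/lcls/epics/R3-14-12-3_1-0/iocTop/MyApp/iocBoot/vioc-b34-my01
# 3. start the EPICS ioc
./st.cmd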

12. How do I start my ioc and monitor it?

For example, from any lcls-dev host, ssh to your ioc as 'laci':

ssh laci@ioc-b34-bd32

$ cd /afs/slac/g/lcls/epics/iocCommon/ioc-b34-bd32

Ensure that 'startupConsole-laci-rt.cmd' is in your current directory.

Start the virtual ioc as a foreground process:

./startupConsole-laci-rt.cmd
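Since the ioc runs in the foreground, its console is the primary way to monitor it. From a second session you can also confirm that the process is running and has real-time scheduling, for example:

# From another terminal on the ioc host
pgrep -af st.cmd                        # is the ioc process running?
chrt -p $(pgrep -f st.cmd | head -1)    # show its scheduling policy and priority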