Created by Shantha Condamoor on 15-Oct-2015
----------------------------------------------------------------------

While linuxRT is a relatively established RTOS at ICD, it has mostly been used in conjunction with the uTCA platform.

Recently, linuxRT has been ported to, and is running successfully on, a few COTS Linux servers such as Dell PowerEdge servers
and SuperMicro industrial PCs. A newer version of linuxRT was built by Till Straumann to support the new hardware.

The most important change to the linuxRT RTOS was the addition of support for the Broadcom NIC chipsets, the
Ethernet network controllers found in these servers.

The other important change was the setup of these servers to boot linuxRT targets as diskless clients.

The BIOS boot order was modified to override the default 'boot from hard disk' or 'CD-ROM' choices
and to attempt network booting first.

Finally, console output was redirected in the BIOS to the first serial port instead of the monitor. This lets the
'iocConsole' script attach to the console via the 'screen' process, so we can observe the PC boot process remotely,
like all our IOCs.

As we continue to test this new system and learn more, this document will also evolve.

Here's something to get started with linuxRT using the latest EPICS base.


1. Where is the new EPICS?

/afs/slac/g/lcls/epics/R3-14-12-3_1-0/

2. How do I set up my bash shell to use the new EPICS?

From a bash shell on an LCLSDEV host (Ex. lcls-dev2), type the following:

source /afs/slac/g/lcls/epics/setup/go_epics_3-14-12-3_1-0.bash
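
A quick sanity check after sourcing (this assumes the setup script exports EPICS_BASE, which is conventional for EPICS setup scripts):

echo $EPICS_BASE   # expect a path under /afs/slac/g/lcls/epics/R3-14-12-3_1-0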

3. Where are the linuxRT kernel modules?

/afs/slac/g/lcls/package/linuxKernel_Modules

4. Where are the EPICS modules for this new base?

/afs/slac/g/lcls/epics/R3-14-12-3_1-0/modules

A handful of EPICS modules have been built for the new EPICS base.
Most of them have been built for several targets including the following:
linuxRT-x86
linux-x86
linux-x86_64

5. What do I do if I need a new module or a specific version for an existing module built for the new base?

Contact Murali or Ernest to have them built and installed. Murali has a python script that
automatically discovers the module dependencies and builds them in the correct order for all targets.

6. What do I need to set up a new EPICS app and IOC to use the linuxRT-x86 OS? (Ex. 'vioc-b34-my01' for 'MyApp')

(a) Create the following directories:
$IOC/ioc-b34-my01
$IOC_DATA/ioc-b34-my01
$APP/MyApp (look at this as an example)
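
A minimal sketch of step (a) in bash; $IOC, $IOC_DATA and $APP are assumed to be set by the EPICS setup script sourced earlier:

mkdir -p $IOC/ioc-b34-my01        # IOC startup area
mkdir -p $IOC_DATA/ioc-b34-my01   # IOC data area
mkdir -p $APP/MyApp               # application area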

Now create the EPICS application using standard templates and scripts:

(b) cd $APP
makeBaseApp.pl -t slac MyApp

Now, an EPICS application has been created for you with the application name 'MyApp'.

Look through the RELEASE_SITE file at the top level that was automatically created for you.
Notice how the EPICS base path, modules path, app path, EPICS version and other EPICS environment variables were set for you to point to
the new EPICS base R3-14-12-3_1-0.

Now, look at the configure/RELEASE file.
This file needs modifications from what was automatically generated:
I have added comments in the example application's $APP/MyApp/configure/RELEASE showing what needs to change.

(i) Add the following line to the beginning of RELEASE file:
include $(TOP)/RELEASE_SITE
(ii) Replace the TEMPLATE_TOP variable as follows:
TEMPLATE_TOP=$(EPICS_BASE)/templates/makeBaseApp/top
(iii) Add the following line:
LINUX_KERNEL_MODULES=$(PACKAGE_SITE_TOP)/linuxKernel_Modules/
(iv) Change the various module version numbers as needed for your application.
Ensure that the specific versions included in your RELEASE file DO EXIST under the modules directory
and that they have been built for the linuxRT-x86 target.
Use Event module event-R4-1-3 or greater.

AUTOSAVE_MODULE_VERSION=autosave-R5-0_1-0
IOCADMIN_MODULE_VERSION=iocAdmin-R3-1-12_1-1
EVENT_MODULE_VERSION=event-R4-1-3

(v) Now 'make' your application from the top-level directory $APP/MyApp to ensure your changes to the RELEASE files are good:

There should be no build errors.
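
For example (assuming $APP is set by the EPICS setup script):

cd $APP/MyApp
make   # builds for all configured targets; expect no errors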

You can now uninstall the binaries and clean up the make-generated files using the following command:

make clean uninstall

Now it is time to add your linuxRT-x86 vioc startup scripts.

From $APP/MyApp type the following command to create a boot directory for 'vioc-b34-my01':

makeBaseApp.pl -i -t slac vioc-b34-my01

When prompted to choose target architecture, choose linuxRT-x86.

When prompted with 'Application name?' just hit enter.

Now iocBoot has been created under $APP/MyApp and underneath iocBoot, 'vioc-b34-my01' has been created.

Open $APP/MyApp/iocBoot/vioc-b34-my01/st.cmd and notice that this script is set up for a linuxRT-x86 target.

The MyApp example has support for a PMC EVR230 running on linuxRT-x86.

Replace the macros as needed.

Now cd to $APP/MyApp again, 'make' the application again, and ensure it builds fine.

(c) Add your application to CVS and commit the source files.


7. What are PXE, DHCP, TFTP and NFS, and why does my linuxRT IOC need them?

LinuxRT is installed on our system using the Preboot Execution Environment (PXE) method of network booting.

We enable the PXE/network-booting method in the BIOS.

In order to use PXE we need to set up a boot server which allows our client system to:
(a) Request an IP address (via DHCP)
(b) Download a kernel (via TFTP)

With both of these services in place any system which supports PXE/network-booting
should be able to gain an IP address, fetch a kernel, and boot without an installed operating system.

PXE uses three distinct network protocols that map to three server processes to perform the installation.
In our case, all three processes run on lcls-dev1 (LCLSDEV daemon).

(a) Dynamic Host Configuration Protocol (DHCP)

PXE uses DHCP to deliver initial network configuration options to client nodes.
The DHCP server supplies the PXE boot plug-in with
(i) IP address
(ii) TFTP server address
(iii) the name of the stage-1 boot-loader image to download and execute.

As the supplied PXE installation environments are non-interactive and will unconditionally reinstall a client machine,
we have the client associate its MAC address with a specific OS installation before starting the PXE boot.

The configuration information in our case, in addition to the IP/MAC address, includes a hostname and a pointer to the master startup script in AFS for our IOC.
It has an optional root-path variable pointing to the AFS area which hosts the boot image that is served via TFTP.
This can be overridden, as will be seen later.

When the Linux server is rebooted or power-cycled, PXE will attempt the network booting method first
and as a first step it will contact the DHCP server to retrieve the network configuration information.

Hence, every new linuxRT IOC (host) needs to be added to the DHCP server configuration file in AFS.

This file is in /afs/slac/service/dhcp-pxe/dhcpd.conf

The IP/MAC address of the primary ethernet that will fetch the linuxRT boot image is defined here.
To add a new host to the DHCP configuration, contact Thuy.

Here is an example for ioc-b34-bd32:

host ioc-b34-bd32 {
# SuperMicro (INTELx86)
#
hardware ethernet 00:25:90:D1:95:1E;
fixed-address 134.79.218.190;
option host-name "ioc-b34-bd32";
if ( substring( option vendor-class-identifier, 0, 5 ) = "udhcp" ) {
filename "/afs/slac/g/lcls/epics/iocCommon/ioc-b34-bd32/startup.cmd";
option root-path "afsnfs1:/afs/slac:/afs/slac";
}
}
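
To inspect an existing host entry from any AFS client, a simple grep works (the -A line count is arbitrary):

grep -A 10 "host ioc-b34-bd32" /afs/slac/service/dhcp-pxe/dhcpd.conf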

(b) Trivial File Transfer Protocol (TFTP)

PXE uses TFTP, a simple UDP-based protocol for delivering files over a network.
PXE delivers kernels and initial bootstrap software to client nodes using TFTP.
In our case, we retrieve the linuxRT boot image from lcls-dev1 (LCLSDEV TFTP Server) from the following location:

/afs/slac/g/lcls/tftpboot/linuxRT/boot

In this location, there are several linuxRT-x86 boot images.
These were custom-built by T. Straumann for the various Linux servers/IPCs that we currently have set up to boot with the linuxRT OS.

Of these images, '3.14.12-rt9' is the latest, and it has built-in support for the
Broadcom Ethernet networking chipset used in our dev Dell PowerEdge servers.
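
To see which boot images are available, list that directory from any AFS client:

ls /afs/slac/g/lcls/tftpboot/linuxRT/boot   # each subdirectory is a kernel version, e.g. 3.14.12-rt9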


(c) Network File System (NFS)

The NFS service is used by the installation kernel to read all of the packages necessary for the installation process.
The NFS server therefore needs to provide access to the directory structure containing the PXE images.

This boot directory is available to all machines running NFS.

8. How do I start my IOC? Where is my IOC's startup.cmd?

There are a few scripts that automate this process.

To begin with, there is the 'ipxe.ini' script in the tftp boot area /afs/slac/g/lcls/tftpboot/linuxRT/boot that PXE will run.

This is where the version of (linuxRT) kernel to run is specified as follows:

set vers 3.2.13-121108

This version number can be overridden by a chained, host-specific PXE init script to load an image different from the above:

chain ${hostname}.ipxe ||

For example, we have defined a script specifically for our IOC, ioc-b34-bd32.ipxe, which chooses to load the latest linuxRT image:

set vers 3.14.12-rt9

This is also the place to override the 'root-path' option specified in the DHCP configuration file dhcpd.conf.

For example, I may decide to override the afsnfs1 server and instead get my boot image from the afsnfs2 server:

set extra-args ROOTPATH=afsnfs2:/afs/slac:/afs/slac BOOTFILE=/afs/slac/g/lcls/epics/iocCommon/ioc-b34-bd32/startup.cmd

A few more extra arguments are specified in ioc-b34-bd32.ipxe. Leave them as they are.

The 'ipxe.ini' script loads and runs the linuxRT kernel via the TFTP protocol:

kernel --name linux tftp://${next-server}/linuxRT/boot/${vers}/bzImage && initrd tftp://${next-server}/linuxRT/boot/${vers}/rootfs.ext2 || shell
imgargs linux debug idle=halt root=/dev/ram0 console=ttyS0,115200 BOOTIF_MAC=${net0/mac:hex} ${extra-args} || boot || shell

After the linuxRT boot image is downloaded and run, NFS mounts can be done.

The AFS-to-NFS translator service makes the directory structure available to all clients that have mounted this NFS space.
The NFS file servers for LCLSDEV are afsnfs1 and afsnfs2.

One of the arguments to the kernel process is the location of the BOOTFILE that does the mounting.

The 'filename' argument (which can be over-ridden by the BOOTFILE argument for linuxRT) is as follows:

"/afs/slac/g/lcls/epics/iocCommon/ioc-b34-bd32/startup.cmd"

This script is similar to, and modelled after, the RTEMS startup.cmd.

When linuxRT loads and starts, the kernel process runs as the 'root' user.
Hence it has permission to set up the NFS mounts, which is done by the following line in startup.cmd:

/afs/slac/g/lcls/epics/iocCommon/All/Dev/linuxRT_nfs.cmd

There is already an NFS mount point in AFS space to enable the remote linuxRT target to access the control system central file server.

Additional NFS mount points for linuxRT, pertaining to the IOC data directory $IOC_DATA, are mounted as well.
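
As an illustration only, a mount of the kind linuxRT_nfs.cmd performs might look like the following; the real script is the authority and its exact options may differ:

mount -t nfs -o nolock afsnfs1:/afs/slac /afs/slac   # hypothetical example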

9. How do I monitor the server remotely as it boots up - after a power-cycle or after a 'reboot' command?

Thuy has set up the BIOS of the Dell Linux servers and other servers used for linuxRT development to redirect the
console (monitor) output to one of the serial ports. This allows us to watch the boot process remotely via our standard 'iocConsole'
python script. iocConsole uses the 'screen' process to accomplish this redirection.

Type the following command from any LCLSDEV host:

Example:

iocConsole ioc-b34-bd32

This will establish a serial connection with the linux box from lcls-dev1 via a DIGI Terminal Server port as follows:

ssh -x -t -l laci lcls-dev1 bash -l -c " pyiocscreen.py -t HIOC ioc-b34-bd32 ts-lclsdev05 2001 "

You can monitor the linuxRT target as it goes through the PXE network booting process.

Finally, you will get the login prompt:

Welcome to Buildroot
ioc-b34-bd32 login:

Login as 'root'. No password is required.

At the shell, type the following to ensure that you are running the correct version of linuxRT for your IOC. This version has the real-time PREEMPT RT patch.

# uname -a
Linux ioc-b34-pm32 3.14.12-rt9 #1 SMP PREEMPT RT Sat Oct 11 17:27:39 PDT 2014 i686 GNU/Linux

To see the PCI devices in your system type the following command:

# lspci

10. What are kernel modules and how are they loaded in linuxRT?

In the Linux world, kernel modules are pieces of code that can be loaded into and unloaded from the kernel on demand.
They extend the functionality of the kernel without the need to reboot the system.
For example, one type of module is the device driver, which allows the kernel to access hardware connected to the system.
Without modules, we would have to build monolithic kernels and add new functionality directly into the kernel image.
Besides producing larger kernels, this has the disadvantage of requiring us to rebuild and reboot the kernel every time we want new functionality.

linuxRT, too, lets you load and unload kernel modules dynamically.
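
As a generic illustration (standard Linux commands; 'mydriver' is a placeholder name):

lsmod                  # list currently loaded kernel modules
insmod ./mydriver.ko   # load a module from a file
rmmod mydriver         # unload it again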

Now we are ready to load some kernel modules essential to our IOC, like the EVR driver.

In our 'startup.cmd' script, we have the following line which lets us customize and load our kernel modules:

/afs/slac/g/lcls/epics/iocCommon/ioc-b34-bd32/kernel-modules.cmd

The location for kernel modules is specified as an environment variable in linuxRT:

KERNEL_DRIVER_HOME=/afs/slac/g/lcls/package/linuxKernel_Modules

There are several linuxRT drivers in this directory.

The PCI EVR230 driver is here. The following driver version has been built for the latest linuxRT 3.14.12-rt9:

EVR230_PCI_EVR_DRIVER=$KERNEL_DRIVER_HOME/pci_mrfevr_linuxRT/buildroot-2014.08

The PCI Express EVR300 driver built for linuxRT 3.14.12-rt9, is here:

EVR300_PCI_EVR_DRIVER=$KERNEL_DRIVER_HOME/pci_mrfev300_linuxRT/buildroot-2014.08

The kernel drivers are installed (loaded) dynamically as follows:

# Load the MRF EVR230 Kernel Module for timing
insmod $EVR230_PCI_EVR_DRIVER/pci_mrfevr.ko
$EVR230_PCI_EVR_DRIVER/module_load

# Load the MRF EVR300 Kernel Module for timing
insmod $EVR300_PCI_EVR_DRIVER/pci_mrfevr300.ko
$EVR300_PCI_EVR_DRIVER/module_load
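
A quick way to confirm that the modules loaded and the device nodes appeared (the grep pattern is an assumption based on the module names above):

lsmod | grep evr   # pci_mrfevr and/or pci_mrfevr300 should be listed
ls -l /dev/er*     # EVR device nodes (the /dev/er* naming appears below)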

There are a couple of things to note:

1. Currently the EVR kernel module software has the restriction that if a Linux box contains both a PMC EVR230 and a PCI-e EVR300,
then the PMC EVR230 MUST be initialized as card 0 and loaded first. The EVR300 must be initialized as card 1.

Additionally, due to hard-coded device names in the module, it is essential to set up the following links:

ln -s /dev/er3a0 /dev/erb0
ln -s /dev/er3a1 /dev/erb1
ln -s /dev/er3a2 /dev/erb2
ln -s /dev/er3a3 /dev/erb3

2. If only one EVR (either PMC EVR230 or PCI EVR300) is installed in your system, then the above restriction does not apply and soft links are not needed.

Take a look at the following script:
/afs/slac/g/lcls/package/linuxKernel_Modules/pci_mrfev300_linuxRT/buildroot-2014.08/module_load

Notice how kernel modules are loaded as device drivers under /dev/ in linuxRT, much like in Linux.

3. The Broadcom Ethernet NIC driver used to be a separate kernel module, and its driver was loaded dynamically via 'modprobe tg3' in this script.
With the latest linuxRT version 3.14.12-rt9, this step has become unnecessary because the driver is now part of the linuxRT boot image.

4. The SIS digitizers for uTCA load their device driver in 'kernel-modules.cmd':
SIS8300_DRIVER=$KERNEL_DRIVER_HOME/sis8300drv/MAIN_TRUNK

modprobe uio
insmod $SIS8300_DRIVER/sis8300drv.ko

Please note that, as of this writing, the SIS8300 driver has NOT been rebuilt for the latest linuxRT 3.14.12-rt9.
It is currently unsupported on the Dell PowerEdge servers.

11. How do I create a startup script for my IOC?

The third and final script exists specifically to set up and start your EPICS IOC.

The 'startupConsole-laci-rt.cmd' script lets you start your EPICS-based 'virtual' IOC as a foreground process on your host.

Your EPICS IOC must run as a real-time process, and it must be able to lock its memory under linuxRT.

The following command in your 'startupConsole-laci-rt.cmd' does that:

ulimit -l unlimited

The following line is also needed to run the ioc with real-time priorities:

ulimit -r unlimited

Finally, you will be running your 'virtual' IOC as a specific user called 'laci', who has permissions to run this IOC.
Set up the file-permission mask for the user 'laci':

umask 002
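
Putting these together, the relevant prologue of 'startupConsole-laci-rt.cmd' might look like this sketch (the real script contains more than shown here):

ulimit -l unlimited   # allow the IOC to lock its memory
ulimit -r unlimited   # allow real-time scheduling priorities
umask 002             # file-permission mask for the 'laci' account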

Now you are ready to start your IOC and have it run as a foreground process.

Create a directory called 'vioc-b34-my01' for your 'virtual' IOC process under the following directory:

$IOC/ioc-b34-my01

cd $IOC/ioc-b34-my01/vioc-b34-my01

Set up a soft link to the 'bin' directory of your IOC app that you created in step (6):

ln -s /afs/slac/g/lcls/epics/R3-14-12-3_1-0/iocTop/MyApp/bin/linuxRT-x86 bin

In the same directory $IOC/ioc-b34-my01/vioc-b34-my01, add a startup script 'iocStartup.cmd' for vioc-b34-my01:

This script is very similar to the soft IOC startup scripts that we are familiar with.
It sets up some environment variables used by all IOCs, then changes to the IOC boot directory and runs the st.cmd file.
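
A minimal sketch of what 'iocStartup.cmd' might contain; the variables and paths here are illustrative assumptions, not the actual site script:

#!/bin/bash
# Hypothetical sketch: set common environment, then launch the IOC.
# (site-wide environment variables would be exported here)
cd $APP/MyApp/iocBoot/vioc-b34-my01   # the IOC boot directory created in step (6)
../../bin/linuxRT-x86/MyApp st.cmd    # run the IOC binary on st.cmd in the foreground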

12. How do I start my IOC and monitor it?

For example, from any lcls-dev host, ssh to your IOC as 'laci':

ssh laci@ioc-b34-bd32

$ cd /afs/slac/g/lcls/epics/iocCommon/ioc-b34-bd32

Ensure that 'startupConsole-laci-rt.cmd' is in your current directory.

Start the virtual ioc as a foreground process:

./startupConsole-laci-rt.cmd