...
Host ioc-* cpu-*
User laci
SetEnv TERM=xterm
HostName %h.slac.stanford.edu
ProxyCommand ssh -X gateway /usr/bin/nc %h %p
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
...
Host gateway
User myself
ProxyCommand ssh -X firewall /usr/bin/nc %h %p
However, as Faisal has pointed out, in the most common use case the `ProxyJump` directive is more convenient:
Host gateway
User myself
ProxyJump firewall
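For one-off use the same multi-hop chain can also be spelled out on the command line with `-J` (jump hosts are traversed in the order given, outermost first; the host aliases below still pick up their User settings from the config file):

```shell
# Jump through firewall, then gateway, then reach the target host.
ssh -J firewall,gateway laci@cpu-b12-xyz.slac.stanford.edu
```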
Now when you say
bash$ ssh cpu-b12-xyz
then ssh will transparently set up the multi-hop connection. Note that other options can be passed and work as expected; for example, you can set up port forwarding to the target machine:
...
sets up an encrypted port forward from port 8000 on the external machine (where ssh runs) to port 9000 on cpu-b12-xyz.
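The elided command could look like the following sketch (the port numbers match the description above; `localhost` in the forward spec is resolved on cpu-b12-xyz):

```shell
# Forward local port 8000 to port 9000 on cpu-b12-xyz; the traffic
# travels inside the (multi-hop) encrypted ssh connection.
ssh -L 8000:localhost:9000 cpu-b12-xyz
```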
When you run a remote command via ssh as in
bash$ ssh cpu-b12-xyz lengthy_command
then it is important to know that 'lengthy_command' is not associated with any (remote!) terminal. This means that when you kill ssh on your local machine (Ctrl-C), the remote command keeps executing (the remote command is not a child of the local 'ssh' and cannot be notified of its death). The '-t' option forces the remote ssh server to allocate a pseudo-terminal and associates the 'lengthy_command' process with that terminal, which allows propagation of signals:
bash$ ssh -tt cpu-b12-xyz lengthy_command
If you kill this ssh on your local machine then the 'lengthy_command' receives a signal (via its controlling terminal) and terminates as well. Multiple 't's ensure a remote terminal is allocated even if there is no local terminal (e.g., if the ssh command is called from a daemonized script).
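The two behaviors side by side, as a sketch (host and command names are placeholders):

```shell
# No remote pseudo-terminal: Ctrl-C kills only the local ssh;
# lengthy_command keeps running on cpu-b12-xyz.
ssh cpu-b12-xyz lengthy_command

# Forced pseudo-terminal allocation: killing ssh makes the remote
# terminal hang up, so lengthy_command receives a signal and dies too.
ssh -tt cpu-b12-xyz lengthy_command
```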
When you run an interactive session from remote then this often carries a lot of context information (environment, running processes etc.). It can be very painful if you get disconnected and as a consequence lose all of this context and have long-running build processes killed. Use the 'screen' utility. If you work on AFS then screen also keeps your tokens alive for you (until they expire, of course) - provided that you run a pagsh:
...
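The elided snippet presumably starts screen inside a fresh PAG; a sketch, assuming the standard OpenAFS pagsh:

```shell
# Create a new PAG (process authentication group) and start screen
# inside it; tokens obtained there stay with the screen session.
pagsh -c screen

# Later, after reconnecting, re-attach to the running session:
screen -r
```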
You need root access on both machines for these operations. It is also noteworthy that if the remote USB device is disconnected (e.g., because of an FPGA power-cycle) it is necessary to recreate the virtual USB device on the local host. Consult the usbip documentation for details.
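The usbip commands themselves are not shown above; as a rough sketch (bus IDs and host names are placeholders, and the exact invocations can differ between usbip versions):

```shell
# On the machine the USB JTAG pod is plugged into (the 'server'):
sudo modprobe usbip-host
sudo usbipd -D                 # start the usbip daemon
usbip list -l                  # find the bus id of the device
sudo usbip bind -b 1-1.2       # export it (placeholder bus id)

# On the machine where the Xilinx tools run (the 'client'):
sudo modprobe vhci-hcd
sudo usbip attach -r server_host -b 1-1.2
```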
It is also possible to drive JTAG with a firmware core and use the Xilinx XVC protocol to remotely access JTAG. The SLAC surf library provides the necessary components (a firmware block and a software XVC server which must be run on a Linux box with connectivity to the firmware). This is a purely networked solution; neither hardware JTAG nor USB is required.
Note: when operating over a slow connection I get better response if I start a Xilinx hw_server on some machine that is close to the xvcSrv (rather than tunneling XVC from the remote machine):
1. Start xvcSrv on a machine with fast connectivity to the FPGA:
     xvcSrv -t <target_ip>[:<udp_port>]
   A pre-compiled binary is installed here
     /afs/slac/g/reseng/xvcSrv/bin/<architecture>/
   or here
     /afs/slac/g/lcls/package/xvcSrv/<architecture>/
2. Start hw_server on the same or a close-by machine (no special arguments necessary); if you run Vivado on-site then vivado takes care of this step and you may skip step 3.
3. Open an ssh tunnel to get to the hw_server. You can in fact use ssh to directly launch the server and tunnel the connection (assuming hw_server is on the PATH):
     ssh -L 3121:localhost:3121 gateway_machine hw_server
4. In Vivado, connect to the hw_server at localhost:3121 (since the ssh tunnel was opened on local port 3121 Vivado will find it) and open the XVC target:
     % open_hw_target -xvc_url <machine_where_xvcSrv_runs>:2542