
2014

September 24th, 2014

Intel Roadmap Overview at SLAC

Location, time, and dial-in: Cypress Conference Room, B40, 3:00-5:00pm

Speaker:

Slides: [pdf] [pptx]

Abstract:
An in-depth look at the next-generation Xeon Phi processor architecture.

 

September 19th, 2014

CernVM: a versatile environment for high-energy physics applications in the cloud

Location, time, and dial-in: Ballam Conference Room, B84, Friday, September 19th, 2014, 11:00 am

1. Dial Toll-Free Number: 866-740-1260 (U.S. & Canada)
2. International participants dial: Toll Number: +1 303-248-0285
   Or International Toll-Free Number: http://www.readytalk.com/intl
3. Enter the 7-digit access code 3073828, followed by #

Speaker: Jakob Blomer, CERN

Slides: [pdf]

Abstract:
Cloud resources nowadays contribute an essential share of the computing capacity for high-energy physics.
Such resources can be provided either by "private clouds", academic infrastructures that allow running virtual
machines instead of batch jobs, or by public clouds such as Amazon EC2 or Google Compute Engine.
In any case, users need to prepare a virtual machine image that provides the execution environment for the
physics application at hand.  CernVM is a small and versatile virtual machine base image that runs on a variety
of different cloud infrastructures and can be easily adapted to support typical physics workflows. 
It is used, for instance, to run LHC applications in the cloud, to tune event generators using a network of volunteer
computers, and as a container for the historic software environment of the decommissioned ALEPH experiment.

The presentation provides an overview of CernVM and its core technology, the CernVM File System.
The file system takes care of the on-demand distribution of experiment software and operating system binaries
to computing resources around the world.  The latest development efforts are targeted at streamlining the
maintenance and administration of a CernVM/CernVM-FS service. Ongoing work includes tapping
so-far-unused resources such as supercomputers.
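
As a concrete illustration of the on-demand model (not from the talk), the sketch below assumes a CernVM-FS client is installed with a repository such as sft.cern.ch mounted under the standard /cvmfs mount point; the repository name and the README file name are hypothetical placeholders. Directory listings pull only metadata catalogs, while file contents are fetched and cached locally only when a file is actually opened.

    # Minimal sketch: browsing a CernVM-FS repository mounted under /cvmfs.
    # Assumes a cvmfs client with the (illustrative) repository "sft.cern.ch"
    # configured; paths and file names below are hypothetical.
    import os

    repo = "/cvmfs/sft.cern.ch"

    # Listing a directory triggers only a metadata (catalog) download;
    # file data stays remote until a file is opened.
    for entry in sorted(os.listdir(repo))[:10]:
        path = os.path.join(repo, entry)
        kind = "dir " if os.path.isdir(path) else "file"
        print(kind, path)

    # Opening a file is what causes its content to be fetched and cached.
    readme = os.path.join(repo, "README")  # hypothetical file name
    if os.path.isfile(readme):
        with open(readme, "rb") as f:
            print(f.read(200))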

 

 
