FIFE Notes - December, 2015 News for Distributed Computing at Fermilab


Best in class


This newsletter is brought to you by:

  • Ken Herner
  • Bo Jayatilaka
  • Mike Kirby
  • Arthur E. Kreymer
  • Katherine Lato
  • Marc Mengel
  • Parag A. Mhashilkar
  • Tanya Levshina
  • Dmitry Litvintsev
  • Anna Mazzacane
  • Gene Oleynik
  • Gabriel Perdue

This is the second in a series of newsletters to the community.  We welcome articles you might want to submit. Please email

The October newsletter is available here.


NOvA success for first results

The NOvA experiment, the largest running experiment at Fermilab, studies the oscillation parameters that govern neutrino transformation. During preparation for the DPF meeting, dCache was delivering files to analysis jobs at a rate of approximately one terabyte per hour. The use of FTS, SAM, and dCache allowed complete integration into essentially all workflows without customization by analyzers.


While data-heavy processing was focused on worker nodes near (but not exclusively at) Fermilab, CPU-intensive processing, such as Monte Carlo generation, was transitioned to offsite resources. All off-site opportunistic processing combined resulted in over 5 million CPU hours and increased the average number of cores utilized by NOvA from 2,200 cores on site to 3,250 cores total. More information


GENIE using OSG to improve neutrino interaction modeling

One of the goals of the Fermilab GENIE group has been to move its validation processing to the Open Science Grid.


Photo courtesy Luanne O'Boyle.

Preparing a GENIE physics release involves intensive computation that is not practical in a desktop environment. The work is largely "embarrassingly parallel," making it easy to spread out over the Grid and finish in a matter of hours what might otherwise take weeks. More information


dCache: scaling out to new heights

The dCache data storage system was first adopted by the CDF Tevatron experiment and then became the backbone of regional CMS Tier-1 data center storage. dCache plays a very important role in helping to deliver major scientific results, such as the digitized traces of the Higgs boson.


The world map above shows the distribution of dCache clients that have transferred at least one terabyte of data in the last three months. More information



OPOS: the importance of collaboration and cooperation

The OPOS group facilitates the transfer of tools and know-how among experiments by helping with the adoption of common tools, such as FIFE's Jobsub, SAMweb, IFDH, etc. As Tingjun Yang said, “I think the OPOS group is doing a fantastic job, and their contribution is very much appreciated by the DUNE collaboration.”

More information


Jobsub hints


Jobsub is a FIFE user's doorway to running jobs on computational grids, clouds, and HPC clusters. Jobsub provides a simple-to-use, scalable, and reliable job submission abstraction layer for scientific workflows that run on diverse computing resources.


Since Jan. 2015, users have consumed over 10 million hours of computing cycles every month using the Jobsub infrastructure. These numbers are expected to grow even further as more experiments start taking data and progress further into their life cycle. More information
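As a sketch of what submission through this layer looks like, the commands below use the jobsub_client tools; the experiment group, script path, job count, and resource options are illustrative assumptions, not taken from the article.

```shell
# Hypothetical jobsub_client usage; the group, script path, and
# options below are placeholders for illustration only.

# Submit 10 copies of a worker script under the NOvA group,
# allowing both dedicated and opportunistic slots:
jobsub_submit -G nova -N 10 \
    --resource-provides=usage_model=DEDICATED,OPPORTUNISTIC \
    file:///nova/app/users/myuser/myjob.sh

# Check the status of the submitted jobs:
jobsub_q -G nova --user myuser
```

The `file://` URI points Jobsub at the executable to ship with the job; the `usage_model` resource request is what lets the same submission run on dedicated and opportunistic slots.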


FIFE computing at European sites

In the past year, the site outside of Fermilab providing the most computing resources to the NOvA experiment has been the Institute of Physics of the Czech Academy of Sciences (Fyzikální Ústav AV ČR, or FZU), where NOvA has utilized over 6 million computational hours.



When NOvA collaborators at the Joint Institute for Nuclear Research (JINR) in Russia were interested in providing computing resources to NOvA, FIFE and OSG staff followed the model set by FZU and set up access to a JINR computing cluster via an OSG site. A similar setup is currently being established at the University of Bern in Switzerland for the MicroBooNE experiment. More information



Intensity Frontier Data Handling (IFDH) usage helpful hints

Tips and tricks for using IFDH include:

  1. Use a cleanup call.

  2. Make a list of files with 'ifdh ls'.

  3. See what's going on with environment variables.

  4. Know what to do when you get an error on a copy to/from dCache on-site.

  5. Stage output instead of copying files back.
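Several of the tips above can be sketched as shell fragments in a job script. The commands follow the ifdhc client, but the /pnfs paths and settings shown here are illustrative assumptions rather than real experiment areas.

```shell
# Hedged sketch of IFDH usage in a grid job script;
# the /pnfs paths below are hypothetical placeholders.

# Tip 3: turn on IFDH debugging via an environment variable
# to see what is going on under the hood:
export IFDH_DEBUG=1

# Tip 2: list files with 'ifdh ls' instead of POSIX ls on dCache paths:
ifdh ls /pnfs/nova/scratch/users/$USER/input

# Tip 5: write output locally during the job, then stage it
# back in one copy at the end instead of copying as you go:
ifdh cp myoutput.root /pnfs/nova/scratch/users/$USER/output/myoutput.root

# Tip 1: make a cleanup call before exiting so leftover
# transfer state is removed:
ifdh cleanup
```

Staging output at the end of the job (rather than writing directly to dCache throughout) keeps transient worker-node failures from leaving partial files in the output area.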


BlueArc unmounting from GPGrid nodes - So Long, and thanks for all the files.

A long time ago, in a cluster far, far away, it was a period of rebellion against the limitations of local batch clusters. In 2009, the 3,000 cores of the GP Grid Farm were a vast improvement over the 50-core FNALU batch system, but the load has since grown well beyond that. The dCache storage elements deployed in 2015 can handle the current load; BlueArc cannot. We need to proceed this year with the BlueArc unmount process, removing even GridFTP access to BlueArc data. More information



To provide feedback on any of these articles, or the FIFE notes in general, please email

The complete material (for viewing offline) is available in the following formats: