Measuring the Universe -- one galaxy at a time --

The Dark Energy Survey (DES) is in its third year of gathering multi-color digital images of large swaths of deep space with the 4-meter diameter mirror of the Blanco telescope on a mountaintop in Chile. The team of physicists, astrophysicists, engineers, computer professionals, technicians and managers who operate the survey are engaged in an around-the-clock marathon: a myriad of steps to bring the valuable sky data home and to get the images into shape for producing new insights into the nature of Dark Energy -- a phenomenon that is pushing distant galaxies within the visible universe apart from each other at ever-accelerating rates, contrary to our intuition of gravity as a force that pulls objects together.

The data arrive during the night from Chile in chunks called exposures, each a digital package of 60 CCD images covering a 90-second snapshot of a few square degrees on the sky.  Sent on by scripts operated by team members at the National Center for Supercomputing Applications (NCSA) in Urbana, IL, the data are fed to grid nodes for initial processing, which extends well into the daylight hours.
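
To make the shape of these data concrete, here is a minimal sketch in Python of an exposure as a bundle of CCD images plus pointing metadata; the class and field names are illustrative only, not the actual DES data model.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class CCDImage:
        # One detector's worth of pixels from a single exposure.
        ccd_id: int
        fits_path: str        # location of the raw image file for this CCD

    @dataclass
    class Exposure:
        # One 90-second DES snapshot covering a few square degrees of sky.
        exposure_id: int
        band: str             # filter ("color"), e.g. g, r, i, z or Y
        ra_deg: float         # pointing center: right ascension
        dec_deg: float        # pointing center: declination
        ccds: List[CCDImage] = field(default_factory=list)   # ~60 images per exposure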

These fixed-size exposures take a constant, predictable amount of time to 'detrend' (a processing step that recognizes defects and removes instrumental signatures from the digital CCD images).  Detrending fits naturally into existing grid processing requirements (say, 2 GB RAM and 40 GB scratch disk per job, running for 6 hours with one CPU core).  This stage of the processing can be run in a highly parallel fashion.
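
Because every exposure is processed independently with the same fixed footprint, this stage maps cleanly onto many single-core grid jobs. The Python sketch below only illustrates that structure -- the detrend function is a stand-in, and local worker processes take the place of the real grid jobs.

    from concurrent.futures import ProcessPoolExecutor

    # Per-job footprint quoted above, for illustration only:
    # 1 CPU core, ~2 GB RAM, ~40 GB scratch disk, roughly 6 hours of wall time.

    def detrend(exposure_id: int) -> str:
        # Stand-in for the real detrending step, which flags defects and
        # removes instrumental signatures from each CCD image in the exposure.
        return f"exposure {exposure_id} detrended"

    def detrend_all(exposure_ids):
        # Each exposure is independent, so the work parallelizes trivially
        # (emulated here with local processes rather than grid nodes).
        with ProcessPoolExecutor() as pool:
            return list(pool.map(detrend, exposure_ids))

    if __name__ == "__main__":
        print(detrend_all(range(5)))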

[Figure 1.jpeg]

Figure 1, Courtesy Nikolai Kuropatkin, Brian Nord, Dark Energy Survey

The first figure shows the results of processing a sample set of overlapping exposures through detrending.  By the end of the survey, several million such individual rectangular regions will have been processed.

After several thousand exposures are gathered, they are grouped on an annual basis and sorted by position on the sky and by color. This prepares them for the second stage of processing, where overlapping exposures are registered and combined into deep images of the sky, and where individual objects are carefully measured for shape, brightness and position. This stage is considerably more demanding of compute resources. These jobs, of which there are some 20,000 per year of observing, require up to 64 GB RAM, 2 TB of scratch disk and several linked cores, occasionally running for more than 24 hours per job -- although some regions of sky require significantly less than this, so the jobs vary considerably in size.
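
As a rough illustration of the bookkeeping involved, the Python sketch below buckets a year's exposures by sky position and band before coaddition. The one-degree grid used here is a naive stand-in for the survey's actual tiling scheme.

    import math
    from collections import defaultdict

    def tile_key(ra_deg: float, dec_deg: float) -> tuple:
        # Naive one-degree-square sky tile; a stand-in for the real tiling.
        return (math.floor(ra_deg), math.floor(dec_deg))

    def group_for_coadd(exposures):
        # exposures: iterable of (exposure_id, band, ra_deg, dec_deg) tuples.
        # Returns {(tile, band): [exposure_id, ...]}, so that overlapping
        # exposures taken in the same color end up in the same coadd job.
        groups = defaultdict(list)
        for exp_id, band, ra, dec in exposures:
            groups[(tile_key(ra, dec), band)].append(exp_id)
        return groups

    sample = [(1001, "g", 54.2, -26.3), (1002, "g", 54.7, -26.1), (1003, "r", 54.3, -26.4)]
    print(group_for_coadd(sample))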

Until very recently, compute resources with such large per-job requirements have not been widely available through common, multi-user grid access.

The FIFE group and the Fermilab Scientific Computing Division have played key roles in providing tools for efficient, flexible and widely distributed use of such dynamic systems.  Building on HTCondor-CE enhancements that provide access to dynamically configurable resources, GPgrid nodes can now be reserved as needed with these higher-capacity RAM, larger scratch disk, multi-core and extended run-time requirements.
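
A minimal sketch of what reserving such a high-capacity slot might look like is shown below: an HTCondor submit description assembled and written out from Python. The wrapper script, tile list and core count are hypothetical, and the actual FIFE submission tooling involves more than is shown here.

    # Sketch only: the resource requests mirror the coadd-stage numbers
    # quoted earlier (up to 64 GB RAM, 2 TB scratch, several cores,
    # with run times that can exceed 24 hours per job).
    COADD_SUBMIT = """\
    executable     = run_coadd.sh
    arguments      = $(tile_name)
    request_cpus   = 8
    request_memory = 64GB
    request_disk   = 2TB
    queue tile_name from tile_list.txt
    """

    with open("coadd.sub", "w") as f:
        f.write(COADD_SUBMIT)
    # coadd.sub can then be handed to the batch system (for example via
    # condor_submit) or, in practice, to the experiment's own submission wrappers.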

[Figure 2.jpeg]
Figure 2, Courtesy Nikolai Kuropatkin, Brian Nord, Dark Energy Survey

The second figure shows some 100 individual rectangular DES images (a superset of those in Figure 1), taken in blue, green and red filters, combined to construct a coadded image of the barred spiral galaxy NGC 1398, which lies within the DES footprint.  This was done using a pipeline and a high-capacity node similar to those now being developed for the DES coadd production campaign.

We are delighted to be able to use the configurable GPgrid resources for our DES production processing campaigns!

- Brian P. Yanny