Gehrke R., Kopmann A., Wintersberger E., Beckmann F.

in Synchrotron Radiation News, 28 (2015) 36-42. DOI:10.1080/08940886.2015.1013420


© Taylor & Francis. The Helmholtz Association is the largest scientific organization in Germany. It operates all major German research infrastructures involved in research with photons, neutrons, and ions: DESY in Hamburg; the Karlsruhe Institute of Technology (KIT); the Research Centre Jülich (FZJ); the Helmholtz Centres in Geesthacht (HZG), Berlin (HZB), and Dresden-Rossendorf (HZDR); and the GSI Centre for research with heavy ions in Darmstadt. All these centers face similar challenges related to dramatically increasing data rates and volumes, generated by ever more powerful radiation sources combined with larger and faster detectors. At the same time, each center has its own specific portfolio of long-standing technical expertise in areas such as data analysis, information technology, and hardware development. It was therefore natural to address these challenges in concert, and this was the main motivation for launching a joint project among the partners in 2010: the “High Data Rate Processing and Analysis Initiative (HDRI).” The initiative is organized into three basic work packages: “Data Management,” “Real-time Data Processing,” and “Data Analysis, Modelling, and Simulation.” The aim is to develop methods, hardware components, and software for data acquisition, real-time and offline analysis, documentation and archiving, and remote access to data. The solutions are ultimately meant to be integrated at the various experimental stations and thus have to be versatile and flexible enough to cope with the heterogeneous requirements of the different experiments. The ambition to create standard solutions makes it essential to collaborate closely with large international activities in the field of data handling, such as the European PaNdata project (see article in this issue), as well as with vendors of detectors and data-evaluation software and with the corresponding standardization bodies.

Shkarin R., Ametova E., Chilingaryan S., Dritschler T., Kopmann A., Mirone A., Shkarin A., Vogelgesang M., Tsapko S.

in Fundamenta Informaticae, 141 (2015) 245-258. DOI:10.3233/FI-2015-1274


© 2015 Fundamenta Informaticae 141. On-line monitoring of synchrotron 3D-imaging experiments requires very fast tomographic reconstruction. Direct Fourier methods (DFM) have the potential to be faster than standard filtered backprojection. We evaluated multiple DFMs using various interpolation techniques, compared the reconstruction quality, and studied the parallelization potential. A method using Direct Fourier Inversion (DFI) with sinc-based interpolation was selected and parallelized for execution on GPUs. Several optimization steps were applied to boost the performance. Finally, we evaluated the achieved performance on the latest generation of GPUs from NVIDIA and AMD. The results show that tomographic reconstruction with a throughput of more than 1.5 GB/s on a single GPU is possible.
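The pipeline behind Direct Fourier Inversion rests on the Fourier slice theorem: the 1D FFT of each parallel-beam projection is a radial line through the object's 2D Fourier transform, so reconstruction amounts to resampling those radial lines onto a Cartesian frequency grid and applying an inverse 2D FFT. The following minimal NumPy sketch illustrates the idea; it substitutes crude nearest-neighbour gridding for the sinc-based interpolation and GPU kernels of the paper, and all names are illustrative rather than taken from the published code.

```python
import numpy as np

def dfi_reconstruct(sinogram):
    """Direct Fourier Inversion of a parallel-beam sinogram of shape
    (n_angles, n_det), with angles assumed to cover [0, pi) uniformly.

    Nearest-neighbour gridding is used here for brevity; the paper uses
    sinc-based interpolation for better quality.
    """
    n_angles, n_det = sinogram.shape
    # Fourier slice theorem: the 1D FFT of each projection equals a
    # radial line through the object's 2D Fourier transform.
    proj_fft = np.fft.fftshift(
        np.fft.fft(np.fft.ifftshift(sinogram, axes=1), axis=1), axes=1)
    # Cartesian frequency grid (in cycles per pixel).
    freqs = np.fft.fftshift(np.fft.fftfreq(n_det))
    fx, fy = np.meshgrid(freqs, freqs)
    phi = np.arctan2(fy, fx)
    r = np.hypot(fx, fy)
    # Fold angles into [0, pi) and let the radius carry the sign.
    neg = phi < 0
    phi = np.where(neg, phi + np.pi, phi)
    r = np.where(neg, -r, r)
    # Nearest-neighbour lookup into the polar samples.
    ia = np.clip(np.round(phi / np.pi * n_angles).astype(int), 0, n_angles - 1)
    ir = np.clip(np.round(r * n_det).astype(int) + n_det // 2, 0, n_det - 1)
    spectrum = proj_fft[ia, ir]
    # Zero out frequencies outside the sampled disc.
    spectrum[np.hypot(fx, fy) > np.abs(freqs).max()] = 0.0
    # Inverse 2D FFT yields the reconstructed slice.
    return np.real(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(spectrum))))
```

Because the interpolation step is a pointwise lookup per output frequency, it parallelizes naturally, which is what makes DFI attractive for GPU implementations.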

Shkarin A., Ametova E., Chilingaryan S., Dritschler T., Kopmann A., Vogelgesang M., Shkarin R., Tsapko S.

in Fundamenta Informaticae, 141 (2015) 259-274. DOI:10.3233/FI-2015-1275


© 2015 Fundamenta Informaticae 141. Recent developments in detector technology have made possible 4D (3D + time) X-ray microtomography with high spatial and temporal resolution. The resolution and duration of such experiments are currently limited by destructive X-ray radiation. The algebraic reconstruction technique (ART) can incorporate a priori knowledge into the reconstruction model, enabling approaches that reduce the imaging dose while maintaining sufficient reconstruction quality. However, these techniques are very computationally demanding. In this paper we present a framework for ART reconstruction based on OpenCL. Our approach treats an algebraic method as a composition of interacting blocks that perform different tasks, such as projection selection, minimization, projecting, and regularization. These tasks are realised by multiple algorithms differing in performance, reconstruction quality, and area of applicability. Our framework allows algorithms to be freely combined to build the reconstruction chain. All algorithms are implemented in OpenCL and can run on a wide range of parallel hardware; the framework also scales to clustered environments via MPI. We describe the architecture of the ART framework and evaluate its quality and performance on the latest generation of GPU hardware from NVIDIA and AMD.
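The block-composition idea described in this abstract can be sketched in plain Python on a generic linear system: a projection-selection block, a projecting (Kaczmarz) step, and a regularization block are freely combined into one reconstruction chain. This is an illustrative sketch of the architecture under simplified assumptions, not the OpenCL implementation; the function names and interfaces are invented for illustration.

```python
import numpy as np

# Each block is an interchangeable function, mirroring the framework's
# "interacting blocks" (projection selection, projecting, regularization).

def sequential_selector(n_rows, iteration):
    """Projection-selection block: cycle through the equations in order."""
    return iteration % n_rows

def kaczmarz_step(A, b, x, i, relax=1.0):
    """Projecting block: Kaczmarz update, projecting x onto the
    hyperplane defined by equation i of A x = b."""
    a = A[i]
    return x + relax * (b[i] - a @ x) / (a @ a) * a

def nonneg_regularizer(x):
    """Regularization block: enforce non-negativity, a common prior
    for attenuation values in tomography."""
    return np.maximum(x, 0.0)

def art_reconstruct(A, b, n_iters=2000, selector=sequential_selector,
                    step=kaczmarz_step, regularize=nonneg_regularizer):
    """Reconstruction chain assembled from the blocks above; swapping
    any argument swaps the corresponding block."""
    x = np.zeros(A.shape[1])
    for k in range(n_iters):
        i = selector(A.shape[0], k)
        x = step(A, b, x, i)
        x = regularize(x)
    return x
```

Because each block is just a function with a fixed interface, alternative selectors (e.g. randomized), steps, or regularizers can be substituted without touching the loop, which is the composability the framework provides at OpenCL-kernel granularity.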