Rota L., Balzer M., Caselle M., Kudella S., Weber M., Mozzanica A., Hiller N., Nasse M.J., Niehues G., Schönfeldt P., Gerth C., Steffen B., Walther S., Makowski D., Mielczarek A.

in 2016 IEEE-NPSS Real Time Conference, RT 2016 (2016), 7543157. DOI:10.1109/RTC.2016.7543157

Abstract

© 2016 IEEE. We developed a fast linear array detector to improve the acquisition rate and the resolution of Electro-Optical Spectral Decoding (EOSD) experimental setups currently installed at several light sources. The system consists of a detector board, an FPGA readout board and a high-throughput data link. InGaAs or Si sensors are used to detect near-infrared (NIR) or visible light. The data acquisition, the operation of the detector board and its synchronization with synchrotron machines are handled by the FPGA. The readout architecture is based on a high-throughput PCI-Express data link. In this paper we describe the system and present preliminary measurements taken at the ANKA storage ring. A line rate of 2.7 Mlps (lines per second) has been demonstrated.
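The demonstrated line rate translates directly into the sustained data rate the PCI-Express readout must carry. A back-of-the-envelope sketch in Python, where the pixel count and sample width are illustrative assumptions not taken from the abstract:

```python
# Data-rate estimate for the linear array detector described above.
LINES_PER_SECOND = 2.7e6   # demonstrated line rate (from the abstract)
PIXELS_PER_LINE = 256      # assumption: typical linear-array size
BYTES_PER_PIXEL = 2        # assumption: 16-bit samples

rate = LINES_PER_SECOND * PIXELS_PER_LINE * BYTES_PER_PIXEL
print(f"Sustained data rate: {rate / 1e9:.2f} GB/s")  # ~1.38 GB/s
```

At these assumed settings the detector already produces more than 1 GB/s, which motivates the high-throughput PCI-Express readout described above.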

T. Baumbach, V. Altapova, D. Hänschke, T. dos Santos Rolo, A. Ershov, L. Helfen, T. van de Kamp, J.-T. Reszat, M. Weber, M. Caselle, M. Balzer, S. Chilingaryan, A. Kopmann, I. Dalinger, A. Myagotin, V. Asadchikov, A. Buzmakov, S. Tsapko, I. Tsapko, V. Vichugov, M. Sukhodoev, UFO collaboration

Final report, BMBF Programme: “Development and Use of Accelerator-Based Photon Sources”, 2016

Executive summary

Recent progress in X-ray optics, detector technology, and the tremendous increase in the processing speed of commodity computational architectures have given rise to a paradigm shift in synchrotron X-ray imaging. In order to explore these technologies within the two UFO projects, the UFO experimental station for ultra-fast X-ray imaging has been developed. Its key components (an intelligent detector system, vast computational power, and sophisticated algorithms) have been designed, optimized and integrated for best overall performance. New methods like 4D cine-tomography for in-vivo measurements have been established. This online assessment of sample dynamics not only made active image-based control possible, but also resulted in unprecedented image quality and greatly increased throughput. Typically 400-500 high-quality datasets with 3D images and image sequences are recorded with the UFO experimental station during a beam time of about 3-4 days.

A flexible and fully automated sample environment and a detector system for a set of up to three complementary cameras have been realized. It can be equipped with commercially available scientific visible-light cameras or a custom UFO camera. To support academic sensor development, a novel platform for scientific cameras, the UFO camera framework, has been developed. It is a unique rapid-prototyping environment for turning scientific image sensors into smart camera systems. All beamline components, the sample environment, the detector station and the computing infrastructure are seamlessly integrated into the high-level control system “Concert”, designed for online data evaluation and feedback control.

As a new element, computing nodes for online data assessment have been introduced in UFO. A powerful computing infrastructure based on GPUs and real-time storage has been developed. Optimized reconstruction algorithms reach a throughput of several GB/s on a single GPU server. For scalability, clusters are also supported. Highly optimized reconstruction and image processing algorithms are key to real-time monitoring and efficient data analysis. In order to manage these algorithms, the UFO parallel computing framework has been designed. It supports the implementation of efficient algorithms as well as the development of data processing workflows based on them. The library of optimized algorithms supports all modalities of operation at the UFO experimental station: tomography, laminography and diffraction imaging, as well as numerous pre- and post-processing steps.
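To make the workflow idea concrete, the following is a generic sketch of a streamed processing pipeline in Python, with stages chained as generators so frames flow through one at a time. It is not the UFO framework API; all stage names are illustrative placeholders.

```python
import numpy as np

def read_frames(n, shape=(64, 64)):
    """Stand-in for a camera or file source."""
    for _ in range(n):
        yield np.random.rand(*shape)

def flat_field_correct(frames, dark, flat):
    """Classic pre-processing step: remove detector offset and gain."""
    for f in frames:
        yield (f - dark) / np.maximum(flat - dark, 1e-6)

def running_mean(frames):
    """Toy online-assessment stage producing a result per frame."""
    acc, count = None, 0
    for f in frames:
        acc = f if acc is None else acc + f
        count += 1
        yield acc / count

dark, flat = np.zeros((64, 64)), np.ones((64, 64))
pipeline = running_mean(flat_field_correct(read_frames(10), dark, flat))
for result in pipeline:
    pass  # in a real system: display, store, or trigger feedback
```

In the UFO framework itself such stages are implemented as optimized GPU kernels; the chaining principle is the same.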

The results of the UFO project have been reported at several national and international workshops and conferences. The UFO project contributes developments such as the UFO camera framework and its GPU computing environment to other hardware and software projects in the synchrotron community (e.g. the Tango control system, the High Data Rate Processing and Analysis Initiative, the NeXus data format, and the Helmholtz Detector Technology and Systems Initiative DTS). Follow-up projects build on the UFO results and improve imaging methods (like STROBOS-CODE) or add sophisticated analysis environments (like ASTOR).

The UFO project has successfully developed key components for ultra-fast X-ray imaging and serves as an example for future data-intensive applications. It demonstrates KIT’s role as a technology center for novel synchrotron instrumentation.

Lösel P., Heuveline V.

in Progress in Biomedical Optics and Imaging – Proceedings of SPIE, 9784 (2016), 97842L. DOI:10.1117/12.2216202

Abstract

© 2016 SPIE. Inspired by the diffusion of a particle, we present a novel approach for performing a semiautomatic segmentation of tomographic images in 3D, 4D or higher dimensions to meet the requirements of high-throughput measurements in a synchrotron X-ray microtomograph. Given a small number of 2D slices with at least two manually labeled segments, one can either analytically determine the probability that an intelligently weighted random walk starting at one labeled pixel will be at a specific position in the dataset at a given time, or determine the probability approximately by performing several random walks. While the weights of a random walk take into account local information at the starting point, the random walk itself can be in any dimension. When a great number of random walks is started from each labeled pixel, every voxel in the dataset will be hit by several random walks over time. Hence, the image can be segmented by assigning each voxel to the label from which the random walks most likely started. Due to the high scalability of random walks, this approach is suitable for high-throughput measurements. Additionally, we describe an interactively adjusted slice-by-slice active contours method considering local information, where we start with one manually labeled slice and move forward in any direction. This approach is superior to the diffusion algorithm with respect to accuracy, but inferior in the amount of tedious manual processing steps it requires. The methods were applied to 3D and 4D datasets and evaluated by means of manually labeled images obtained in a realistic scenario with biologists.
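A minimal sketch of the Monte Carlo variant described in the abstract: random walks are started from each labeled seed pixel, per-label hit counters are accumulated, and every pixel is assigned to the label whose walks hit it most often. The intelligent weighting by local image content is omitted here for brevity, so this illustrates the mechanism rather than the authors’ implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def segment_by_random_walks(shape, seeds, n_walks=200, n_steps=500):
    """seeds maps label -> list of (row, col) seed pixels. 2D for brevity;
    as noted in the abstract, the walk generalizes to 3D, 4D and beyond."""
    labels = sorted(seeds)
    hits = np.zeros((len(labels),) + shape)
    moves = np.array([(0, 1), (0, -1), (1, 0), (-1, 0)])
    for li, label in enumerate(labels):
        for start in seeds[label]:
            for _ in range(n_walks):
                pos = np.array(start)
                for _ in range(n_steps):
                    pos = np.clip(pos + moves[rng.integers(4)], 0,
                                  np.array(shape) - 1)
                    hits[(li, *pos)] += 1  # count visits per label
    return hits.argmax(axis=0)  # per pixel: label hit most often

seg = segment_by_random_walks((32, 32), {0: [(5, 5)], 1: [(26, 26)]})
```

Because every walk is independent, the counting parallelizes trivially, which is the scalability property the abstract exploits for high-throughput measurements.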

Vogelgesang M., Faragó T., Morgeneyer T.F., Helfen L., Dos Santos Rolo T., Myagotin A., Baumbach T.

in Journal of Synchrotron Radiation, 23 (2016) 1254-1263. DOI:10.1107/S1600577516010195

Abstract

© 2016 International Union of Crystallography. Real-time processing of X-ray image data acquired at synchrotron radiation facilities allows for smart high-speed experiments. This includes workflows covering parameterized and image-based feedback-driven control up to the final storage of raw and processed data. Nevertheless, there is presently no system that supports the efficient construction of such experiment workflows in a scalable way. Here, an architecture based on a high-level control system that manages low-level data acquisition, data processing and device changes is described. This system is suitable for routine as well as prototypical experiments, and provides specialized building blocks to conduct four-dimensional in situ, in vivo and operando tomography and laminography.
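As a generic illustration of image-based feedback-driven control (not the Concert API; the camera and motor objects are hypothetical placeholders), consider a simple autofocus loop in which device changes, acquisition and processing alternate:

```python
import numpy as np

def sharpness(frame):
    """Simple gradient-magnitude focus metric."""
    gy, gx = np.gradient(frame.astype(float))
    return float((gx**2 + gy**2).mean())

def autofocus(camera, motor, positions):
    """Hypothetical devices: motor.move_to(pos) and camera.grab()."""
    best_pos, best_score = None, -np.inf
    for pos in positions:
        motor.move_to(pos)                # device change
        score = sharpness(camera.grab())  # acquisition + processing
        if score > best_score:
            best_pos, best_score = pos, score
    motor.move_to(best_pos)               # close the feedback loop
    return best_pos
```

A high-level control system as described above provides exactly these building blocks, so that such loops compose with low-level data acquisition and storage.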

Schultze, Felix

Master thesis, Faculty of Computer Science, Karlsruhe Institute of Technology, 2015.

Abstract

An ever-increasing number of large tomographic images is recorded at synchrotron facilities worldwide. Due to the drastic increase in data volumes, there is a recent trend to provide data analysis services at the facilities as well. The ASTOR project aims to realize a cloud-based infrastructure for remote data analysis and visualization of tomographic data. A key component is a web-based data browser to select datasets and request a virtual machine for analysis of this data. One of the challenges is to provide a fast preview not only of 3D volumes but also of 3D sequences. Since standard datasets exceed 10 gigabytes, standard visualization techniques are not feasible and new data reduction techniques have to be developed.
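One straightforward data reduction technique for such previews is block-mean downsampling of the volume before it is sent to the browser. A minimal NumPy sketch (the reduction factor is an arbitrary illustrative choice):

```python
import numpy as np

def downsample(volume, factor=4):
    """Reduce a 3D volume by block averaging for a fast preview.
    A factor of 4 shrinks the data volume by 4**3 = 64x."""
    z, y, x = (s - s % factor for s in volume.shape)
    v = volume[:z, :y, :x].reshape(z // factor, factor,
                                   y // factor, factor,
                                   x // factor, factor)
    return v.mean(axis=(1, 3, 5))

vol = np.random.rand(256, 256, 256).astype(np.float32)  # ~64 MiB
preview = downsample(vol)  # 64x smaller, suitable for web transfer
```

For 3D sequences the same reduction applies per time step; more elaborate schemes (multi-resolution pyramids, compression) build on the same principle.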

 

First assessor: Prof. Dr.-Ing. Carsten Dachsbacher
Second assessor: Dr. Suren Chilingaryan

Supervised by Dr. Andreas Kopmann

Shkarin A., Ametova E., Chilingaryan S., Dritschler T., Kopmann A., Vogelgesang M., Shkarin R., Tsapko S.

in Fundamenta Informaticae, 141 (2015) 259-274. DOI:10.3233/FI-2015-1275

Abstract

© 2015 Fundamenta Informaticae 141. The recent developments in detector technology have made possible 4D (3D + time) X-ray microtomography with high spatial and temporal resolution. The resolution and duration of such experiments are currently limited by destructive X-ray radiation. Algebraic reconstruction techniques (ART) can incorporate a priori knowledge into the reconstruction model, which allows approaches that reduce the imaging dose while keeping sufficient reconstruction quality. However, these techniques are very computationally demanding. In this paper we present a framework for ART reconstruction based on OpenCL technology. Our approach treats an algebraic method as a composition of interacting blocks which perform different tasks, such as projection selection, minimization, projecting and regularization. These tasks are realised using multiple algorithms differing in performance, reconstruction quality, and area of applicability. Our framework allows algorithms to be freely combined to build the reconstruction chain. All algorithms are implemented with OpenCL and are able to run on a wide range of parallel hardware. The framework also scales to clustered environments with MPI. We describe the architecture of the ART framework and evaluate its quality and performance on the latest generation of GPU hardware from NVIDIA and AMD.
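A minimal dense-matrix sketch of the block structure named in the abstract (projection selection, minimization, regularization), here as a Kaczmarz-type ART iteration. The actual framework implements these blocks as interchangeable OpenCL components; dense matrices are used below purely for clarity:

```python
import numpy as np

def art(A, b, n_iters=10, relax=0.5):
    """x: flattened image, A: system matrix, b: measured projections."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        for i in np.random.permutation(A.shape[0]):  # projection selection
            row = A[i]
            resid = b[i] - row @ x
            # minimization block: relaxed Kaczmarz update
            x += relax * resid * row / (row @ row + 1e-12)
        x = np.maximum(x, 0)  # regularization block: nonnegativity prior
    return x
```

Swapping the selection order, the update rule or the regularizer changes the dose/quality trade-off without touching the rest of the chain, which is the composability the framework is built around.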

Shkarin R., Ametova E., Chilingaryan S., Dritschler T., Kopmann A., Mirone A., Shkarin A., Vogelgesang M., Tsapko S.

in Fundamenta Informaticae, 141 (2015) 245-258. DOI:10.3233/FI-2015-1274

Abstract

© 2015 Fundamenta Informaticae 141. On-line monitoring of synchrotron 3D-imaging experiments requires very fast tomographic reconstruction. Direct Fourier methods (DFM) have the potential to be faster than standard filtered backprojection. We have evaluated multiple DFMs using various interpolation techniques. We compared reconstruction quality and studied the parallelization potential. A method using Direct Fourier Inversion (DFI) and a sinc-based interpolation was selected and parallelized for execution on GPUs. Several optimization steps were considered to boost the performance. Finally, we evaluated the achieved performance for the latest generation of GPUs from NVIDIA and AMD. The results show that tomographic reconstruction with a throughput of more than 1.5 GB/s on a single GPU is possible.
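DFI rests on the Fourier slice theorem: the 1D FFT of a parallel-beam projection equals a radial slice of the object’s 2D Fourier transform, so reconstruction reduces to resampling these polar slices onto a Cartesian grid followed by a single inverse 2D FFT. A minimal NumPy sketch with nearest-neighbor gridding; the sinc-based interpolation selected in the paper replaces exactly this gridding step and is the quality-critical part:

```python
import numpy as np

def dfi(sinogram):
    """sinogram: (n_angles, n_det) parallel projections over 180 degrees."""
    n_angles, n_det = sinogram.shape
    # 1D FFT of each projection = radial slice of the 2D spectrum
    slices = np.fft.fftshift(np.fft.fft(
        np.fft.ifftshift(sinogram, axes=1), axis=1), axes=1)
    freq = np.arange(n_det) - n_det // 2
    grid = np.zeros((n_det, n_det), dtype=complex)
    for k, theta in enumerate(np.linspace(0, np.pi, n_angles,
                                          endpoint=False)):
        u = np.rint(freq * np.cos(theta)).astype(int) + n_det // 2
        v = np.rint(freq * np.sin(theta)).astype(int) + n_det // 2
        ok = (u >= 0) & (u < n_det) & (v >= 0) & (v < n_det)
        grid[v[ok], u[ok]] = slices[k, ok]  # nearest-neighbor gridding
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(grid))).real
```

The speed advantage over filtered backprojection comes from replacing the O(n^3) backprojection with FFTs and an O(n^2) gridding pass.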

Lewkowicz, Alexander

Internship report, Institute for Data Processing and Electronics, Karlsruhe Institute of Technology, 2014.

Abstract

High-speed tracking of fluorescent nanoparticles enables scientists to study the drying process of fluids. A better understanding of this drying process will help develop new techniques to obtain homogeneous surfaces. Images are recorded via CMOS cameras to observe the particle flow. The challenge is to recover a particle's third coordinate from a 2D image. Depending on the distance to the objective lens of the microscope, rings of different radii appear in the images. By detecting the rings' radii and centers, both the velocity and the 3D trajectory of each particle can be established. To achieve almost real-time particle tracking, highly parallel systems, such as GPUs, are used.
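The ring-detection step can be illustrated with OpenCV's Hough circle transform. This is a sketch, not the report's implementation; the linear radius-to-depth calibration is a hypothetical stand-in for the actual optics-dependent relation:

```python
import cv2

def detect_rings(frame, depth_per_pixel=0.1):
    """frame: 8-bit grayscale image. Returns (x, y, z) per particle,
    where z is derived from the ring radius via a made-up calibration."""
    blurred = cv2.GaussianBlur(frame, (5, 5), 1.5)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1,
                               minDist=10, param1=100, param2=30,
                               minRadius=2, maxRadius=50)
    if circles is None:
        return []
    return [(x, y, r * depth_per_pixel) for x, y, r in circles[0]]
```

On a GPU the same structure applies, with the blur and the Hough accumulator being the naturally parallel steps.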

Supervised by Dr. Suren Chilingaryan

Vogelgesang, Matthias

PhD thesis, Faculty of Computer Science, Karlsruhe Institute of Technology, 2014.

Abstract

Moore’s law remains the driving force behind higher chip integration density and an ever-increasing number of transistors. However, the adoption of massively parallel hardware architectures widens the gap between the potentially available microprocessor performance and the performance a developer can actually exploit. This thesis tries to close this gap by solving the problems that arise from the challenges of achieving optimal performance on parallel compute systems, allowing developers and end-users to use this compute performance in a transparent manner, and using the compute performance to enable data-driven processes.

A general solution cannot realistically achieve optimal operation, which is why this thesis focuses on streamed data processing. Data streams lend themselves to describing high-throughput data processing tasks such as audio and video processing. With this specific data stream use case, we can systematically improve the existing designs and optimize the execution from instruction-level parallelism up to node-level task parallelism. In particular, we focus on X-ray imaging applications used at synchrotron light sources. These large-scale facilities provide an X-ray beam that enables scanning samples at much higher spatial and temporal resolution compared to conventional X-ray sources. The increased data rate inevitably requires highly parallel processing systems as well as an optimized data acquisition and control environment.

To solve the problem of high-throughput streamed data processing, we developed, modeled and evaluated system architectures to acquire and process data streams on parallel and heterogeneous compute systems. We developed a method to map general task descriptions onto heterogeneous compute systems and execute them with optimizations for local machines and for clusters of multi-user compute nodes. We also proposed a source-to-source translation system to simplify the development of task descriptions.
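The mapping idea can be illustrated with a toy greedy scheduler that assigns each task to the device with the earliest estimated finish time. Device names and cost figures are invented for illustration; the thesis's actual mapping method is considerably more elaborate:

```python
def map_tasks(tasks, devices):
    """tasks: list of {device: estimated_cost} dicts, in execution order.
    Returns the chosen device per task."""
    free_at = {d: 0.0 for d in devices}
    assignment = []
    for costs in tasks:
        # pick the device that finishes this task earliest
        best = min(devices, key=lambda d: free_at[d] + costs[d])
        free_at[best] += costs[best]
        assignment.append(best)
    return assignment

tasks = [{"cpu": 4.0, "gpu": 1.0},  # e.g. filtering: much faster on GPU
         {"cpu": 1.0, "gpu": 2.0},  # e.g. I/O-bound step: keep on CPU
         {"cpu": 6.0, "gpu": 1.5}]
print(map_tasks(tasks, ["cpu", "gpu"]))  # -> ['gpu', 'cpu', 'gpu']
```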

We have shown that it is possible to acquire and compute tomographic reconstructions on a heterogeneous compute system consisting of CPUs and GPUs in soft real-time. The end-user’s only responsibility is to describe the problem correctly. With the proposed system architectures, we paved the way for novel in-situ and in-vivo experiments and a much smarter experiment setup in general. Where existing experiments depend on a static environment and process sequence, we established the possibility of controlling the experiment setup in a closed feedback loop.

 

First assessor: Prof. Dr. Achim Streit
Second assessor: Prof. Dr. Marc Weber

Van De Kamp T., Dos Santos Rolo T., Vagovic P., Baumbach T., Riedel A.

in PLoS ONE, 9 (2014), e102355. DOI:10.1371/journal.pone.0102355

Abstract

Digital surface mesh models based on segmented datasets have become an integral part of studies on animal anatomy and functional morphology; usually, they are published as static images, movies or as interactive PDF files. We demonstrate the use of animated 3D models embedded in PDF documents, which combine the advantages of both movie and interactivity, based on the example of preserved Trigonopterus weevils. The method is particularly suitable to simulate joints with largely deterministic movements due to precise form closure. We illustrate the function of an individual screw-and-nut type hip joint and proceed to the complex movements of the entire insect attaining a defence position. This posture is achieved by a specific cascade of movements: Head and legs interlock mutually and with specific features of thorax and the first abdominal ventrite, presumably to increase the mechanical stability of the beetle and to maintain the defence position with minimal muscle activity. The deterministic interaction of accurately fitting body parts follows a defined sequence, which resembles a piece of engineering. © 2014 van de Kamp et al.