Selected papers on our projects.
Tan Jerome, Nicholas
PhD thesis, Faculty of Electrical Engineering and Information Technology, Karlsruhe Institute of Technology, 2019.
Exploring large and complex data sets is a crucial part of a digital library framework. To find a specific data set within a large repository, visualisation can help validate the content beyond its textual description. However, even with existing visual tools, the size and heterogeneity of large-scale data impede integrating visualisation into the digital library framework, thus hindering effective large-scale data exploration.
The scope of this research is managing Big Data and ultimately visualising the core information of the data itself. Specifically, I study three large-scale experiments that feature two Big Data challenges, large data size (Volume) and heterogeneous data (Variety), and provide the final visualisation through the web browser, where the size of the input data has to be reduced while preserving the vital information. Despite the intimidating size (approximately 30 GB) and complexity of the data (about 100 parameters per timestamp), I demonstrate how to provide a comprehensive overview of each data set at an interactive rate, with a system response time below 1 s: visualising gigabytes of data, and visualising multifaceted data in a single representation. For better data sharing, I selected a web-based system, which serves as a ubiquitous platform for the domain experts. As it is a collaborative tool, I also address the shortcomings related to limited bandwidth, latency, and heterogeneous client hardware.
In this thesis, I present a design of web-based Big Data visualisation systems based on the data state reference model. I also develop frameworks that can process and output multi-dimensional data sets. For each Big Data feature, I propose a standard design guideline that helps domain experts build their data visualisation. I introduce the use of texture-based images as the primary data object, where the images are loaded into the texture memory of the client's GPU for final visualisation. The visualisation ensures high interactivity since the data resides in the client's memory. In particular, the interactivity of the system enables domain experts to narrow their search or analysis using a top-down methodological approach. I also provide four case studies to examine the feasibility of the proposed design concepts: (1) multi-spectral imagery analysis, (2) Doppler wind lidar, (3) ultrasound computer tomography, and (4) X-ray computer tomography. These case studies illustrate the challenges of dealing with Big Data, such as large data size or dispersed data sets.
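As an illustration of the texture-based data object described above, the following sketch (hypothetical code, not taken from the thesis) packs a 2D floating-point measurement array into an 8-bit RGBA buffer of the kind a browser would upload into GPU texture memory:

```python
import numpy as np

def pack_to_rgba_texture(data: np.ndarray) -> np.ndarray:
    """Normalise a 2D float array to [0, 255] and pack it into an
    RGBA uint8 buffer (value in the R channel, full opacity in A)."""
    lo, hi = float(data.min()), float(data.max())
    norm = (data - lo) / (hi - lo) if hi > lo else np.zeros_like(data)
    gray = np.round(norm * 255).astype(np.uint8)
    rgba = np.zeros((*data.shape, 4), dtype=np.uint8)
    rgba[..., 0] = gray   # value channel
    rgba[..., 3] = 255    # alpha: fully opaque
    return rgba

field = np.random.default_rng(0).normal(size=(64, 64))
tex = pack_to_rgba_texture(field)
print(tex.shape, tex.dtype)
```

Once such a buffer resides in the client's texture memory, panning, zooming, and colour-map changes can run entirely on the GPU without further server round trips, which is what makes the interactivity independent of bandwidth.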
To this end, this dissertation contributes to a better understanding of web-based Big Data visualisation through the proposed design guideline. I show that domain experts appreciate the WAVE, BORA, and 3D optimal viewpoint finder frameworks as tools to understand and explore their data sets. Above all, the frameworks help them build and customise their visualisation systems. Although specific customisation is necessary for each application, the effort is worthwhile, and it helps domain experts better understand their vast amounts of data. The BORA framework fits any time-series data repository and requires no programming knowledge. The WAVE framework serves as a web-based data exploration system. The 3D optimal viewpoint finder framework helps to generate 2D images from 3D data, where the 2D image renders the 3D scene from the optimal view angle. To cope with increasing data rates, a general hierarchical organisation of data is necessary to extract valuable information from data sets.
First assessor: Prof. Dr. M. Weber
Second assessor: Prof. Dr. W. Nahm
van de Kamp T., Schwermann A.H., dos Santos Rolo T., Lösel P.D., Engler T., Etter W., Faragó T., Göttlicher J., Heuveline V., Kopmann A., Mähler B., Mörs T., Odar J., Rust J., Tan Jerome N., Vogelgesang M., Baumbach T., Krogmann L.
in Nature Communications, 9 (2018), 3325. DOI:10.1038/s41467-018-05654-y
© 2018, The Author(s). About 50% of all animal species are considered parasites. The linkage of species diversity to a parasitic lifestyle is especially evident in the insect order Hymenoptera. However, fossil evidence for host–parasitoid interactions is extremely rare, rendering hypotheses on the evolution of parasitism assumptive. Here, using high-throughput synchrotron X-ray microtomography, we examine 1510 phosphatized fly pupae from the Paleogene of France and identify 55 parasitation events by four wasp species, providing morphological and ecological data. All species developed as solitary endoparasitoids inside their hosts and exhibit different morphological adaptations for exploiting the same hosts in one habitat. Our results allow systematic and ecological placement of four distinct endoparasitoids in the Paleogene and highlight the need to investigate ecological data preserved in the fossil record.
Cavadini P., Weinhold H., Tonsmann M., Chilingaryan S., Kopmann A., Lewkowicz A., Miao C., Scharfer P., Schabel W.
in Experiments in Fluids, 59 (2018), 61. DOI:10.1007/s00348-017-2482-z
© 2018, Springer-Verlag GmbH Germany, part of Springer Nature. To understand the effects of inhomogeneous drying on the quality of polymer coatings, an experimental setup has been developed to resolve the flow field occurring throughout the drying film. Deconvolution microscopy is used to analyze the flow field in 3D and time. Since the resolvable depth along the line of sight is limited compared to the lateral components, a multi-focal approach is used. Here, the beam of light is distributed equally onto up to five cameras using cubic beam splitters. Adding a meniscus lens between each pair of camera and beam splitter, and setting different distances between each camera and its meniscus lens, creates multi-focality and allows one to increase the depth of the observed volume. Resolving the spatial component along the line of sight is based on analyzing the point spread function (PSF). The analysis of the PSF is computationally expensive and introduces high complexity compared to traditional particle image velocimetry approaches. A new algorithm tailored to the parallel computing architecture of recent graphics processing units has been developed. The algorithm is able to process typical images in less than a second and has further potential to enable online analysis in the future. As a proof of principle, the flow fields occurring in thin polymer solutions drying at ambient conditions, and at boundary conditions that force inhomogeneous drying, are presented.
Danilewsky A.N., Becker J., Baumbach T., Hänschke D., Kopmann A., Asadchikov V., Kovalchuk M.
Final report, BMBF Programme: “Development and Use of Accelerator-Based Photon Sources (2014)”
Project duration: 01.10.2014 – 30.09.2017
Within the STROBOS-CODE project, partners from two German (KIT, University Freiburg (UFREI)) and two Russian (Shubnikov Institute of Crystallography (SHUB), Kurchatov Institute (KUR)) institutions developed and optimized a novel methodology for correlative 2D, 3D, and 4D characterization of crystalline materials, based on X-ray diffraction imaging. In short, the joint work comprised the theoretical description of the measurement principles, the derivation of the measurement procedures, the specification, design, and construction of the corresponding instrumentation, the formulation and implementation of the data analysis algorithms, as well as the experimental demonstration of the methodology.
The driving application within the project is the in situ investigation of crystals and devices, aiming for a fundamental understanding of structure, nucleation, arrangement, propagation, and extension of defects like dislocations or cracks. In this context, the results of the STROBOS-CODE project will open new perspectives to improve prediction, control, and avoidance of critical defects during the industrial growth and processing of technologically relevant crystals, in particular semiconductor wafers, e.g. for microelectronic devices or solar cells. For several selected use cases, the methodology developed within the STROBOS-CODE project has already been successfully demonstrated.
The core element of the methodical development is X-ray imaging based on Bragg diffraction contrast, which is highly sensitive to local elastic and plastic deformation of crystal lattices as typically associated with crystal defects. Based on the results of the preceding UFO project and on the prior development of the basic 3D X-ray diffraction laminography (XDL), within STROBOS-CODE an advanced methodology has been made available for correlative characterization and with in situ capabilities. The correlative analysis of data obtained by complementary techniques like X-ray white-beam topography or visible light microscopy now allows creating an unprecedented comprehensive picture of crystalline defects like dislocation networks.
The laminographic 3D reconstruction algorithms have been adapted and optimized to the specific requirements of X-ray Bragg diffraction contrast imaging. Aiming for in situ capabilities, particular attention was paid to reducing the number of projections required for XDL reconstruction, which was successfully brought down from about 700 to 50-100 by the appropriate utilization of DART-based algorithms. This progress in data processing resulted in a substantial reduction of the measurement time, for the first time enabling quasi in situ characterization of dislocation dynamics with 3D XDL scans interleaved with a step-wise thermal treatment of the investigated samples.
A concept for a mobile instrument suited for general-purpose full-field X-ray diffraction imaging, referred to as the CODE station, has been worked out. A first prototype was successfully realized, served as a test instrument, and enabled first experiments; the results were used to further improve the methodology. Despite the constraints imposed by the compact and lightweight design, an angular precision and stability of all critical elements better than 1.5 × 10⁻⁴ degrees was successfully demonstrated. The components of the final instrumentation have been specified and ordered, and the final assembly will be performed by UFREI during its extended project run-time until 2019. Afterwards, the highly flexible and mobile CODE instrumentation will be available for routine experiments at all suitable synchrotron end stations, e.g. at PETRA III or the ESRF.
For the STROBOS instrumentation in Moscow, a modular camera system has been developed, constructed, implemented, and tested in cooperation with the Russian partners. It is designed to enable continuous data streaming at up to 5 GB/s. Depending on the application, different image sensors can be installed: sensors with 2, 4, and 20 megapixels are presently available, with a read-out speed of up to 330 frames/s. A sub-zero cooling system has been developed, and the camera has been mechanically and electrically integrated for use within the STROBOS setup.
Within the STROBOS-CODE project, several experiments successfully demonstrated the unique capabilities of the proposed instrumentation concept as well as of the developed methodology for correlative and quasi in situ characterization of crystal defects in technologically relevant samples such as semiconductor wafers. An unprecedented, comprehensive picture of the onset of thermally driven plastic deformation of silicon wafers could be obtained, providing novel insight into the mechanisms involved. Moreover, for the first time the dynamics of dislocation nucleation and evolution could be monitored in 3D by interleaving XDL measurements and controlled step-wise annealing. Finally, the applicability of the developed diffraction imaging methodology to more strongly absorbing materials could be demonstrated by the 3D visualization of dislocation cell structures in a GaAs wafer.
The results of the STROBOS-CODE project have been reported at several national and international workshops and conferences and in more than 10 peer-reviewed publications. In close collaboration, the German and Russian partners have advanced the field of photon science and the methodological progress for large-scale facilities. The methodology and instrumentation developed improve the research infrastructure at the Kurchatov Institute (STROBOS) as well as at KIT and other synchrotron facilities (CODE). The consortium created for the STROBOS-CODE project will exist beyond the project duration and will extend the cooperation and partnership of Russian and German institutions, in particular in the field of X-ray analytics, algorithms, image processing, and the characterization of crystalline materials and components. In Germany, STROBOS-CODE supports the research partnership of KIT and UFREI within the joint virtual institute “BIRD” and intensifies and strengthens their close collaboration in the fields of materials science and microsystem technology.
Asadchikov V., Buzmakov A., Chukhovskii F., Dyachkova I., Zolotov D., Danilewsky A., Baumbach T., Bode S., Haaga S., Hänschke D., Kabukcuoglu M., Balzer M., Caselle M., Suvorov E.
in Journal of Applied Crystallography (2018). DOI:10.1107/S160057671801419X
© International Union of Crystallography, 2018. This article describes complete characterization of the polygonal dislocation half-loops (PDHLs) introduced by scratching and subsequent bending of an Si(111) crystal. The study is based on the X-ray topo-tomography technique using both a conventional laboratory setup and the high-resolution X-ray image-detecting systems at the synchrotron facilities at KIT (Germany) and ESRF (France). Numerical analysis of PDHL images is performed using the Takagi–Taupin equations and the simultaneous algebraic reconstruction technique (SART) tomographic algorithm.
Hänschke D., Danilewsky A., Helfen L., Hamann E., Baumbach T.
in Physical Review Letters, 119 (2017), 215504. DOI:10.1103/PhysRevLett.119.215504
© 2017 American Physical Society. Correlated x-ray diffraction imaging and light microscopy provide a conclusive picture of three-dimensional dislocation arrangements on the micrometer scale. The characterization includes bulk crystallographic properties like Burgers vectors and determines links to structural features at the surface. Based on this approach, we study here the thermally induced slip-band formation at prior mechanical damage in Si wafers. Mobilization and multiplication of preexisting dislocations are identified as dominating mechanisms, and undisturbed long-range emission from regenerative sources is discovered.
Schmelzle S., Heethoff M., Heuveline V., Lösel P., Becker J., Beckmann F., Schluenzen F., Hammel J.U., Kopmann A., Mexner W., Vogelgesang M., Jerome N.T., Betz O., Beutel R., Wipfler B., Blanke A., Harzsch S., Hornig M., Baumbach T., van de Kamp T.
in Proceedings of SPIE – The International Society for Optical Engineering, 10391 (2017), 103910P. DOI:10.1117/12.2275959
© 2017 SPIE. Beamtime and the resulting SRμCT data are a valuable resource for researchers of a broad scientific community in the life sciences. Most research groups, however, are only interested in a specific organ and use only a fraction of their data; the rest usually remains untapped. Using a new collaborative approach, the NOVA project (Network for Online Visualization and synergistic Analysis of tomographic data) aims to demonstrate that more efficient use of the valuable beam time is possible through coordinated research on different organ systems. The biological partners in the project cover different scientific aspects and thus serve as a model community for the collaborative approach. As a proof of principle, different aspects of insect head morphology will be investigated (e.g., the biomechanics of the mouthparts, and the topology of sensory areas in neurobiology). This effort is accompanied by the development of advanced analysis tools for the ever-increasing quantity of tomographic data sets. In the preceding ASTOR project, we already demonstrated considerable progress in semi-automatic segmentation and classification of internal structures. Further improvement of these methods is essential for an efficient use of beam time and will be pursued in the current NOVA project. Significant enhancements are also planned at PETRA III beamline P05 to provide all possible contrast modalities in X-ray imaging optimized for biological samples, as well as to the reconstruction algorithms and the tools for subsequent analysis and management of the data. All improvements made to key technologies within this project will in the long term be equally beneficial for all users of tomography instrumentation.
PhD thesis, Faculty of Electrical Engineering and Information Technology, Karlsruhe Institute of Technology, 2017.
In modern particle accelerators, precise control of the particle beam is essential for the correct operation of the facility. The experimental observation of the beam behavior relies on dedicated techniques, often described by the term “beam diagnostics”. Cutting-edge beam diagnostics systems, in particular several experimental setups currently installed at KIT's synchrotron light source ANKA, employ line scan detectors to characterize and monitor the beam parameters precisely. Up to now, the experimental resolution of these setups has been constrained by the line rate of existing detectors, which is limited to a few hundred kHz.
This thesis addresses this limitation with the development of a novel line scan detector system named KALYPSO – KArlsruhe Linear arraY detector for MHz rePetition-rate SpectrOscopy. The goal is to provide scientists at ANKA with a complete detector system that enables real-time measurements at MHz repetition rates. The design of both front-end and back-end electronics suitable for beam diagnostics experiments is a challenging task, because the detector must achieve low-noise performance at high repetition rates and with a large number of channels. Moreover, the detector system must sustain continuous data taking and introduce low latency. To meet these stringent requirements, several novel components have been developed by the author of this thesis, such as a novel readout ASIC and a high-performance DAQ system.
The front-end ASIC has been designed to read out different types of microstrip sensors for the detection of visible and near-infrared light. The ASIC is composed of 128 analog channels operated in parallel, plus additional mixed-signal stages that interface with external devices. Each channel consists of a Charge Sensitive Amplifier (CSA), a Correlated Double Sampling (CDS) stage, and a channel buffer. Moreover, a high-speed output driver has been implemented to directly interface with an off-chip ADC. The first version of the ASIC, with a reduced number of channels, has been produced in a 110 nm CMOS technology. The chip is fully functional and achieves a line rate of 12 MHz with an equivalent noise charge of 417 electrons when connected to a detector capacitance of 1.3 pF.
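The principle of the CDS stage can be illustrated with a small numerical model (a schematic sketch with made-up noise figures, not the actual ASIC behaviour): each conversion takes a sample after reset and a sample after integration, and subtracting the pair cancels the offset common to both samples while the small uncorrelated readout noise adds in quadrature.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
signal = 100.0                        # true integrated charge (arbitrary units)
reset_offset = rng.normal(0, 50, n)   # large offset, correlated within each read pair
white = lambda: rng.normal(0, 2, n)   # small uncorrelated readout noise

reset_sample = reset_offset + white()
value_sample = reset_offset + signal + white()

raw_std = value_sample.std()          # dominated by the reset offset
cds = value_sample - reset_sample     # offset cancels pairwise
cds_std = cds.std()

print(f"raw std = {raw_std:.1f}, CDS std = {cds_std:.1f}")
```

The subtraction suppresses the correlated offset entirely, leaving only roughly √2 times the white-noise floor, which is why CDS is the standard front-end technique for low-noise, high-rate readout.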
Moreover, a dedicated DAQ system has been developed to directly connect FPGA readout cards and GPU computing nodes. The data transfer is handled by a novel DMA engine implemented on the FPGA. The performance of the DMA engine compares favorably with the current state of the art, achieving a throughput of more than 7 GB/s and latencies as low as 2 μs. The high-throughput and low-latency performance of the DAQ system enables real-time data processing on GPUs, as demonstrated by extensive measurements. The DAQ system is currently integrated with KALYPSO and with other detector systems developed at the Institute for Data Processing and Electronics (IPE).
In parallel with the development of the ASIC, a first version of the KALYPSO detector system has been produced. This version is based on a Si or InGaAs microstrip sensor with 256 channels and on the GOTTHARD chip. A line rate of 2.7 MHz has been achieved, and experimental measurements have established KALYPSO as a powerful line scan detector operating at high line rates. The final version of the KALYPSO detector system, which will achieve a line rate of 10 MHz, is anticipated for early 2018.
Finally, KALYPSO has been installed at two different experimental setups at ANKA during several commissioning campaigns. The KALYPSO detector system allowed scientists to observe the beam behavior with unprecedented experimental resolution. First exciting and widely recognized scientific results were obtained at ANKA and at the European XFEL, demonstrating the benefits brought by the KALYPSO detector system in modern beam diagnostics.
First assessor: Prof. Dr. M. Weber
Second assessor: Prof. Dr.-Ing. Dr. h.c. J. Becker
Mohr H., Dritschler T., Ardila L.E., Balzer M., Caselle M., Chilingaryan S., Kopmann A., Rota L., Schuh T., Vogelgesang M., Weber M.
in Journal of Instrumentation, 12 (2017), C04019. DOI:10.1088/1748-0221/12/04/C04019
© 2017 IOP Publishing Ltd and Sissa Medialab srl. In this work, we investigate the use of GPUs as a way of realizing a low-latency, high-throughput track trigger, using CMS as a showcase example. The CMS detector at the Large Hadron Collider (LHC) will undergo a major upgrade after the long shutdown from 2024 to 2026, when it will enter the high-luminosity era. During this upgrade, the silicon tracker will have to be completely replaced. In the High Luminosity operation mode, luminosities of 5–7 × 10³⁴ cm⁻²s⁻¹ and a pileup averaging 140 events, with a maximum of up to 200 events, will be reached. These changes will require a major update of the triggering system. The systems demonstrated so far rely on dedicated hardware such as associative-memory ASICs and FPGAs. We investigate the use of GPUs as an alternative way of meeting the requirements of the L1 track trigger. To this end, we implemented a Hough transformation track finding step on GPUs and established a low-latency RDMA connection using the PCIe bus. To showcase the benefits of floating-point operations made possible by the use of GPUs, we present a modified algorithm. It uses hexagonal bins for the parameter space and leads to a more truthful representation of the possible track parameters of the individual hits in Hough space. This leads to fewer duplicate candidates and reduces fake track candidates compared to the regular approach. With data-transfer latencies of 2 μs and processing times for the Hough transformation as low as 3.6 μs, we can show that latencies are not as critical as expected. However, computing throughput proves to be challenging due to hardware limitations.
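To illustrate the Hough-transformation track finding step, the following sketch implements the standard rectangular-bin (θ, ρ) line transform on a handful of hits; the hexagonal binning and the CMS track parametrization used in the paper are not reproduced here, and all bin counts are illustrative.

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=200, rho_max=10.0):
    """Vote each hit into a (theta, rho) accumulator; the peak bin
    identifies the line x*cos(theta) + y*sin(theta) = rho."""
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_rho), dtype=np.int32)
    for x, y in points:
        # one sinusoid per hit: the rho this hit implies for every theta
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        bins = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        ok = (bins >= 0) & (bins < n_rho)
        acc[np.arange(n_theta)[ok], bins[ok]] += 1
    it, ir = np.unravel_index(np.argmax(acc), acc.shape)
    return thetas[it], ir / (n_rho - 1) * 2 * rho_max - rho_max

# 20 hits on the line y = x + 1, i.e. theta = 3*pi/4, rho = 1/sqrt(2)
hits = [(x, x + 1.0) for x in np.linspace(-3.0, 3.0, 20)]
theta, rho = hough_lines(hits)
print(theta, rho)
```

On a GPU, the per-hit voting loop parallelizes naturally (one thread per hit and theta bin, with atomic increments into the shared accumulator), which is what makes the microsecond-scale processing times reported above plausible.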
PhD thesis, Faculty of Computer Science, Karlsruhe Institute of Technology, 2017.
X-ray imaging experiments shed light on internal material structures. The success of an experiment depends on properly selected experimental conditions, the mechanics, and the behavior of the sample or process under study. Up to now, there has been no autonomous data acquisition scheme that would enable us to conduct a broad range of X-ray imaging experiments driven by image-based feedback. This thesis aims to close this gap by solving problems related to the selection of experimental parameters, fast data processing, and automatic feedback to the experiment based on image metrics applied to the processed data.
In order to determine the best initial experimental conditions, we study the X-ray image formation principles and develop a framework for their simulation. It enables us to conduct a broad range of X-ray imaging experiments by taking into account many physical principles of the full light path from the X-ray source to the detector. Moreover, we focus on various sample geometry models and motion, which allows simulations of experiments such as 4D time-resolved tomography.
We further develop an autonomous data acquisition scheme which is able to fine-tune the initial conditions and control the experiment based on fast image analysis. We focus on high-speed experiments which require significant data processing speed, especially when the control is based on compute-intensive algorithms. We employ a highly parallelized framework to implement an efficient 3D reconstruction algorithm whose output is plugged into various image metrics which provide information about the acquired data. Such metrics are connected to a decision-making scheme which controls the data acquisition hardware in a closed loop.
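The closed loop described above can be sketched schematically (hypothetical metric and controller, not the thesis implementation): an image metric is computed on each processed frame and fed back to adjust an acquisition parameter such as the camera frame rate.

```python
import numpy as np

def sharpness(img):
    """Simple image metric: mean gradient magnitude of the frame."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def control_step(frame, frame_rate, target, gain=0.5, lo=100.0, hi=10_000.0):
    """One closed-loop iteration: raise the frame rate when the metric
    exceeds the target (fast dynamics), lower it when it falls below."""
    m = sharpness(frame)
    frame_rate *= 1 + gain * (m - target) / target
    return float(np.clip(frame_rate, lo, hi)), m

rng = np.random.default_rng(1)
rate = 1000.0
flat = rng.normal(0, 0.01, (64, 64))             # low-contrast frame
striped = np.zeros((64, 64)); striped[::4] = 1.0  # high-gradient frame
rate_lo, _ = control_step(flat, rate, target=0.1)
rate_hi, _ = control_step(striped, rate, target=0.1)
print(rate_lo, rate_hi)
```

In the real system the metric would come from the GPU reconstruction pipeline and the actuator would be the camera or motor hardware; the sketch only shows the feedback structure.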
We demonstrate the accuracy of the simulation framework by comparing virtual and real grating interferometry experiments. We further examine the impact of imaging conditions on the accuracy of the filtered back projection algorithm and how it can guide the optimization of experimental conditions. We also show how simulation together with ground truth can help to choose data processing parameters for motion estimation in a high-speed experiment.
We demonstrate the autonomous data acquisition system on an in-situ tomographic experiment, where it optimizes the camera frame rate based on tomographic reconstruction. We also use our system to conduct a high-throughput tomography experiment, where it scans many similar biological samples, finds the tomographic rotation axis for every sample and reconstructs a full 3D volume on-the-fly for quality assurance. Furthermore, we conduct an in-situ laminography experiment studying crack formation in a material. Our system performs the data acquisition and reconstructs a central slice of the sample to check its alignment and data quality.
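One common way to estimate the tomographic rotation axis, shown here purely as an illustration (the thesis does not necessarily use this exact method), exploits the fact that the 180° projection is the mirror image of the 0° projection about the axis.

```python
import numpy as np

def find_rotation_axis(p0, p180):
    """Estimate the rotation axis position (in pixels) from two opposing
    projections: at the true axis c, p180[x] == p0[2c - x]."""
    n = len(p0)
    best_c, best_err = None, np.inf
    for c2 in range(2 * n - 1):       # c2 = 2*c, so half-pixel centres are covered
        x = np.arange(n)
        xr = c2 - x                   # coordinate reflected about the candidate axis
        ok = (xr >= 0) & (xr < n)
        if ok.sum() < n // 4:         # require a reasonable overlap region
            continue
        err = np.mean((p180[ok] - p0[xr[ok]]) ** 2)
        if err < best_err:
            best_err, best_c = err, c2 / 2
    return best_c

# synthetic 1D projection pair: a bump at pixel 50, rotation axis at pixel 70
x = np.arange(128)
p0 = np.exp(-((x - 50) / 6.0) ** 2)
p180 = p0[np.clip(2 * 70 - x, 0, 127)]   # p0 reflected about the axis
print(find_rotation_axis(p0, p180))
```

Because this needs only two projections and a 1D search, it is cheap enough to run per sample in a high-throughput campaign before committing to a full 3D reconstruction.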
Our work enables selection of the optimal initial experimental conditions based on high-fidelity simulations, their fine-tuning during a real experiment and its automatic control based on fast data analysis. Such a data acquisition scheme enables novel high-speed and in-situ experiments which cannot be controlled by a human operator due to high data rates.
First assessor: Prof. Dr.-Ing. R. Dillmann
Second assessor: Prof. Dr. Tilo Baumbach