PhD thesis, Faculty of Computer Science, Karlsruhe Institute of Technology, 2017.
X-ray imaging experiments shed light on internal material structures. The success of an experiment depends on properly selected experimental conditions, the mechanics of the setup and the behavior of the sample or process under study. To date, no autonomous data acquisition scheme exists that would enable a broad range of X-ray imaging experiments driven by image-based feedback. This thesis aims to close this gap by solving problems related to the selection of experimental parameters, fast data processing and automatic feedback to the experiment based on image metrics applied to the processed data.
To determine the best initial experimental conditions, we study the principles of X-ray image formation and develop a framework for their simulation. It enables us to model a broad range of X-ray imaging experiments by taking into account the physical principles along the full light path from the X-ray source to the detector. Moreover, we support various sample geometry models and sample motion, which allows simulations of experiments such as 4D time-resolved tomography.
We further develop an autonomous data acquisition scheme which is able to fine-tune the initial conditions and control the experiment based on fast image analysis. We focus on high-speed experiments, which require significant data processing speed, especially when the control is based on compute-intensive algorithms. We employ a highly parallelized framework to implement an efficient 3D reconstruction algorithm whose output feeds various image metrics that characterize the acquired data. These metrics drive a decision-making scheme which controls the data acquisition hardware in a closed loop.
We demonstrate the accuracy of the simulation framework by comparing virtual and real grating interferometry experiments. We further investigate the impact of imaging conditions on the accuracy of the filtered back projection algorithm and how this can guide the optimization of experimental conditions. Finally, we show how simulation together with ground truth can help to choose data processing parameters for motion estimation in a high-speed experiment.
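As an illustration, the filtered back projection algorithm studied here reduces to two steps: ramp-filter each 1D projection in Fourier space, then smear the filtered projections back across the image grid. The following NumPy-only toy for a parallel-beam geometry is our own sketch, not the thesis implementation:

```python
import numpy as np

def ramp_filter(sinogram):
    """Apply the ideal ramp (|f|) filter to each 1D projection."""
    freqs = np.abs(np.fft.fftfreq(sinogram.shape[1]))
    return np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * freqs, axis=1))

def back_project(sinogram, angles):
    """Smear every filtered projection back over the image grid."""
    n = sinogram.shape[1]
    center = n / 2.0
    ys, xs = np.mgrid[0:n, 0:n] - center
    recon = np.zeros((n, n))
    for proj, theta in zip(sinogram, angles):
        # detector coordinate hit by each pixel at this viewing angle
        t = xs * np.cos(theta) + ys * np.sin(theta) + center
        recon += np.interp(t.ravel(), np.arange(n), proj).reshape(n, n)
    return recon * np.pi / (2 * len(angles))

# toy check: analytic sinogram of a single bright pixel at (row 40, col 30)
n, angles = 64, np.linspace(0, np.pi, 90, endpoint=False)
x0, y0 = 30 - n / 2, 40 - n / 2
sino = np.zeros((len(angles), n))
for i, th in enumerate(angles):
    sino[i, int(round(x0 * np.cos(th) + y0 * np.sin(th) + n / 2))] = 1.0
recon = back_project(ramp_filter(sino), angles)
```

The reconstruction peaks at the phantom pixel; the thesis questions are then how the number of angles, noise and other imaging conditions degrade such reconstructions.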
We demonstrate the autonomous data acquisition system on an in-situ tomographic experiment, where it optimizes the camera frame rate based on tomographic reconstruction. We also use our system to conduct a high-throughput tomography experiment, where it scans many similar biological samples, finds the tomographic rotation axis for every sample and reconstructs a full 3D volume on-the-fly for quality assurance. Furthermore, we conduct an in-situ laminography experiment studying crack formation in a material. Our system performs the data acquisition and reconstructs a central slice of the sample to check its alignment and data quality.
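The frame-rate optimization in the first experiment can be caricatured as a closed loop: acquire, reconstruct, evaluate an image metric, nudge the camera, repeat. The sketch below uses placeholder names and a synthetic metric; the real system's metric and control law are not reproduced here:

```python
def acquire_metric(frame_rate):
    """Placeholder for 'acquire, reconstruct a slice, evaluate a metric':
    here the metric simply peaks at an optimum rate the controller does
    not know in advance."""
    return -(frame_rate - 400.0) ** 2

def tune_frame_rate(rate, step=50.0, iters=30):
    """Hill-climbing controller: probe neighbouring rates, move toward
    the better metric, halve the step once neither side improves."""
    for _ in range(iters):
        here = acquire_metric(rate)
        if acquire_metric(rate + step) > here:
            rate += step
        elif acquire_metric(rate - step) > here:
            rate -= step
        else:
            step /= 2.0
    return rate
```

Starting from any rate, the loop converges on the metric optimum; in the real system each `acquire_metric` call costs a tomographic reconstruction, which is why the thesis invests in fast GPU processing.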
Our work enables selection of optimal initial experimental conditions based on high-fidelity simulations, fine-tuning of these conditions during a real experiment and automatic experiment control based on fast data analysis. Such a data acquisition scheme enables novel high-speed and in-situ experiments which cannot be controlled by a human operator because of the high data rates involved.
First assessor: Prof. Dr.-Ing. R. Dillmann
Second assessor: Prof. Dr. Tilo Baumbach
Zuber M., Laass M., Hamann E., Kretschmer S., Hauschke N., Van De Kamp T., Baumbach T., Koenig T.
in Scientific Reports, 7 (2017), 41413. DOI:10.1038/srep41413
© 2017 The Author(s). Non-destructive imaging techniques can be extremely useful tools for the investigation and the assessment of palaeontological objects, as mechanical preparation of rare and valuable fossils is precluded in most cases. However, palaeontologists are often faced with the problem of choosing a method among a wide range of available techniques. In this case study, we employ X-ray computed tomography (CT) and computed laminography (CL) to study the first fossil xiphosuran from the Muschelkalk (Middle Triassic) of the Netherlands. The fossil is embedded in micritic limestone, with the taxonomically important dorsal shield invisible, and only the outline of its ventral part traceable. We demonstrate the complementarity of CT and CL, which offers an excellent option to visualize characteristic diagnostic features. We introduce augmented laminography to correlate complementary information of the two methods in Fourier space, allowing us to combine their advantages and finally providing increased anatomical information about the fossil. This method of augmented laminography enabled us to identify the xiphosuran as a representative of the genus Limulitella.
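The underlying Fourier-space combination can be illustrated with a deliberately simplified sketch (the sharp radial mask and the `fourier_blend` name are our assumptions; the published augmented-laminography weighting is more elaborate): low spatial frequencies are taken from one co-registered scan and high frequencies from the other.

```python
import numpy as np

def fourier_blend(img_low, img_high, cutoff=0.1):
    """Merge two co-registered images in Fourier space: spatial frequencies
    below `cutoff` (cycles/pixel) come from img_low, the rest from img_high."""
    fa, fb = np.fft.fft2(img_low), np.fft.fft2(img_high)
    fy = np.fft.fftfreq(img_low.shape[0])[:, None]
    fx = np.fft.fftfreq(img_low.shape[1])[None, :]
    mask = np.hypot(fy, fx) < cutoff
    return np.real(np.fft.ifft2(np.where(mask, fa, fb)))

# sanity demo: blending an image with itself must return it unchanged
demo = np.random.default_rng(1).normal(size=(32, 32))
roundtrip = fourier_blend(demo, demo)
```

In practice each method contributes the frequency content it resolves best for a flat, extended specimen.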
Caselle M., Perez L.E.A., Balzer M., Kopmann A., Rota L., Weber M., Brosi M., Steinmann J., Bründermann E., Müller A.-S.
in Journal of Instrumentation, 12 (2017), C01040. DOI:10.1088/1748-0221/12/01/C01040
© 2017 IOP Publishing Ltd and Sissa Medialab srl. This paper presents a novel data acquisition system for continuous sampling of ultra-short pulses generated by terahertz (THz) detectors. Karlsruhe Pulse Taking Ultra-fast Readout Electronics (KAPTURE) is able to digitize pulse shapes with a sampling time down to 3 ps and pulse repetition rates up to 500 MHz. KAPTURE has been integrated as a permanent diagnostic device at ANKA and is used for investigating the emitted coherent synchrotron radiation in the THz range. A second version of KAPTURE has been developed to improve the performance and flexibility. The new version offers a better sampling accuracy for a pulse repetition rate up to 2 GHz. The higher data rate produced by the sampling system is processed in real-time by a heterogeneous FPGA and GPU architecture operating up to 6.5 GB/s continuously. Results in accelerator physics will be reported and the new design of KAPTURE will be discussed.
Lösel P., Heuveline V.
in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 10129 LNCS (2017) 121-128. DOI:10.1007/978-3-319-52280-7_12
© Springer International Publishing AG 2017. Segmenting the blood pool and myocardium from a 3D cardiovascular magnetic resonance (CMR) image allows the creation of a patient-specific heart model for surgical planning in children with complex congenital heart disease (CHD). Implementation of semi-automatic or automatic segmentation algorithms is challenging because of a high anatomical variability of the heart defects, low contrast, and intensity variations in the images. Therefore, manual segmentation is the gold standard, but it is labor-intensive. In this paper, we report the set-up and results of a highly scalable semi-automatic diffusion algorithm for image segmentation. The method extrapolates the information from a small number of expert manually labeled reference slices to the remaining volume. While results of most semi-automatic algorithms strongly depend on well-chosen but usually unknown parameters, this approach is parameter-free. Validation is performed on twenty 3D CMR images.
Fedi G., Magazzu G., Palla F., Bilei G.M., Checcucci B., Gentsos C., Magalotti D., Storchi L., Balzer M.N., Tcherniakhowski D., Sander O., Baulieu G., Galbit G.C., Viret S., Modak A., Chowdhury S.R.
in Proceedings of Science, 2017-September (2017).
© Copyright owned by the author(s) under the terms of the Creative Commons. A Real-Time demonstrator based on the ATCA Pulsar-IIB custom board and on the Pattern Recognition Mezzanine (PRM) board has been developed as a flexible platform to test and characterize low-latency algorithms for track reconstruction and L1 Trigger generation in future High Energy Physics experiments. The demonstrator has been extensively used to test and characterize the Track-Trigger algorithms and architecture based on the use of the Associative Memory ASICs and of the PRM cards. The flexibility of the demonstrator makes it suitable to explore other solutions fully based on a high-performance FPGA device.
Cieri D. et al.
in Proceedings of Science, 2017-September (2017).
© Copyright owned by the author(s) under the terms of the Creative Commons. A new tracking detector is under development for use by the CMS experiment at the High-Luminosity LHC (HL-LHC). A crucial component of this upgrade will be the ability to reconstruct within a few microseconds all charged particle tracks with transverse momentum above 3 GeV, so they can be used in the Level-1 trigger decision. A concept for an FPGA-based track finder using a fully time-multiplexed architecture is presented, where track candidates are reconstructed using a projective binning algorithm based on the Hough Transform followed by a track fitting based on the linear regression technique. A hardware demonstrator using MP7 processing boards has been assembled to prove the entire system, from the output of the tracker readout boards to the reconstruction of tracks with fitted helix parameters. It successfully operates on one eighth of the tracker solid angle at a time, processing events taken at 40 MHz, each with up to 200 superimposed proton-proton interactions, whilst satisfying latency constraints. The demonstrated track-reconstruction system, the chosen architecture, the achievements to date and future options for such a system will be discussed.
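The projective binning behind such a Hough-transform track finder can be shown with a toy sketch (our own straight-line r–φ parametrization and bin counts, not the demonstrator's firmware): every hit votes, for each curvature bin, for the compatible φ0 bin, and a genuine track emerges as an accumulator maximum.

```python
import numpy as np

def hough_vote(hits, phi0_bins, k_bins):
    """Fill a (phi0, curvature) accumulator: each hit (r, phi) votes for
    the phi0 bin nearest to phi - k*r, for every curvature bin k."""
    acc = np.zeros((len(phi0_bins), len(k_bins)), dtype=int)
    for r, phi in hits:
        for j, k in enumerate(k_bins):
            p0 = phi - k * r
            if phi0_bins[0] <= p0 <= phi0_bins[-1]:  # ignore out-of-range votes
                acc[np.argmin(np.abs(phi0_bins - p0)), j] += 1
    return acc

# toy event: six stubs from one track with phi0 = 0.3, curvature k = 0.02
radii = [25.0, 35.0, 50.0, 68.0, 88.0, 108.0]
hits = [(r, 0.3 + 0.02 * r) for r in radii]
phi0_bins = np.linspace(0.0, 1.0, 51)   # 0.02-wide bins
k_bins = np.linspace(0.0, 0.048, 25)    # 0.002-wide bins
acc = hough_vote(hits, phi0_bins, k_bins)
i, j = np.unravel_index(np.argmax(acc), acc.shape)  # accumulator maximum
```

All six stubs pile up in the single cell of the true (φ0, k); in the demonstrator this voting runs in FPGA logic within the latency budget and is followed by the linear-regression track fit.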
Rota L., Caselle M., Norbert Balzer M., Weber M., Mozzanica A., Schmitt B.
in Proceedings of Science, 2017-September (2017).
© Copyright owned by the author(s) under the terms of the Creative Commons. We present a front-end readout ASIC developed for a new family of ultra-fast 1D imaging detectors operating at frame rates of up to 12 MHz. The ASIC, realized in 110 nm CMOS technology, is designed to be compatible with different semiconductor sensors. The final chip will contain up to 128 channels, each consisting of a Charge-Sensitive Amplifier, a noise shaper based on a fully-differential Correlated Double Sampling stage and a Sample-and-Hold buffer. The differential channels are connected through 8:1 analog multiplexers to the output drivers, which directly interface external analog-to-digital converters. A first prototype with a limited number of channels has been characterized with a Si microstrip detector. When operated at the maximum frame rate of 12 MHz, the ASIC exhibits an Equivalent Noise Charge of 417 electrons with a detector capacitance of 1.3 pF.
Bergmann T., Balzer M., Hopp T., Van De Kamp T., Kopmann A., Jerome N.T., Zapf M.
in VISIGRAPP 2017 – Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, 3 (2017) 330-334.
Copyright © 2017 by SCITEPRESS – Science and Technology Publications, Lda. All rights reserved. The computer gaming industry is traditionally the moving power and spirit in the development of computer visualization hardware and software. This year, affordable and high-quality virtual reality headsets became available, and the science community is eager to benefit from them. This paper describes first experiences in adapting the new hardware for three different visualization use cases. In all three examples, existing visualization pipelines were extended by virtual reality technology. We describe our approach, based on the HTC Vive VR headset, the open source software Blender and the Unreal Engine 4 game engine. The use cases are from three different fields: large-scale particle physics research, X-ray imaging for entomology research and medical imaging with ultrasound computer tomography. Finally, we discuss benefits and limits of the current virtual reality technology and present an outlook to future developments.
Jerome N.T., Chilingaryan S., Shkarin A., Kopmann A., Zapf M., Lizin A., Bergmann T.
in VISIGRAPP 2017 – Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, 3 (2017) 152-163.
Copyright © 2017 by SCITEPRESS – Science and Technology Publications, Lda. All rights reserved. With data sets growing beyond terabytes or even petabytes in scientific experiments, there is a trend of keeping data at storage facilities and providing remote cloud-based services for analysis. However, accessing these data sets remotely is cumbersome due to additional network latency and incomplete metadata description. To ease data browsing on remote data archives, our WAVE framework applies an intelligent cache management to provide scientists with a visual feedback on the large data set interactively. In this paper, we present methods to reduce the data set size while preserving visual quality. Our framework supports volume rendering and surface rendering for data inspection and analysis. Furthermore, we enable a zoom-on-demand approach, where a selected volumetric region is reloaded with higher details. Finally, we evaluated the WAVE framework using a data set from entomology research.
Lautner S., Lenz C., Hammel J., Moosmann J., Kuhn M., Caselle M., Vogelgesang M., Kopmann A., Beckmann F.
in Proceedings of SPIE – The International Society for Optical Engineering, 10391 (2017), 1039118. DOI:10.1117/12.2287221
© 2017 SPIE. Water transport from roots to shoots is a vital necessity in trees in order to sustain their photosynthetic activity and, hence, their physiological activity. The vascular tissue in charge is the woody body of root, stem and branches. In gymnosperm trees, like spruce trees (Picea abies (L.) Karst.), vascular tissue consists of tracheids: elongated, protoplast-free cells with a rigid cell wall that allow for axial water transport via their lumina. In order to analyze the over-all water transport capacity within one growth ring, time-consuming light microscopy analysis of the woody sample still is the conventional approach for calculating tracheid lumen area. In our investigations at the Imaging Beamline (IBL) operated by the Helmholtz-Zentrum Geesthacht (HZG) at the PETRA III storage ring of the Deutsches Elektronen-Synchrotron DESY, Hamburg, we applied SRμCT on small wood samples of spruce trees in order to visualize and analyze size and formation of xylem elements and their respective lumina. The selected high-resolution phase-contrast technique makes full use of the novel 20 MPixel CMOS area detector developed within the cooperation of HZG and the Karlsruhe Institute of Technology (KIT). The results were confirmed by light microscopy analysis and hence prove that μCT is a most appropriate method to gain valid information on xylem cell structure and tree water transport capacity.