Cecilia A., Baecker A., Hamann E., Rack A., van de Kamp T., Gruhl F.J., Hofmann R., Moosmann J., Hahn S., Kashef J., Bauer S., Farago T., Helfen L., Baumbach T.

in Materials Science and Engineering C, 71 (2017) 465-472. DOI:10.1016/j.msec.2016.10.038

Abstract

© 2016 Prostate cancer (PCa) is currently the second most frequently diagnosed cancer in men and the second leading cause of cancer death after lung cancer in Western societies. This makes it necessary to model prostatic disorders in order to optimize therapies against them. The conventional approach to investigating prostatic diseases is based on two-dimensional (2D) cell culturing. This method, however, does not provide a three-dimensional (3D) environment and therefore impedes a satisfactory simulation of the prostate gland in which the PCa cells proliferate. Cryogel scaffolds represent a valid alternative to 2D culturing systems for studying the normal and pathological behavior of prostate cells, thanks to their 3D pore architecture, which reflects more closely the physiological environment in which PCa cells develop. In this work, the 3D morphology of three potential scaffolds for PCa cell culturing was investigated by means of synchrotron X-ray computed micro-tomography (SXCμT), which meets the requirements of high spatial resolution, 3D imaging capability and low dose very well. In combination with mechanical tests, the results allowed the identification of an optimal cryogel architecture meeting the needs of a well-suited scaffold for 3D PCa cell culture applications. The selected cryogel was then used for culturing prostatic lymph node metastasis (LNCaP) cells, and the presence of multi-cellular tumor spheroids inside the matrix was subsequently demonstrated, again using SXCμT.

Farago, Tomas

PhD thesis, Faculty of Computer Science, Karlsruhe Institute of Technology, 2017.

Abstract

X-ray imaging experiments shed light on internal material structures. The success of an experiment depends on the properly selected experimental conditions, mechanics and the behavior of the sample or process under study. Up to now, there is no autonomous data acquisition scheme which would enable us to conduct a broad range of X-ray imaging experiments driven by image-based feedback. This thesis aims to close this gap by solving problems related to the selection of experimental parameters, fast data processing and automatic feedback to the experiment based on image metrics applied to the processed data.

In order to determine the best initial experimental conditions, we study the X-ray image formation principles and develop a framework for their simulation. It enables us to conduct a broad range of X-ray imaging experiments by taking into account many physical principles of the full light path from the X-ray source to the detector. Moreover, we focus on various sample geometry models and motion, which allows simulations of experiments such as 4D time-resolved tomography.
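
The thesis builds a dedicated simulation framework for this purpose; its actual implementation is not reproduced here. As a rough illustration of one building block of such a forward model, the sketch below propagates a complex wavefield behind a sample over a free-space distance using the paraxial (Fresnel) propagator evaluated in Fourier space. All numerical values (photon energy, pixel size, propagation distance) are illustrative placeholders, not parameters from the thesis.

```python
import numpy as np

def fresnel_propagate(wavefield, wavelength, pixel_size, distance):
    """Propagate a complex wavefield by `distance` using the Fresnel
    (paraxial) propagator evaluated in Fourier space."""
    n_y, n_x = wavefield.shape
    f_x = np.fft.fftfreq(n_x, d=pixel_size)
    f_y = np.fft.fftfreq(n_y, d=pixel_size)
    f_x, f_y = np.meshgrid(f_x, f_y)
    # Fresnel transfer function (constant global phase factor omitted).
    kernel = np.exp(-1j * np.pi * wavelength * distance * (f_x**2 + f_y**2))
    return np.fft.ifft2(np.fft.fft2(wavefield) * kernel)

# Example: a weakly absorbing, phase-shifting disc illuminated by a plane wave
# (all numbers are illustrative placeholders).
wavelength = 6e-11            # roughly 20 keV photons
pixel_size = 1e-6             # 1 um detector pixels
y, x = np.mgrid[-256:256, -256:256] * pixel_size
disc = (x**2 + y**2) < (100e-6) ** 2
transmission = np.exp(-0.01 * disc) * np.exp(-1j * 0.5 * disc)
intensity = np.abs(fresnel_propagate(transmission, wavelength, pixel_size, 0.5)) ** 2
```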

We further develop an autonomous data acquisition scheme which is able to fine-tune the initial conditions and control the experiment based on fast image analysis. We focus on high-speed experiments which require significant data processing speed, especially when the control is based on compute-intensive algorithms. We employ a highly parallelized framework to implement an efficient 3D reconstruction algorithm whose output is plugged into various image metrics which provide information about the acquired data. Such metrics are connected to a decision-making scheme which controls the data acquisition hardware in a closed loop.
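
The concrete control software of the thesis is GPU-based and not reproduced here; the following is only a minimal, schematic sketch of the kind of closed loop described above: acquire, reconstruct, compute an image metric, decide, adjust the hardware. The names `camera`, `reconstruct_slice` and the sharpness metric are hypothetical placeholders.

```python
import numpy as np

def sharpness(image):
    """Simple image metric: variance of the gradient magnitude."""
    gy, gx = np.gradient(image)
    return float(np.var(np.hypot(gx, gy)))

def control_loop(camera, reconstruct_slice, frame_rates, threshold=0.9):
    """Closed-loop control sketch: try increasing frame rates as long as the
    reconstructed slice quality stays above a fraction of the best metric
    seen so far. `camera` and `reconstruct_slice` are hypothetical objects."""
    best = None
    chosen = frame_rates[0]
    for rate in frame_rates:
        camera.set_frame_rate(rate)
        projections = camera.acquire_scan()
        metric = sharpness(reconstruct_slice(projections))
        if best is None or metric >= threshold * best:
            best = max(best or metric, metric)
            chosen = rate
        else:
            break  # quality dropped; keep the previously accepted frame rate
    camera.set_frame_rate(chosen)
    return chosen
```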

We demonstrate the accuracy of the simulation framework by comparing virtual and real grating interferometry experiments. We further look into the impact of imaging conditions on the accuracy of the filtered back projection algorithm and how it can guide the optimization of experimental conditions. We also show how simulation together with ground truth data can help to choose data processing parameters for motion estimation in a high-speed experiment.

We demonstrate the autonomous data acquisition system on an in-situ tomographic experiment, where it optimizes the camera frame rate based on tomographic reconstruction. We also use our system to conduct a high-throughput tomography experiment, where it scans many similar biological samples, finds the tomographic rotation axis for every sample and reconstructs a full 3D volume on-the-fly for quality assurance. Furthermore, we conduct an in-situ laminography experiment studying crack formation in a material. Our system performs the data acquisition and reconstructs a central slice of the sample to check its alignment and data quality.

Our work enables selection of the optimal initial experimental conditions based on high-fidelity simulations, their fine-tuning during a real experiment and its automatic control based on fast data analysis. Such a data acquisition scheme enables novel high-speed and in-situ experiments which cannot be controlled by a human operator due to high data rates.

First assessor: Prof. Dr.-Ing. R. Dillmann
Second assessor: Prof. Dr. Tilo Baumbach

Zuber M., Laass M., Hamann E., Kretschmer S., Hauschke N., Van De Kamp T., Baumbach T., Koenig T.

in Scientific Reports, 7 (2017), 41413. DOI:10.1038/srep41413

Abstract

© 2017 The Author(s). Non-destructive imaging techniques can be extremely useful tools for the investigation and assessment of palaeontological objects, as mechanical preparation of rare and valuable fossils is precluded in most cases. However, palaeontologists are often faced with the problem of choosing a method among a wide range of available techniques. In this case study, we employ X-ray computed tomography (CT) and computed laminography (CL) to study the first fossil xiphosuran from the Muschelkalk (Middle Triassic) of the Netherlands. The fossil is embedded in micritic limestone, with the taxonomically important dorsal shield invisible and only the outline of its ventral part traceable. We demonstrate the complementarity of CT and CL, which offers an excellent option for visualizing characteristic diagnostic features. We introduce augmented laminography to correlate complementary information from the two methods in Fourier space, allowing us to combine their advantages and finally providing increased anatomical information about the fossil. This method of augmented laminography enabled us to identify the xiphosuran as a representative of the genus Limulitella.
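
The abstract does not give the exact weighting used by augmented laminography, so the following is only a conceptual sketch of the general idea: two registered reconstructions are blended per spatial frequency in Fourier space using a user-supplied weight mask. Function and parameter names are hypothetical.

```python
import numpy as np

def combine_in_fourier(vol_ct, vol_cl, weight_ct):
    """Blend two registered volumes in Fourier space.

    `weight_ct` has the same shape as the volumes, with values in [0, 1]
    selecting, per spatial frequency, how much of the CT spectrum to keep;
    the remainder is taken from the CL spectrum. The actual weighting used
    for augmented laminography is not specified here."""
    ft_ct = np.fft.fftn(vol_ct)
    ft_cl = np.fft.fftn(vol_cl)
    combined = weight_ct * ft_ct + (1.0 - weight_ct) * ft_cl
    return np.fft.ifftn(combined).real
```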

Caselle M., Perez L.E.A., Balzer M., Kopmann A., Rota L., Weber M., Brosi M., Steinmann J., Brundermann E., Muller A.-S.

in Journal of Instrumentation, 12 (2017), C01040. DOI:10.1088/1748-0221/12/01/C01040

Abstract

© 2017 IOP Publishing Ltd and Sissa Medialab srl. This paper presents a novel data acquisition system for the continuous sampling of ultra-short pulses generated by terahertz (THz) detectors. The Karlsruhe Pulse Taking Ultra-fast Readout Electronics (KAPTURE) is able to digitize pulse shapes with a sampling time down to 3 ps and pulse repetition rates up to 500 MHz. KAPTURE has been integrated as a permanent diagnostic device at ANKA and is used for investigating the emitted coherent synchrotron radiation in the THz range. A second version of KAPTURE has been developed to improve performance and flexibility. The new version offers better sampling accuracy for pulse repetition rates up to 2 GHz. The higher data rate produced by the sampling system is processed in real time by a heterogeneous FPGA and GPU architecture operating at up to 6.5 GB/s continuously. Results in accelerator physics are reported and the new design of KAPTURE is discussed.

Losel P., Heuveline V.

in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 10129 LNCS (2017) 121-128. DOI:10.1007/978-3-319-52280-7_12

Abstract

© Springer International Publishing AG 2017. Segmenting the blood pool and myocardium from a 3D cardiovascular magnetic resonance (CMR) image makes it possible to create a patient-specific heart model for surgical planning in children with complex congenital heart disease (CHD). The implementation of semi-automatic or automatic segmentation algorithms is challenging because of the high anatomical variability of the heart defects, low contrast, and intensity variations in the images. Manual segmentation is therefore the gold standard, but it is labor-intensive. In this paper we report the set-up and results of a highly scalable semi-automatic diffusion algorithm for image segmentation. The method extrapolates the information from a small number of expert-labeled reference slices to the remaining volume. While the results of most semi-automatic algorithms strongly depend on well-chosen but usually unknown parameters, this approach is parameter-free. Validation is performed on twenty 3D CMR images.
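
The authors' diffusion algorithm itself is not reproduced here. As a loose off-the-shelf analogue of the idea of propagating a few labeled slices through a volume by diffusion, one could use scikit-image's `random_walker`, which solves a related diffusion problem from sparse seeds (note that, unlike the authors' method, it does expose a tuning parameter):

```python
import numpy as np
from skimage.segmentation import random_walker

def propagate_labels(volume, labeled_slices):
    """Propagate labels from a few expert-annotated slices to the rest of a
    3D volume by diffusion-like label propagation.

    `labeled_slices` maps a z-index to a 2D integer label image in which
    0 means 'unlabeled' and positive integers are class labels."""
    seeds = np.zeros(volume.shape, dtype=np.int32)
    for z, labels in labeled_slices.items():
        seeds[z] = labels
    # random_walker diffuses the seed labels through the volume, guided by
    # image gradients; beta controls how strongly edges block the diffusion.
    return random_walker(volume, seeds, beta=130, mode='cg')
```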

M. Heethoff, V. Heuveline, H. Hartenstein, W. Mexner, T. van de Kamp, A. Kopmann

Final report, BMBF Programme: “Erforschung kondensierter Materie”, 2016.

Executive summary

Synchrotron X-ray tomography is a unique imaging method for investigating internal structures, particularly in opaque samples. In recent years, the spatial and temporal resolution of the method has been greatly improved. The evaluation of the resulting datasets, however, is challenging due to their size and the complexity of the imaged structures. With the ASTOR project, the collaboration for functional morphology and systematics set itself the goal of easing access to X-ray tomography for biological users through an integrated analysis environment.
The interdisciplinary collaboration of biologists, computer scientists, mathematicians and engineers made it possible to consider the entire data processing chain. Largely automated data processing and transfer methods have been established. The tomographic scans are reconstructed online and transferred to the ASTOR analysis environment. The data are subsequently made available to users via virtual machines, both at ANKA and externally. An authorization scheme for access was developed. The analysis infrastructure consists of temporary data storage, the virtualization server, and the connection to the beamlines and the long-term archive. In addition to cost-intensive commercial programs, the analysis environment offers newly developed tools. Particularly noteworthy are the ASTOR segmentation functions, which speed up this previously very time- and labor-intensive working step many times over. The automatic segmentation can be controlled transparently via regions marked in only a few slices and achieves a previously unattained automatic segmentation quality.
The analysis environment has proven to be very efficient for data evaluation and method development. In addition to the applicants, the system is now being used successfully by further users. Over the course of the project, an extensive set of example scans covering a broad range of organisms was acquired during several beamtimes. Selected samples were segmented and classified as templates for method development. During the project, the number of scans per measurement week was increased drastically, initially to 400 and eventually to up to 1000.
With ASTOR, it has been possible to build an end-to-end analysis environment and thereby to demonstrate the next step in the development of such experimental facilities. For the chosen application, functional morphology, it is now possible for the first time to carry out quantitative serial studies on small organisms. The evaluation methodology is not limited to this application but is rather a general example for data-intensive experiments. The NOVA project, also funded by the BMBF collaborative research programme, continues the activities begun here and, through synergistic collaboration, aims to create an open data catalogue for an entire community.

Steinmann J.L., Blomley E., Brosi M., Brundermann E., Caselle M., Hesler J.L., Hiller N., Kehrer B., Mathis Y.-L., Nasse M.J., Raasch J., Schedler M., Schonfeldt P., Schuh M., Schwarz M., Siegel M., Smale N., Weber M., Muller A.-S.

in Physical Review Letters, 117 (2016), 174802. DOI:10.1103/PhysRevLett.117.174802

Abstract

© 2016 American Physical Society. Using arbitrary periodic pulse patterns, we show the enhancement of specific frequencies in a frequency comb. The envelope of a regular frequency comb originates from equally spaced, identical pulses and mimics the single-pulse spectrum. We investigated spectra originating from the periodic emission of pulse trains with gaps and individual pulse heights, which are commonly observed, for example, at high-repetition-rate free electron lasers, high-power lasers, and synchrotrons. The ANKA synchrotron light source was filled with defined patterns of short electron bunches generating coherent synchrotron radiation in the terahertz range. We resolved the intensities of the frequency comb around 0.258 THz using heterodyne mixing spectroscopy with a resolution down to 1 Hz and provide a comprehensive theoretical description. By adjusting the electrons' revolution frequency, a gapless spectrum can be recorded, improving the resolution by up to 7 and 5 orders of magnitude compared to FTIR and recent heterodyne measurements, respectively. The results point to avenues for optimizing and increasing the signal-to-noise ratio of specific frequencies in the emitted synchrotron radiation spectrum to enable novel ultrahigh-resolution spectroscopy and metrology applications from the terahertz to the X-ray region.
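
As a toy numerical illustration of the underlying effect (not the paper's analysis), one can Fourier-transform a synthetic filling pattern: a pulse train with a gap and varying pulse heights, repeated over many revolutions, produces spectral lines at harmonics of the revolution frequency whose intensities are modulated by the pattern. All numbers below are placeholders, not ANKA machine parameters.

```python
import numpy as np

# Toy model: one revolution contains `n_buckets` possible bunch positions;
# the actual machine parameters are not reproduced here.
n_buckets = 184
n_turns = 200
fill = np.zeros(n_buckets)
fill[:100] = 1.0                                  # pulse train followed by a gap
fill[:100] *= np.random.uniform(0.8, 1.2, 100)    # individual pulse heights

# Repeat the pattern over many revolutions and look at its spectrum.
signal = np.tile(fill, n_turns)
spectrum = np.abs(np.fft.rfft(signal)) ** 2
# Lines appear only at multiples of the revolution frequency (every n_turns-th
# bin); the filling pattern determines which of these lines are enhanced.
```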

Mohr, Hannes

Master Thesis, Faculty for Physics, Karlsruhe Institute of Technology, 2016.

Abstract

In this work we present an evaluation of GPUs as a possible L1 Track Trigger for the High Luminosity LHC, effective after Long Shutdown 3 around 2025.

The novelty lies in presenting an implementation based on calculations done entirely in software, in contrast to currently discussed solutions relying on specialized hardware such as FPGAs and ASICs.
Our solution instead relies on GPUs, offering floating-point calculations as well as flexibility and adaptability. Normally, the data transfer latencies involved make GPUs unfeasible for use in low-latency environments. We therefore use a data transfer scheme based on RDMA technology, which mitigates these overheads.
We based our efforts on previous work by the collaboration of KIT and the English track trigger group [An FPGA-based track finder for the L1 trigger of the CMS experiment at the high luminosity LHC], whose algorithm was implemented on FPGAs.
In addition to the regularly used Hough transformation, we present our own version of the algorithm based on a hexagonal layout of the binned parameter space. With comparable computational latency and workload, this approach produces significantly fewer fake track candidates than the traditional method, at the cost of an efficiency loss of around 1 percent.
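
For orientation, the sketch below shows the conventional, rectangularly binned Hough transform for track finding in the transverse plane, the baseline to which the hexagonal layout is compared; bin counts, thresholds and the curvature constant are illustrative placeholders rather than values from this thesis.

```python
import numpy as np

def hough_track_finder(hits, n_qpt_bins=32, n_phi_bins=64,
                       qpt_max=0.5, curvature_const=0.0057):
    """Fill a (q/pT, phi0) accumulator from hit positions.

    Each hit is (r, phi) in the transverse plane. For a track from the
    beamline, phi0 ~= phi - curvature_const * (q/pT) * r to first order.
    All bin counts and constants are illustrative placeholders."""
    accumulator = np.zeros((n_qpt_bins, n_phi_bins), dtype=np.int32)
    qpt_values = np.linspace(-qpt_max, qpt_max, n_qpt_bins)
    for r, phi in hits:
        # Every hit votes for one phi0 bin per q/pT bin.
        phi0 = phi - curvature_const * qpt_values * r
        phi_bins = np.floor((phi0 % (2 * np.pi)) / (2 * np.pi) * n_phi_bins)
        accumulator[np.arange(n_qpt_bins), phi_bins.astype(int)] += 1
    # Track candidates are cells whose vote count reaches the layer threshold.
    candidates = np.argwhere(accumulator >= 5)
    return accumulator, candidates
```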

This work focuses on the track-finding part of the proposed L1 Track Trigger and uses the result of a least-squares fit only to estimate the performance of this seeding step. We furthermore present our results in terms of the overall latency of this novel approach.

While not yet competitive, our implementation has surpassed initial expectations and is on the same order of magnitude as the FPGA approach in terms of latency. Some caveats still apply. Ultimately, more recent technology, not yet available to us at the time of writing, will have to be tested and benchmarked to arrive at a more complete assessment of the feasibility of GPUs as a means of track triggering at the High-Luminosity LHC's CMS experiment.

 

First assessor: Prof. Dr. Marc Weber
Second assessor: Prof. Dr. Ulrich Husemann

Supervised by Dipl.-Inform. Timo Dritschler

Bergmann T., Balzer M., Bormann D., Chilingaryan S.A., Eitel K., Kleifges M., Kopmann A., Kozlov V., Menshikov A., Siebenborn B., Tcherniakhovski D., Vogelgesang M., Weber M.

in 2015 IEEE Nuclear Science Symposium and Medical Imaging Conference, NSS/MIC 2015 (2016), 7581841. DOI:10.1109/NSSMIC.2015.7581841

Abstract

© 2015 IEEE. The EDELWEISS experiment, located in the underground laboratory LSM (France), is one of the leading experiments using cryogenic germanium (Ge) detectors for a direct search for dark matter. For the EDELWEISS-III phase, a new scalable data acquisition (DAQ) system was designed and built, based on the ‘IPE4 DAQ system’, which has already been used for several experiments in astroparticle physics.

Harbaum T., Seboui M., Balzer M., Becker J., Weber M.

in Proceedings – 24th IEEE International Symposium on Field-Programmable Custom Computing Machines, FCCM 2016 (2016) 184-191, 7544775. DOI:10.1109/FCCM.2016.52

Abstract

© 2016 IEEE. Modern high-energy physics experiments such as the Compact Muon Solenoid (CMS) experiment at CERN produce an extraordinary amount of data every 25 ns. To handle a data rate of more than 50 Tbit/s, a multi-level trigger system is required to reduce the data rate. Due to the increased luminosity after the Phase-II upgrade of the LHC, the CMS tracking system has to be redesigned; the current trigger system is unable to handle the resulting amount of data after this upgrade. Because of the latency requirement of a few microseconds, the Level 1 Track Trigger has to be implemented in hardware. State-of-the-art pattern recognition filters the incoming data by template matching on ASICs with a content-addressable memory architecture. An implementation on an FPGA, which replaces the content-addressable memory of the ASIC, has not been possible so far. This paper presents a new approach to a content-addressable memory architecture which allows an FPGA-based design. Combining filtering and track finding in one FPGA design opens up many possibilities for adjusting the two algorithms to each other, and the FPGA architecture enables more flexibility than the ASIC. The presented design minimizes the stored data by means of logic in order to optimally utilize the available FPGA resources. Furthermore, the design meets the tight timing constraints and possesses the required properties of a content-addressable memory.
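
As a purely software model of the content-addressable-memory idea (not the presented FPGA design), each stored pattern ("road") can be indexed by its per-layer coarse hit addresses so that every incoming hit addresses all patterns containing it; a pattern fires when enough of its layers have been hit. All names and thresholds are illustrative.

```python
from collections import defaultdict

def build_pattern_bank(patterns):
    """Index stored patterns ('roads') by (layer, coarse_address) so that a
    hit can address all patterns containing it, mimicking a CAM lookup."""
    index = defaultdict(set)
    for pattern_id, road in enumerate(patterns):
        for layer, address in enumerate(road):
            index[(layer, address)].add(pattern_id)
    return index

def match_event(index, hits, min_layers=5):
    """Count, per pattern, how many distinct layers were hit; return patterns
    that fired on at least `min_layers` layers (threshold is illustrative)."""
    seen = defaultdict(set)
    for layer, address in hits:
        for pattern_id in index.get((layer, address), ()):
            seen[pattern_id].add(layer)
    return [p for p, layers in seen.items() if len(layers) >= min_layers]
```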