Lautner S., Lenz C., Hammel J., Moosmann J., Kuhn M., Caselle M., Vogelgesang M., Kopmann A., Beckmann F.
in Proceedings of SPIE – The International Society for Optical Engineering, 10391 (2017), 1039118. DOI:10.1117/12.2287221
© 2017 SPIE. Water transport from roots to shoots is a vital necessity in trees in order to sustain their photosynthetic and, hence, physiological activity. The vascular tissue in charge is the woody body of root, stem and branches. In gymnosperm trees, like spruce trees (Picea abies (L.) Karst.), vascular tissue consists of tracheids: elongated, protoplast-free cells with a rigid cell wall that allow for axial water transport via their lumina. To analyze the overall water transport capacity within one growth ring, time-consuming light microscopy analysis of the woody sample is still the conventional approach for calculating tracheid lumen area. In our investigations at the Imaging Beamline (IBL) operated by the Helmholtz-Zentrum Geesthacht (HZG) at the PETRA III storage ring of the Deutsches Elektronen-Synchrotron DESY, Hamburg, we applied SRμCT to small wood samples of spruce trees in order to visualize and analyze the size and formation of xylem elements and their respective lumina. The selected high-resolution phase-contrast technique makes full use of the novel 20 MPixel CMOS area detector developed within the cooperation of HZG and the Karlsruhe Institute of Technology (KIT). We compare the results with data obtained by light microscopy analysis and, hence, prove that μCT is a most appropriate method to gain valid information on xylem cell structure and tree water transport capacity.
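The lumen-area measurement described above can be illustrated with a small sketch: in a grayscale μCT slice, lumina appear dark and cell walls bright, so the water-conducting fraction of a cross-section can be estimated by thresholding. The threshold value and the toy array are illustrative assumptions, not part of the published analysis.

```python
import numpy as np

def lumen_area_fraction(slice_2d, threshold):
    """Estimate the lumen area fraction of a grayscale uCT slice.

    Voxels darker than `threshold` are treated as lumen (air-filled),
    brighter ones as cell-wall material. This is a simplified sketch;
    a real analysis would segment individual tracheids.
    """
    lumen_mask = slice_2d < threshold
    return lumen_mask.sum() / slice_2d.size

# Toy 4x4 "slice": low values = lumina, high values = cell walls
slice_2d = np.array([
    [10, 10, 200, 200],
    [10, 10, 200, 200],
    [200, 200, 10, 10],
    [200, 200, 10, 10],
])
print(lumen_area_fraction(slice_2d, threshold=100))  # 0.5
```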
Jerome N.T., Chilingaryan S., Shkarin A., Kopmann A., Zapf M., Lizin A., Bergmann T.
in VISIGRAPP 2017 – Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, 3 (2017) 152-163.
Copyright © 2017 by SCITEPRESS – Science and Technology Publications, Lda. All rights reserved. With data sets growing beyond terabytes or even petabytes in scientific experiments, there is a trend of keeping data at storage facilities and providing remote cloud-based services for analysis. However, accessing these data sets remotely is cumbersome due to additional network latency and incomplete metadata descriptions. To ease data browsing on remote data archives, our WAVE framework applies intelligent cache management to provide scientists with interactive visual feedback on large data sets. In this paper, we present methods to reduce the data set size while preserving visual quality. Our framework supports volume rendering and surface rendering for data inspection and analysis. Furthermore, we enable a zoom-on-demand approach, where a selected volumetric region is reloaded with higher detail. Finally, we evaluated the WAVE framework using a data set from entomological research.
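The zoom-on-demand idea can be sketched in a few lines: serve a coarsely downsampled preview of the whole volume and reload only the selected region at full resolution. Function names and the naive strided downsampling are illustrative assumptions; WAVE's actual cache management is considerably more elaborate.

```python
import numpy as np

def downsample(volume, factor):
    """Crude level-of-detail reduction: keep every `factor`-th voxel."""
    return volume[::factor, ::factor, ::factor]

def zoom_on_demand(volume, region, factor):
    """Return a coarse preview of the whole volume plus the selected
    region at full resolution, mimicking the reload-with-higher-detail idea."""
    preview = downsample(volume, factor)
    (z0, z1), (y0, y1), (x0, x1) = region
    detail = volume[z0:z1, y0:y1, x0:x1]
    return preview, detail

vol = np.arange(8 * 8 * 8).reshape(8, 8, 8)
preview, detail = zoom_on_demand(vol, ((0, 4), (0, 4), (0, 4)), factor=2)
print(preview.shape, detail.shape)  # (4, 4, 4) (4, 4, 4)
```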
Bergmann T., Balzer M., Hopp T., Van De Kamp T., Kopmann A., Jerome N.T., Zapf M.
in VISIGRAPP 2017 – Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, 3 (2017) 330-334.
Copyright © 2017 by SCITEPRESS – Science and Technology Publications, Lda. All rights reserved. The computer gaming industry has traditionally been the driving force in the development of computer visualization hardware and software. This year, affordable and high-quality virtual reality headsets became available, and the science community is eager to benefit from them. This paper describes first experiences in adapting the new hardware to three different visualization use cases. In all three examples, existing visualization pipelines were extended by virtual reality technology. We describe our approach, based on the HTC Vive VR headset, the open-source software Blender, and the Unreal Engine 4 game engine. The use cases are from three different fields: large-scale particle physics research, X-ray imaging for entomology research, and medical imaging with ultrasound computer tomography. Finally, we discuss benefits and limits of current virtual reality technology and present an outlook on future developments.
Rota L., Caselle M., Balzer M.N., Weber M., Mozzanica A., Schmitt B.
in Proceedings of Science, 2017-September (2017).
© Copyright owned by the author(s) under the terms of the Creative Commons. We present a front-end readout ASIC developed for a new family of ultra-fast 1D imaging detectors operating at frame rates of up to 12 MHz. The ASIC, realized in 110 nm CMOS technology, is designed to be compatible with different semiconductor sensors. The final chip will contain up to 128 channels, each consisting of a Charge-Sensitive Amplifier, a noise shaper based on a fully-differential Correlated Double Sampling stage, and a Sample-and-Hold buffer. The differential channels are connected through 8:1 analog multiplexers to the output drivers, which directly interface external analog-to-digital converters. A first prototype with a limited number of channels has been characterized with a Si microstrip detector. When operated at the maximum frame rate of 12 MHz, the ASIC exhibits an Equivalent Noise Charge of 417 electrons at a detector capacitance of 1.3 pF.
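The effect of the Correlated Double Sampling stage can be illustrated numerically: each channel samples the baseline (reset) level and the signal level, and their difference cancels offsets and low-frequency noise common to both samples. The numbers below are toy values, not measured data.

```python
def correlated_double_sampling(baseline_samples, signal_samples):
    """Correlated Double Sampling: subtract the baseline (reset) level
    from the signal level sample-by-sample, cancelling offsets and
    low-frequency noise that appear in both samples."""
    return [s - b for b, s in zip(baseline_samples, signal_samples)]

# A common-mode shift on both samples drops out of the difference:
baseline = [100, 105, 103]
signal = [150, 155, 160]
print(correlated_double_sampling(baseline, signal))  # [50, 50, 57]
```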
Cieri D. et al.
in Proceedings of Science, 2017-September (2017).
© Copyright owned by the author(s) under the terms of the Creative Commons. A new tracking detector is under development for use by the CMS experiment at the High-Luminosity LHC (HL-LHC). A crucial component of this upgrade will be the ability to reconstruct, within a few microseconds, all charged-particle tracks with transverse momentum above 3 GeV, so they can be used in the Level-1 trigger decision. A concept for an FPGA-based track finder using a fully time-multiplexed architecture is presented, where track candidates are reconstructed using a projective binning algorithm based on the Hough transform, followed by track fitting based on linear regression. A hardware demonstrator using MP7 processing boards has been assembled to validate the entire system, from the output of the tracker readout boards to the reconstruction of tracks with fitted helix parameters. It successfully operates on one eighth of the tracker solid angle at a time, processing events taken at 40 MHz, each with up to 200 superimposed proton-proton interactions, whilst satisfying latency constraints. The demonstrated track-reconstruction system, the chosen architecture, the achievements to date, and future options for such a system are discussed.
Fedi G., Magazzu G., Palla F., Bilei G.M., Checcucci B., Gentsos C., Magalotti D., Storchi L., Balzer M.N., Tcherniakhowski D., Sander O., Baulieu G., Galbit G.C., Viret S., Modak A., Chowdhury S.R.
in Proceedings of Science, 2017-September (2017).
© Copyright owned by the author(s) under the terms of the Creative Commons. A Real-Time demonstrator based on the ATCA Pulsar-IIB custom board and on the Pattern Recognition Mezzanine (PRM) board has been developed as a flexible platform to test and characterize low-latency algorithms for track reconstruction and L1 Trigger generation in future High Energy Physics experiments. The demonstrator has been extensively used to test and characterize the Track-Trigger algorithms and architecture based on the use of the Associative Memory ASICs and of the PRM cards. The flexibility of the demonstrator makes it suitable to explore other solutions fully based on a high-performance FPGA device.
Lösel P., Heuveline V.
in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 10129 LNCS (2017) 121-128. DOI:10.1007/978-3-319-52280-7_12
© Springer International Publishing AG 2017. Segmenting the blood pool and myocardium from a 3D cardiovascular magnetic resonance (CMR) image allows the creation of a patient-specific heart model for surgical planning in children with complex congenital heart disease (CHD). Implementing semi-automatic or automatic segmentation algorithms is challenging because of the high anatomical variability of the heart defects, low contrast, and intensity variations in the images. Manual segmentation is therefore the gold standard, but it is labor-intensive. In this paper we report the set-up and results of a highly scalable semi-automatic diffusion algorithm for image segmentation. The method extrapolates the information from a small number of reference slices manually labeled by experts to the remaining volume. While the results of most semi-automatic algorithms strongly depend on well-chosen but usually unknown parameters, this approach is parameter-free. Validation is performed on twenty 3D CMR images.
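The diffusion idea can be illustrated with a deliberately simplified 1D sketch: indicator fields seeded at the manually labeled reference slices are smoothed repeatedly while the reference slices stay pinned, and each slice then takes the label whose field is strongest. The actual algorithm diffuses in 3D and adapts to the image data; everything below is an illustrative toy.

```python
import numpy as np

def diffuse_labels(n_slices, labeled, n_iter=200):
    """Extrapolate slice labels along the z-axis by simple diffusion.

    `labeled` maps slice index -> label. Each label gets an indicator
    field that is repeatedly smoothed while the labeled slices are kept
    fixed; every slice is then assigned the label with the largest value.
    A 1D sketch of the idea, not the published 3D method.
    """
    labels = sorted(set(labeled.values()))
    fields = {lab: np.zeros(n_slices) for lab in labels}
    for z, lab in labeled.items():
        fields[lab][z] = 1.0
    for _ in range(n_iter):
        for lab, f in fields.items():
            f[1:-1] = 0.5 * (f[:-2] + f[2:])   # smooth the interior
            f[0], f[-1] = f[1], f[-2]          # reflecting boundaries
            for z, l in labeled.items():       # re-pin reference slices
                f[z] = 1.0 if l == lab else 0.0
    return [max(labels, key=lambda lab: fields[lab][z])
            for z in range(n_slices)]

# Slices 0-1 labeled "blood", 8-9 labeled "myo"; the rest is inferred
print(diffuse_labels(10, {0: "blood", 1: "blood", 8: "myo", 9: "myo"}))
```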
Steinmann J.L., Blomley E., Brosi M., Bründermann E., Caselle M., Hesler J.L., Hiller N., Kehrer B., Mathis Y.-L., Nasse M.J., Raasch J., Schedler M., Schönfeldt P., Schuh M., Schwarz M., Siegel M., Smale N., Weber M., Müller A.-S.
in Physical Review Letters, 117 (2016), 174802. DOI:10.1103/PhysRevLett.117.174802
© 2016 American Physical Society. Using arbitrary periodic pulse patterns, we show the enhancement of specific frequencies in a frequency comb. The envelope of a regular frequency comb originates from equally spaced, identical pulses and mimics the single-pulse spectrum. We investigated spectra originating from the periodic emission of pulse trains with gaps and individual pulse heights, as commonly observed, for example, at high-repetition-rate free-electron lasers, high-power lasers, and synchrotrons. The ANKA synchrotron light source was filled with defined patterns of short electron bunches generating coherent synchrotron radiation in the terahertz range. We resolved the intensities of the frequency comb around 0.258 THz using heterodyne mixing spectroscopy with a resolution down to 1 Hz and provide a comprehensive theoretical description. By adjusting the electron revolution frequency, a gapless spectrum can be recorded, improving the resolution by up to 7 and 5 orders of magnitude compared to FTIR and recent heterodyne measurements, respectively. The results suggest avenues to optimize and increase the signal-to-noise ratio of specific frequencies in the emitted synchrotron radiation spectrum, enabling novel ultrahigh-resolution spectroscopy and metrology applications from the terahertz to the X-ray region.
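The dependence of the comb lines on the fill pattern can be reproduced with a toy DFT: a ring filled uniformly produces lines only at harmonics of the bunch repetition frequency, while a fill pattern with a gap adds lines spaced by the revolution frequency. The 8-bucket pattern below is an illustrative miniature, not ANKA's actual filling scheme.

```python
import numpy as np

def comb_intensities(fill_pattern):
    """Spectral line strengths of one revolution of a bunch fill pattern.

    `fill_pattern` holds one amplitude per RF bucket; the magnitude of
    its DFT gives the intensity at each harmonic of the revolution
    frequency (a toy model, ignoring the single-pulse envelope).
    """
    return np.abs(np.fft.fft(np.asarray(fill_pattern, dtype=float)))

# 8 buckets, all filled equally: lines only at bunch-frequency harmonics
uniform = comb_intensities([1, 1, 1, 1, 1, 1, 1, 1])
# Same ring with a 2-bucket gap: extra revolution-frequency sidebands
gapped = comb_intensities([1, 1, 1, 1, 1, 1, 0, 0])
print(np.nonzero(uniform > 1e-9)[0])  # [0]
```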
Bergmann T., Balzer M., Bormann D., Chilingaryan S.A., Eitel K., Kleifges M., Kopmann A., Kozlov V., Menshikov A., Siebenborn B., Tcherniakhovski D., Vogelgesang M., Weber M.
in 2015 IEEE Nuclear Science Symposium and Medical Imaging Conference, NSS/MIC 2015 (2016), 7581841. DOI:10.1109/NSSMIC.2015.7581841
© 2015 IEEE. The EDELWEISS experiment, located in the underground laboratory LSM (France), is one of the leading experiments using cryogenic germanium (Ge) detectors for a direct search for dark matter. For the EDELWEISS-III phase, a new scalable data acquisition (DAQ) system was designed and built, based on the ‘IPE4 DAQ system’, which has already been used for several experiments in astroparticle physics.
Harbaum T., Seboui M., Balzer M., Becker J., Weber M.
in Proceedings – 24th IEEE International Symposium on Field-Programmable Custom Computing Machines, FCCM 2016 (2016) 184-191, 7544775. DOI:10.1109/FCCM.2016.52
© 2016 IEEE. Modern high-energy physics experiments such as the Compact Muon Solenoid (CMS) experiment at CERN produce an extraordinary amount of data every 25 ns. To handle a data rate of more than 50 Tbit/s, a multi-level trigger system is required to reduce the data rate. Due to the increased luminosity after the Phase-II upgrade of the LHC, the CMS tracking system has to be redesigned; the current trigger system is unable to handle the resulting amount of data. Because of the latency budget of a few microseconds, the Level-1 Track Trigger has to be implemented in hardware. State-of-the-art pattern recognition filters the incoming data by template matching on ASICs with a content addressable memory architecture. An implementation on an FPGA that replaces the content addressable memory of the ASIC has not been possible so far. This paper presents a new approach to a content addressable memory architecture that allows an FPGA-based implementation. Combining filtering and track finding in one FPGA design opens many possibilities for adjusting the two algorithms to each other, and the FPGA architecture offers more flexibility than the ASIC. The presented design minimizes the stored data by logic in order to optimally utilize the available FPGA resources. Furthermore, the developed design meets the strict timing constraints and possesses the required properties of a content addressable memory.
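The content addressable memory principle behind such pattern-matching triggers can be sketched with ternary patterns: each stored pattern is a (mask, value) pair in which 'x' marks don't-care bits, and an input word matches wherever the masked bits agree. In hardware all entries are compared in parallel; the sequential loop below is only an illustrative model.

```python
def make_pattern(bits):
    """Compile a ternary pattern string such as '1x0x' into a
    (mask, value) pair: `mask` selects the cared-about bit positions."""
    mask = value = 0
    for ch in bits:
        mask <<= 1
        value <<= 1
        if ch != "x":
            mask |= 1
            value |= int(ch)
    return mask, value

def cam_match(patterns, word):
    """Return indices of all stored patterns matching `word`; a real
    CAM compares every entry in parallel, this model does it in a loop."""
    return [i for i, (mask, value) in enumerate(patterns)
            if word & mask == value]

# Two road patterns with don't-care bits, as used in track-trigger CAMs
patterns = [make_pattern("1x0x"), make_pattern("0011")]
print(cam_match(patterns, 0b1001))  # [0]
print(cam_match(patterns, 0b0011))  # [1]
```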