Stevanovic U., Caselle M., Chilingaryan S., Herth A., Kopmann A., Vogelgesang M., Balzer M., Weber M.

in Conference on Design and Architectures for Signal and Image Processing, DASIP (2012) 383-384, 6385417.

Abstract

The first prototype of a high-speed camera with embedded image processing has been developed. Besides a high frame rate and high throughput, the camera introduces a novel self-triggering architecture to increase the frame rate and to reduce the amount of received data. The camera is intended for synchrotron ultra-fast X-ray radiography and tomography, but its concept is also suitable for other fields. © 2012 ECSI.
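
The abstract does not spell out the trigger logic, but the self-triggering idea (only frames that differ sufficiently from a reference leave the camera) can be sketched in a few lines. Everything below, the frame shapes, the 5% pixel-deviation criterion, and the threshold on the changed-pixel fraction, is a hypothetical illustration, not the camera's actual FPGA logic:

```python
import numpy as np

def self_trigger(frames, reference, min_changed=0.02):
    """Yield only frames that differ sufficiently from a reference image.

    A frame is kept when the fraction of pixels deviating by more than
    5% of full scale exceeds `min_changed` -- a stand-in for whatever
    criterion the camera's FPGA actually evaluates on the fly.
    """
    full_scale = np.iinfo(reference.dtype).max
    for frame in frames:
        changed = np.abs(frame.astype(np.int32) - reference) > 0.05 * full_scale
        if changed.mean() > min_changed:
            yield frame  # an event of interest: pass it downstream

# A mostly static scene in which a single frame changes:
reference = np.zeros((512, 512), dtype=np.uint16)
frames = [reference.copy() for _ in range(10)]
frames[3][100:200, 100:200] = 4000        # pixels well above 5% of full scale
print(len(list(self_trigger(frames, reference))))   # -> 1
```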

Bergmann T., Bormann D., Howe M.A., Kleifges M., Kopmann A., Kunka N., Menshikov A., Tcherniakhovski D.

in 2012 18th IEEE-NPSS Real Time Conference, RT 2012 (2012), 6418197. DOI:10.1109/RTC.2012.6418197

Abstract

Our group at KIT has been developing data acquisition (DAQ) systems for many years, mainly for large physics experiments like the KATRIN neutrino experiment or the Pierre Auger cosmic ray observatory. The DAQ systems were continuously enhanced as new technologies became available. The core of these DAQ systems is formed by field programmable gate arrays (FPGAs). Trigger functions running on the FPGAs select relevant events out of the permanent data stream of the ADCs and pass them over the PCI bus to an embedded Linux computer for further analysis and storage. Modern experiments have rising requirements in both data rate and complexity of trigger and analysis functions. To achieve a flexible and fast data link we developed a PCI to PCI Express (PCIe) adapter board which can be connected to any PC equipped with a standard PCIe plug-in adapter. We use this adapter to replace the embedded Linux system and to connect external GPU servers directly to the DAQ system. With this powerful data processing facility at the end of the data chain we can run complex third-level trigger functions, reconstruction algorithms and analysis calculations. With PCIe as a fast data link and GPU computing together with the well-established FPGA unit we achieved a substantial enhancement of our DAQ system. © 2012 IEEE.
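
As a rough illustration of what such a trigger does, selecting event windows out of a continuous ADC stream, here is a software sketch; the baseline and noise estimation, the 6-sigma threshold, and the fixed 32-sample window are illustrative assumptions, not the parameters of the actual firmware:

```python
import numpy as np

def find_pulses(trace, n_sigma=6.0, window=32):
    """Select pulse candidates from a raw ADC trace.

    Mimics a simple trigger: estimate baseline and noise from the
    trace itself, then cut a fixed window around every sample that
    crosses baseline + n_sigma * noise.
    """
    baseline = np.median(trace)
    noise = np.std(trace[: len(trace) // 10])  # assume a quiet leading segment
    threshold = baseline + n_sigma * noise
    above = np.flatnonzero(trace > threshold)
    events, last_end = [], -1
    for i in above:
        if i <= last_end:          # sample belongs to an already-cut window
            continue
        events.append(trace[i : i + window].copy())
        last_end = i + window
    return events

rng = np.random.default_rng(0)
trace = rng.normal(100.0, 2.0, 100_000)   # baseline ~100 ADC counts
trace[50_000 : 50_008] += 200.0           # one injected pulse
print(len(find_pulses(trace)))            # -> 1 (the injected pulse)
```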

Haas D., Mexner W., Spangenberg T., Cecilia A., Vagovic P., Kopmann A., Balzer M., Vogelgesang M., Pasic H., Chilingaryan S.

in PCaPAC 2012 – 9th International Workshop on Personal Computers and Particle Accelerator Controls (2012) 103-105.

Abstract

X-ray imaging makes it possible to spatially resolve the 2D and 3D structure of materials and organisms, which is crucial for understanding their properties. Additional temporal resolution of structure evolution gives access to the dynamics of processes and allows one to understand the functionality of devices and organisms, with the goal of optimizing technological processes. Such time-resolved dynamic analysis of micro-sized structures is now possible with the aid of ultrafast tomography, as is being developed at the TopoTomo beamline of the synchrotron light source ANKA. At TopoTomo, the whole experimental workflow has been significantly improved in order to decrease the total duration of a tomography experiment down to the range of minutes. To meet these requirements, detectors and the computing infrastructure have been optimized, comprising a Tango-based control system for ultrafast tomography with a data throughput of several hundred MB/s. Multi-GPU based computing allows for high-speed data processing by using a special reconstruction scheme. Furthermore, the data management infrastructure will allow for a life cycle management of data sets accumulating several TByte/day. The final concept will also be part of the IMAGE beamline, which is going to be installed in 2013. © 2012 by the respective authors.

Balzer M., Caselle M., Chilingaryan S., Herth A., Kopmann A., Stevanovic U., Vogelgesang M., Rolo T.D.S.

in SEI 2012 – 103. Tagung der Studiengruppe Elektronische Instrumentierung im Frühjahr 2012 (2012) 121-132.

Chilingaryan S., Kopmann A., Mirone A., Dos Santos Rolo T., Vogelgesang M.

in SC’11 – Proceedings of the 2011 High Performance Computing Networking, Storage and Analysis Companion, Co-located with SC’11 (2011) 51-52. DOI:10.1145/2148600.2148624

Abstract

X-ray tomography has been proven to be a valuable tool for understanding internal, otherwise invisible, mechanisms in biology and other fields. Recent advances in digital detector technology enabled the investigation of dynamic processes in 3D with a temporal resolution down to the milliseconds range. Unfortunately, this requires computationally intensive reconstruction algorithms with long post-processing times. We have optimized the reconstruction software employed at the micro-tomography beamlines at KIT and ESRF. Using a 4-stage pipelined architecture and the computational power of modern graphics cards, we were able to reduce the processing time by a factor of 75 with a single server. The time required to reconstruct a typical 3D image is reduced to only a few seconds, and online visualization is possible for the first time. Copyright is held by the author/owner(s).

Chilingaryan S., Mirone A., Hammersley A., Ferrero C., Helfen L., Kopmann A., Dos Santos Rolo T., Vagovic P.

in IEEE Transactions on Nuclear Science, 58 (2011) 1447-1455, 5766797. DOI:10.1109/TNS.2011.2141686

Abstract

Advances in digital detector technology are presently leading to rapidly increasing data rates in imaging experiments. Using fast two-dimensional detectors in computed tomography, the data acquisition can be much faster than the reconstruction if no adequate measures are taken, especially when a high photon flux at synchrotron sources is used. We have optimized the reconstruction software employed at the micro-tomography beamlines of our synchrotron facilities to use the computational power of modern graphics cards. The main paradigm of our approach is the full utilization of all system resources. We use a pipelined architecture, where the GPUs are used as compute coprocessors to reconstruct slices, while the CPUs are preparing the next ones. Special attention is devoted to minimizing data transfers between the host and GPU memory and to executing memory transfers in parallel with the computations. We were able to reduce the reconstruction time by a factor of 30 and process a typical data set of 20 GB in 40 seconds. The time needed for the first evaluation of the reconstructed sample is reduced significantly, and quasi real-time visualization is now possible. © 2011 IEEE.
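
The central idea, a pipeline in which the CPUs prepare the next slice while the GPU reconstructs the current one, can be mimicked with threads and bounded queues. The sketch below is a generic stand-in: the four stage functions are hypothetical placeholders for loading, filtering, GPU back-projection, and storage, and real code would dispatch the third stage to the GPU rather than run it on the CPU:

```python
import threading, queue

def pipeline(stages, inputs, depth=4):
    """Run `stages` (a list of functions) as a linear pipeline.

    Each stage runs in its own thread and hands results to the next
    via a bounded queue, so stage k processes slice i while stage
    k-1 already prepares slice i+1 -- the overlap that hides CPU
    preprocessing and transfer time behind the compute stage.
    """
    qs = [queue.Queue(maxsize=depth) for _ in range(len(stages) + 1)]

    def worker(fn, q_in, q_out):
        while True:
            item = q_in.get()
            if item is None:          # poison pill: shut the stage down
                q_out.put(None)
                return
            q_out.put(fn(item))

    threads = [threading.Thread(target=worker, args=(fn, qs[k], qs[k + 1]))
               for k, fn in enumerate(stages)]
    for t in threads:
        t.start()
    for item in inputs:
        qs[0].put(item)
    qs[0].put(None)
    while (result := qs[-1].get()) is not None:
        yield result
    for t in threads:
        t.join()

# Hypothetical stage functions standing in for: load sinogram,
# CPU-side filtering, GPU back-projection, and writing the slice.
stages = [lambda s: f"loaded({s})",
          lambda s: f"filtered({s})",
          lambda s: f"backprojected({s})",   # would dispatch to the GPU
          lambda s: f"stored({s})"]
for out in pipeline(stages, range(3)):
    print(out)
```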

Chilingaryan S., Kopmann A., Mirone A., Dos Santos Rolo T.

in Conference Record – 2010 17th IEEE-NPSS Real Time Conference, RT10 (2010), 5750342. DOI:10.1109/RTC.2010.5750342

Abstract

Current imaging experiments at synchrotron beamlines often lack real-time data assessment. X-ray imaging cameras installed at synchrotron facilities like ANKA provide millions of pixels, each with a resolution of 12 bits or more, and take up to several thousand frames per second. A given experiment can produce data sets of multiple gigabytes in a few seconds. Up to now the data is stored in local memory, transferred to mass storage, and then processed and analyzed off-line. The data quality, and thus the success of the experiment, can therefore only be judged with a substantial delay, which makes an immediate monitoring of the results impossible. To optimize the usage of the micro-tomography beamline at ANKA we have ported the reconstruction software to modern graphics adapters, which offer an enormous amount of computing power. We were able to reduce the reconstruction time from multiple hours to just a few minutes with a sample dataset of 20 GB. Using the new reconstruction software it is possible to provide near real-time visualization and to significantly reduce the time needed for the first evaluation of the reconstructed sample. The main paradigm of our approach is 100% utilization of all system resources. The compute-intensive parts are offloaded to the GPU. While the GPU is reconstructing one slice, the CPUs are used to prepare the next one. Special attention is devoted to minimizing data transfers between the host and GPU memory and to executing I/O operations in parallel with the computations. It could be shown that for our application not the computational part but the data transfers now limit the speed of the reconstruction. Several changes in the architecture of the DAQ system are proposed to overcome this second bottleneck. The article introduces the system architecture, describes the hardware platform in detail, and analyzes performance gains during the first half year of operation. © 2010 IEEE.

Phillips D.G., Bergmann T., Corona T.J., Fränkle F., Howe M.A., Kleifges M., Kopmann A., Leber M., Menshikov A., Tcherniakhovski D., Vandevender B., Wall B., Wilkerson J.F., Wüstling S.

in IEEE Nuclear Science Symposium Conference Record (2010) 1399-1403, 5874002. DOI:10.1109/NSSMIC.2010.5874002

Abstract

This article will describe the procedures used to validate and characterize the combined hardware and software DAQ system of the KATRIN experiment. The Mk4 DAQ electronics is the latest version in a series of field programmable gate array (FPGA)-based electronics developed at the Karlsruhe Institute of Technology's Institute for Data Processing and Electronics (IPE). This system will serve as the primary detector readout in the KATRIN experiment. The KATRIN data acquisition software is a Mac OS X application called ORCA (Object-oriented Real-time Control and Acquisition), which includes a powerful scripting language called ORCAScript. This article will also describe how ORCAScript is used in the validation and characterization tests of the Mk4 DAQ electronics system. © 2010 IEEE.

Chilingaryan S., Beglarian A., Kopmann A., Vöcking S.

in Journal of Physics: Conference Series, 219 (2010), 042034. DOI:10.1088/1742-6596/219/4/042034

Abstract

During the operation of high energy physics experiments a large amount of slow control data is recorded. It is necessary to examine all collected data, checking the integrity and validity of the measurements. With the growing maturity of AJAX technologies it has become possible to construct sophisticated interfaces using web technologies only. Our solution for handling time series, generally slow control data, has a modular architecture: a backend system for data analysis and preparation, a web service interface for data access, and a fast AJAX web display. In order to provide fast interactive access, the time series are aggregated over time slices of a few predefined lengths. The aggregated values are stored in a temporary caching database and are then used to create generalizing data plots. These plots may include an indication of data quality and are generated within a few hundred milliseconds even if very high data rates are involved. The extensible export subsystem provides data in multiple formats including CSV, Excel, ROOT, and TDMS. The search engine can be used to find periods of time where the readings of selected sensors fall into specified ranges. Utilization of the caching database allows most such lookups to be performed within a second. Based on this functionality, a web interface facilitating fast (Google Maps style) navigation through the data has been implemented. The solution is currently used by several slow control systems at the Test Facility for Fusion Magnets (TOSKA) and the Karlsruhe Tritium Neutrino (KATRIN) experiment. © 2010 IOP Publishing Ltd.
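
The interactive speed rests on aggregating the raw series over fixed time slices once and caching the summaries; a minimal version of that aggregation step might look as follows (the slice length and the min/mean/max summary are assumptions, since the abstract does not specify which statistics are cached):

```python
import numpy as np

def aggregate(timestamps, values, slice_seconds):
    """Summarize a raw time series into fixed-length time slices.

    For every slice we keep min, mean, and max, which is enough to
    draw an overview plot (a band between min and max, a mean line)
    without touching the raw data again.
    """
    t0 = timestamps[0]
    bins = ((timestamps - t0) // slice_seconds).astype(int)
    summary = {}
    for b, v in zip(bins, values):
        lo, total, hi, n = summary.get(b, (np.inf, 0.0, -np.inf, 0))
        summary[b] = (min(lo, v), total + v, max(hi, v), n + 1)
    return {b: (lo, total / n, hi) for b, (lo, total, hi, n) in summary.items()}

# One day of 1 Hz sensor readings aggregated into 10-minute slices:
ts = np.arange(0, 86_400, dtype=float)
vals = np.sin(ts / 3600.0) + 0.01 * np.random.default_rng(1).normal(size=ts.size)
slices = aggregate(ts, vals, slice_seconds=600)
print(len(slices))          # 144 cached slices instead of 86400 raw points
```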

Chilingaryan S.

in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 5667 LNCS (2009) 21-34. DOI:10.1007/978-3-642-04205-8_4

Abstract

XML technologies have brought many new ideas and capabilities to the field of information management systems. Nowadays, XML is used almost everywhere: from small configuration files to multi-gigabyte archives of measurements. Many network services use XML as a transport protocol. XML-based applications utilize multiple XML technologies to simplify software development: DOM is used to create and navigate XML documents, XSD schemas are used to check consistency and validity, XSL simplifies transformation between different formats, and XML Encryption and Signature establish a secure and trustworthy way of information exchange and storage. These technologies are provided by multiple commercial and open source libraries which vary significantly in features and performance. Moreover, some libraries are optimized for certain tasks and, therefore, the actual library performance can vary significantly depending on the type of data processed. The XMLBench project was started to provide a comprehensive comparison of available XML toolkits in their functionality and their ability to sustain the required performance. The main targets were fast C and C++ libraries able to work on multiple platforms. The applied tests compare different aspects of XML processing and are run on a few auto-generated data sets emulating library usage for different tasks. The details of the test setup and the achieved results are presented. © 2009 Springer Berlin Heidelberg.
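
In the spirit of XMLBench, one can time a parser on auto-generated documents. The sketch below uses only Python's standard library, so both the synthetic document generator and the best-of-N timing loop are stand-ins for the project's actual C/C++ test setup:

```python
import time
import xml.etree.ElementTree as ET

def generate_document(records=10_000):
    """Build a synthetic measurement archive, one <record> per entry."""
    rows = "".join(
        f'<record id="{i}"><value>{i * 0.5:.3f}</value></record>'
        for i in range(records)
    )
    return f"<archive>{rows}</archive>"

def benchmark(parse, doc, repeats=5):
    """Return the best-of-N wall-clock parse time in seconds."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        parse(doc)
        best = min(best, time.perf_counter() - start)
    return best

doc = generate_document()
print(f"ElementTree: {benchmark(ET.fromstring, doc):.4f} s "
      f"for {len(doc) / 1e6:.1f} MB")
```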