Cecilia A., Rack A., Douissard P.-A., Martin T., Dos Santos Rolo T., Vagovic P., Pelliccia D., Couchaud M., Dupre K., Baumbach T.

in Nuclear Instruments and Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 633 (2011). DOI:10.1016/j.nima.2010.06.192

Abstract

Within the framework of an FP6 project (SCINTAX) we developed a new thin-film single-crystal scintillator for high-resolution X-ray imaging, based on a layer of modified LSO (Lu2SiO5) grown by liquid phase epitaxy (LPE) on a dedicated substrate. In this work we present the characterisation of the scintillating LSO films in terms of their optical and scintillation properties as well as their spatial resolution performance. The results are discussed and compared with the performance of the thin scintillating films commonly used in synchrotron-based micro-imaging applications. © 2010 Elsevier B.V. All rights reserved.

Danilewsky A.N., Wittge J., Croell A., Allen D., McNally P., Vagovic P., Dos Santos Rolo T., Li Z., Baumbach T., Gorostegui-Colinas E., Garagorri J., Elizalde M.R., Fossati M.C., Bowen D.K., Tanner B.K.

in Journal of Crystal Growth, 318 (2011) 1157-1163. DOI:10.1016/j.jcrysgro.2010.10.199

Abstract

White-beam X-ray diffraction imaging (topography) with an optimised CCD-detector system is used to monitor in situ and in real time the nucleation, growth and movement of dislocations in silicon at high temperatures. It is shown that damage such as microcracks, together with the surrounding strain fields in a wafer, acts as a source of dislocation loops, which end in slip bands far away from the source. The dislocations are arranged in channels of parallel {1 1 1} glide planes, which become visible as bands of parallel surface steps when the dislocations slip out on the front or back side of the wafer. The width of such a channel or band depends on the dimensions of the damaged volume where the dislocations nucleate. This can be explained with a simple geometrical model. © 2010 Elsevier B.V.
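
A rough feel for such a geometrical picture can be given with a short calculation. The sketch below is our own illustration, not the authors' model: it assumes a (001) wafer, where {1 1 1} glide planes are inclined at arccos(1/√3) ≈ 54.7° to the surface, so that the band width follows the lateral extent of the damaged volume while the band is offset by the projection of the glide planes through the wafer thickness.

    # Illustrative sketch only (assumptions stated above, not taken from the paper).
    import math

    def slip_band_geometry(source_extent_um, wafer_thickness_um):
        """Band width and lateral offset for a channel of parallel {111}
        glide planes crossing a (001) wafer."""
        inclination = math.acos(1 / math.sqrt(3))   # {111} vs (001), ~54.7 deg
        band_width = source_extent_um               # set by the damaged volume
        offset = wafer_thickness_um / math.tan(inclination)
        return band_width, offset

    # Hypothetical numbers: a 50 um microcrack in a 525 um thick wafer.
    print(slip_band_geometry(50.0, 525.0))          # -> (50.0, ~371 um)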

Chilingaryan S., Kopmann A., Mirone A., Dos Santos Rolo T.

in Conference Record – 2010 17th IEEE-NPSS Real Time Conference, RT10 (2010), 5750342. DOI:10.1109/RTC.2010.5750342

Abstract

Current imaging experiments at synchrotron beamlines often lack real-time data assessment. X-ray imaging cameras installed at synchrotron facilities like ANKA provide millions of pixels, each with a resolution of 12 bits or more, and record up to several thousand frames per second. A single experiment can produce data sets of multiple gigabytes in a few seconds. Until now, the data was stored in local memory, transferred to mass storage, and then processed and analyzed off-line. The data quality, and thus the success of the experiment, could therefore only be judged with a substantial delay, making immediate monitoring of the results impossible. To optimize the usage of the micro-tomography beamline at ANKA, we have ported the reconstruction software to modern graphics adapters, which offer enormous computational power. We were able to reduce the reconstruction time from multiple hours to just a few minutes for a sample dataset of 20 GB. Using the new reconstruction software, it is possible to provide near real-time visualization and to significantly reduce the time needed for a first evaluation of the reconstructed sample. The main paradigm of our approach is 100% utilization of all system resources: the compute-intensive parts are offloaded to the GPU, and while the GPU is reconstructing one slice, the CPUs prepare the next one. Special attention is devoted to minimizing data transfers between host and GPU memory and to executing I/O operations in parallel with the computations. For our application, it is now the data transfers rather than the computation that limit the reconstruction speed. Several changes in the architecture of the DAQ system are proposed to overcome this second bottleneck. The article introduces the system architecture, describes the hardware platform in detail, and analyzes the performance gains during the first half year of operation. © 2010 IEEE.
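
The pipelining idea (the GPU reconstructs one slice while the CPUs prepare the next, with I/O running concurrently) can be sketched as a bounded producer/consumer queue. The Python sketch below is purely schematic; the actual software offloads the reconstruction to the GPU, and read_sinogram, preprocess and reconstruct_slice are hypothetical stand-ins for the real stages.

    # Schematic producer/consumer pipeline (illustration, not the ANKA code).
    import queue
    import threading

    import numpy as np

    def read_sinogram(sid):                 # stand-in for disk I/O
        return np.random.rand(512, 512).astype(np.float32)

    def preprocess(sino):                   # stand-in for CPU-side filtering
        return sino - sino.mean()

    def reconstruct_slice(sino):            # stand-in for the GPU kernel
        return sino.sum(axis=0)

    def producer(slice_ids, q):
        for sid in slice_ids:
            q.put((sid, preprocess(read_sinogram(sid))))
        q.put(None)                         # sentinel: no more slices

    q = queue.Queue(maxsize=4)              # bounded queue limits host memory
    t = threading.Thread(target=producer, args=(range(32), q))
    t.start()
    while (item := q.get()) is not None:    # "GPU" consumes while CPUs prepare
        sid, sino = item
        reconstruct_slice(sino)
    t.join()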

Phillips D.G., Bergmann T., Corona T.J., Frankle F., Howe M.A., Kleifges M., Kopmann A., Leber M., Menshikov A., Tcherniakhovski D., Vandevender B., Wall B., Wilkerson J.F., Wustling S.

in IEEE Nuclear Science Symposium Conference Record (2010) 1399-1403, 5874002. DOI:10.1109/NSSMIC.2010.5874002

Abstract

This article describes the procedures used to validate and characterize the combined hardware and software DAQ system of the KATRIN experiment. The Mk4 DAQ electronics is the latest version in a series of field-programmable gate array (FPGA)-based electronics developed at the Karlsruhe Institute of Technology's Institute for Data Processing and Electronics (IPE). This system will serve as the primary detector readout in the KATRIN experiment. The KATRIN data acquisition software is a Mac OS X application called ORCA (Object-oriented Real-time Control and Acquisition), which includes a powerful scripting language called ORCAScript. The article also describes how ORCAScript is used in the validation and characterization tests of the Mk4 DAQ electronics. © 2010 IEEE.

Chilingaryan S., Beglarian A., Kopmann A., Vocking S.

in Journal of Physics: Conference Series, 219 (2010), 042034. DOI:10.1088/1742-6596/219/4/042034

Abstract

During the operation of high-energy physics experiments, a large amount of slow-control data is recorded, and all of it must be examined to check the integrity and validity of the measurements. With the growing maturity of AJAX technologies, it has become possible to construct sophisticated interfaces using web technologies only. Our solution for handling time series, in general slow-control data, has a modular architecture: a backend system for data analysis and preparation, a web-service interface for data access, and a fast AJAX web display. In order to provide fast interactive access, the time series are aggregated over time slices of a few predefined lengths. The aggregated values are stored in a temporary caching database and then used to create summary plots. These plots may include an indication of data quality and are generated within a few hundred milliseconds, even when very high data rates are involved. The extensible export subsystem provides data in multiple formats, including CSV, Excel, ROOT, and TDMS. The search engine can be used to find periods of time where the readings of selected sensors fall within specified ranges; the caching database allows most such lookups to complete within a second. Based on this functionality, a web interface facilitating fast (Google Maps style) navigation through the data has been implemented. The solution is currently used by several slow-control systems at the Test Facility for Fusion Magnets (TOSKA) and the Karlsruhe Tritium Neutrino experiment (KATRIN). © 2010 IOP Publishing Ltd.
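
The caching scheme, pre-aggregating the raw series over a few fixed slice lengths so that plots and range searches only touch the aggregates, might look roughly like the sketch below. This is our illustration only; the actual backend and its storage format are not described in the abstract.

    # Illustration of slice-based pre-aggregation (not the actual backend code).
    from collections import defaultdict

    def aggregate(samples, slice_len):
        """samples: iterable of (unix_time, value); returns
        {slice_start: (min, max, mean, count)} for one slice length."""
        buckets = defaultdict(list)
        for t, v in samples:
            buckets[int(t // slice_len) * slice_len].append(v)
        return {s: (min(vs), max(vs), sum(vs) / len(vs), len(vs))
                for s, vs in buckets.items()}

    def search(aggregates, lo, hi):
        """Find slices whose [min, max] overlaps [lo, hi]: candidate periods
        where a sensor reading falls into the requested range."""
        return sorted(s for s, (mn, mx, _, _) in aggregates.items()
                      if mx >= lo and mn <= hi)

    # Synthetic sensor: one sample per second, aggregated into 60 s slices.
    cache = aggregate([(i, 20 + (i % 7)) for i in range(10_000)], 60)
    print(search(cache, 25.0, 26.0)[:5])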

Chilingaryan S.

in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 5667 LNCS (2009) 21-34. DOI:10.1007/978-3-642-04205-8_4

Abstract

XML technologies have brought many new ideas and capabilities to the field of information management systems. Nowadays, XML is used almost everywhere: from small configuration files to multi-gigabyte archives of measurements. Many network services use XML as a transport protocol. XML-based applications employ multiple XML technologies to simplify software development: DOM is used to create and navigate XML documents, XSD schemas are used to check consistency and validity, XSLT simplifies transformation between different formats, and XML Encryption and XML Signature establish a secure and trustworthy way of exchanging and storing information. These technologies are provided by multiple commercial and open-source libraries, which vary significantly in features and performance. Moreover, some libraries are optimized for certain tasks, so the actual performance can vary significantly depending on the type of data processed. The XMLBench project was started to provide a comprehensive comparison of the available XML toolkits with respect to their functionality and their ability to sustain the required performance. The main target was fast C and C++ libraries able to work on multiple platforms. The applied tests compare different aspects of XML processing and are run on a few auto-generated data sets emulating library usage for different tasks. The details of the test setup and the achieved results are presented. © 2009 Springer Berlin Heidelberg.
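
The benchmarking methodology, timing the same operation across toolkits on auto-generated documents, can be mirrored in a few lines. XMLBench itself targets fast C and C++ libraries (for example libxml2 or Xerces-C); the sketch below only illustrates the approach using the two DOM-style parsers shipped with Python.

    # Methodology illustration only; XMLBench benchmarks C/C++ toolkits.
    import time
    import xml.etree.ElementTree as ET
    from xml.dom import minidom

    def generate_doc(records):
        """Auto-generate a flat, measurement-like test document."""
        rows = "".join(f"<m id='{i}'><v>{i * 0.5}</v></m>" for i in range(records))
        return f"<data>{rows}</data>"

    def bench(label, parse, doc, runs=5):
        times = []
        for _ in range(runs):
            t0 = time.perf_counter()
            parse(doc)                      # the operation under test
            times.append(time.perf_counter() - t0)
        print(f"{label:12s} best of {runs}: {min(times) * 1e3:.1f} ms")

    doc = generate_doc(100_000)
    bench("ElementTree", ET.fromstring, doc)
    bench("minidom", minidom.parseString, doc)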