Danilewsky A.N., Wittge J., Croell A., Allen D., McNally P., Vagovic P., Dos Santos Rolo T., Li Z., Baumbach T., Gorostegui-Colinas E., Garagorri J., Elizalde M.R., Fossati M.C., Bowen D.K., Tanner B.K.

in Journal of Crystal Growth, 318 (2011) 1157-1163. DOI:10.1016/j.jcrysgro.2010.10.199


White beam X-ray diffraction imaging (topography) with an optimised CCD-detector system is used to monitor in-situ and in real time the nucleation, growth and movement of dislocations in silicon at high temperatures. It is shown that damage such as microcracks, together with the surrounding strain fields in a wafer, acts as a source of dislocation loops, which end in slip bands far away from the source. The dislocations are arranged in channels of parallel {1 1 1} glide planes, which become visible as bands of parallel surface steps when the dislocations slip out at the back or front side of the wafer. The width of such a channel or band depends on the dimensions of the damaged volume where the dislocations nucleate. This can be explained with a simple geometrical model. © 2010 Elsevier B.V.

Chilingaryan S., Kopmann A., Mirone A., Dos Santos Rolo T.

in Conference Record – 2010 17th IEEE-NPSS Real Time Conference, RT10 (2010), 5750342. DOI:10.1109/RTC.2010.5750342


Current imaging experiments at synchrotron beam lines often lack real-time data assessment. X-ray imaging cameras installed at synchrotron facilities like ANKA provide millions of pixels, each with a resolution of 12 bits or more, and take up to several thousand frames per second. A given experiment can produce data sets of multiple gigabytes in a few seconds. Up to now, the data has been stored in local memory, transferred to mass storage, and then processed and analyzed off-line. The data quality, and thus the success of the experiment, can therefore only be judged with a substantial delay, which makes immediate monitoring of the results impossible. To optimize the usage of the micro-tomography beam line at ANKA we have ported the reconstruction software to modern graphics adapters, which offer an enormous amount of computing power. We were able to reduce the reconstruction time from multiple hours to just a few minutes for a sample dataset of 20 GB. Using the new reconstruction software it is possible to provide near real-time visualization and significantly reduce the time needed for the first evaluation of the reconstructed sample. The main paradigm of our approach is 100% utilization of all system resources. The compute-intensive parts are offloaded to the GPU. While the GPU is reconstructing one slice, the CPUs are used to prepare the next one. Special attention is devoted to minimizing data transfers between the host and GPU memory and to executing I/O operations in parallel with the computations. We show that for our application it is no longer the computational part but the data transfers that limit the speed of the reconstruction. Several changes in the architecture of the DAQ system are proposed to overcome this second bottleneck. The article introduces the system architecture, describes the hardware platform in detail, and analyzes performance gains during the first half year of operation. © 2010 IEEE.
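The pipelining paradigm described in the abstract (the GPU reconstructs one slice while the CPUs prepare the next) can be sketched as a bounded producer/consumer queue. The following Python sketch is only illustrative: `preprocess` and `reconstruct` are hypothetical stand-ins for the CPU preparation and GPU reconstruction stages, not the actual ANKA software.

```python
import queue
import threading

def preprocess(raw):
    # Stand-in for the CPU stage (e.g. flat-field correction, filtering).
    return [x * 2 for x in raw]

def reconstruct(prepared):
    # Stand-in for the GPU stage (e.g. back-projection of one slice).
    return sum(prepared)

def pipeline(raw_slices):
    # A bounded queue lets the producer thread prepare the next slice
    # while the consumer "reconstructs" the current one, overlapping
    # the two stages instead of running them sequentially.
    q = queue.Queue(maxsize=2)

    def producer():
        for raw in raw_slices:
            q.put(preprocess(raw))
        q.put(None)  # sentinel: no more slices

    t = threading.Thread(target=producer)
    t.start()
    results = []
    while True:
        item = q.get()
        if item is None:
            break
        results.append(reconstruct(item))
    t.join()
    return results
```

In the real system the same structure would additionally overlap host-to-GPU transfers with kernel execution, since (as the abstract notes) those transfers, not the computation, become the bottleneck.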