SwePub
Search the SwePub database

  Advanced search

Result list for search "WFRF:(Eerola Paula) ;pers:(Mjörnmark Ulf)"

Search: WFRF:(Eerola Paula) > Mjörnmark Ulf

  • Results 1-10 of 10
Sort/group the result list
   
Numbering | Reference | Cover image | Find
1.
  • Abat, E., et al. (authors)
  • The ATLAS Transition Radiation Tracker (TRT) proportional drift tube: design and performance
  • 2008
  • In: Journal of Instrumentation. - 1748-0221. ; 3:2
  • Journal article (peer-reviewed) abstract
    • A straw proportional counter is the basic element of the ATLAS Transition Radiation Tracker (TRT). Its detailed properties as well as the main properties of a few TRT operating gas mixtures are described. Particular attention is paid to straw tube performance in high radiation conditions and to its operational stability.
  •  
2.
  • Abat, E., et al. (authors)
  • The ATLAS TRT barrel detector
  • 2008
  • In: Journal of Instrumentation. - 1748-0221. ; 3
  • Journal article (peer-reviewed) abstract
    • The ATLAS TRT barrel is a tracking drift chamber using 52,544 individual tubular drift tubes. It is one part of the ATLAS Inner Detector, which consists of three sub-systems: the pixel detector spanning the radius range 4 to 20 cm, the semiconductor tracker (SCT) from 30 to 52 cm, and the transition radiation tracker (TRT) from 56 to 108 cm. The TRT barrel covers the central pseudo-rapidity region |eta| < 1, while the TRT endcaps cover the forward and backward eta regions. These TRT systems provide a combination of continuous tracking with many measurements in individual drift tubes (or straws) and of electron identification based on transition radiation from fibers or foils interleaved between the straws themselves. This paper describes the recently completed construction of the TRT barrel detector, including the quality control procedures used in the fabrication of the detector.
  •  
3.
  • Abat, E., et al. (authors)
  • The ATLAS TRT electronics
  • 2008
  • In: Journal of Instrumentation. - 1748-0221. ; 3:6
  • Journal article (peer-reviewed) abstract
    • The ATLAS inner detector consists of three sub-systems: the pixel detector spanning the radius range 4 cm to 20 cm, the semiconductor tracker at radii from 30 to 52 cm, and the transition radiation tracker (TRT), tracking from 56 to 107 cm. The TRT provides a combination of continuous tracking with many projective measurements based on individual drift tubes (or straws) and of electron identification based on transition radiation from fibres or foils interleaved between the straws themselves. This paper describes the on- and off-detector electronics for the TRT as well as the TRT portion of the data acquisition (DAQ) system.
  •  
4.
  • Almehed, Sverker, et al. (authors)
  • Regional research exploitation of the LHC: A case-study of the required computing resources
  • 2002
  • In: Computer Physics Communications. - 0010-4655. ; 145:3, p. 341-350
  • Journal article (peer-reviewed) abstract
    • A simulation study to evaluate the required computing resources for a research exploitation of the Large Hadron Collider (LHC) has been performed. The evaluation was done as a case study, assuming the existence of a Nordic regional centre and using the requirements for performing a specific physics analysis as a yardstick. Other input parameters were: assumptions for the distribution of researchers at the institutions involved, an analysis model, and two different functional structures of the computing resources.
  •  
5.
  • Capeans, M., et al. (authors)
  • Recent aging studies for the ATLAS transition radiation tracker
  • 2004
  • In: IEEE Transactions on Nuclear Science. ; 51, p. 960-967
  • Conference paper (peer-reviewed) abstract
    • The transition radiation tracker (TRT) is one of the three subsystems of the inner detector of the ATLAS experiment. It is designed to operate for 10 yr at the LHC, with integrated charges of ~10 C/cm of wire and radiation doses of about 10 Mrad and 2×10^14 neutrons/cm^2. These doses translate into unprecedented ionization currents and integrated charges for a large-scale gaseous detector. This paper describes studies leading to the adoption of a new ionization gas regime for the ATLAS TRT. In this new regime, the primary gas mixture is 70%Xe-27%CO2-3%O2. It is planned to occasionally flush and operate the TRT detector with an Ar-based ternary mixture, containing a small percentage of CF4, to remove, if needed, silicon pollution from the anode wires. This procedure has been validated in realistic conditions and would require a few days of dedicated operation. This paper covers both performance and aging studies with the new TRT gas mixture.
  •  
6.
  • Cwetanski, P, et al. (authors)
  • Acceptance tests and criteria of the ATLAS transition radiation tracker
  • 2005
  • In: IEEE Transactions on Nuclear Science. - 0018-9499. ; 52:6, p. 2911-2916
  • Journal article (peer-reviewed) abstract
    • The Transition Radiation Tracker (TRT) sits at the outermost part of the ATLAS Inner Detector, encasing the Pixel Detector and the Semi-Conductor Tracker (SCT). The TRT combines charged-particle track reconstruction with electron identification capability. This is achieved by layers of xenon-filled straw tubes with periodic radiator foils or fibers providing TR photon emission. The design and choice of materials have been optimized to cope with the harsh operating conditions at the LHC, which are expected to lead to an accumulated radiation dose of 10 Mrad and a neutron fluence of up to 2×10^14 n/cm^2 after ten years of operation. The TRT comprises a barrel containing 52 000 axial straws and two end-cap parts with 320 000 radial straws. The total of 420 000 electronic channels (two channels per barrel straw) allows continuous tracking with many projective measurements (more than 30 straw hits per track). The assembly of the barrel modules in the US has recently been completed, while the end-cap wheel construction in Russia has reached the 50% mark. After testing at the production sites and shipment to CERN, all modules and wheels undergo a series of quality and conformity measurements. These acceptance tests survey dimensions, wire tension, gas-tightness, high-voltage stability and gas-gain uniformity along each individual straw. This paper gives details on the acceptance criteria and measurement methods. An overview of the most important results obtained to date is also given.
  •  
7.
  •  
8.
  • Åkesson, Torsten, et al. (authors)
  • ATLAS computing: Technical Design Report
  • 2005
  • Report (other academic/artistic) abstract
    • The ATLAS Computing Model embraces the Grid paradigm and a high degree of decentralization and sharing of computing resources. The required level of computing resources means that off-site facilities will be vital to the operation of ATLAS in a way that was not the case for previous CERN-based experiments. The primary event processing occurs at CERN in a Tier-0 facility. The RAW data is archived at CERN and copied (along with the primary processed data) to the Tier-1 facilities around the world. These facilities archive the raw data, provide the reprocessing capacity, provide access to the various processed versions, and allow scheduled analysis of the processed data by physics analysis groups. Derived datasets produced by the physics groups are copied to the Tier-2 facilities for further analysis. The Tier-2 facilities also provide the simulation capacity for the experiment, with the simulated data housed at Tier-1s. In addition, Tier-2 centres will provide analysis facilities, and some will provide the capacity to produce calibrations based on processing raw data. A CERN Analysis Facility provides an additional analysis capacity, with an important role in the calibration and algorithmic development work. ATLAS has adopted an object-oriented approach to software, based primarily on the C++ programming language, but with some components implemented using FORTRAN and Java. A component-based model has been adopted, whereby applications are built up from collections of plug-compatible components based on a variety of configuration files. This capability is supported by a common framework that provides common data-processing support. This approach results in great flexibility in meeting the basic processing needs of the experiment, and also for responding to changing requirements throughout its lifetime. The heavy use of abstract interfaces allows for different implementations to be provided, supporting different persistency technologies, or optimized for the offline or high-level trigger environments. The Athena framework is an enhanced version of the Gaudi framework that was originally developed by the LHCb experiment, but is now a common ATLAS-LHCb project. Major design principles are the clear separation of data and algorithms, and of transient (in-memory) and persistent (in-file) data. All levels of processing of ATLAS data, from high-level trigger to event simulation, reconstruction and analysis, take place within the Athena framework; in this way it is easier for code developers and users to test and run algorithmic code, with the assurance that all geometry and conditions data will be the same for all types of applications (simulation, reconstruction, analysis, visualization). One of the principal challenges for ATLAS computing is to develop and operate a data storage and management infrastructure able to meet the demands of a yearly data volume of O(10 PB) utilized by data processing and analysis activities spread around the world. The ATLAS Computing Model establishes the environment and operational requirements that ATLAS data-handling systems must support, and, together with the operational experience gained to date in test beams and data challenges, provides the primary guidance for the development of the data management systems. 
The ATLAS Databases and Data Management Project (DB Project) leads and coordinates ATLAS activities in these areas, with a scope encompassing technical databases (detector production, installation and survey data), detector geometry, online/TDAQ databases, conditions databases (online and offline), event data, offline processing configuration and book-keeping, distributed data management, and distributed database and data management services. The project is responsible for ensuring the coherent development, integration, and operational capability of the distributed database and data management software and infrastructure for ATLAS across these areas. The ATLAS Computing Model foresees the distribution of raw and processed data to Tier-1 and Tier-2 centres, so as to be able to exploit fully the computing resources that are made available to the Collaboration. Additional computing resources will be available for data processing and analysis at Tier-3 centres and other computing facilities to which ATLAS may have access. A complex set of tools and distributed services, enabling the automatic distribution and processing of the large amounts of data, has been developed and deployed by ATLAS in cooperation with the LHC Computing Grid (LCG) Project and with the middleware providers of the three large Grid infrastructures we use: EGEE, OSG and NorduGrid. The tools are designed in a flexible way, in order to have the possibility to extend them to use other types of Grid middleware in the future. These tools, and the service infrastructure on which they depend, were initially developed in the context of centrally managed, distributed Monte Carlo production exercises. They will be re-used wherever possible to create systems and tools for individual users to access data and compute resources, providing a distributed analysis environment for general usage by the ATLAS Collaboration. The first version of the production system was deployed in summer 2004 and has been used since the second half of 2004. It was used for Data Challenge 2, for the production of simulated data for the 5th ATLAS Physics Workshop (Rome, June 2005) and for the reconstruction and analysis of the 2004 Combined Test-Beam data. The main computing operations that ATLAS will have to run comprise the preparation, distribution and validation of ATLAS software, and the computing and data management operations run centrally on Tier-0, Tier-1s and Tier-2s. The ATLAS Virtual Organization will allow production and analysis users to run jobs and access data at remote sites using the ATLAS-developed Grid tools. In the past few years the Computing Model has been tested and developed by running Data Challenges of increasing scope and magnitude, as was proposed by the LHC Computing Review in 2001. We have run two major Data Challenges since 2002 and performed other massive productions in order to provide simulated data to the physicists and to reconstruct and analyse real data coming from test-beam activities; this experience is now useful in setting up the operations model for the start of LHC data-taking in 2007. The Computing Model, together with the knowledge of the resources needed to store and process each ATLAS event, gives rise to estimates of required resources that can be used to design and set up the various facilities. 
It is not assumed that all Tier-1s or Tier-2s will be of the same size; however, in order to ensure a smooth operation of the Computing Model, all Tier-1s should have broadly similar proportions of disk, tape and CPU, and the same should apply for the Tier-2s. The organization of the ATLAS Software & Computing Project reflects all areas of activity within the project itself. Strong high-level links have been established with other parts of the ATLAS organization, such as the T-DAQ Project and Physics Coordination, through cross-representation in the respective steering boards. The Computing Management Board, and in particular the Planning Officer, acts to make sure that software and computing developments take place coherently across sub-systems and that the project as a whole meets its milestones. The International Computing Board assures the information flow between the ATLAS Software & Computing Project and the national resources and their Funding Agencies.
  •  
9.
  • Åkesson, Torsten, et al. (authors)
  • ATLAS High-Level Trigger, Data Acquisition and Controls Technical Design Report
  • 2003
  • Report (other academic/artistic) abstract
    • This Technical Design Report (TDR) for the High-level Trigger (HLT), Data Acquisition (DAQ) and Controls of the ATLAS experiment builds on the earlier documents published on these systems: Trigger Performance Status Report, DAQ, EF, LVL2 and DCS Technical Progress Report, and High-Level Triggers, DAQ and DCS Technical Proposal. Much background and preparatory work relevant to this TDR is referenced in the above documents. In addition, a large amount of detailed technical documentation has been produced in support of this TDR. These documents are referenced in the appropriate places in the following chapters. This section introduces the overall organization of the document. The following sections give an overview of the principal system requirements and functions, as well as a brief description of the principal data types used in the Trigger/DAQ (TDAQ) system. The document has been organized into four parts: Part I — Global View Chapters 2, 3 and 4 address the principal system and experiment parameters which define the main requirements of the HLT, DAQ and Controls system. The global system operations, and the physics requirements and event selection strategy are also addressed. Chapter 5 defines the overall architecture of the system and analyses the requirements of its principal components, while Chapters 6 and 7 address more specific fault-tolerance and monitoring issues. Part II — System Components This part describes in more detail the principal components and functions of the system. Chapter 8 addresses the final prototype design and performance of the Data Flow component. These are responsible for the transport of event data from the output of the detector Read Out Links (ROLs) via the HLT system (where event selection takes place) to mass storage. Chapter 9 explains the decomposition of the HLT into a second level trigger (LVL2) and an Event Filter (EF). It details the design of the data flow within the HLT, the specifics of the HLT system supervision, and the design and implementation of the Event Selection Software (ESS). Chapter 10 addresses the Online Software which is responsible for the run control and DAQ supervision of the entire TDAQ and detector systems during data taking. It is also responsible for miscellaneous services such as error reporting, run parameter accessibility, and histogramming and monitoring support. Chapter 11 describes the Detector Control System (DCS), responsible for the control and supervision of all the detector hardware and of the services and the infrastructure of the experiment. The DCS is also the interface point for information exchange between ATLAS and the LHC accelerator. Chapter 12 draws together the various aspects of experiment control detailed in previous chapters and examines several use-cases for the overall operation and control of the experiment, including: data-taking operations, calibration runs, and operations required outside data-taking. Part III — System Performance Chapter 13 addresses the physics selection. The tools used for physics selection are described along with the event-selection algorithms and their performance. Overall HLT output rates and sizes are also discussed. An initial analysis of how ATLAS will handle the first year of running from the point of view of TDAQ is presented. 
Chapter 14 discusses the overall performance of the HLT/DAQ system from various points of view, namely the HLT performance as evaluated in dedicated testbeds, the overall performance of the TDAQ system in a testbed of ~10% ATLAS size, and functional tests of the system in the detector test beam environment. Data from these various testbeds are also used to calibrate a detailed discrete-event simulation model of data flow in the full-scale system. Part IV — Organization and Planning Chapter 15 discusses quality-assurance issues and explains the software-development process employed. Chapter 16 presents the system costing and staging scenario. Chapter 17 presents the overall organization of the project and general system-resource issues. Chapter 18 presents the short-term HLT/DAQ work-plan for the next phase of the project as well as the global development schedule up to LHC turn-on in 2007.
  •  
10.
  • Åkesson, Torsten, et al. (authors)
  • High transverse momentum physics at the large hadron collider
  • 2002
  • In: EPJ direct. - 1435-3725. ; 4:CN1, p. 1-61
  • Research review (peer-reviewed) abstract
    • This note summarizes many detailed physics studies done by the ATLAS and CMS Collaborations for the LHC, concentrating on processes involving the production of high-mass states. These studies show that the LHC should be able to elucidate the mechanism of electroweak symmetry breaking and to study a variety of other topics related to physics at the TeV scale. In particular, a Higgs boson with couplings given by the Standard Model is observable in several channels over the full range of allowed masses. Its mass and some of its couplings will be determined. If supersymmetry is relevant to electroweak interactions, it will be discovered and the properties of many supersymmetric particles elucidated. Other new physics, such as the existence of massive gauge bosons and extra dimensions, can be searched for, extending existing limits by an order of magnitude or more.
  •  
