SwePub

Result list for the search "WFRF:(Eerola Paula) ;conttype:(scientificother)"

Search: WFRF:(Eerola Paula) > Other scholarly/artistic

  • Results 1-10 of 11
1.
  • De Angelis, Alessandro, et al. (author)
  • An investigation of screwiness in hadronic final states from DELPHI
  • 1998
  • Report (other scholarly/artistic), abstract
    • A recent theoretical model by Andersson et al. proposes that soft gluons order themselves in the form of a helix at the end of the QCD cascades. The authors of the model present a measure of the rapidity-azimuthal angle correlation, which they call screwiness. We searched for such a signal in DELPHI data and found no evidence for screwiness.
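    For context: the screwiness observable of Andersson et al. is, as a hedged sketch recalled from the published model rather than quoted from this record, usually written as

        S(\omega) = \sum_{e} P(e)\,\Bigl|\sum_{j \in e} \exp\bigl[i\,(\omega\, y_j - \varphi_j)\bigr]\Bigr|^{2}

    where y_j and \varphi_j are the rapidity and azimuthal angle of final-state particle j, P(e) is the event weight, and \omega parameterizes the pitch of the hypothesized gluon helix. A helical ordering of soft gluons would appear as a peak in S(\omega) at non-zero \omega, which the DELPHI search above did not find.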
2.
  • Eerola, Paula, et al. (author)
  • Accelerator-based infrastructures in the fields of particle and nuclear physics
  • 2020
  • Report (other scholarly/artistic), abstract
    • The Council for Research Infrastructures (RFI) within the Swedish Research Council (Vetenskapsrådet) commits a significant part of its annual budget to accelerator-based infrastructures in particle and nuclear physics. The funding covers membership fees, running costs and investments. The Swedish activities in these fields are mainly focused on CERN (Geneva, Switzerland) and FAIR (Darmstadt, Germany). In 2019, RFI decided to commission an investigation and landscape analysis of the research infrastructures it funds in these fields. The report is meant to support the Council’s work in ensuring that these funds are strategically well spent and of maximum benefit to the research community. A panel of seven experts from the Nordic countries has worked on the task, drawing on relevant documentation, hearings, interviews and questionnaires. The report contains several concrete recommendations from the authors to RFI.
3.
  • Eerola, Paula, et al. (author)
  • Atlas Data-Challenge 1 on NorduGrid
  • 2003
  • In: Proceedings of CHEP 2003.
  • Conference paper (other scholarly/artistic), abstract
    • The first LHC application ever to be executed in a computational Grid environment is the so-called ATLAS Data-Challenge 1, more specifically, the part assigned to the Scandinavian members of the ATLAS Collaboration. Taking advantage of the NorduGrid testb…
4.
  • Eerola, Paula, et al. (author)
  • Building a Production Grid in Scandinavia
  • 2003
  • In: IEEE Internet Computing. IEEE. 7:4, pp. 27-35
  • Journal article (other scholarly/artistic), abstract
    • The aim of the NorduGrid project is to build and operate a production grid infrastructure in Scandinavia and Finland. By developing innovative middleware solutions, it enables a 24/7 production-level test bed. Through a common access layer, NorduGrid conn…
5.
  • Eerola, Paula, et al. (author)
  • The NorduGrid architecture and tools
  • 2003
  • In: Proceedings of CHEP 2003.
  • Conference paper (other scholarly/artistic), abstract
    • The NorduGrid project designed a Grid architecture with the primary goal of meeting the requirements of production tasks of the LHC experiments. While it is meant to be a rather generic Grid system, it puts emphasis on batch processing suitable for problems…
6.
  • Eerola, Paula, et al. (author)
  • The NorduGrid production Grid infrastructure, status and plans
  • 2003
  • In: Proc. Fourth International Workshop on Grid Computing, pp. 158-165
  • Conference paper (other scholarly/artistic), abstract
    • NorduGrid offers reliable Grid services for academic users over a growing set of computing and storage resources spanning the Nordic countries: Denmark, Finland, Norway and Sweden. A small group of scientists has already been using the NorduGrid…
7.
  •  
8.
  • Smirnova, Oxana, et al. (author)
  • The NorduGrid Architecture And Middleware for Scientific Applications
  • 2003
  • In: Lecture Notes in Computer Science. Berlin, Heidelberg: Springer Berlin Heidelberg. 2657, pp. 264-273
  • Conference paper (other scholarly/artistic), abstract
    • The NorduGrid project operates a production Grid infrastructure in Scandinavia and Finland using its own innovative middleware solutions. The resources range from small test clusters at academic institutions to large farms at several supercomputer centers, a…
9.
  • Åkesson, Torsten, et al. (author)
  • ATLAS computing: Technical Design Report
  • 2005
  • Report (other scholarly/artistic), abstract
    • The ATLAS Computing Model embraces the Grid paradigm and a high degree of decentralization and sharing of computing resources. The required level of computing resources means that off-site facilities will be vital to the operation of ATLAS in a way that was not the case for previous CERN-based experiments. The primary event processing occurs at CERN in a Tier-0 facility. The RAW data is archived at CERN and copied (along with the primary processed data) to the Tier-1 facilities around the world. These facilities archive the raw data, provide the reprocessing capacity, provide access to the various processed versions, and allow scheduled analysis of the processed data by physics analysis groups. Derived datasets produced by the physics groups are copied to the Tier-2 facilities for further analysis. The Tier-2 facilities also provide the simulation capacity for the experiment, with the simulated data housed at Tier-1s. In addition, Tier-2 centres will provide analysis facilities, and some will provide the capacity to produce calibrations based on processing raw data. A CERN Analysis Facility provides an additional analysis capacity, with an important role in the calibration and algorithmic development work. ATLAS has adopted an object-oriented approach to software, based primarily on the C++ programming language, but with some components implemented using FORTRAN and Java. A component-based model has been adopted, whereby applications are built up from collections of plug-compatible components based on a variety of configuration files. This capability is supported by a common framework that provides common data-processing support. This approach results in great flexibility in meeting the basic processing needs of the experiment, and also for responding to changing requirements throughout its lifetime. The heavy use of abstract interfaces allows for different implementations to be provided, supporting different persistency technologies, or optimized for the offline or high-level trigger environments. The Athena framework is an enhanced version of the Gaudi framework that was originally developed by the LHCb experiment, but is now a common ATLAS-LHCb project. Major design principles are the clear separation of data and algorithms, and of transient (in-memory) and persistent (in-file) data. All levels of processing of ATLAS data, from high-level trigger to event simulation, reconstruction and analysis, take place within the Athena framework; in this way it is easier for code developers and users to test and run algorithmic code, with the assurance that all geometry and conditions data will be the same for all types of applications (simulation, reconstruction, analysis, visualization). One of the principal challenges for ATLAS computing is to develop and operate a data storage and management infrastructure able to meet the demands of a yearly data volume of O(10 PB) utilized by data processing and analysis activities spread around the world. The ATLAS Computing Model establishes the environment and operational requirements that ATLAS data-handling systems must support, and, together with the operational experience gained to date in test beams and data challenges, provides the primary guidance for the development of the data management systems. 
The ATLAS Databases and Data Management Project (DB Project) leads and coordinates ATLAS activities in these areas, with a scope encompassing technical databases (detector production, installation and survey data), detector geometry, online/TDAQ databases, conditions databases (online and offline), event data, offline processing configuration and book-keeping, distributed data management, and distributed database and data management services. The project is responsible for ensuring the coherent development, integration, and operational capability of the distributed database and data management software and infrastructure for ATLAS across these areas. The ATLAS Computing Model foresees the distribution of raw and processed data to Tier-1 and Tier-2 centres, so as to be able to exploit fully the computing resources that are made available to the Collaboration. Additional computing resources will be available for data processing and analysis at Tier-3 centres and other computing facilities to which ATLAS may have access. A complex set of tools and distributed services, enabling the automatic distribution and processing of the large amounts of data, has been developed and deployed by ATLAS in cooperation with the LHC Computing Grid (LCG) Project and with the middleware providers of the three large Grid infrastructures we use: EGEE, OSG and NorduGrid. The tools are designed in a flexible way, in order to have the possibility to extend them to use other types of Grid middleware in the future. These tools, and the service infrastructure on which they depend, were initially developed in the context of centrally managed, distributed Monte Carlo production exercises. They will be re-used wherever possible to create systems and tools for individual users to access data and compute resources, providing a distributed analysis environment for general usage by the ATLAS Collaboration. The first version of the production system was deployed in summer 2004 and has been used since the second half of 2004. It was used for Data Challenge 2, for the production of simulated data for the 5th ATLAS Physics Workshop (Rome, June 2005) and for the reconstruction and analysis of the 2004 Combined Test-Beam data. The main computing operations that ATLAS will have to run comprise the preparation, distribution and validation of ATLAS software, and the computing and data management operations run centrally on Tier-0, Tier-1s and Tier-2s. The ATLAS Virtual Organization will allow production and analysis users to run jobs and access data at remote sites using the ATLAS-developed Grid tools. In the past few years the Computing Model has been tested and developed by running Data Challenges of increasing scope and magnitude, as was proposed by the LHC Computing Review in 2001. We have run two major Data Challenges since 2002 and performed other massive productions in order to provide simulated data to the physicists and to reconstruct and analyse real data coming from test-beam activities; this experience is now useful in setting up the operations model for the start of LHC data-taking in 2007. The Computing Model, together with the knowledge of the resources needed to store and process each ATLAS event, gives rise to estimates of required resources that can be used to design and set up the various facilities. 
It is not assumed that all Tier-1s or Tier-2s will be of the same size; however, in order to ensure a smooth operation of the Computing Model, all Tier-1s should have broadly similar proportions of disk, tape and CPU, and the same should apply for the Tier-2s. The organization of the ATLAS Software & Computing Project reflects all areas of activity within the project itself. Strong high-level links have been established with other parts of the ATLAS organization, such as the T-DAQ Project and Physics Coordination, through cross-representation in the respective steering boards. The Computing Management Board, and in particular the Planning Officer, acts to make sure that software and computing developments take place coherently across sub-systems and that the project as a whole meets its milestones. The International Computing Board assures the information flow between the ATLAS Software & Computing Project and the national resources and their Funding Agencies.
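    To make the tier responsibilities described in this abstract easier to scan, the minimal Python sketch below encodes the data flow of the Computing Model as summarized above; the class and field names are invented for illustration and are not part of any ATLAS software.

        # Illustrative toy model of the ATLAS Computing Model tiers as described
        # in the abstract above; all names here are hypothetical.
        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class Tier:
            name: str
            responsibilities: List[str]
            receives_from: List[str] = field(default_factory=list)

        COMPUTING_MODEL = [
            Tier("Tier-0 (CERN)",
                 ["primary event processing", "archive RAW data"],
                 receives_from=["detector"]),
            Tier("Tier-1",
                 ["archive RAW copy", "reprocessing", "scheduled group analysis",
                  "house simulated data from Tier-2s"],
                 receives_from=["Tier-0", "Tier-2"]),
            Tier("Tier-2",
                 ["simulation", "user analysis", "some calibration from RAW data"],
                 receives_from=["Tier-1"]),
            Tier("CERN Analysis Facility",
                 ["additional analysis capacity", "calibration and algorithm development"],
                 receives_from=["Tier-0"]),
        ]

        for tier in COMPUTING_MODEL:
            print(f"{tier.name}: receives from {', '.join(tier.receives_from)}; "
                  f"provides {', '.join(tier.responsibilities)}")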
10.
  • Åkesson, Torsten, et al. (author)
  • ATLAS High-Level Trigger, Data Acquisition and Controls Technical Design Report
  • 2003
  • Report (other scholarly/artistic), abstract
    • This Technical Design Report (TDR) for the High-level Trigger (HLT), Data Acquisition (DAQ) and Controls of the ATLAS experiment builds on the earlier documents published on these systems: Trigger Performance Status Report, DAQ, EF, LVL2 and DCS Technical Progress Report, and High-Level Triggers, DAQ and DCS Technical Proposal. Much background and preparatory work relevant to this TDR is referenced in the above documents. In addition, a large amount of detailed technical documentation has been produced in support of this TDR. These documents are referenced in the appropriate places in the following chapters. This section introduces the overall organization of the document. The following sections give an overview of the principal system requirements and functions, as well as a brief description of the principal data types used in the Trigger/DAQ (TDAQ) system. The document has been organized into four parts: Part I — Global View Chapters 2, 3 and 4 address the principal system and experiment parameters which define the main requirements of the HLT, DAQ and Controls system. The global system operations, and the physics requirements and event selection strategy are also addressed. Chapter 5 defines the overall architecture of the system and analyses the requirements of its principal components, while Chapters 6 and 7 address more specific fault-tolerance and monitoring issues. Part II — System Components This part describes in more detail the principal components and functions of the system. Chapter 8 addresses the final prototype design and performance of the Data Flow component. These are responsible for the transport of event data from the output of the detector Read Out Links (ROLs) via the HLT system (where event selection takes place) to mass storage. Chapter 9 explains the decomposition of the HLT into a second level trigger (LVL2) and an Event Filter (EF). It details the design of the data flow within the HLT, the specifics of the HLT system supervision, and the design and implementation of the Event Selection Software (ESS). Chapter 10 addresses the Online Software which is responsible for the run control and DAQ supervision of the entire TDAQ and detector systems during data taking. It is also responsible for miscellaneous services such as error reporting, run parameter accessibility, and histogramming and monitoring support. Chapter 11 describes the Detector Control System (DCS), responsible for the control and supervision of all the detector hardware and of the services and the infrastructure of the experiment. The DCS is also the interface point for information exchange between ATLAS and the LHC accelerator. Chapter 12 draws together the various aspects of experiment control detailed in previous chapters and examines several use-cases for the overall operation and control of the experiment, including: data-taking operations, calibration runs, and operations required outside data-taking. Part III — System Performance Chapter 13 addresses the physics selection. The tools used for physics selection are described along with the event-selection algorithms and their performance. Overall HLT output rates and sizes are also discussed. An initial analysis of how ATLAS will handle the first year of running from the point of view of TDAQ is presented. 
Chapter 14 discusses the overall performance of the HLT/DAQ system from various points of view, namely the HLT performance as evaluated in dedicated testbeds, the overall performance of the TDAQ system in a testbed of ~10% ATLAS size, and functional tests of the system in the detector test beam environment. Data from these various testbeds are also used to calibrate a detailed discrete-event simulation model of data flow in the full-scale system. Part IV — Organization and Planning Chapter 15 discusses quality-assurance issues and explains the software-development process employed. Chapter 16 presents the system costing and staging scenario. Chapter 17 presents the overall organization of the project and general system-resource issues. Chapter 18 presents the short-term HLT/DAQ work-plan for the next phase of the project as well as the global development schedule up to LHC turn-on in 2007.
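    As a concrete reading of the selection chain described in this abstract (data from the detector Read Out Links passing through LVL2 and the Event Filter before reaching mass storage), here is a minimal Python sketch; the function names, event fields and thresholds are hypothetical placeholders, not the actual TDAQ interfaces.

        # Toy sketch of the staged HLT/DAQ event flow described above.
        # All names, fields and selection criteria are hypothetical.
        from typing import Callable, Dict, Iterable, List

        def run_tdaq_chain(events: Iterable[Dict],
                           lvl2_accept: Callable[[Dict], bool],
                           ef_accept: Callable[[Dict], bool]) -> List[Dict]:
            """Events from the read-out links pass LVL2, then the Event Filter;
            only events accepted by both stages are written to mass storage."""
            mass_storage = []
            for event in events:              # data arriving via the detector ROLs
                if not lvl2_accept(event):    # LVL2: fast, region-of-interest selection
                    continue
                if not ef_accept(event):      # Event Filter: full-event selection
                    continue
                mass_storage.append(event)
            return mass_storage

        # Example with dummy selections:
        events = [{"id": i, "pt": 10 * i} for i in range(5)]
        kept = run_tdaq_chain(events, lambda e: e["pt"] > 15, lambda e: e["pt"] > 25)
        print([e["id"] for e in kept])        # -> [3, 4]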
Type of publication
report (4)
conference paper (4)
journal article (2)
doctoral thesis (1)
Type of content
Author/editor
Eerola, Paula (9)
Smirnova, Oxana (7)
Konya, Balazs (5)
Ekelöf, Tord (5)
Ellert, Mattias (5)
Wäänänen, Anders (5)
Konstantinov, Aleksa ... (5)
Ould-Saada, Farid (5)
Hansen, John Renner (5)
Nielsen, Jakob Langg ... (5)
Åkesson, Torsten (2)
Hedberg, Vincent (2)
Jarlskog, Göran (2)
Mjörnmark, Ulf (2)
Lundberg, Björn (2)
Almehed, Sverker (2)
Myklebust, Trond (2)
Öhman, Henrik (1)
Hellman, Sten (1)
Eerola, Hannaleena (1)
Nevanlinna, Heli (1)
Blomqvist, Carl (1)
Botner, Olga (1)
Marklund, Mattias, 1 ... (1)
Ferretti, Gabriele, ... (1)
Puistola, Ulla (1)
Winqvist, Robert (1)
Ringnér, Markus (1)
Sarantaus, Laura (1)
Vehmanen, Paula (1)
Kainu, Tommi (1)
Huusko, Pia (1)
Kallioniemi, Olli-P (1)
De Angelis, Alessand ... (1)
Tuisku, Miika (1)
Stapnes, Steinar (1)
Siem, Sunniva (1)
Gaardhøje, Jens Jørg ... (1)
Herrala, Juha (1)
Vinter, Brian (1)
Trent, Jeffrey (1)
Vahteristo, Pia (1)
Blanco, Guillermo (1)
Jones, MaryPat (1)
Gildea, Derek (1)
Rapakko, Katrin (1)
Juo, Suh-Hang Hank (1)
Gillanders, Elizabet ... (1)
Allinen, Minna (1)
Markey, Carol (1)
University
Uppsala universitet (7)
Lunds universitet (4)
Chalmers tekniska högskola (1)
Language
English (11)
Research subject (UKÄ/SCB)
Natural sciences (8)
Medical and health sciences (2)

Year
