
  26-04-2005: LHC COMPUTING CENTRES JOIN FORCES FOR GLOBAL GRID CHALLENGE 



25 April 2005 – Today, in a significant milestone for scientific grid computing, eight major computing centres successfully completed a challenge to sustain a continuous data flow averaging 600 megabytes per second for 10 days from CERN in Geneva, Switzerland, to seven sites in Europe and the US. The total amount of data transmitted during this challenge - 500 terabytes - would take about 250 years to download over a typical 512 kilobit per second household broadband connection.
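
As a quick sanity check on those figures, a rough, illustrative calculation (in Python, assuming decimal units; the press release does not state how its totals were rounded):

```python
# Back-of-the-envelope check of the transfer figures quoted above.
MB = 10**6   # bytes in a megabyte (decimal units assumed)
TB = 10**12  # bytes in a terabyte

sustained = 600 * MB                   # average rate: 600 megabytes per second
duration = 10 * 24 * 3600              # 10 days, in seconds
print(f"Data moved: ~{sustained * duration / TB:.0f} TB")   # ~518 TB, i.e. roughly 500 TB

broadband = 512_000 / 8                # 512 kilobits per second, in bytes per second
seconds = 500 * TB / broadband
print(f"Download at 512 kbit/s: ~{seconds / (365.25 * 24 * 3600):.0f} years")  # ~248 years
```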

This exercise was part of a series of service challenges designed to test the global computing infrastructure for the Large Hadron Collider (LHC), currently being built at CERN to study the fundamental properties of subatomic particles and forces. The service challenge participants included Brookhaven National Laboratory and Fermilab in the US, Forschungszentrum Karlsruhe in Germany, CC-IN2P3 in France, INFN-CNAF in Italy, SARA/NIKHEF in the Netherlands and the Rutherford Appleton Laboratory in the UK.

“This service challenge is a key step on the way to managing the torrents of data anticipated from the LHC,” said Jamie Shiers, manager of the service challenges at CERN. “When the LHC starts operating in 2007, it will be the most data-intensive physics instrument on the planet, producing more than 1500 megabytes of data every second for over a decade.”
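
To put that rate in perspective, a minimal sketch of the yearly volume it would imply (an upper-bound estimate only, assuming the quoted 1500 megabytes per second were held continuously and ignoring accelerator downtime):

```python
# Upper-bound yearly data volume implied by a continuous 1500 MB/s stream.
# The real figure is lower, since the accelerator does not run around the clock.
MB = 10**6   # bytes in a megabyte
PB = 10**15  # bytes in a petabyte

rate = 1500 * MB                  # bytes per second
year = 365.25 * 24 * 3600         # seconds in a year
print(f"~{rate * year / PB:.0f} PB per year")   # roughly 47 PB per year
```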

The goal of LHC computing is to use a worldwide grid infrastructure of computing centres to provide sufficient computational, storage and network resources to fully exploit the scientific potential of the four major LHC experiments: ALICE, ATLAS, CMS and LHCb. The infrastructure relies on several national and regional science grids. The service challenge used resources from the LHC Computing Grid (LCG) project, the Enabling Grids for E-sciencE (EGEE) project, Grid3/Open Science Grid (OSG), INFN Grid and GridPP.

LHC scientists designed a series of service challenges to ramp up to the level of computing capacity, reliability and ease of use that will be required by the worldwide community of over 6000 scientists working on the LHC experiments. During LHC operation, the major computing centres involved in the Grid infrastructure will collectively store the data from all four LHC experiments. Scientists working at over two hundred other computing facilities in universities and research laboratories around the globe, where much of the data analysis will be carried out, will access the data via the Grid.

Fermilab Computing Division head Vicky White welcomed the results of the service challenge.

“High energy physicists have been transmitting large amounts of data around the world for years,” White said. “But this has usually been in relatively brief bursts and between two sites. Sustaining such high rates of data for days on end to multiple sites is a breakthrough, and augurs well for achieving the ultimate goals of LHC computing.”

NIKHEF physicist and Grid Deployment Board chairman Kors Bos concurred.

“The challenge here is not just the inherently distributed nature of the Grid infrastructure for the LHC,” Bos said, “but also the need to get large numbers of institutes and individuals, all with existing commitments, to work together on an incredibly aggressive timescale.”

The current service challenge is the second in a series of four leading up to LHC operations in 2007. It exceeded expectations by sustaining roughly one-third of the ultimate data rate from the LHC and reaching peak rates of over 800 megabytes per second. This success was facilitated by the underlying high-speed networks, including DFN, GARR, GÉANT, ESnet, LHCnet, NetherLight, RENATER and UKLight.

The next service challenge, due to start in the summer, will extend to many other computing centres and aim for a three-month period of stable operations. That challenge will allow many of the scientists involved to test their computing models for handling and analyzing the data from the LHC experiments.


 RELATED SITES 
http://www.cern.ch/lcg/
http://grid.infn.it/


 

INFN Communications Office (Ufficio Comunicazione) - Piazza dei Caprettari, 70 - 00186 Roma
Tel: 06 68 68 162 - Fax: 06 68 307 944 - email: comunicazione@presid.infn.it

F.M. F.E.

 

 

 

 

 
