The amount of data in play is enormous. A team of high-energy physicists led by researchers at the California Institute of Technology reached a transfer rate of 339 gigabits per second (Gbps). That is the equivalent of moving roughly four million gigabytes, or about one million full-length movies, per day.
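As a quick sanity check on those figures, the conversion from a bit rate to a daily volume is straightforward arithmetic (the per-movie size below is an illustrative assumption, not a figure from the article):

```python
# Convert the record 339 Gbps transfer rate into gigabytes moved per day.
GBPS = 339                 # gigabits per second (the reported record)
SECONDS_PER_DAY = 86_400
BITS_PER_BYTE = 8

gigabytes_per_day = GBPS * SECONDS_PER_DAY / BITS_PER_BYTE
print(f"{gigabytes_per_day:,.0f} GB/day")   # 3,661,200 GB/day, i.e. ~4 million

# Assuming roughly 4 GB per full-length movie (a hypothetical ballpark),
# that works out to on the order of a million movies per day.
movies_per_day = gigabytes_per_day / 4
print(f"~{movies_per_day:,.0f} movies/day")
```

The result, about 3.66 million gigabytes per day, matches the article's rounded "four million gigabytes" figure.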
That achievement is astounding on its own, but the records didn't stop there. The physicists worked with computer scientists and network engineers not only to double last year's data transfer speed but also to set a new record for two-way transfer on a single link, sending data at 187 Gbps between Victoria, Canada, and Salt Lake City.
We all know computing is becoming more data intensive as our computers get smaller while gaining greater storage capacity and faster network transfers. Now take what we work with day to day and apply it to scientific research. High-energy physics, astrophysics, genomics, meteorology, and global climate tracking all require huge amounts of data transfer so that modeling and processing can happen in real time to expand our knowledge.
A classic example of the need for high data speeds is the work done with the Large Hadron Collider (LHC) as it homed in on the particle thought to be the Higgs boson. To do its work, it relied on a global network of computers and data storage facilities that transferred more than 100 petabytes (100 million gigabytes) in one year alone.
That's really just the tip of the data iceberg. As work with the LHC continues to grow, so do data-intensive programs in other areas of research. The more we know, the more we need to know, and somehow that information needs to be stored and transferred for study. It's estimated that as experimentation with the LHC continues, it will produce data volumes in the exabyte range (a billion gigabytes).
The team also demonstrated state-of-the-art methods such as software-defined intercontinental networks and direct interconnections between computer memories over the network between Pasadena and Salt Lake City. After all, incredible data speeds are one huge achievement, but the networks that carry the data need to evolve as well.
Harvey Newman, professor of physics at Caltech and team leader, was quoted in PhysOrg:
"By sharing our methods and tools with scientists in many fields, we aim to further enable the next round of scientific discoveries, taking full advantage of 100-Gbps networks now, and higher-speed networks in the near future. In particular, we hope that these developments will afford physicists and students throughout the world the opportunity to participate directly in the LHC's next round of discoveries as they emerge."
Despite having just set the records, the scientific team is not resting on its laurels. It is already looking toward next year, when it can use data technologies that are only now beginning to be developed. The Caltech team estimates that with the new technology, it could reach data transfer rates of 1 terabit per second (1,000 Gbps) over long-range networks.
The research team demonstrated this year's achievements at the SuperComputing 2012 (SC12) conference held in early November in Salt Lake City. The team included members from Caltech, the University of Victoria, the University of Michigan, Brookhaven National Laboratory, and Vanderbilt University. For more information on their demonstration and ongoing work, visit supercomputing.caltech.edu/.