The San Diego Supercomputer Center (2014) defines a supercomputer simply as a computer that works very fast. A supercomputer of a decade ago is not powerful enough to be called a supercomputer in today's world; scientific advances keep adding faster and faster machines that can perform the same job in less and less time. Today's supercomputers operate at petascale levels. A speed of one petaflop means the computer can perform one quadrillion calculations per second, and a petabyte means one quadrillion bytes of storage capacity. In terms of text, a petabyte is equivalent to roughly 250 billion pages.
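The scale figures above can be sanity-checked with simple arithmetic. The sketch below is a back-of-the-envelope check; the figure of 4,000 bytes per page of plain text is my assumption, not the source's.

```python
# Back-of-the-envelope check of the petascale figures quoted above.
# Assumption (not from the source): ~4,000 bytes per page of plain text.

PETA = 10 ** 15  # one quadrillion

petaflops_in_calcs = 1 * PETA   # calculations per second at 1 petaflop
petabyte_in_bytes = 1 * PETA    # bytes in one petabyte
bytes_per_page = 4_000          # assumed size of one text page

pages = petabyte_in_bytes // bytes_per_page
print(f"1 petaflop  = {petaflops_in_calcs:,} calculations per second")
print(f"1 petabyte ≈ {pages:,} pages ≈ {pages // 10 ** 9} billion pages")
```

With that page-size assumption, one petabyte works out to 250 billion pages, matching the figure in the text.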
History of Supercomputers
- The first supercomputer was put to use at Columbia University between 1954 and 1963 to calculate missile trajectories. Known as the IBM Naval Ordnance Research Calculator (NORC), it was capable of performing 15,000 operations per second (Columbia University, 2013).
- Supercomputers came into existence out of the need to perform highly complicated calculations that were otherwise quite cumbersome and could not be completed within a limited period.
- Xie et al. (2010) emphasize that the evolution of computer components facilitated the development of supercomputers. Beginning with an early development stage, they evolved into the vector supercomputer stage, flourished into the massively parallel processing supercomputer stage, and then evolved further into the cluster stage. Since the late 1960s, the performance of supercomputers has improved almost tenfold every four years, surpassing even Moore's law.
Advantages and Disadvantages of Supercomputers
The advantages of supercomputers are speed, accuracy, and the capability to chart unknown territories such as space research and genome decoding, tasks that are otherwise very difficult to perform. On the disadvantages side, high cost and huge power consumption can be cited. High power consumption generates heat, which in turn forces frequent replacement of important parts and increases maintenance costs. Moreover, due to fast advances in the computer field, today's supercomputer can become redundant as time passes (Newman, 2011).
Supercomputer Sites and their Locations
Some of the major modern supercomputers installed between 2008 and 2012 are listed below (Top 500, 2014).
- Roadrunner: Los Alamos National Laboratory
- An IBM system installed at the U.S. Department of Energy's Los Alamos National Laboratory in 2008 is able to compute at a rate of 1.026 petaflops. The system is nicknamed "Roadrunner". It is important to note that this was one of the most energy-efficient systems of its time.
- Jaguar: Oak Ridge National Laboratory
The Cray XT5 supercomputer known as Jaguar is located at the Department of Energy's Oak Ridge Leadership Computing Facility. This computer performs at 1.759 petaflops.
- Tianhe-1A: National Supercomputing Center in Tianjin, China
This supercomputer is housed at the National Supercomputing Center in Tianjin, China, and performs at 2.57 petaflops.
- K Computer: RIKEN Advanced Institute for Computational Science in Kobe, Japan
This Japanese computer is capable of performing at 8.16 petaflops.
- Sequoia: Lawrence Livermore National Laboratory, US
This is an IBM BlueGene/Q system installed at the Department of Energy's Lawrence Livermore National Laboratory. This supercomputer achieved a performance of 16.32 petaflops.
Uses of Supercomputers
The Square Kilometre Array (SKA) project aims at developing the largest radio telescope in the world for space research. This will be done by establishing a series of hundreds of smaller radio telescopes spread over an area of one square kilometer. The purpose is to gather a vast amount of data from extensively deep in space, which will reveal how galaxies evolve and uncover secrets about dark matter and dark energy. The SKA project will also look for any indications of life in the galaxy. Construction is to begin in 2018, and the project will be operative after 2023. A highly powerful supercomputer is needed to decipher and analyze the vast amount of data the SKA radio telescope will produce. It is important to note that scientific research centers dedicated to space research already employ supercomputers; one example is the Commonwealth Scientific and Industrial Research Organisation (CSIRO) Pawsey Centre in Perth, Australia, where a Cray XC30 supercomputer is already involved in research work (Boas, 2013).
The San Diego Supercomputer Center (SDSC) at the University of California, San Diego, together with San Diego State University (SDSU), has created the largest earthquake simulation to date using supercomputer technology, with the SDSC providing the computing and visualization expertise. In terms of floating-point operations, this is the most detailed seismic simulation ever conducted, and it opens up new possibilities for reducing the potential loss of life and property from massive earthquakes. The simulations are computed at the petascale level (SDSC, 2010).
Predicting weather and global climate change has been of paramount importance in the last few years, as timely forecasts help in suggesting remedial measures. Hundreds of variables exert influence on climate change, and it is difficult to grasp the combined effect of all these variables without the computational power of supercomputers. Past data helps in running simulations and preparing an appropriate climate model. In 2008, a model was created at Brookhaven National Laboratory in New York using supercomputers to explain the impact of clouds on temperature (Pappas, 2010).
Hurricanes are also forecast using supercomputers. For example, the Texas Advanced Computing Center's (TACC) supercomputer "Ranger", with a computing power of 579 trillion calculations per second, processed data from the National Oceanic and Atmospheric Administration (NOAA) and provided clues regarding Hurricane Ike's possible route in 2008. According to the report, Ranger helped improve the forecast by 15 percent (Pappas, 2010).
Nuclear Weapons Testing
Supercomputers are also used for nuclear weapons testing, because several governments, including the US, have forbidden the physical testing of nuclear weapons, and computer simulations are used instead. For example, Sequoia, the IBM supercomputer at Lawrence Livermore National Laboratory in California capable of performing 20,000 trillion calculations per second, is deployed to create simulations of nuclear explosions, eliminating the need for real-world detonations (Pappas, 2010).
Future of Supercomputers
Future supercomputers will have processing power measured at the exascale. A speed of one exaflop means one quintillion calculations per second, so a future supercomputer will perform about 1,000 times faster than today's petascale machines. It will be a challenge for scientists to break the exascale barrier with designs and configurations that are not power guzzlers; with today's designs, an exascale supercomputer installation would need a separate power generating station (San Diego Supercomputer Center, 2014). Xie et al. (2010) predict that 1-exaflop supercomputers will arrive in 2018.
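The peta-to-exa jump described above is just a factor of 1,000 in the metric prefixes. A quick sketch, using Sequoia's 16.32 petaflops from the earlier section as a concrete reference point:

```python
# The exascale jump in plain arithmetic.
PETA = 10 ** 15  # one quadrillion (petaflop scale)
EXA = 10 ** 18   # one quintillion (exaflop scale)

sequoia_flops = 16.32 * PETA    # Sequoia's performance, cited earlier in the text
exascale_flops = 1 * EXA

print(f"1 exaflop = {EXA:,} calculations per second")
print(f"Speedup over 1 petaflop: {EXA // PETA}x")
print(f"Speedup over Sequoia: {exascale_flops / sequoia_flops:.0f}x")
```

An exaflop machine would thus be 1,000 times faster than a 1-petaflop system, and roughly 61 times faster than Sequoia.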
Over and above the uses discussed, supercomputers are also used in medical research, including cancer research, genome sequencing simulations, and even human brain mapping. Currently, efforts are on to build a supercomputer that can predict the future. The European Union has agreed to £900m in funding to build a supercomputer, to be known as the Living Earth Simulator (LES), that will predict unusual but significant events (Dailymail, 2011). The LES will simulate everything on this planet, including even tweets on social media besides economic data and government statistics, to anticipate a recession or financial crisis. Experts believe that human behavior plays a pivotal role behind several kinds of problems, including war, economic crisis, and climate change. The LES will store the entire body of live information available on Earth and process the data to predict the future. This means that supercomputers will be entering uncharted territory in the coming years.
References

Boas, B. (2013). Supercomputing to Impact Future of Deep Space Exploration. Retrieved February 7, 2014 from http://blog.cray.com/?p=6486

Columbia University (2013). The IBM Naval Ordnance Research Calculator. columbia.edu. Retrieved February 7, 2014 from http://www.columbia.edu/cu/computinghistory/norc.html

Dailymail (2011). Get ready for the supercomputer that can predict the future (even the next recession) as EU prepares £900m funding. dailymail.co.uk. Retrieved February 7, 2014 from http://www.dailymail.co.uk/sciencetech/article-2069775/Get-ready-supercomputer-predict-future-EU-prepares-900m-funding.html

Matlis, J. (2008). A brief history of supercomputers. Computerworld. International Data Group.

Newman, A. (2011). Why Supercomputing Matters. serverwatch.com. Retrieved February 7, 2014 from http://www.serverwatch.com/server-trends/why-supercomputing-matters.html

Pappas, S. (2010). 9 Super-Cool Uses for Supercomputers. Livescience. Retrieved February 7, 2014 from http://www.livescience.com/6392-9-super-cool-supercomputers.html

SDSC (2010). SDSC Leads Supercomputing Efforts in Creating Largest-Ever Earthquake Simulation. sdsc.edu. Retrieved February 7, 2014 from http://www.sdsc.edu/News%20Items/PR081910_m8_earthqua.html

San Diego Supercomputer Center (2014). Common supercomputer terms and (simple) definitions. sdsc.edu. Retrieved February 7, 2014 from http://www.sdsc.edu/news/glossary.html

Top 500 (2014). Supercomputer Sites. top500.org. Retrieved February 7, 2014 from top500.org

Xie, X., Fang, X., Hu, S., & Wu, D. (2010). Evolution of Supercomputers. Frontiers of Computer Science in China, 4(4), 428–436.