What is a Supercomputer?
A supercomputer is a computer with capabilities far beyond those of a common desktop computer, built for specific, computation-heavy purposes. Today the term supercomputer is giving way to "high-performance computing environment", since a supercomputer is really a set of powerful computers linked together to increase their combined power and performance. By 2019, the fastest supercomputer operated at approximately 148 petaflops (a petaflop, in computing jargon, means one quadrillion, or 10^15, floating-point operations per second).
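To make that jargon concrete, here is a minimal arithmetic sketch in Python. The core count, clock speed, and per-core throughput below are made-up illustrative numbers, not the specifications of any real machine; the point is only how a theoretical peak in petaflops is computed.

```python
# Illustrative arithmetic only; the machine parameters below are
# hypothetical, not the specs of any real system.

PETAFLOP = 10**15  # floating-point operations per second

# Theoretical peak = cores * clock (Hz) * FLOPs issued per cycle
cores = 2_400_000          # assumed core count
clock_hz = 2.0e9           # assumed 2 GHz clock
flops_per_cycle = 32       # assumed per-core vector throughput

peak = cores * clock_hz * flops_per_cycle
print(f"Peak: {peak / PETAFLOP:.1f} petaflops")  # -> 153.6 petaflops
```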
History of Supercomputer:
Supercomputers were introduced in the 1960s and were designed primarily by Seymour Cray at Control Data Corporation (CDC), which dominated the market until Cray left CDC to form his own company, Cray Research. With this new company he continued to dominate the market with his new designs, holding the top spot in supercomputing for five consecutive years (1985-1990). In the 1980s, a large number of competing companies entered the market, in parallel with the creation of the minicomputer market, but many of them disappeared in the mid-1990s. Today's supercomputers tend to become the ordinary computers of tomorrow. The first CDC machines were simply very fast scalar processors, and many of the new competitors developed their own scalar processors at a lower price to penetrate the market.
In the mid-1980s, machines with a modest number of vector processors working in parallel became the standard, with the typical number of processors in the range of 4 to 16. In the early 1990s, attention shifted from vector processors to massively parallel systems with thousands of "ordinary" CPUs. Today, parallel designs are based on server-class microprocessors such as the PowerPC, Opteron, or Xeon, and most modern supercomputers are highly tuned computer clusters that combine commodity processors with special interconnects.
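As a scaled-down illustration of how such a cluster divides work across processors, here is a minimal message-passing sketch using mpi4py, a Python binding for MPI. It assumes mpi4py and an MPI runtime such as Open MPI are installed; the summation task itself is purely illustrative.

```python
# Minimal MPI sketch (requires mpi4py and an MPI runtime, e.g. Open MPI).
# Run with: mpiexec -n 4 python this_script.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's id within the job
size = comm.Get_size()   # total number of cooperating processes

# Each process computes a partial result over its own slice of the work...
partial = sum(range(rank, 1_000_000, size))

# ...and the interconnect combines the partial results on rank 0.
total = comm.reduce(partial, op=MPI.SUM, root=0)
if rank == 0:
    print(f"Sum over {size} processes: {total}")
```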
Until now, their use and production have been limited mostly to military, government, academic, and large business organizations.
These are used for computation-intensive tasks, such as problems involving quantum physics, weather prediction, climate-change research, molecular modeling, physical simulations such as airplanes or cars in a wind tunnel (also known as computational fluid dynamics), simulation of nuclear-weapon detonations, and research into nuclear fusion.
Japan created the first petaflops supercomputer, the special-purpose MDGRAPE-3. IBM in the USA then built Roadrunner, a general-purpose machine that also reached 1 petaflop; China followed with Tianhe-1 ("Milky Way One") at 1.2 petaflops; and Cray's Jaguar, at roughly 1.75 petaflops, was the fastest at the end of 2009. The fastest supercomputer at the end of 2010 was the Chinese Tianhe-1A, with a peak speed of 2.5 petaflops.
Cooling Systems:
Many of the CPUs used in today's supercomputers dissipate 10 times more heat than a common stove burner. Some designs need to cool the multiple CPUs down to -85 °C (-121 °F).
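As a quick check of that unit conversion, the standard Celsius-to-Fahrenheit formula is F = C × 9/5 + 32:

```python
def c_to_f(c: float) -> float:
    """Celsius to Fahrenheit: F = C * 9/5 + 32."""
    return c * 9 / 5 + 32

print(c_to_f(-85))  # -> -121.0, i.e. -85 degrees C is -121 degrees F
```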
Cooling multiple CPUs to such temperatures requires a large amount of power. For example, a new supercomputer called Aquasar will have a top speed of 10 teraflops, and a single rack of this machine consumes about 10 kW. By comparison, a Blue Gene/P supercomputer rack consumes about 40 kW.
The average consumption of a supercomputer on the list of the 500 fastest supercomputers in the world is around 257 kW.
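For a sense of scale, the quoted power figures translate into annual energy as follows. The kW values come from the text above; the yearly totals are simple arithmetic, assuming continuous operation.

```python
# Power figures quoted in the text (kW); annual energy is derived arithmetic.
HOURS_PER_YEAR = 24 * 365

racks_kw = {"Aquasar rack": 10, "Blue Gene/P rack": 40}
avg_top500_kw = 257  # average TOP500 system, per the text

for name, kw in racks_kw.items():
    print(f"{name}: {kw} kW -> {kw * HOURS_PER_YEAR / 1000:.0f} MWh/year")

print(f"Average TOP500 system: {avg_top500_kw * HOURS_PER_YEAR / 1000:.0f} MWh/year")
# -> about 2251 MWh/year for the average system, run around the clock
```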
For the Aquasar supercomputer, which will be installed at the Swiss Federal Institute of Technology (ETH), a new liquid-cooling design will be used: it will hold about 10 liters of water flowing at a rate of 29.5 liters per minute.
The innovation in this design concerns how heat reaches the coolant. Conventional cooling systems isolate the liquid from the CPU: heat leaves the CPU's metal lid through an adapter, usually made of copper or another thermally conductive material. In the new design, water is brought directly to the CPU through capillary tubes, so heat transfer is far more efficient.
In the case of ETH in Switzerland, the heat extracted from the supercomputer will be recycled to heat rooms within the same university.
In 2019, the second supercomputer on the TOP500 list (the American Sierra) consumed half as much energy as the third on the list (China's Sunway TaihuLight).
Features of Supercomputer
Processing speed: Billions of floating-point operations per second.
Users at once: Up to thousands, in a wide network environment.
Size: Require special facilities and industrial air conditioning.
Difficulty in use: Only for specialists.
Usual clients: Large research centers.
Social penetration: Practically nil.
Social impact: Very important in the field of research, since it provides calculations at very high processing speeds, making it possible, for example, to sequence the human genome, compute the digits of π (a toy parallel sketch of such a computation follows this list), and solve physical problems with a very low margin of error.
Installed base: Fewer than a thousand worldwide.
Hardware: Purpose-built, mainframe-class systems.
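Below is the toy parallel sketch referred to in the list above: a Monte Carlo estimate of π split across CPU cores with Python's standard multiprocessing module. It is a scaled-down illustration of parallel numerical work, not how a production supercomputer code would be written.

```python
# Toy Monte Carlo estimate of pi, split across CPU cores.
import random
from multiprocessing import Pool

def count_hits(samples: int) -> int:
    """Count random points in the unit square that land inside the quarter circle."""
    rng = random.Random()
    return sum(1 for _ in range(samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)

if __name__ == "__main__":
    workers, per_worker = 8, 250_000
    with Pool(workers) as pool:
        hits = sum(pool.map(count_hits, [per_worker] * workers))
    # Area of the quarter circle is pi/4, so pi ~ 4 * hits / samples.
    print(f"pi ~ {4 * hits / (workers * per_worker):.4f}")
```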
Uses:
Supercomputers are used to tackle very complex problems that cannot be studied well in the physical world, either because they are dangerous or because they involve things that are incredibly small or incredibly large. Here are some examples (a toy simulation sketch follows the list):
Using a supercomputer, researchers model past and present climate and predict future climate.
Scientists investigating outer space use supercomputers to simulate stellar interiors and stellar evolution (supernova events, the collapse of molecular clouds, etc.), to run cosmological simulations, and to model space weather.
Scientists use supercomputers to simulate how a tsunami could affect a particular coast or city.
Supercomputers are used to test the aerodynamics of the most recent military aircraft.
Supercomputers are being used to model how proteins fold and how misfolding can affect people suffering from Alzheimer's disease, cystic fibrosis, and many types of cancer.
Supercomputers are used to model nuclear explosions, reducing the need for real nuclear tests.
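Most of the simulation workloads above come down to stepping a physical model forward in time on a grid. The sketch below shows that pattern with a toy one-dimensional heat-diffusion stencil in Python/NumPy; the grid size, step count, and diffusivity are arbitrary illustrative values, and real climate or CFD codes run the same idea at vastly larger scale.

```python
# Toy 1-D heat-diffusion stencil with NumPy: the update-the-grid pattern
# behind climate, tsunami and CFD simulations, at miniature scale.
import numpy as np

n_cells, n_steps, alpha = 100, 500, 0.1   # grid size, time steps, diffusivity

temp = np.zeros(n_cells)
temp[n_cells // 2] = 100.0                # a hot spot in the middle

for _ in range(n_steps):
    # Each interior cell is updated from its two neighbors.
    temp[1:-1] += alpha * (temp[2:] - 2 * temp[1:-1] + temp[:-2])

print(f"Peak temperature after {n_steps} steps: {temp.max():.2f}")
```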
Supercomputer Operating System:
The first supercomputers did not come with a built-in operating system, so each site (headquarters, laboratory, etc.) that used one was responsible for developing an OS tailored to its machine. For example, the machine often considered the first supercomputer in history, the CDC 6600, used the Chippewa Operating System (COS; the acronym was later also used for the unrelated Cray Operating System). This OS was quite simple, and its main feature was controlling the different tasks of the computer system so that each task always had what it needed to run to completion. It was followed by new systems such as KRONOS, SCOPE, and NOS.
The KRONOS operating system was deployed during the 1970s; its main feature was that several tasks could access the machine at the same time. The CDC SCOPE operating system (Supervisory Control Of Program Execution) was used during the 1960s; its main feature was supervising all of the system's tasks.
NOS (Network Operating System) can be said to have replaced the previous two during the 1970s. Its characteristics were very similar to those of its predecessor KRONOS, but what CDC specifically sought with NOS was a single operating system common to all of its machines. In the 1980s, NOS was replaced by NOS/VE (Network Operating System/Virtual Environment), which unlike its predecessor supported virtual memory. In the late 1980s, the forerunners of today's operating systems, UNIX-based systems, began to be deployed on supercomputers. The first was UNICOS, which emerged strongly during that decade.
Linux is the predominant OS at present for several reasons. It costs nothing and has a generic kernel. It scales well, adapting easily to very large workloads. It is built from small modules, each doing one task, so modifying one does not affect the others. Its code is open, so it can be modified whenever a change is wanted or the supercomputer requires one. It has a large community behind it providing support. Finally, it allows network configurations to be tested without restarting the system.
Example of Supercomputer:
Jaguar, Nebulae, IBM Roadrunner, Kraken, JUQUEEN, Pleiades, Tianhe-1, Cray-1, Cray-2, CDC Cyber 205, Titan, Sequoia, Mira, SuperMUC, Fermi.