The pitfalls of supercomputing
Monday, 10 September, 2007
Choosing a supercomputer for some fields of electronics research presents a conundrum for researchers, thanks to practical limitations inherent in every option.
Scientists typically can't perform by hand the vast quantity of intensive calculations needed to answer their research questions.
To this end, researchers usually rely on some kind of high-performance computing (HPC) system, often known as a supercomputer.
Traditionally, the computers used in such computational research are classified as one of two types: capability computers, which function as single machines with many constituent processors, or server clusters, which comprise many smaller machines, each containing a small number of processors.
The reason for this division is that different research problems are best approached in different manners.
For example, a problem that must be solved in a particular sequence is best calculated by a shared-memory supercomputer — also known as a capability computer.
Other research may be best performed by a server cluster. Because the constituent machines of a cluster can work independently of one another, a larger problem can be broken down into its component processes, with each process allotted to one or several of the machines within the cluster.
Theoretically, single shared-memory supercomputers perform best on tasks requiring serial computation, whereas server clusters are best used on problems that can be broken down into component parts.
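To make the distinction concrete, here is a minimal sketch in Python (purely illustrative, and not drawn from SGI's or QUT's actual systems): the serial loop depends on each previous result and so cannot be split up, while the independent tasks can be handed out to many workers at once, with a local process pool standing in for the nodes of a cluster.

    from multiprocessing import Pool

    def next_step(state):
        # One step of a serial calculation: each result feeds the next,
        # so the steps cannot be spread across independent machines.
        return state * 1.0001 + 1.0

    def independent_task(sample):
        # One self-contained piece of a larger problem: it needs no other
        # piece's result, so many copies can run at once, as on a cluster.
        return sample * sample

    if __name__ == "__main__":
        # Serial workload: the kind of job suited to a shared-memory
        # (capability) machine, since later steps wait on earlier ones.
        state = 0.0
        for _ in range(100_000):
            state = next_step(state)

        # Decomposable workload: pieces are distributed independently.
        # Here a local process pool stands in for the machines of a cluster.
        with Pool(processes=4) as pool:
            results = pool.map(independent_task, range(100_000))

        print(state, sum(results))

In a real cluster the pieces would be distributed over a network, typically with message-passing software rather than a local process pool, but the shape of the decomposition is the same.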
In reality, it is not as simple as the theory would have you believe. As HPC provider SGI observes, both single supercomputers and server clusters have problems when used alone. This is particularly true of server clusters.
Clusters typically suffer from what's known as 'server sprawl': the increasing amount of physical space devoted to servers as more and more of them are added. They are also particularly inefficient in their energy consumption and demand expensive cooling.
However, clusters are necessary: some problems are best solved with one. Attempting to use a single supercomputer for a task best suited to a cluster will produce a solution even less effective than simply using a cluster.
Not content with simply maligning the state of things, SGI also claims to have a solution. No man is an island, according to one 17th century poet, and SGI would have us believe that no HPC should be, either.
SGI's solution is a hybrid system that marries a single shared-memory supercomputer with a server cluster.
According to SGI Asia Pacific vice president Bill Trestrail, such a system enjoys the benefits of both types of system and minimises their drawbacks, particularly those of the cluster.
An example of such hybridisation in practice can be found at the Queensland University of Technology (QUT).
QUT has installed an HPC system comprising a 96-processor shared-memory supercomputer and a 112-processor server cluster. While not strictly an integrated system, the combination functions similarly to SGI's hybrid.
Of course, the aim of any system like this is to enable researchers to focus on their work, rather than fiddle with or worry about the technology involved.
Dr Joseph Young, manager, HPC and research support, seems hopeful:
"These new HPC technologies will greatly enhance research throughput and help QUT to deliver innovation and research outcomes faster than ever before."