Once the rarefied realm of weapons makers and code breakers, supercomputing now faces a defining moment. Egged on by a new government need for computing power, this exotic technology is leaving the "nukes and spooks" for more everyday uses. Researchers and commercial customers are running everything from digital simulations to salary databases on machines that rely not on expensive custom microchips but on off-the-shelf technology. In short, supercomputing is getting down to business.

Five years ago there were a dozen supercomputer manufacturers hoping to cash in on variants of two main ways of thinking. In one corner was Seymour Cray. His company, Cray Research, designed custom chips called vector processors, which sold mostly to the U.S. government. On the other side was W. Daniel Hillis. As a grad student at MIT, Hillis invented "massively parallel processing," which uses thousands of cheap processors to work on tiny bits of a larger solution. He founded his company, Thinking Machines, in 1983.
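
The two philosophies are easy to caricature in a few lines of code. The sketch below is purely illustrative, not anything either company shipped; the array, its size, and the thousand-worker split are invented. It sums the same numbers twice: once as a single processor streaming through the whole array, vector style, and once chopped into small pieces whose partial results are combined, the massively parallel way.

```c
/* Toy contrast: one fast processor streaming a whole array ("vector" style)
 * versus many cheap processors each handling a small piece ("massively
 * parallel" style).  The loop over workers runs sequentially here; on a real
 * MPP each chunk would live in a different processor's private memory. */
#include <stdio.h>

#define N       1000000
#define WORKERS 1000           /* stand-in for "thousands of cheap processors" */

static double data[N];

int main(void) {
    for (int i = 0; i < N; i++) data[i] = 1.0;

    /* Vector style: one processor sweeps the entire array. */
    double vector_sum = 0.0;
    for (int i = 0; i < N; i++) vector_sum += data[i];

    /* MPP style: split the array into WORKERS chunks, sum each piece,
     * then combine the partial results. */
    double mpp_sum = 0.0;
    int chunk = N / WORKERS;
    for (int w = 0; w < WORKERS; w++) {
        double partial = 0.0;                 /* lives in worker w's memory */
        for (int i = w * chunk; i < (w + 1) * chunk; i++) partial += data[i];
        mpp_sum += partial;                   /* explicit combine step      */
    }

    printf("vector-style sum = %.0f, mpp-style sum = %.0f\n",
           vector_sum, mpp_sum);
    return 0;
}
```

The chunk loop runs its pieces one after another here; on a real massively parallel machine each piece would run simultaneously on its own processor, out of its own memory, and getting the partial results back together would be the programmer's problem.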

Then the market went south. Computer chips became commodities instead of boutique items, making vector machines cost more to develop than they sold for. The cold war ended, and the government cut back. And massively parallel systems were still hard to program because each processor had its own memory. As a result of the turmoil, this year Silicon Graphics (SGI) merged with Cray Research to claim half of the world supercomputer market. Like other large manufacturers, the new entity is committed to a concept called “scalable architecture.” It plans to use its current lines of microprocessors to build larger machines.

The companies were already headed toward parallelism. Now they use the same processors in systems from workstations to supercomputers. Instead of redesigning processors, they just add more; applications that run on a two-processor system run even faster with 128. That's scalable architecture. "One of the things you get with scalability is a different kind of economies," says Willy Shih, vice president of marketing for SGI's Scalable Systems Group. "Our Origin 2000 line starts as a small system at about $12,000, but using more commodity building blocks, you can build everything up to the largest computer in the world."
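
What "just add more" means in practice can be sketched with POSIX threads standing in for the processors of a shared-memory machine like the Origin line; the worker count, the array, and the summing task here are all invented for illustration. The code is identical whether it is told to use 2 workers or 128; only the number changes.

```c
/* Scalability sketch: the same code runs with 2 workers or 128; you pick the
 * count at run time instead of redesigning anything.  POSIX threads stand in
 * for the processors of a shared-memory machine. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define N 1000000

static double data[N];

struct task { int begin, end; double partial; };

static void *sum_range(void *arg) {
    struct task *t = arg;
    t->partial = 0.0;
    for (int i = t->begin; i < t->end; i++) t->partial += data[i];
    return NULL;
}

int main(int argc, char **argv) {
    int workers = (argc > 1) ? atoi(argv[1]) : 2;   /* e.g. 2, 8, 128 ... */
    if (workers < 1) workers = 1;

    for (int i = 0; i < N; i++) data[i] = 1.0;

    pthread_t   *threads = malloc(workers * sizeof *threads);
    struct task *tasks   = malloc(workers * sizeof *tasks);
    int chunk = N / workers;

    for (int w = 0; w < workers; w++) {
        tasks[w].begin = w * chunk;
        tasks[w].end   = (w == workers - 1) ? N : (w + 1) * chunk;
        pthread_create(&threads[w], NULL, sum_range, &tasks[w]);
    }

    double total = 0.0;
    for (int w = 0; w < workers; w++) {
        pthread_join(threads[w], NULL);
        total += tasks[w].partial;
    }

    printf("%d workers, sum = %.0f\n", workers, total);
    free(threads);
    free(tasks);
    return 0;
}
```

Run it with the worker count as its argument (after compiling with a pthreads-capable C compiler). Because the workers share one address space, nothing has to be explicitly shipped between them, which is the bookkeeping that made the old distributed-memory machines such a pain to program.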

The manufacturers hope that will set up a cycle of demand. The more installed hardware, the bigger the software market. The more software available, the more people want a souped-up computer. Even now researchers have visions of designing drugs by programming intricate protein molecules and predicting the weather by modeling huge chunks of atmosphere. Industry wants virtual cars to test aerodynamics and safety on the cheap; it's already doing high-powered accounting. "It'll be cheaper for someone to sit down at a workstation connected to a supercomputer and simulate a design problem than to go into a workshop and build a prototype," says Mark Bregman, general manager of IBM's RS/6000 division.

Best of all, the government is paying to develop the machines. This September President Clinton signed the Comprehensive Nuclear Test Ban Treaty he had committed to last year. That left the Department of Energy with no way to know whether its weapons still worked. The "physics package" on American nukes, the part that blows up, ages in hideously complicated ways. "They were designed to push some boundaries, but people assumed when they did this that they could test," says Gil Weigand, head of Energy's Accelerated Strategic Computing Initiative. ASCI's goal is "science-based stockpile stewardship": simulating with supercomputers what detonation used to test.

Computers powerful enough to model the nuclear life cycle, teraflops machines capable of a trillion floating-point operations a second, weren't due until 2025. So ASCI started paying companies to pick up the pace. ASCI's budget is rocketing from $85 million this year to $295 million for fiscal 1999. But what to spend the money on? "It became very clear to us early on that the only way we were going to get to these performance numbers was just to bunch lots of processors together," says Alex Larzelere, director of Energy's Office of Strategic Computing. Hence Intel's Option Red, as well as the even faster Pacific Blue, an IBM RS/6000 SP, and Mountain Blue, a four-teraflops Cray.

Not everyone is sold on scalability. Burton Smith, a founder of Tera Computer Company, says there's still room for a system with custom-designed chips. Tera's trick is in the way those chips get at data. Scheduled for delivery to the San Diego Supercomputer Center early next year, the Tera uses a small number of custom processors, each with more than a hundred "virtual processors" that alternate between calculating and waiting for data. "We believe the dynamics of the situation can be changed by having machines that are easy to program and broadly applicable," Smith says.
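
Smith's bet, keeping a processor busy by interleaving many threads, can be captured in a back-of-envelope model; the cycle counts below are invented for illustration, not Tera's specifications. If a memory reference takes L cycles to come back and each virtual processor has W cycles of useful work per reference, the chip saturates once roughly 1 + L/W of them are in flight.

```c
/* Back-of-envelope model of latency hiding by multithreading: while some
 * virtual processors wait on memory, others compute.  The numbers are
 * illustrative, not Tera's actual specifications. */
#include <stdio.h>

int main(void) {
    const double latency = 100.0;  /* cycles a memory reference is outstanding */
    const double work    = 2.0;    /* cycles of computation per reference      */

    for (int threads = 1; threads <= 128; threads *= 2) {
        /* Each thread offers `work` useful cycles per (work + latency) window;
         * the processor saturates once the threads together fill the window. */
        double utilization = threads * work / (work + latency);
        if (utilization > 1.0) utilization = 1.0;
        printf("%3d virtual processors -> %3.0f%% busy\n",
               threads, 100.0 * utilization);
    }
    return 0;
}
```

With 100 cycles of latency and two cycles of work per reference, the crossover comes at about 50 virtual processors, which is why a hundred-plus of them per chip is less extravagant than it sounds.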

He’s not alone. Fujitsu and NEC, Japan’s largest supercomputer makers, both still sell a lot of vector machines. And while the world supercomputer market is reported to be about $2 billion, that doesn’t count the computers used by intelligence organizations. “That’s where a big part of the market is,” says SGI’s Shih. “There’s a room somewhere where they have multiple hundreds of millions of dollars’ worth of vector processors. Even I don’t know what they do.”

So is it a world split between vector and scalable? “There was certainly a long period when there were things you’d do better with a horse than a car,” says Hillis, now at Disney. “But the people who were experts in horses believed it was true for a lot longer than it was.” Scalable supercomputer makers are now moving toward “distributed shared memory” architectures that will supposedly be as easy to program as a PC, but have all the power of a supercomputer. And why not? The National Science Foundation is already funding research toward a quadrillion operations a second. Get ready for petaflops.