The global race to build more powerful supercomputers is focused on the next big milestone: a supercomputer capable of performing 1 million trillion floating-point operations per second (1 exaflops). Such a system will require a big overhaul of how these machines compute, how they move data, and how they’re programmed. It’s a process that might not reach its goal for eight years. But the seeds of future success are being designed into two machines that could arrive in just two years.
China and Japan each seem focused on building an exascale supercomputer by 2020. But the United States probably won’t field its first practical exascale machine until 2023 at the earliest, experts say. To hit that target, engineers will need to do three things. First, they’ll need new computer architectures capable of combining tens of thousands of CPUs with graphics-processor-based accelerators. Second, they’ll need to rein in the growing energy cost of moving data between a supercomputer’s memory and its processors. Finally, software developers will have to learn to write programs that can exploit the new architecture.
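To give a sense of what that third challenge means for programmers, here is a minimal sketch (not from the article) of the hybrid CPU-plus-accelerator style that exascale codes are expected to use, written in CUDA: the CPU orchestrates the work and explicitly copies data to and from the GPU, while the GPU does the bulk arithmetic. The explicit copies also hint at why data movement, not raw computation, dominates the energy budget.

```cuda
// Minimal illustrative sketch of CPU + GPU-accelerator programming.
// The kernel name, sizes, and values here are arbitrary examples.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// GPU kernel: each thread adds one pair of elements.
__global__ void vector_add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                 // 1M elements
    const size_t bytes = n * sizeof(float);

    // Host (CPU) memory.
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Device (GPU) memory. Every byte copied across this boundary costs
    // energy, which is why data movement is a central exascale concern.
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch enough threads to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vector_add<<<blocks, threads>>>(d_a, d_b, d_c, n);

    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", h_c[0]);         // expect 3.0

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
```

In a real exascale application this pattern is multiplied across tens of thousands of such CPU-GPU nodes, which is what makes the programming challenge so much larger than on today’s machines.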
“To some degree it depends on how much money a country is willing to spend,” says Steve Scott, senior vice president and chief technology officer at Cray. “You could build an exaflop computer tomorrow, but it’d be a crazy thing to do because of the cost and energy required to run it.”