Wednesday, June 26, 2013

Still No Exascale For You (NVIDIA's Bill Dally Weighs In)

NVIDIA’s Chief Scientist, Bill Dally, tackled the question of when exascale will arrive in his ISC keynote address, entitled “Future Challenges of Large-scale Computing.”

Presenting to some of the high performance computing (HPC) industry’s foremost experts, Dally outlined challenges the industry needs to overcome to reach exascale by the end of this decade.

It boils down, in his view, to two major issues: power and programming.

It’s About Power, Forget Process

Theoretically, an exascale system – 100 times more computing capability than today’s fastest systems – could be built with only x86 processors, but it would require as much as 2 gigawatts of power.

That’s the entire output of the Hoover Dam.

On the other hand, the GPUs in an exascale system built with NVIDIA Kepler K20 processors would consume about 150 megawatts. So, a hybrid system that efficiently utilizes CPUs with higher-performance GPU accelerators is the best bet to tackle the power problem.
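
As a rough sanity check on those numbers (my own back-of-the-envelope arithmetic, not Dally's), the power bill falls straight out of an assumed energy efficiency: divide a sustained exaflop (10^18 FLOP/s) by the FLOPS-per-watt you expect from each architecture. The efficiency figures in the little program below are assumptions inferred from the 2 GW and 150 MW claims above, not measured values.

/* Back-of-the-envelope power check (illustrative only; the efficiency
 * numbers are assumptions inferred from the figures quoted above).
 * Power = sustained FLOP/s divided by FLOP/s-per-watt. */
#include <stdio.h>

int main(void)
{
    const double exaflop = 1e18;            /* target: 1 EF/s sustained     */
    const double cpu_gflops_per_watt = 0.5; /* assumed x86-only efficiency  */
    const double gpu_gflops_per_watt = 6.7; /* assumed K20-class efficiency */

    double cpu_watts = exaflop / (cpu_gflops_per_watt * 1e9);
    double gpu_watts = exaflop / (gpu_gflops_per_watt * 1e9);

    printf("x86-only:  %.1f GW\n", cpu_watts / 1e9);  /* ~2 GW   */
    printf("GPU-heavy: %.0f MW\n", gpu_watts / 1e6);  /* ~150 MW */
    return 0;
}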

GPUs are something I pushed here at the Lab from the retirement of the Cray vector machines until I moved over to the data systems side.  However, GPU programming is still problematic in many ways, though that could be rectified over time.  On the flip side, how many sites can STILL afford 150 MW of power for a single machine?  Our new building won't be able to handle 300 MW (we keep two big machines on the floor at a time...).
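
To make the programming point concrete, here is a minimal sketch (my own illustration, not anything from Dally's talk) of what even a trivial GPU offload looks like in CUDA: explicit device allocation, host-to-device copies, and a kernel launch, none of which exists in the equivalent one-line CPU loop.

#include <stdio.h>
#include <cuda_runtime.h>

/* Trivial kernel: y[i] = a*x[i] + y[i].  On a CPU this is one for-loop;
 * on the GPU it also needs explicit memory management and a launch. */
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    float *x = (float *)malloc(bytes), *y = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    float *dx, *dy;                       /* device copies of x and y */
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, x, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, y, bytes, cudaMemcpyHostToDevice);

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);   /* launch on GPU */

    cudaMemcpy(y, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", y[0]);          /* expect 4.0 */

    cudaFree(dx); cudaFree(dy); free(x); free(y);
    return 0;
}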

I am still refining the "no singularity" idea for a future post.  Friends have seen an extremely rough outline of the idea in email; it needs a couple more mental iterations before being committed to a post.
