Tuesday, July 16, 2013

Fourth Crazy Thought of the Day: Singularity For You! An Argument Against Uploading

I have at times claimed the Singularity isn't coming. I have even made a point of mocking the idea, going as far as joining the call to label the Singularity the "Rapture of the Nerds." But I have only made vague statements as to why, sometimes tossing off snarky one-liners like "The Heat Death of the Singularity." It's time to put a little more thought and time into the subject.

I am seeing too many articles about thinking machines and ridiculous videos expounding on how the future is all computer and meatspace is, at best, obsolete within my kids' and possibly even my own lifetime. So I am writing my own response to the idea of the Singularity, or rather, to at least one aspect of it which keeps getting thrown around. Today I am tackling not AI per se and the fear mongering associated with it, but rather the idea of uploading: taking all the data of your brain and turning it into a very accurate model run on a computer, a high fidelity copy of you. Your mental twin or even afterlife. (*cough*RON*cough*)

Why? Because it's complete and utter nonsense that this is happening any time soon, and I will explain why.

First off, let me scope this a bit further. This is NOT an academic, peer reviewed paper. Nor is it even a white paper which must run the gauntlet of at least your peers at your place of work. This is a blog post. The research backing it up is of the Google-and-personal-knowledge variety. There will be some links to various places, but there won't be a bibliography. And, again, I am not tackling anything other than the uploading scenario.

Secondly, this is an extrapolation of the brute force method of simulating a human brain. It *IS* the method I hear thrown around most often. However, it is still brute force, and there may be other methods which are less computationally intensive. I do not know of them, though, and what I don't know or can't find information on is not something I feel comfortable refuting.

Those stipulations in place, let's do this thing.

The brute force method I mentioned above is literally scanning the entire brain, getting its state - a snapshot, if you will - and simulating it on a computer down to the level of physical processes, with the assumption and belief that this will be sufficient to produce a human mind as software. The idea then is that you can run that software at human speed, or potentially even faster, if the computers are available to do so.

You could then work 10 hours of wall-clock time per week while running at 4x wall-clock speed, putting in the subjective 40 hours a meatspace worker does, and goof off for the other 158 hours of the week while still accomplishing as much as you did as a meatspace person. Indeed, you could contemplate the world, sort out problems, or even come up with self-congruent, noncontradictory religions in the wall-clock time of a week if you had a mega speed-up. Best of all, your mind would never 'die.'
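If you want that bookkeeping spelled out, here it is as a trivial Python sketch, using nothing but the 4x and 10-hour figures from the paragraph above:

```python
# Back-of-the-envelope: subjective vs. wall-clock hours for an upload
# running faster than real time. The 4x speedup and 10 wall-clock work
# hours are the figures from the paragraph above.
speedup = 4.0                 # simulated mind runs at 4x wall-clock speed
wall_hours_in_week = 168.0
wall_hours_worked = 10.0

subjective_hours_worked = wall_hours_worked * speedup            # 40 "person-hours" of output
wall_hours_goofing_off = wall_hours_in_week - wall_hours_worked  # 158 hours

print(f"{subjective_hours_worked:.0f} subjective work hours "
      f"in {wall_hours_worked:.0f} wall-clock hours; "
      f"{wall_hours_goofing_off:.0f} wall-clock hours left over")
```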

So, for the moment, what would it take to simulate a human brain? After all, some have been arguing we are rapidly approaching the point where the supercomputers we have perform as many computations a second as the human brain. Is that sufficient to run a simulated human brain even in real time?

Let's see.

Let's head down to the smallest part of the brain, the synapse, and see if there is a good simulation of it. In fact, there is, and it's called MCell 3.0. It simulates a single synapse of a neuron. In 2007, it took 45 seconds of wall-clock time to simulate one second of simulated time for a single synapse on an AMD Opteron-derived node.  45:1 is not so good, if not as bad as some other simulations I have either worked with or known.  The good news is that, based on improvements in per-core performance, increased core counts, and increased memory since then, we can now do a real-time simulation of a synapse on a single node!
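A quick sanity check on that claim, in Python. The 45:1 ratio is the MCell figure above; the assumed per-node improvement factor between a 2007 Opteron node and a 2013 node is my own rough guess for illustration, not a benchmark:

```python
import math

# MCell 3.0 (2007): 45 s of wall clock per 1 s of simulated time on one node.
slowdown_2007 = 45.0

# ASSUMPTION: aggregate per-node improvement (clock, core count, memory
# bandwidth) between a 2007 Opteron node and a 2013 node. A guess, not a
# measured number.
assumed_node_improvement = 50.0

print("real time reachable on one node?", assumed_node_improvement >= slowdown_2007)

# For comparison: how many 18-month doublings would close a 45:1 gap on
# raw doubling alone?
print("doublings needed:", math.ceil(math.log2(slowdown_2007)))  # -> 6
```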

If we can model a synapse, we need to take the next step up.  How many synapses are there on a human neuron?  Roughly 10,000.  At one node per real-time synapse, my day job's brand new supercomputer, Edison, doesn't cut it: even when its Phase 2 build-out is done it will only have 5,500 nodes, good for two petaflops of sustained performance!  Edison cannot run a single neuron.  Let's check out Titan at Oak Ridge National Lab.

Titan has 18k+ nodes, each with two Opteron CPUs of eight cores apiece.  Right there we can run at least 1.8 neurons' worth of synapses!  Woo!  The better news is that Titan also runs with a GPU per node, and that GPU is about 7x faster than the two Opteron CPUs combined.  Now, I'm being bad here for a moment and using a linear extrapolation: GPU coding is NOT like standard CPU coding.  There are also communication overheads not being included here.  Nor am I counting the computation for the internals of the neuron.  

Even so, and being way overly generous, we can do a whole eight (*8*) human neurons in real-time simulation. It requires 8 MW of power to do so, and I bet that does not count cooling.
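For the curious, here is the node counting as a rough Python sketch, one node per real-time synapse, using the figures above (18,688 is Titan's published node count, the "18k+" I mentioned). The naive GPU credit comes out around fifteen neurons; eight is the deliberately round, still-generous figure left over once you stop pretending GPU code and inter-node communication are free:

```python
synapses_per_neuron = 10_000      # rough figure from above
edison_nodes = 5_500              # Edison, Phase 2
titan_nodes = 18_688              # Titan (the "18k+" above)
gpu_vs_cpu_factor = 7.0           # GPU ~7x the node's CPUs, per the linear extrapolation above

# One node per real-time synapse:
print("Edison neurons (CPU only):", edison_nodes / synapses_per_neuron)   # ~0.55
print("Titan neurons (CPU only): ", titan_nodes / synapses_per_neuron)    # ~1.87

# Naive linear credit for the GPUs (over-generous, as noted above):
titan_neurons_with_gpus = titan_nodes * (1 + gpu_vs_cpu_factor) / synapses_per_neuron
print("Titan neurons with GPUs:  ", titan_neurons_with_gpus)              # ~15, cut to ~8 above
```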

We can almost do a fruit fly.  The Chinese probably can.  But set the fruit fly aside; let's stay focused on humans.

Let's be generous again.  Let's assume there are enormous improvements to be had in the simulation algorithms; MCell may not be the fastest possible simulation.  Let's argue, for the sake of generosity, that we can get three orders of magnitude of improvement in simulation speed.  That allows us 8,000 human neurons.

How many neurons do people have, really?  1,000,000,000,000,000.

The fastest American supercomputer is, under these way overly generous terms, still 125,000,000,000 times too slow.  A supercomputer which could sustain an exaflop would still be over 20 million times too slow.  It would also take over 160 MW to run if you used the absolute best possible compute-to-power ratio on the current Top500 list and DOUBLED it (6 gigaflops/watt).
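Here's that arithmetic sketched out in Python, using the 10^15 figure above and the doubled 6 gigaflops/watt efficiency:

```python
# Back-of-the-envelope for the shortfall and the exaflop power bill.
neurons_needed = 1_000_000_000_000_000   # the figure above (10^15)
neurons_simulable = 8 * 1000             # 8 neurons on Titan, times the granted 1000x

shortfall = neurons_needed / neurons_simulable
print(f"shortfall: {shortfall:,.0f}x too slow")                    # 125,000,000,000x

# Power for a sustained exaflop at the doubled best-case efficiency above:
exaflop = 1e18                     # flops
flops_per_watt = 6e9               # 6 gigaflops/watt
print(f"exaflop power: {exaflop / flops_per_watt / 1e6:.0f} MW")   # ~167 MW
```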

The comeback is that we have Moore's Law.  Moore's Law states that every 18 months the number of transistors per unit area doubles; people have warped this to mean that every 18 months computers get twice as fast.  Oy.  That's incorrect, but for the sake of generosity, we'll run with it.  

How many generations, under the generous terms above, are we away from brute forcing a real-time simulation of all the synapses in the human brain?  It is 37 generations, or about 55 years.  That's with the assumed 1000x speed-up over MCell 3.0 and no overhead for communication or whatnot.  If there is no speed-up, then we're looking at another 10 generations, or 15 more years.  The soonest we could turn on a human-brain-simulating computer would be around 2068, or around 2083 without the algorithmic speed-up.
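The generation counting, sketched with the shortfall factors derived above:

```python
import math

MOORE_YEARS_PER_DOUBLING = 1.5       # 18 months, using the warped reading of Moore's Law
START_YEAR = 2013

for label, shortfall in [("with the granted 1000x speedup", 1.25e11),
                         ("with no algorithmic speedup   ", 1.25e14)]:
    generations = math.ceil(math.log2(shortfall))      # doublings needed to close the gap
    years = int(generations * MOORE_YEARS_PER_DOUBLING)
    print(f"{label}: {generations} generations, ~{years} years, around {START_YEAR + years}")
```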

What's the power consumption?  Koomey's Law claims that every 18 months the amount of power necessary to do a flop is cut in half.  So power consumption of supercomputers ought to be stable if that were true, right?  We ought to be using the same amount of power a Cray-1 did: Moore doubles the speed and Koomey cuts the power in half, so these ought to balance out.  They haven't.

A Cray-1 ran on 115 kW of power, and that included the cooling.  Titan runs on 8.1 MW, and that probably does NOT include the cooling: the rule of thumb here is that we use as much power to cool these systems as we spend on running them.  If you allow for a drift from Koomey's Law similar to what has happened between the Cray-1 and Titan, you're looking at a 500,000 times increase in power.  Even if we are generous again with our factor-of-1000 handwave, this time allowing a 1000 times improvement in power over the drift from Koomey's Law, you're still looking at 500 times the power needed to run Titan in order to run a simulated human brain.

That's 4 GW.  You need an upgraded Palo Verde (actually 1.2 Palo Verdes) to run a single person.  The largest nuclear power plant in the USA is needed for ONE PERSON.
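And the power arithmetic, sketched. The 3.3 GW figure for Palo Verde's net capacity is my own approximate number for scale:

```python
titan_power_mw = 8.1                 # Titan's power draw, from above
koomey_drift_factor = 500_000        # drift from Koomey's Law, extrapolated above
generosity_factor = 1_000            # the magic hand-wave granted above

brain_power_mw = titan_power_mw * koomey_drift_factor / generosity_factor
print(f"simulated brain: ~{brain_power_mw / 1000:.1f} GW")                    # ~4 GW

palo_verde_gw = 3.3                  # approx. net capacity of Palo Verde, largest US nuclear plant
print(f"Palo Verdes needed: ~{brain_power_mw / 1000 / palo_verde_gw:.1f}")    # ~1.2
```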

Even if you could wave the magic factor-of-1000 wand again and knock that down to 4 MW, half of Titan, the cost is as much as powering 1,000 households for a year, and on average you get 4 human beings out of each of those households.

So, for the moment, consider.  We granted a 1,000 times improvement over current algorithms for simulating the synapses.  We ignored the processes for running the brain other than the synapses.  We granted a 1,000x improvement over the observed drift from Koomey's Law for supercomputers.  Truthfully, it's more than that, but I am being generous here.  It doesn't matter if I am off by a factor of two or ten or even a thousand.  I gave away over a factor of a million; I'd still come out ahead here by over a factor of 1,000.  

This doesn't even consider the idea that we may, at some point, hit the sigmoid for computational technologies.  I just ran with the idea that computer technology will keep moving along under the wrongheaded Moore's Law interpretation forever.  There is a point, either in complexity or in

Economics, my friends, kills the Rapture of the Nerds.  It's simply cheaper to raise a human being from birth to death than it is to upload one person and keep them running.

The Singularity, at least as envisioned by those who see us simulating ourselves as uploads, just ain't gonna happen.

4 comments:

  1. Siobhan, 1:57 PM

    I think your third from last paragraph got truncated in posting. "There is a point, either in complexity or in "


  2. Anonymous, 3:01 PM

    "How many neurons do people have, really? 1,000,000,000,000,000."

    I think you mean "how many synapses" -- we only have billions of neurons, not thousands of trillions of them.

  3. Comments here.
    http://james-nicoll.livejournal.com/4482622.html?thread=84071998#t84071998

  4. You seem to be assuming 10^19 synapses. But http://en.wikipedia.org/wiki/Neuron#Neurons_in_the_brain says 10^14.

    Your argument still holds with these 5 orders of magnitude of error. However, this big an error in the number of synapses leads me to take your other numbers with a grain of salt.
