I am not kidding; your brain will probably get hacked.

I vacillate between optimism and pessimism when I consider the trajectory of human history.  On the one hand, I am sympathetic to optimists like Steven Pinker and Matt Ridley.  They can point to a large body of data that supports a narrative of human progress.  On the other hand, I can see why scientists and hippies alike are concerned that human population growth could damage the biosphere’s ability to sustain humans.  Unlike the hippies, I acknowledge that an anti-technology, back-to-nature approach would also result in a massive die-off of the human population.  (Pol Pot had a back-to-nature plan.)  Also, going back to nature would simply reset the timer, since humans (perhaps more than other living things) seem to have these cooperative and competitive threads intertwined.  Groups of humans that cooperate to create technology will always outcompete groups with weaker technology.

I assume that we will need to innovate our way out of this population problem.  I don’t buy the idea that humans will give up materialism en masse and switch to a non-consumptive, knowledge-based economy as some authors suggest.  (At least not until we get Matrix-quality virtual reality.)  That said, the paper just cited is probably correct that this innovation inevitably feeds into a singularity event that will move humans to a “qualitatively new level.”  I guess I prefer the devil-we-can’t-know to a probable human semi-extinction outcome.  Go Singularity!

The path to this human-salvation-level innovation is definitely rocky, though.  I have brought up Thiel’s concerns about the stagnation of innovation before.  And there seem to be perverse market incentives in place to wring every last drop of dinosaur juice out of the earth.  (Seriously, the insurance companies need to stand up to big oil at some point.)  We need more of our plutocrat overlords to follow Elon Musk into electric cars, solar energy, and rockets to Mars.

OK, so are you getting the idea that this blog is not entirely about starry-eyed techno-optimism?  Good, read on.  The reason I am writing this post is that an acquaintance from my Futurist meetup sent me a NY Times article on human augmentation.  This article digs into actual augmentation that is happening today: from brain implants that help paralyzed people operate artificial limbs to drugs like Provigil, which some people use to help them perform better when they skip sleep.

I am most concerned about the security of computerized augmentation, though.  Cochlear implants are being used by hundreds of thousands of people right now.  These devices “require and enable remote programming.”  Devices to restore vision to the blind are in early-stage development as well, along with exoskeletons to help the physically disabled.  All of these devices will require software.  Software requires updates.  Herein lies a problem.

There is a great line in Ghost in the Shell where an agent says to an enemy, “Sorry pal, I had to hack your eyes,” and then kills him.  GITS was actually fairly prescient in its take on cyberwarfare.  (The series deeply explores transhumanism as well; I highly recommend it.  Don’t be distracted by the sexy outfits; this is fairly cerebral stuff.)  Some might mistakenly assume that these augmentation devices are designed with security in mind.  But they aren’t.  One researcher showed how he could hack his own insulin pump at BlackHat last year.

Some might also mistakenly assume that computer security is effective even when effort is taken to implement it.  Consider botnets, which are massive collections (millions!) of infected computers under the control of hackers.  It’s hard to measure botnets, but there is no question that millions of machines are at work generating spam every day.  You see, the old days of naughty vandal hackers are mostly past.  Now hacking is mostly a business based on stealing your stuff: computing resources, financial account information, intellectual property, etc.  (Well, there is also the whole cyberwar thing.)  So hackers do everything they can to evade detection.

I work with computer systems for a living, so I need to deal with security to do my job, and I follow the trends.  At BlackHat/DefCon this year, I was depressed by the general consensus that computer security is effectively useless against Advanced Persistent Threats (APTs).  Look at hacks like Aurora, Night Dragon, and the RSA hack.  Our best technology, energy, and defense companies were cracked open and looted of intellectual property worth billions by some estimates.  An APT is basically a targeted attack where the attacker knows which victim they want to hit, as opposed to general attacks that just scan the internet to find any vulnerable system.  These types of attacks are fiendishly difficult to defend against.  The signature-database approach used by most anti-malware software is fairly effective against non-targeted attacks of opportunity.  But to populate a database of APT attack vectors, you need data sharing, and that is a whole other can of worms.  “Look how I got hacked, Mr. Corporate Competitor.”  “Sure, go ahead and install that black box on my network, Uncle Sam.”
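
To make that limitation concrete, here is a minimal sketch (in Python, with a made-up directory and hash value) of the signature-database idea: the scanner only flags files whose hashes already appear in a shared blacklist, which is exactly why a never-before-seen, custom-built APT payload can walk right past it.

```python
# A minimal sketch of signature-database scanning, for illustration only.
# The directory path and the hash value below are made up; real products use
# far larger signature feeds plus heuristics, but the core idea is the same.
import hashlib
from pathlib import Path

KNOWN_BAD_HASHES = {
    # SHA-256 digests of previously observed malware samples (hypothetical)
    "5e884898da28047151d0e56f8dc6292773603d0d6aabbdd62a11ef721d1542d8",
}

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large files don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan(directory: str) -> list:
    """Return files whose hashes match a known-bad signature."""
    hits = []
    for path in Path(directory).rglob("*"):
        if path.is_file() and sha256_of(path) in KNOWN_BAD_HASHES:
            hits.append(path)
    return hits

if __name__ == "__main__":
    for hit in scan("/tmp/downloads"):  # hypothetical directory to scan
        print("Known malware signature:", hit)
```

A targeted attacker only has to make sure their payload has never been seen (and hashed) before, which is trivial when the malware is written for a single victim.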

So let’s just say that computer security is problematic.  I am not trying to discourage internet usage.  It seems clear that in most cases the benefits of connecting to the internet dwarf the risks from cybercrime.  However, there are many types of systems with different risk/reward ratios that should not be connected to the internet.  But they are anyway, since administrators make mistakes or take shortcuts.  Some might argue that augmentation devices will be implemented more carefully than unimportant things like power plants or facility control systems.  However, where there is a vector, there is a way.

One might wonder what real benefit hackers would gain by hacking augmentation devices.  Why hack eyes and ears?  Doctorow and Stross explore the idea of spam being injected directly into your visual field in their recent book Rapture of the Nerds.  Of course, Big Brother types in governments would love to control what you see and hear, and to have access to those feeds as well.  Really, the possibilities are endless.  The digital series H+ explores how augmentation might go awry.  I haven’t watched much of it, but there is a fairly chilling scene where a bunch of augmented humans experience a malfunction of some sort.

But the benefits of augmentation might still outweigh the risks.  I do think that augmentation will probably be the best way to avoid getting mown down by superhuman AI.  In any case, we might not have a choice in the matter.  There may well come a day when anyone without augmentation will be as helpless as a modern information worker without internet access.  The augmented will simply outcompete everyone else.

7 thoughts on “I am not kidding; your brain will probably get hacked.”

  1. Pingback: 2012 Humanity+ Conference Day 1 – part 1 – David Orban | The Oakland Futurist

  2. Pingback: Should we crowdsource malicious technology remediation? | The Oakland Futurist

  3. There is a disconnect between the title of your post and the content.

    One of the reasons our brains are going to be tough to hack is that they each run a unique operating system. Yeah, the components each person’s brain is built out of are constructed from very similar blueprints, but each design evolved divergently from the rest. Contrast that with our genetic mechanisms, which are not only shared amongst humans, but largely with the rest of the biosphere. So we are used to viruses leaping from person to person and from animal to human — but there are very, very few instances of neural structures getting “hacked”.

    All those gizmos we are tempted to add to our brains? Sure, those will be mass-produced and vulnerable. But the inside of the brain itself will be a formidable citadel of security for a long time after that.

    That might not be much comfort to someone who has had their eyes surgically replaced by augments and is suddenly seeing only what a hacker chooses — which will undoubtedly be the grist for some very sophisticated psychological manipulation — but the brain itself getting hacked? Not for a long while yet.

    • That argument seems to suggest that augmentation is hard, but it doesn’t show that hacking a working augment is hard. If everyone’s brain wiring is truly unique, then building an augment is harder. It would need to be adaptive, or people would need to develop new neural networks to use it. However, it can’t be a black box all the way up. At some point high in the stack there will be a function call for sendInputToBrain($_content) or something like that.

      • If the augments have to get into the inner data structures, it will be very hard, since those seem to be somewhat holographically distributed in ways unique enough to make things very hard indeed (darn it, where is that grandmother neuron hiding?).

        But I/O won’t require that. Figuring out what signals to send down the optic nerve so that the brain receives them as something intelligible is a different kind of problem. Same with auditory augments (undoubtedly easier). Maybe even hijacking proprioceptive inputs so a pilot “feels” the wind over a plane’s wings (although pilots will probably be obsolete long before then). An augment that lets a person subvocalize and see the results of a Watson-type query on an in-eye display doesn’t require deep knowledge.

        Plugging a Wikipedia module into one’s memory does. Probably even tougher would be an explicitly cognitive module, such as something to increase working memory.

  4. Pingback: h+ Magazine | Covering technological, scientific, and cultural trends that are changing human beings in fundamental ways.

  5. Pingback: McManus Proffers Trillions at SF Tech Shop Future Salon | The Oakland Futurist
