Future Salon – Future of Health: Adam Bosworth, Christine Peterson, and Faheem Ahmed

I attended my first Future Salon this evening and heard Adam Bosworth, Christine Peterson, and Faheem Ahmed give presentations.  The salon was held at the SAP campus in Mountain View.

Christine Peterson started off the talks with a presentation on Quantified Self, life extension, and personalized medicine. The audience was mostly aware of QS already, but some expressed disdain for the life extension idea during the Q&A. One audience member complained that the fountain of youth has been sought for centuries and no one has ever delivered on the promise of an extended life span. I thought this was a bit ironic given the steady increase in life span over time and the fact that QS and modern life extension techniques haven't really been in use long enough to show a longevity effect. Peterson, however, responded with sympathy and said she was actually more interested in health extension. She mentioned concierge doctors as a good resource to help with this.

Bosworth talked about his new startup Keas, a corporate wellness app that uses gamification as a persuasive technology (captology). He pointed out that personalization is unhelpful in a team environment: having just a few core health goals gives everyone a common experience to share. He listed four activities for better health: eat less food overall, eat more greens, reduce stress, and exercise more. He dismissed QS as being for Silicon Valley data geeks who are mostly healthy already. His focus is on the average American who is overweight, stressed, eats a poor diet, and neglects exercise, because that is where he feels he can do the most good. He mentioned that he wanted to set aside his work on "legos for adults" and do something to help humanity.

Ahmed talked about his own experience as a caregiver for older members of his family as well as for his son. He presented Care Circles, an app whose development he led at SAP. The app helps caregivers manage their care plans and team members, and it provides assistance in building care strategies as well as journals and customizable data trackers. The social elements let caregivers share medical data with anyone they want, which sidesteps the HIPAA barriers to social apps that most health providers face. Ahmed mentioned that Generation X is a sandwich generation, with a large population of baby boomers to care for as well as a large Generation Y. I sympathized with this, having helped with the caregiving my girlfriend did during her sister's cancer. This tool would have been really useful for keeping track of progress and task lists.

What is Futurism anyway?

Tonight I attended a party to celebrate the recent marriage of a friend. I found myself being asked over and over again: "So what is Futurism anyway?" I couldn't resist responding that it was an art movement in Italy in the early 1900s. I do actually like a lot of Futurist art. They often tried to depict a sense of motion, to capture the frenetic pace of modern life. I am not too into the violence and fascism, though.

But then I had to get serious and come up with a decent answer. And that is why it's a good idea to hang out with people outside your scene sometimes: it forces you to articulate ideas that you often take for granted. So I would say things like: Futurism is thinking about the future and wondering about what will happen. Science fiction is futurism. Futurists consider the idea that technology is accelerating exponentially and ask what the consequences might be.

And a lot of people responded quite positively to this. People feel these changes around them. The impact of automation on jobs is becoming more evident. We talked about the importance of education in these changing times and how budget cuts and skyrocketing college costs are putting kids into indentured servitude. We talked about how China might come to rule the world. I trotted out my standard bearish comments regarding China's corrupt financial system and its lack of transparency and rule of law.

A scientist who had recently drunk the Kurzweilian Kool-Aid, and who had actually visited China, was part of this discussion. He mentioned that systems with different paths to accomplish similar ends are more stable. I took this to be an endorsement of pluralism, and I complained that China's police state doesn't allow for it. Another guest chimed in that top-down rule can't work and that bottom-up societies generate more ideas. But our newly minted Singularitarian friend countered that the Chinese rulers carefully tweak the different elements of society, allowing more freedom in certain areas and restricting it in others. I don't understand how this system can possibly work, but it's hard to argue with the growth numbers. (Well, the specific numbers are probably fudged, but there has clearly been lots of growth.)

I talked to another fellow who was into machine learning and who had doubts about the whole Deep Learning project that Norvig was recently crowing about at the Singularity Summit. His opinion was that Deep Learning has been around for a while and that any recent success of the algorithms might be getting conflated with the benefits conferred by big data. He said that other algorithms should be tested against the same big data to see whether they perform almost as well. He mentioned support vector machines as one alternative, but these seem to require labeled training data, which Deep Learning doesn't. So arguably, Deep Learning is nicer to have when evaluating big unlabeled data sets. Anyway, when I asked Monica Anderson, she endorsed Deep Learning as being a thing, so I remain impressed for the time being.
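Out of curiosity, here is a minimal sketch of the experiment he seemed to be proposing: hold the data fixed, grow the training set, and see whether a simpler algorithm closes the gap on a neural network as the volume increases. The dataset, model sizes, and sample counts below are stand-ins of my own (using scikit-learn and synthetic data), not anything from the actual conversation or from real benchmarks.

```python
# Sketch: is it the algorithm or the data volume? Train a linear SVM and a
# small neural net on growing slices of the same (synthetic) dataset and
# compare test accuracy at each size.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=50_000, n_features=50,
                           n_informative=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (500, 5_000, 37_500):                 # growing amounts of labeled data
    svm = LinearSVC(dual=False).fit(X_train[:n], y_train[:n])
    net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300,
                        random_state=0).fit(X_train[:n], y_train[:n])
    print(f"n={n:>6}  SVM={svm.score(X_test, y_test):.3f}  "
          f"MLP={net.score(X_test, y_test):.3f}")
```

If the SVM's curve tracks the neural net's as the slices grow, that would support his "it's mostly the data" suspicion; if the gap widens, the architecture is earning its keep. Note that both models here need labels, which is exactly the limitation he conceded for SVMs.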

My Deep Learning skeptic friend was also wary of Quantified Self. I think his point was that over-quantification is being slowly forced upon people. A hilarious scenario of ordering a pizza in the big-data future immediately came to mind. But as much as I love the ACLU, I don't have much faith that they can protect us against big data. I actually think that being into QS might better prepare people to deal with big data's oppression. At least QS'ers become more aware that personal data can tell a story, and they are exploring how some of those stories can be self-constructed. Hopefully this will help us navigate a future where nothing is private.

A recurring theme when thinking about the future is that humans will somehow get left behind as technological progress skyrockets beyond our comprehension. A lot of humans are already getting left behind, economically and technologically. Someone who can't use search is at a massive disadvantage to everyone who can. I try to be positive sometimes and point out that mobile devices are spreading throughout the developing world, or that humans can augment themselves to keep up with change. But while we may live in an age of declining violence, I can see why some would still complain of sociopathic corporate actors and of policies being promoted that withdraw a helping hand from those in need.

At one point in the evening, toasts were made to the newlyweds and a passage by CS Lewis celebrating love was read.  I looked around as the various couples reacted to the emotional piece and I thought of my own girlfriend.  I thought about how we had been through death and madness.  Yet we managed to stay together, supporting one another, loving each other after all these years.  I thought about how deeply lucky we are to have one another.  I felt great happiness for these newlyweds with the courage to undertake this struggle for love.  I know us futurists can be cold, almost autistic in our dispassionate rationality, but it may well be love and empathy that will serve us best in the coming future where so little is certain.

Singularity Summit Day 2: Vernor Vinge reminds me why I doubt recursively self-improving AI

I did break down and actually attend a couple of talks at the Singularity Summit this year: Vernor Vinge and Peter Norvig.

Peter Norvig gave a talk that would have satisfied any generic group of AI developers. Google is making some frightening progress. The Deep Learning project was the most interesting part of his presentation from an AI-architecture point of view. It's impressive that Google can pair two top-level researchers in the field (Andrew Ng and Geoffrey Hinton) with parallel-processing expert Jeff Dean and scale academic models up onto a functional 1,000-node cluster. Boom: you are identifying cats and faces from unlabeled YouTube videos. It must be sickening to anyone who wants to compete with Google in the AI space.
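For anyone wondering what "learning from unlabeled video" even means mechanically, here is the single-machine toy version of the idea: an autoencoder is trained to reconstruct its own input, so the targets are the inputs themselves and no human annotation is needed; the hidden layer ends up holding learned features. This is my own illustration on a tiny public dataset, not Google's architecture, which was a vastly larger model distributed across roughly a thousand machines.

```python
# Toy unsupervised feature learning: reconstruct the input, keep the hidden layer.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPRegressor

X = load_digits().data / 16.0            # 8x8 grayscale digits; labels never used

autoencoder = MLPRegressor(hidden_layer_sizes=(32,),   # 32 learned feature detectors
                           max_iter=500, random_state=0)
autoencoder.fit(X, X)                    # target = input: no labels required

# Hidden-layer activations = the features the network discovered on its own.
hidden = np.maximum(0, X @ autoencoder.coefs_[0] + autoencoder.intercepts_[0])
print("learned feature matrix:", hidden.shape)          # (1797, 32)
```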

But he never really mentioned friendliness.  I was hoping he would trot out some more theory behind this big data approach.  He gave a similar talk to Monica Anderson’s AI meetup a couple of years ago.  I was there for that and it was pretty cool to see him present to such a small crowd.

At the Singularity Summit this year, he also talked about Google's translation service, which basically derives translations by mapping many, many parallel documents, the same content written in multiple languages. I was hoping to ask him what happens when the algorithm starts consuming translations that were actually created by Google Translate; it's bound to skew the mappings if that happens. But then I realized that Google probably saves every translated document and checks new documents' checksums against previous translations before using them to build mappings. That's hard to picture, though. They manage: A. Mind. Crushingly. Large. Amount. Of. Data.
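Just to make my own guess concrete, and this is purely a sketch of the kind of filter I'm imagining, not Google's actual pipeline, here is the idea in a few lines. The function and variable names are hypothetical.

```python
# Fingerprint every translation the system emits, and refuse to train on any
# "new" document whose fingerprint matches something the system itself produced,
# so machine output never feeds back into the learned mappings.
import hashlib

def fingerprint(text: str) -> str:
    """Checksum of a whitespace-normalized, lowercased document."""
    return hashlib.sha256(" ".join(text.split()).lower().encode()).hexdigest()

emitted_translations: set[str] = set()        # fingerprints of our own output

def record_emitted(translation: str) -> None:
    emitted_translations.add(fingerprint(translation))

def usable_for_training(candidate_doc: str) -> bool:
    """True only if this document was not produced by the system itself."""
    return fingerprint(candidate_doc) not in emitted_translations

record_emitted("Le chat est assis sur le tapis.")
print(usable_for_training("Le chat est assis sur le tapis."))   # False: our own output
print(usable_for_training("Un texte traduit par un humain."))   # True: safe to use
```

Even this toy version shows why the scale is scary: the "set" here would have to hold a fingerprint for every translation ever emitted.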

Vernor Vinge outlined some outcomes that he sees for the Singularity. One crazy idea he puts forth is a digital Gaia in which the world is minutely ornamented with digital sensors coupled to processors and actuators. One day they all spontaneously "wake up" and all hell breaks loose. He describes a reality with all the stability and permanence of the financial markets. I had a vision of my SmartLivingRoom(tm) suddenly reconfiguring itself into a nightmare of yawning jaws and oozing orifices. But in reality, we might just see wild fluctuations in the functionality of computationally enhanced environments: from smart to dumb to incomprehensible.

Next up: augmented intelligence, a neo-neo-cortex provided by technology. This is his preferred scenario. Crowdsourcing is cool, yada yada. Vinge imagines a UI so seamless that accessing the augmentation would feel as effortless as using our native cognitive faculties. I used to like this idea until I started thinking about the security implications. I don't want my brain hacked.

He did make one amazingly succinct point about human-computer synergy. Computers can give us instantaneous recall and amazing processing speed; humans can provide the thing we are best at: wanting things.

Humans want things. For me this cuts to the very heart of the AI question. I always complain that none of these AI geniuses can show us an algorithm to define problems. (No, CEV doesn't count.) Algorithmic problem definition is just another way of saying algorithmic desire definition. Good luck with that one.

All simple human desires seem to arise from biological imperatives.  Maybe artificial life could give you that.   More complex desires are interpersonal and might be impossible to reduce back to metabolic processes.  You may want fame for the status, but the specific type of fame depends on which person or group you are trying to impress.  And that changes throughout your life.

And if we do build artificial life, it may well be that it can only function under constraints similar to those of, uh, non-artificial life. In fact, Terrence Deacon may well be right that constraints are the key to everything. Ahh, the warm fuzzies of embodiment are seeping over me now.

But seriously, SingInst, where is this algorithmic desire going to come from? And once you get that, how the hell are you going to constrain the actions of GodLikeAI again? I know, I know, Gandhi would never change himself into an anti-Gandhi. But we may be like bacteria insisting that our distant offspring would never neglect the all-encompassing wisdom of nutrient gradients.