Future Salon – Future of Health: Adam Bosworth, Christine Peterson, and Faheem Ahmed

I attended my first Future Salon this evening and heard Adam Bosworth, Christine Peterson, and Faheem Ahmed give presentations.  The salon was held at the SAP campus in Mountain View.

Christine Peterson started out the talks with a presentation on Quantified Self, life extension, and personalized medicine.  The audience was mostly aware of QS already, but some expressed disdain for the life extension idea during the Q&A.  One audience member complained that the fountain of youth has been sought for centuries but no one has delivered on the promise of extended life span.  I thought that this was a bit ironic given the steady increase in life span over time and the fact that QS and modern life extension techniques haven’t really been in use long enough to show a longevity effect.  However, Peterson responded with sympathy and actually said that she was more interested in health extension.  She mentioned concierge doctors as a good resource to help with this.

Bosworth talked about his new startup Keas, a corporate wellness app that uses gamification to achieve captology.  He pointed out that personalization is unhelpful in a team environment; having just a few core health goals gives everyone a common experience to share.  He listed four activities for achieving better health: eat less food overall, eat more greens, reduce stress, and exercise more.  He dismissed QS as being for Silicon Valley data geeks who are mostly healthy already.  His focus is on the average American who is overweight, stressed, eats a poor diet, and neglects exercise, because that is where he feels he can do the most good.  He mentioned that he wanted to set aside his work on “legos for adults” and do something to help humanity.

Ahmed talked about his own experience as a caregiver for older members of his family as well as his son.  He presented an app he led the development of at SAP called Care Circles.  This app helps caregivers manage their care plans and team members.  It provides assistance in building care strategies, as well as journals and customizable data trackers.  The social elements allow caregivers to share medical data with anyone they want, which bypasses the HIPAA barriers to social apps that most health providers face.  Ahmed mentioned that Generation X is a sandwich generation, with a larger population of baby boomers to care for as well as a large Generation Y.  I sympathized with this, having helped with the caregiving my girlfriend did during her sister’s cancer.  This tool would have been really useful for keeping track of progress and task lists.

How unlikely is safe AI? Questioning the doomsday scenarios.

I have always been dubious of the assumption that unfriendly AI is the most likely outcome for our future.  The Singularity Institute refers skeptics like me to Eliezer Yudkowsky’s paper: Complex Value Systems are Required to Realize Valuable Futures.  I just reread Yudkowsky’s argument and contrasted it with Alexander Kruel’s counterpoint in H+ magazine.  H+ seems to have several articles that take exception to SI’s positions.  The 2012 H+ conference in San Francisco should be interesting.  I wonder how much it will contrast with the Singularity Summit.

One thing that bothers me about Yudkowsky’s argument is that on the one hand he insists that AI will always do exactly what we tell it to do, not what we mean for it to do, yet on the other hand this rigid instruction set is somehow flexible enough to outsmart all of humanity and tile the solar system with smiley faces.  There is something inconsistent in this position.  How can something be so smart that it can figure out nanotechnology but so stupid that it thinks smiley faces are a good outcome?  It’s sort of a grey goo argument.

It seems ridiculous to even try constraining something with superhuman intelligence. Consider this Nutrient Gradient analogy:

  1. Bacteria value nutrient gradients.
  2. Humans evolved from bacteria, achieving an increase in intelligence comparable to the one a superhuman AI might achieve as it evolves.
  3. A superhuman AI might look upon human values the same way we view bacterial interest in nutrient gradients.  The AI would understand why we think human values are important, but it would see a much broader picture of reality.

Of course this sets aside the problem that humans don’t really have many universally shared values.  Only Western values are cool.  All the rest suck.

And this entire premise that an algorithm can maximize for X doesn’t really hold water when applied to a complex, reflexive system like a human smile.  I mean, how do you code that?  There is a vigorous amount of hand waving involved there.  I can see detecting a smile, but how do you code for all the stuff needed to create change in the world?  A program that can create molecular smiley faces by spraying bits out to the internet?  Really?  But then I just don’t buy recursively self-improving AI in the first place.

Not that I am against the Singularity Institute like some people are, far from it.  GiveWell.org doesn’t think that SI is a good charity to invest in, but I agree with my friend David S. that GiveWell is poorly equipped to even evaluate existential risk (Karnofsky admits existential risk analysis is only a GiveWell Labs project).  I for one am very happy that the Singularity Institute exists.  I disagree that their action might be more dangerous than their inaction.  I would much rather live in the universe where their benevolent AI God rules than one where the DARPA-funded AI God rules.  Worse yet would be a Chinese AI implementing a galaxy-wide police state.

This friendliness question is in some ways a political question.  How should we be ruled?  I was talking with one of the SI-related people at Foresight a couple of years ago, and they commented on how much respect they were developing for the US Constitution.  The balance of powers between the Executive, Legislative, and Judiciary is cool.  It might actually serve as a good blueprint for friendly AI.  Monarchists (and AI singleton subscribers) rightly point out that a good dictator can achieve more good than a democracy can with all its bickering.  But a democracy is more fault tolerant, at least to the degree that it avoids the problem of one bad dictator screwing things up.  Of course Lessig would point out our other problems.  But politics is messy, like all human cultural artifacts.  So again, good luck coding for that.

Samuel Arbesman at Long Now

Samuel Arbesman gave a talk on his new book “The Half-Life of Facts” at the Long Now museum tonight.  This is a good venue to hear authors speak.  It is quite intimate, and there are generally plenty of good questions and discussions afterward.  Arbesman did a good job of fielding comments from the group.  You can see his presentation style in this short video.

His thesis is that there is an order and regularity to the way knowledge changes.  He thinks that studying this can help us organize the knowledge around us.  Because of this change, we should expect some portion of the facts we take for granted now to be overturned.  He points out that doctors are taught in medical school to expect their field to change, and resources such as UpToDate provide a service to help doctors keep track of changes in medical knowledge.  (My personal experience makes me skeptical that many doctors actually take advantage of this sort of thing.)
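
The half-life framing invites a quick back-of-the-envelope calculation.  Here is a minimal sketch of the idea, assuming simple exponential decay and an invented 45-year half-life chosen purely for illustration (not a figure from the talk):

    def surviving_fraction(years, half_life_years):
        """Fraction of facts still standing after `years`, assuming
        simple exponential decay with the given half-life."""
        return 0.5 ** (years / half_life_years)

    # Illustrative only: assume a 45-year half-life for a field's facts.
    HALF_LIFE = 45.0
    for t in (10, 20, 45, 90):
        print(f"after {t:>2} years: {surviving_fraction(t, HALF_LIFE):.0%} of facts still hold")

The point is not the particular number but the shape of the curve: in aggregate the decay is steady and regular, even though no one can say in advance which individual facts will fall.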

Arbesman takes the position that we would all benefit from this approach to learning.  We should learn how to think and how to understand the world, but treat education as a continuing process, which is something I tried to touch on before.  Arbesman did comment that it’s better to rely on Google for current information than to memorize a bunch of facts that may or may not continue to be true.  This takes me back to the ideas of Madeline Levine, whom I’ve mentioned before.  She argues that children should do less homework and more play, because play builds creativity and problem-solving skills.

Another point Arbesman brought up was that there is so much knowledge now that many correlations can be discovered by mining the existing literature and joining together papers that each solve some fraction of a problem.  A speaker at the 2010 CogSci conference at UC Berkeley mentioned that many answers probably go unnoticed and uncorrelated in the literature.  One effort to start detecting these hidden relations in the bioinformatics field is the CoPub project, a text-mining tool developed by Dutch academics and researchers.  Theory[Mine] does an amusing take on this idea by letting users purchase a personalized, AI-derived, unique, and interesting theorem.
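
To make the literature-mining idea a bit more concrete, here is a toy sketch of the simplest signal such tools look for: terms that co-occur across different papers’ abstracts.  The abstracts and terms below are invented for illustration; a real system such as CoPub operates, as I understand it, over curated keyword lists and large abstract corpora.

    from collections import Counter
    from itertools import combinations

    # Toy stand-ins for paper abstracts (invented for illustration).
    abstracts = [
        "gene foo is overexpressed in pancreatic cancer tissue",
        "compound bar inhibits gene foo in cell cultures",
        "compound bar shows low toxicity in mouse models",
    ]

    # Terms of interest; a real tool would use curated keyword thesauri.
    terms = ["gene foo", "compound bar", "pancreatic cancer"]

    pair_counts = Counter()
    for abstract in abstracts:
        present = [t for t in terms if t in abstract.lower()]
        for pair in combinations(sorted(present), 2):
            pair_counts[pair] += 1

    # Terms that co-occur hint at a possible relationship; terms that never
    # co-occur directly but share a partner (here, "gene foo" links
    # "pancreatic cancer" to "compound bar") are the hidden links that
    # literature mining tries to surface.
    for (a, b), n in pair_counts.most_common():
        print(f"{a} <-> {b}: co-occur in {n} abstract(s)")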

Arbesman also suggested that facts in the hard sciences are subject to longer half-lives than facts in biology and the half-life decreases even further for the humanities and medicine.  He mentioned that when physicists colonize other fields they are unpopular and create disruption, but that they bring in useful ideas.  But I wonder if it’s even theoretically possible to reduce sociology to physics.  This is the whole holism/reductionism dichotomy that Monica Anderson loves to explore.

Another point that came up was that while the idea of fact decay should encourage healthy skepticism, we should still try to avoid unhealthy skepticism.  During the question and answer session it was suggested that politically controversial topics such as evolution, global warming, and even GMO labelling are clouded with incorrect facts.  I think a lot of scientists get a little overly defensive about what they term anti-science policy decisions, and they might be incorrectly grouping GMO opponents in with the creationists and global warming denialists.  Hopefully, a better understanding of fact decay will radiate out and attenuate some of the scientific hubris out there.