I went to see Daniel Suarez read from his latest book, Kill Decision, at the Long Now this evening. I had originally been introduced to his work at a previous Long Now talk he gave years ago in support of his first book, Daemon. Daemon was about ways in which a bunch of narrow AIs could be cobbled together to form a deadly system. Kill Decision focuses on the problems around weaponizing autonomous drones.

Suarez is particularly concerned about allowing algorithms to kill humans. He believes that these "kill decisions" should be made by humans and that treaties should be created to restrict the use of autonomous drones. Suarez suggested that there has been a historical trend in warfare requiring the complicity of more and more people over the years. He compared the relatively few knights required to wage battles in the Middle Ages with the hundreds of thousands of soldiers who must cooperate to conduct modern wars. He argues that autonomous drones would reverse this trend and allow even a single person to wage a battle without the complicity of any other humans.

There was a very lively discussion, and it was suggested that autonomous drones are not unlike other modern weapons in how separated an attacker can be from the actual killing. Suarez stuck by his guns and insisted that it's important that humans, and not algorithms, make the actual decisions. He acknowledged that humans still do make horrible decisions that result in many deaths, but pointed out how much worse it could be if the process were automated. I suggested that if drone warfare followed the pattern of cyberwar, as Suarez suggested, then we could expect to see hackers contributing to the defense against automated drones. Alex P. suggested that we should start an open-source anti-drone drone project. I like that idea. Technology is often a double-edged sword, but there always seem to be more people willing to use it to help than to hurt (barely).
I had a conversation at CFAR last night in which we discussed the consequences of an objectivist vs. a constructivist viewpoint. An objectivist statement might sound something like "there is a reality that exists in the absence of any observers." A constructivist response might be "properties or characteristics of reality are a function of the observer's coupling with reality." And an objectivist reaction would be "well, duh. Tell me something I don't know." So I doubt that there are really any hard objectivists. But then what good is constructivism?
One difference might be that the constructivist viewpoint privileges the observer's role in defining reality. The constructivist might be biased to pay more attention to the observer when considering definitions of reality. In this way it might be compared to post-modernism, a comparison I hadn't thought of but which was suggested by a member of CFAR, and it does make sense. So we might expect people slavishly sticking to objectivist viewpoints to be less aware of observer biases. And this is where I reach the conclusion that I haven't met any strict objectivists: I don't know anyone who might otherwise be labelled an objectivist who isn't interested in cognitive biases.
Another related example is the "brain-centric" view of cognition criticized by constructivists (enactivists) like Noë or Thompson. Those who hold a "brain-centric" view of cognition might be accused of overlooking ways in which the body or inter-subjective experience shapes cognition. So I might expect someone who subscribes too heavily to the brain-centric view of cognition to reject the findings of Christakis on social influences on behavior. However, I have yet to meet this straw-man brain-centric individual. The enactivists are presumably fighting against someone, though. I guess I will dig through the literature and see if I can find any viewpoints to populate the other side of this argument.
At what point will marketing move to direct brainwave manipulation?