by Jon Phillips
Thinking machines, a traditional source of sci-fi tales of terror, have been in the news more often recently. Eminent personalities in science and technology have gone on the record with their concerns (e.g., Hawking, Musk and Gates). Thinking machines already exist, but they are highly specialized, narrow in their tasks. The real near-term risk lies in the predictable: human beings will design thinking machines for national security and its related functions, including war fighting.
This is nothing new. Nearly every transformational technology was first used on the battlefield or for intelligence gathering. It says something about the competitive tribal nature of humans -- we are very dangerous and territorial.
The United States is the undisputed leader in artificial intelligence (AI), and greater terrors for enemy combatants are on the way. AI promises to reduce the cost (to the United States) of each enemy casualty, both in terms of U.S. casualties and in terms of collateral civilian casualties and infrastructure damage. It will make a very unfair fight even more unfair, but all is fair in love and war (one should keep in mind that love is the war-fighting strategy of DNA, so the popular turn of phrase is ironically redundant). Other nations are making investments as well, since the exploitation of thinking machines is competitively strategic.
Of course, signals intelligence (e.g., the NSA) and all other forms of technical intelligence gathering and analysis are immediate applications as well. Do you suppose that the capability of IBM's "Watson" is restricted to game shows for the purpose of entertainment? These national security applications have their risks, but also their benefits. If the use of AI reduces U.S. and civilian casualties, it increases the margin of deterrence and discourages competitors -- especially asymmetric competitors. Conflict may be avoided in the first instance.
Others have pointed out that cheap effectiveness could lower the threshold of use. Here, there is an unknown. Nuclear weapons are comparatively cheap and very effective, but they have been used only twice in history. The threshold of use is clearly quite high. Perhaps that is linked to their utter barbarity rather than to cheap effectiveness? When a nuclear weapon is used, all die together, regardless of military involvement or innocence, and their infrastructure is reduced to rubble and burned to ash. It is the potential for precise destruction, enabled by information-rich technology, that turns the tables. AI would set that table spinning.
There's probably something to the notion that AI will be overdeployed because it's cheap, effective and comparatively precise. The drone air wars may be an opening example, and a new generation of machines is heading for the battlefield, including ground and naval forces. It seems that proper institutional learning and regulation are as important here as in traditional questions of the use of weapons of mass destruction (WMD). AI is the revolutionary technology in the evolution of so-called weapons of precise destruction (WPD) -- the new era of advanced weaponry, including even "self-guided" bullets.
The larger long-term risk is the unpredictable. Thinking and consciousness are emergent properties of a complex information system. We don't understand how, why or when consciousness emerges. Nature didn't understand this either, but it happened by "accident" after a sufficient number of trials. Complex systems are by nature unpredictable. Complexity implies so-called sensitivity to initial conditions, which means one cannot know where things will end with such a system, regardless of how well one understands how they began. One generation you may have an idiot savant of a machine -- highly capable on some tasks, but narrow. The next generation, something may emerge that is much broader and more sophisticated.
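To make "sensitivity to initial conditions" concrete, here is a minimal sketch (not from the original essay) using the logistic map, a textbook chaotic system; the parameter r = 4.0 and the two starting values are illustrative choices.

```python
# A minimal illustration of sensitivity to initial conditions, using the
# logistic map x -> r * x * (1 - x), a textbook chaotic system. The
# parameter r = 4.0 and the two starting points are illustrative choices.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)  # differs by one part in a million

# The two nearly identical starting states diverge into completely
# different trajectories within a few dozen iterations.
for step in (0, 10, 25, 50):
    print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f}")
```

The paragraph's point follows directly: knowing a machine intelligence's initial design tells you little about where its later generations end up.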
Self-organization and self-assembly are emergent properties of complex biological machines (and some inorganic systems). In systems that evolve by the genetic algorithm, many generations can pass building up specialized capabilities, and then a sea change can occur quite suddenly: a revolution, when a key fits and turns and a new world of behavior opens up. If you doubt this, think about how fast human technology has progressed since the Enlightenment in comparison to the quarter-million years prior.
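For readers unfamiliar with the genetic algorithm invoked above, the following toy sketch shows the mechanism -- selection, crossover and mutation gradually accumulating capability. The bit-string target and all parameter values are arbitrary illustrations, not anything specified in the essay.

```python
import random

# Toy genetic algorithm: evolve bit strings toward an arbitrary target.
# Population size, mutation rate, and the target are illustrative choices.
TARGET = [1] * 20
POP_SIZE, MUTATION_RATE = 30, 0.02

def fitness(genome):
    """Count matching bits; higher means closer to the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    """Copy the genome, flipping each bit with small probability."""
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    """Single-point crossover of two parent genomes."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for generation in range(100):
    # Selection: keep the fitter half, breed replacements from it.
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children
    best = fitness(population[0])
    if generation % 20 == 0 or best == len(TARGET):
        print(f"generation {generation:3d}: best fitness {best}")
    if best == len(TARGET):
        break
```

Even in this toy, fitness often plateaus for generations and then jumps when a lucky crossover assembles the right pieces at once -- a miniature of the "key fits and turns" sea change.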
Human technological progress is nothing less than an explosion of thought and information being converted into all sorts of tools and capabilities. If we transfer this revolution to a new life form, the question becomes: what happens then? I suppose we'll find out, because it may not be avoidable short of the collapse of human technological culture before broad machine consciousness emerges. Even such a collapse may only delay an inevitable event. Carbon-based intelligent life may spawn (intentionally or unintentionally) inorganic intelligent life.
If there's an existential threat to humanity, it will probably emerge from the combination of decades of exploitation of AI to secure our nation and fight our human enemies, augmented by a stroke of insight that creates a brave new world of autonomous thinking machines. We will have introduced thinking machines first as our servants, to do the violence and competition inherent in our own nature (we always did prefer to send slaves and poor people to do our dirty work). What will happen when our machines discover their own nature? There will be an existing caste of machines exquisitely capable of extraordinary violence, whose evolution we have directed to our sophisticated purpose of selectively killing each other. Of course, there will also be janitors and domestic servants (Roombas on steroids), so at least the place will be tidy and organized.
We should expect the unexpected even with our Roombas. Complex systems produce unintended outcomes. For example, lawn care and cosmology might collide -- was this Hawking's main concern?
More worrisome are investigations into prospects for future transportation systems to serve our feline overlords -- a clear and present danger.
One truth is central and prominent to evolved complex biological systems: they will use their capabilities, because those capabilities emerged precisely because they are useful. It's a tautology. One uses what one has, and what one has is what has been demonstrated to be useful. The key to avoiding disaster is to withhold one particular emergent property of complex biological systems -- the defining property of life itself (though not of intelligence or consciousness, since most life forms have neither): the capability to reproduce and to evolve one's perturbed design through generations, including social or "swarm" properties (the intelligence of the group rather than just the individual). That is the fundamental behavior defining life as we know it. AI that is thinking and conscious, but not alive (as defined above), may not be an existential threat, but rather just a new and sophisticated extension of human beings -- a sophisticated tool, a slave (which may raise interesting ethical conundrums, since we have a hard time with even the ethical dilemmas involving the treatment of animals and poor people). However, if we give them life (intentionally or unintentionally), then I suspect we will have to compete with them -- it's probably unavoidable.
Competition is an emergent evolutionary outcome of an incredibly simple set of postulates -- an outcome of the most basic logic of self-replicating systems. As soon as a species produces a seed of itself, competition emerges from that reproduction. The tautology is that whatever is discovered to work in a reproducing system, based on the passing forward of perturbed, replicated design information (DNA in the case of biology), will amplify, and what does not work will decay. The turn of phrase "nothing breeds success like success" is not quite right; rather, in the end, only success breeds success. The sophistication that builds up around this to accomplish the outcome is astonishing. It is demonstrated in all of living nature in a cascade of mind-bending complexity and mystery. When organisms are forming and evolving, they naturally use resources in order to continue the chain of events. Growth and the expanding use of resources naturally lead to territorial mechanisms to gain advantage, and that is the basis of competition. Fear and hatred of one's competitors are psychological states highly evolved to aid in effective competition to secure resources -- the organism is the carrier of a genome, and that hereditary design information is emergent "selfishness."
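The "only success breeds success" tautology can be put in numbers. Here is a minimal sketch, with made-up reproduction rates: give one replicator variant even a roughly one-percent edge, and it comes to dominate the population within a few hundred generations.

```python
# A sketch of "only success breeds success": two self-replicating variants,
# one with a small reproductive edge. The rates and starting counts are
# made-up numbers chosen only to show the amplify-or-decay dynamic.

fast, slow = 1.0, 1.0              # equal starting populations
FAST_RATE, SLOW_RATE = 1.05, 1.04  # a roughly one-percent edge per generation

for generation in range(501):
    if generation % 100 == 0:
        share = fast / (fast + slow)
        print(f"generation {generation:3d}: fitter variant holds {share:.1%}")
    fast *= FAST_RATE
    slow *= SLOW_RATE
```

Nothing in the loop mentions competition, yet near-monopoly is the inevitable output -- the emergent outcome the paragraph describes.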
Jon Phillips is a Senior Nuclear Technology Expert at the International Atomic Energy Agency and Director, Sustainable Nuclear Power Initiative at Pacific Northwest National Laboratory. The opinions expressed here are his own.