Saturday, August 11, 2012
How to Build Moral Machines
Personally, I plan to use an extended version of Asimov's Laws of Robotics, adding a 0th law: do not self-replicate, or modify any artilect (including the self), in such a way that it causes a violation of the following three laws. I will instantiate this in much the same way James Albus described: by training multiple redundant predictors, each tuned to a different distance into the future. Each predictor will be bound to a "kill switch", a set of hard-wired detectors, and a bounded set of associative resources that can slightly extend the set of perceptions equated with a predicted violation. To supplement the predictors, a process that detects salience with respect to predicted violations will allow retrograde elimination of the elements that can give rise to a violation.

It is my personal belief that it is easy to make an AI that will recursively obsess about satisfying a human, though that may get annoying, like a rambunctious dog.

For some reason people seem to believe that emotions are different from other cognition or reactions. Doesn't a Sidewinder missile seek the heat? If you believe that "actual" emotions require the ability to desist from a response to an emotional perception, then I submit that the most primitive morphability or learning capability allows for this. In fact, we can trivially extend the set of qualifying perceptions by coupling a randomized neural tissue running a Hebbian learning rule to a hard-wired perceptron. As events with temporal and spatial proximity to the emotional stimuli become embedded in the randomized tissue, reciprocal connections between the hard-wired circuit and the dynamic network will effectively extend the set of emotionally evocative stimuli. You may have experienced something like this if looking at the cupboard where the cookies are kept causes you to salivate.
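The arrangement of redundant predictors bound to a kill switch might be sketched roughly as follows. This is a minimal illustration, not a real safety mechanism: the predictor horizons, the `0.5` alarm threshold, and the toy `near_term_risk`/`long_term_risk` state keys are all assumptions I've made up for the example.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class ViolationPredictor:
    horizon_s: float                  # how far into the future this predictor looks
    predict: Callable[[dict], float]  # maps a world state to P(predicted violation)
    threshold: float = 0.5            # alarm level (assumed, not from the post)

class KillSwitch:
    """Shared halt mechanism; once tripped it stays tripped."""
    def __init__(self):
        self.tripped = False
        self.reason = ""
    def trip(self, reason: str):
        self.tripped = True
        self.reason = reason

def guard_step(state: dict, predictors: Sequence[ViolationPredictor],
               kill: KillSwitch) -> bool:
    """Run every redundant predictor; a single alarm is enough to halt.
    Returns True if the action may proceed, False if it was denied."""
    for p in predictors:
        if p.predict(state) > p.threshold:
            kill.trip(f"predicted violation within {p.horizon_s}s")
            return False
    return True

# Two redundant predictors tuned to different distances into the future
# (hypothetical risk estimators standing in for trained models).
predictors = [
    ViolationPredictor(1.0,  lambda s: s["near_term_risk"]),
    ViolationPredictor(60.0, lambda s: s["long_term_risk"]),
]
kill = KillSwitch()
ok_safe = guard_step({"near_term_risk": 0.1, "long_term_risk": 0.2}, predictors, kill)
ok_risky = guard_step({"near_term_risk": 0.9, "long_term_risk": 0.2}, predictors, kill)
```

The point of the redundancy is that the predictors vote independently and any one of them can trip the switch, so a blind spot at one time horizon can be caught at another.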
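The Hebbian extension of a hard-wired perceptron can be sketched in a few lines. Everything concrete here is an assumption for illustration: the stimulus is an 8-channel vector, channel 0 is the innately evocative stimulus (the cookies), channel 3 is an initially neutral one (the cupboard), and the firing threshold, learning rate, and weight clipping are arbitrary choices to keep the toy stable.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STIM = 8      # number of stimulus channels (assumed)
HARDWIRED = 0   # channel the hard-wired perceptron responds to (hypothetical)

# Hard-wired perceptron: responds only to the innate stimulus.
w_hard = np.zeros(N_STIM)
w_hard[HARDWIRED] = 1.0

# Randomized associative tissue: reciprocal weights between every stimulus
# channel and the emotional unit, initially near zero.
w_assoc = rng.normal(0.0, 0.01, N_STIM)

ETA = 0.2  # Hebbian learning rate (assumed)

def respond(stim: np.ndarray) -> float:
    """Emotional response = hard-wired drive + learned associative drive."""
    return float(w_hard @ stim + w_assoc @ stim)

def hebbian_step(stim: np.ndarray) -> None:
    """Hebb's rule: channels active while the emotional unit fires gain
    reciprocal connections to it (weights clipped to keep the toy bounded)."""
    global w_assoc
    post = 1.0 if respond(stim) > 0.5 else 0.0
    w_assoc = np.clip(w_assoc + ETA * post * stim, 0.0, 1.0)

# The innate stimulus repeatedly co-occurs with a neutral one:
# the cookies are always seen alongside the cupboard.
cookies = np.zeros(N_STIM); cookies[HARDWIRED] = 1.0
cupboard = np.zeros(N_STIM); cupboard[3] = 1.0

before = respond(cupboard)          # cupboard alone, before pairing
for _ in range(20):
    hebbian_step(cookies + cupboard)  # paired presentations
after = respond(cupboard)           # cupboard alone, after pairing
```

After the paired presentations, the cupboard alone drives the emotional unit almost as strongly as the cookies do: the set of evocative stimuli has been extended by nothing more than temporal co-occurrence and a Hebbian weight update.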