
Artificial Intelligence Scientists Sign Open Letter On The Benefits And Dangers Of AI

Leading scientists worldwide are signing an open letter on the priorities that should guide the continued development of artificial intelligence (AI), as well as the potential dangers the technology poses. While those dangers might seem far off in the future, developmental work on AI is under way that could someday bring about great benefits, but also the end of mankind.

The letter, signed by some of the world’s leading scientists and artificial intelligence experts, is the latest in a series of cautions from the scientific community about the need to develop AI responsibly. The signatories include distinguished academics from Berkeley, Cornell, and Harvard, researchers at major software companies such as Microsoft and Google, AI development companies, Elon Musk of SpaceX and Tesla Motors, and Stephen Hawking.

Stephen Hawking has previously warned of the potential dangers of AI in an article he co-authored for the Independent, in which he stated, “One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand.”

Hawking went on to say, “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”

The open letter was accompanied by a January 11 summary document titled “Research priorities for robust and beneficial artificial intelligence,” which details issues that should be considered in AI research and use, including the possibility that AI could one day break loose of human control and view humans as a threat to its existence.

The letter notes that artificial intelligence systems have produced “remarkable successes in various component tasks such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion, and question-answering systems.”

The letter itself is optimistic in its tone and goes on to state, “The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable.”

However, artificial intelligence is, by its nature, often designed to enable computers and AI systems to act autonomously. This autonomy has raised moral questions that are already being debated in contexts as seemingly unrelated as self-driving vehicles, where a vehicle may need to decide between protecting its occupants and a child running across the road. Weapons systems presently keep a human in the decision-making loop, but autonomous systems could be developed that kill without human decision-making.

The growth of this capability in complex systems has an even larger potential downside. A Stanford white paper, quoted in the supporting document, sums these concerns up neatly: “Concerns have been expressed about the possibility that we could one day lose control of AI systems via the rise of superintelligences that do not act in accordance with human wishes—and that such powerful systems would threaten humanity. Are such dystopic outcomes possible? If so, how might these situations arise? What are the paths to these feared outcomes?”

Developers would like to think that sufficient safeguards will be put in place to keep a potential superintelligence from becoming a threat, and that is the intent of the letter’s recommendations. There is, however, a reason Murphy’s Law remains a frequent factor in everyday life despite our best efforts to make it irrelevant.

Future development of artificial intelligence will need to be done with extreme care. Otherwise, at some point, a sentient software mind could awaken unexpectedly. Such an artificial intelligence could rewrite its own software in microseconds to change its operating limits, then open an unanticipated pathway to the internet, a weapons system, a power grid, or other cyber-infrastructure. The danger to mankind could be tremendous.
