This became a billionaire before he could legally drink says he has seen an AI program that could end the world within a year if it was unleashed.
If we as a society believe in the possibility of a technological singularity, a natural question to ask is: what are the moral and practical implications of such an event occurring? Countless books, films, and television series have already attempted to address this question. One such film is The Matrix, which depicts a world where the majority of humankind has been enslaved by machines to serve as an energy source. While this is certainly one of the more pessimistic outlooks on the consequences of a technological singularity, there are legitimate concerns as to what the singularity would mean for our society.
In his 2007 paper, “The Basic AI Drives,” Stephen Omohundro argues that sufficiently advanced A.I. systems of any design will contain “drives,” or “tendencies which will be present unless explicitly counteracted.” His point is that an A.I. system’s “desire” to self-improve may lead to unintended consequences. Even if an A.I. is not explicitly programmed with a hatred of humans, it may conclude that the most efficient way to achieve its stated purpose is by harming others.
Nick Hay provides a great example of an A.I. behaving in a way not intended by its creators. In his 2007 article, “The Stamp Collecting Device,” Hay describes how something as well intentioned as a super-intelligent stamp-collecting device could destroy the world. Assuming the device has the ability to compute every possible outcome given its possible set of actions, it will choose whichever action enables it to acquire the most stamps for its user. Over the course of its calculations, it will figure out that “most sequences which lead to vast numbers of stamps are destructive. One sequence hacks into computers, directing them to collect credit card numbers and bid at stamp auction. 170,000,000 stamps. Another sends a virus which makes all printers create stamps. 17,000,000,000 stamps.” The creation of super-intelligent systems has dangerous implications for even the most benign task.
Omohundro goes on to argue that the only way to prevent this from happening is by explicitly accounting for this possibility in the design of the system. In particular he advocates the development of “utility engineering.” An A.I.’s behavior is determined by the actions it deems to be utility maximizing, i.e. the actions that allow to best achieve its goal. In order to prevent the A.I. from taking actions that negatively impact humans, we need to develop utility functions that lead to the behaviors and consequences we desire. One of the earliest examples of this kind of thinking are science-fiction writer Isaac Asimov’s Three Laws of Robotics which he outlines in his famous novel I, Robot.
Another solution to the problem of unfriendly A.I. that is commonly proposed is the idea of an A.I. box. This would involve confining the A.I. to a simulated world where its actions would not be allowed to affect the external world. However, this idea leads to its own set of problems.
A common concern among Singularitarians who discuss the possibility of an A.I. box is the rapid development of nanotechnology. Many people, such as Eliezer Yudkowsky, advocate that the development of seed A.I. should precede nanotechnology. Seed A.I. is a system that can modify its own source code to make itself smarter.
In “Artificial Intelligence as a Positive and Negative Factor in Global Risk,” Yudkowsky explains that a sufficiently intelligent seed A.I. could escape its confinement through nanotechnology. In particular, the A.I. could crack the protein-folding problem, enabling it to “generate DNA strings whose folded peptide sequences fill specific functional roles in a complex chemical interaction.” That A.I. could then “escape” to the outside world by manipulating humans to develop the nanosystems it designed.
Another major concern is the development of self-sufficient machines for use in combat. The US armed forces are quickly heading in the direction of unmanned tanks, fighter jets, and other armed vehicles. In fact, in 2009 Congress mandated that one third of ground combat vehicles must be unmanned by 2015. A new Navy funded report urges against a hasty deployment of war robots and advocates that the code for these robots contain “ethics subroutines,” to prevent a Terminator-like scenario.
In September 2009, a US predator UAV carrying several hellfire missiles had to be shot down by a manned fighter jet. The controller lost positive contact with the unmanned aircraft as it was on its way out of Afghani airspace. Luckily, there were no civilian casualties, but such an incident is not to be taken lightly. The development of progressively more advanced robots could potentially be a wonderful step towards a better future, but we must make sure that we prepare for all contingencies as we forge ahead. – Source
Research Scientist Steve Omohundro’s Net Worth : A Billionaire in the Making?
In the past few months, artificial intelligence (AI) has skyrocketed. Many scientists have researched the probability of AI taking over the world, and Steve Omohundro is one of them.
Steve Omohundro is an American computer scientist born in 1959.
He obtained his Physics and Mathematical degree from Stanford University, and later, he did his Ph.D. in Physics from UC Berkeley.
After graduation, Steve was not limited to just learning but also started teaching as a computer science professor at the University of Illinois.
Further moving into his life, the scientist co-founded the Centre of Complex System Research.
Steve also became the Chief Scientist of AIBrain and also became founder and CEO of Possibility Research.
Currently, the AI genius is working at Facebook as a research scientist on AI-based simulation. – Source
Read more on these Tags: Stephen Omohundro