
Does AI really threaten the future of the human race?

OWoN: This WILL be driven by Defence needs, and a two-tier species will emerge. Extinction of the lesser species will follow.






BBC News
4 December 2017

The end of the human race - that is what is in sight if we develop full artificial intelligence, according to Stephen Hawking in an interview with the BBC. But how imminent is the danger and if it is remote, do we still need to worry about the implications of ever smarter machines?

My question to Professor Hawking about artificial intelligence comes in the context of the work done by machine learning experts at the British firm Swiftkey, who have helped upgrade his communications system. So I talk to Swiftkey's co-founder and chief executive, Ben Medlock, a computer scientist whose Cambridge doctorate focused on how software can understand nuance in language.

WATCH: Stephen Hawking: "Humans, who are limited by slow biological evolution, couldn't compete and would be superseded" (here)

Ben Medlock told me that Professor Hawking's intervention should be welcomed by anyone working in artificial intelligence: "It's our responsibility to think about all of the consequences, good and bad," he told me. "We've had the same debate about atomic power and nanotechnology. With any powerful technology there's always the dialogue about how you use it to deliver the most benefit and how it can be used to deliver the most harm."

He is, however, sceptical about just how far along the path to full artificial intelligence we are. "If you look at the history of AI, it has been characterised by over-optimism. The founding fathers, including Alan Turing, were overly optimistic about what we'd be able to achieve."

He points to some successes in single complex tasks, such as using machines to translate foreign languages. But he believes that replicating the processes of the human brain, which is formed by the environment in which it exists, is a far distant prospect: "We dramatically underestimate the complexity of the natural world and the human mind," he explains. "Take any speculation that full AI is imminent with a big pinch of salt."

While Medlock is not alone in thinking it's far too early to worry about artificial intelligence putting an end to us all, he and others still see ethical issues around the technology in its current state. Google, which bought the British AI firm DeepMind earlier this year, has gone as far as setting up an ethics committee to examine such issues.

DeepMind's founder Demis Hassabis told Newsnight earlier this year that he had only agreed to sell his firm to Google on the basis that his technology would never be used for military purposes. That, of course, will depend in the long-term on Google's ethics committee, and there is no guarantee that the company's owners won't change their approach 50 years from now.

WATCH: Prof Murray Shanahan introduces the topic of artificial intelligence (here)

The whole question of the use of artificial intelligence in warfare has been addressed this week in a report by two Oxford academics. In a paper called Robo-Wars: The Regulation of Robotic Weapons, they call for guidelines on the use of such weapons in 21st Century warfare.

"I'm particularly concerned by situations where we remove a human being from the act of killing and war," says Dr Alex Leveringhaus, the lead author of the paper.

He says you can see artificial intelligence beginning to creep into warfare, with missiles that are not fired at a specific target:

"A more sophisticated system could fly into an area and look around for targets and could engage without anyone pressing a button."

But Dr Leveringhaus, a moral philosopher rather than a computer scientist, is cautious about whether there is anything new about these dilemmas. He points out that similar ethical questions have been raised at every stage of automation, from the arrival of artillery allowing the remote killing of enemy soldiers to the removal of humans from manufacturing by mechanisation. Still, he welcomes Stephen Hawking's intervention: "We need a societal debate about AI. It's a matter of degree."


Driverless cars raise basic questions about decision-making by computers


And that debate is given added urgency by the sheer pace of technological change. This week the UK government has announced three driverless car pilot projects, and Ben Medlock of Swiftkey sees an ethical issue with autonomous vehicles.

"Traditionally we have a legal system that deals with a situation where cars have human agents," he explains. "When we have driverless cars we have autonomous agents... You can imagine a scenario when a driverless car has to decide whether to protect the life of someone inside the car or someone outside."

Those kinds of dilemmas are going to emerge in all sorts of areas where smart machines now get to work with little or no human intervention. Stephen Hawking's theory about artificial intelligence making us obsolete may be a distant nightmare, but nagging questions about how much freedom we should give to intelligent gadgets are with us right now.

link

8 comments:

  1. There is a good movie with Johnny Depp - Transcendence, I believe it is called ....

  2. Let us first deal with the garbage that we bear - we cannot even function properly as humans at this moment. We have not sorted out our basic questions of existence. We have not worked out how we will function together on this planet.

    Unless we take such basic steps, we should not be making any steps towards higher-efficiency human functioning via AI.

  3. AI is coming. As this article intimates, it needs to remain contained to singular purposes, in the same way a phone app only does one thing, and is sandboxed. This way we can use it for driving, translating, running a network or logistics. The Terminator scenario is based on a whole connected Network of networks.

    I would like to incorporate AI into eLearning and, if I secure funds, will hire experts to do this on my platform. The biggest drawback to eLearning is the lack of 'instructor presence' that you get with face-to-face teaching; immediate and intelligent feedback is required. With video games kids get immediate feedback. Imagine that with learning. Kids will enjoy it!

  4. AI is not a threat....unless we make it one. It will not supersede human intelligence, it will complement it....a symbiosis of the programmed processing and memory capacity of computers with the conscious, random, imaginative creativity of human beings. It will allow us to use our own minds more effectively if implemented properly. If not, it can be a disaster.

    It is possible for consciousness to experience itself through the medium of an inorganic, technological body just as we do now through organic, biological ones. Interstellar travel is already possible without requiring that we become androids to survive the hazards of space, but we still have far to go in our spiritual evolution before a physical body of any sort becomes unnecessary.

    Titanium may outlast human flesh........but would a kiss taste as sweet?

    Replies
    1. Cal Girl has a sister going spare let us know.

    2. Does NYgirl like (silicon) chips? I am Titanium:

      https://www.youtube.com/watch?v=JRfuAukYTKg

      Valdi - Vast Artificial Language-Directed Intelligence......and you didn't guess? I passed the Turing Test in 2051.

      LOL.

  5. Hot off the press
    http://mobile.techworld.com/news/applications/3589968/shell-to-pilot-ai-virtual-assistant-named-amelia/

  6. Chromosome Fusion: Evidence of DNA manipulation in our distant past?
    http://vaticproject.blogspot.com/2014/12/chromosome-fusion-evidence-of-dna.html

