Artificial Intelligence – Are the Experts Concerned?

Marina Stedile | May 27, 2019

As AI technology advances and seeps deeper into our daily lives, its potential to create dangerous situations is becoming more apparent. A Tesla Model 3 owner in California died while using the car’s Autopilot feature. In Arizona, a self-driving Uber vehicle hit and killed a pedestrian (though there was a driver behind the wheel).

Other instances have been more insidious. For example, when IBM’s Watson was tasked with helping physicians diagnose cancer patients, it gave numerous “unsafe and incorrect treatment recommendations.”

Below are views and quotes from some highly respected experts, weighing in on the threat that AI poses to the future of humanity, and on what we can do to ensure that AI is an aid to the human race rather than a destructive force.

Unpredictable behavior

Stephen Hawking

AI TECHNOLOGY COULD BE IMPOSSIBLE TO CONTROL

The late Stephen Hawking, world-renowned astrophysicist and author of A Brief History of Time, believed that artificial intelligence would be impossible to control in the long term, and could quickly surpass humanity if given an opportunity:

“One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”

Elon Musk

REGULATION WILL BE ESSENTIAL

Few technologists have been as outspoken about the perils of AI as Tesla chief executive Elon Musk. Though his tweets about AI often take an alarmist tone, Musk’s warnings are as plausible as they are sensational:

“I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.”

Musk believes that proper regulatory oversight will be crucial to safeguarding humanity’s future as AI networks become increasingly sophisticated and are entrusted with mission-critical responsibilities:

“Got to regulate AI/robotics like we do food, drugs, aircraft & cars. Public risks require public oversight. Getting rid of the FAA won’t make flying safer. They’re there for good reason.”

Tim Urban

WE CANNOT REGULATE TECHNOLOGY THAT WE CANNOT PREDICT

Tim Urban, blogger and creator of Wait But Why, believes the real danger of AI and artificial superintelligence (ASI) is that it is inherently unknowable. According to Urban, there is simply no way we can predict the behavior of ASI:

“And since we just established that it’s a hopeless activity to try to understand the power of a machine only two steps above us, let’s very concretely state once and for all that there is no way to know what ASI will do or what the consequences will be for us. Anyone who pretends otherwise doesn’t understand what superintelligence means.”

Oren Etzioni

DEEP LEARNING PROGRAMS LACK COMMON SENSE

Considerable problems of bias and neutrality aside, one of the most significant challenges facing AI researchers is how to give neural networks the kind of decision-making and rationalization skills we learn as children. According to Dr. Oren Etzioni, chief executive of the Allen Institute for Artificial Intelligence, common sense is even less common in AI systems than it is in most human beings — a drawback that could create additional difficulties with future AI networks:

“A huge problem on the horizon is endowing AI programs with common sense. Even little kids have it, but no deep learning program does.”

Nick Bostrom

WE AREN’T READY FOR THE CHALLENGES POSED BY AI

Academic researcher and writer Nick Bostrom, author of Superintelligence: Paths, Dangers, Strategies, shares Stephen Hawking’s belief that AI could rapidly outpace humanity’s ability to control it:

“Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct. Superintelligence is a challenge for which we are not ready now and will not be ready for a long time. We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound. For a child with an undetonated bomb in its hands, a sensible thing to do would be to put it down gently, quickly back out of the room, and contact the nearest adult. Yet what we have here is not one child but many, each with access to an independent trigger mechanism. The chances that we will all find the sense to put down the dangerous stuff seem almost negligible. Some little idiot is bound to press the ignite button just to see what happens.”

Political instability and warfare

Vladimir Putin

AI WILL HAVE A PROFOUND IMPACT ON GLOBAL POLITICS

World leaders need little convincing of AI’s unprecedented capacity to reshape the geopolitical landscape. Russian President Vladimir Putin, for example, firmly believes that mastery of AI technology will have a profound impact on global political power:

“Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”

Bonnie Docherty

KILLING MACHINES THAT LACK MORALITY

Bonnie Docherty, associate director of Armed Conflict and Civilian Protection at the International Human Rights Clinic at Harvard Law School, believes that we must stop the development of weaponized AI before it’s too late:

“If this type of technology is not stopped now, it will lead to an arms race. If one state develops it, then another state will develop it. And machines that lack morality and mortality should not be given power to kill.”

Max Erik Tegmark

INFORMATION WARFARE WILL BE AN EVEN GREATER THREAT

Technological advancements such as autonomous vehicles represent a paradigm shift in human society. According to Max Erik Tegmark, physicist and professor at the Massachusetts Institute of Technology, they also represent weaknesses that rogue actors will be able to exploit in future wars:

“The more automated society gets and the more powerful the attacking AI becomes, the more devastating cyberwarfare can be. If you can hack and crash your enemy’s self-driving cars, auto-piloted planes, nuclear reactors, industrial robots, communication systems, financial systems and power grids, then you can effectively crash his economy and cripple his defenses. If you can hack some of his weapons systems as well, even better.”

 

Ethical and societal impacts

Tim Cook

AI MUST RESPECT HUMAN VALUES

One aspect of AI that is discussed far less frequently than its potential for destruction is whether AI can be taught to respect human ethics.

Apple CEO Tim Cook has long been an outspoken advocate for user privacy. He argues that creating AI systems that can interpret and value ethical approaches to society’s problems is a serious responsibility to future generations that companies like Apple must reckon with:

“Advancing AI by collecting huge personal profiles is laziness, not efficiency. For artificial intelligence to be truly smart, it must respect human values, including privacy. If we get this wrong, the dangers are profound. We can achieve both great artificial intelligence and great privacy standards. It’s not only a possibility, it is a responsibility. In the pursuit of artificial intelligence, we should not sacrifice the humanity, creativity, and ingenuity that define our human intelligence.”

Olga Russakovsky

DIVERSITY IS ESSENTIAL TO SOLVING DIFFICULT PROBLEMS WITH AI

The under-representation of women in computer science and information technology is an ongoing concern for business leaders, technology companies, and academia. Author and machine vision expert Olga Russakovsky says greater diversity in the AI field is essential if the technology is to solve society’s most difficult problems:

“We are bringing the same kind of people over and over into the field. And I think that’s actually going to harm us very seriously down the line…diversity of thought really injects creativity into the field and allows us to think very broadly about what we should be doing and what type of problems we should be tackling, rather than just coming at it from one particular angle.”

Theresa May

MAKING AI WORK FOR EVERYBODY

British Prime Minister Theresa May has long been an outspoken advocate of AI technology. She acknowledges the inherent risks in the technology’s advancement, and emphasizes that properly channeling its power is crucial for humanity:

“British-based companies…are pioneering the use of data science and Artificial Intelligence to protect companies from money laundering, fraud, cyber-crime and terrorism. In all these ways, harnessing the power of technology is not just in all our interests — but fundamental to the advance of humanity…Right across the long sweep of history — from the invention of electricity to the advent of factory production — time and again initially disquieting innovations have delivered previously unthinkable advances and we have found the way to make those changes work for all our people. Now we must find the way to do so again.”

Kenneth Stanley

AI COULD HARM ALREADY VULNERABLE PEOPLE

AI researcher Kenneth Stanley believes the technology could represent a grave danger to the most vulnerable members of society, a problem that requires a holistic approach to technological oversight:

“I think that the most obvious concern is when AI is used to hurt people. There are a lot of different applications where you can imagine that happening. We have to be really careful about letting that bad side get out. [Sorting out how to keep AI responsible is] a very tricky question; it has many more dimensions than just the scientific. That means all of society does need to be involved in answering it.”

Tabitha Goldstaub

MORE BIAS, MORE MISOGYNY, FEWER OPPORTUNITIES

Tabitha Goldstaub, co-founder of AI market intelligence platform CognitionX, explains that failing to account for gender bias as AI technology advances could be catastrophic for women’s rights:

“We’re ending up coding into our society even more bias, and more misogyny and less opportunity for women. We could get transported back to the dark ages, pre-women’s lib, if we don’t get this right.”

The danger of unequal gender representation in AI isn’t solely ideological:

“Men and women have different symptoms when having a heart attack — imagine if you trained an AI to only recognize male symptoms. You’d have half the population dying from heart attacks unnecessarily.”

Tess Posner

AI IS ANYTHING BUT PERFECT

Tess Posner, CEO of nonprofit advocacy group AI4ALL, is keenly aware of AI’s limitations, especially when it comes to perpetuating existing societal biases:

“A lot of people assume that artificial intelligence…is just correct and it has no errors. But we know that that’s not true, because there’s been a lot of research lately on these examples of being incorrect and biased in ways that amplify or reflect our existing societal biases.”

Andrew Ng

ETHICS IN AI IS ABOUT MORE THAN ‘GOOD’ OR ‘EVIL’

Andrew Ng, co-founder of Google Brain and former chief scientist of Baidu, believes questions about the ethics of AI are much bigger than individual use cases:

“Of the things that worry me about AI, job displacement is really high up. We need to make sure that wealth we create [through AI] is distributed in a fair and equitable way. Ethics to me isn’t about making sure your robot doesn’t turn evil. It’s about really thinking through, what is the society we’re building? And making sure that it’s a fair and transparent and equitable one.”

Sundar Pichai

TECH COMPANIES MUST DEVELOP AI RESPONSIBLY

Google has been using AI and neural networks for several years, but CEO Sundar Pichai believes that increasingly sophisticated AI tech must be used responsibly:

“We recognize that such powerful technology raises equally powerful questions about its use. How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right.”

Satya Nadella

OVERCOMING HUMAN BIAS IN AI IS VITAL

Like Pichai and other leading technology executives, Microsoft CEO Satya Nadella has warned of the risk of human biases being built into AI technology, a risk that demands a deliberate, conscientious approach to developing AI applications:

“Technology developments just don’t happen; they happen because of us as humans making design choices—and those design choices need to be grounded in principles and ethics, and that’s the best way to ensure a future we all want.”

Nadella explains that part of the problem is that human language — the building blocks of machine-learning systems and AI networks — is inherently biased. Unless researchers consciously account for such biases, “neutral” technology becomes deeply flawed:

“One of the fundamental challenges of AI, especially around language understanding, is that the models that pick up language learn from the corpus of human data. Unfortunately the corpus of human data is full of biases, so you need to invest in tooling that allows you to de-bias when you model language.”

 

Melinda Gates

MEN ARE NOT THE ONLY ‘REAL TECHNOLOGISTS’

Despite historical racial and gender disparities in the technology sector, more women and people of color are developing the technologies of tomorrow than ever before. But although progress has been made in recent years, philanthropist Melinda Gates of the Bill & Melinda Gates Foundation believes that complacency could undermine much of this work and exacerbate existing problems:

“If we don’t get women and people of color at the table — real technologists doing the real work — we will bias systems. Trying to reverse that a decade or two from now will be so much more difficult, if not close to impossible.”

Geoffrey Hinton

AI COULD EXACERBATE SOCIAL INJUSTICE

World-renowned computer scientist and “Godfather of Deep Learning” Geoffrey Hinton has spoken candidly about the societal implications of AI for many years.

Echoing the warnings of Joanna Bryson and David Robinson, Hinton has spoken of the potential for AI technology to exacerbate systemic inequality, which he believes is a direct result of the flawed nature of many social systems:

“If you can dramatically increase productivity and make more goodies to go around, that should be a good thing. Whether or not it turns out to be a good thing depends entirely on the social system, and doesn’t depend at all on the technology. People are looking at the technology as if the technological advances are a problem. The problem is in the social systems, and whether we’re going to have a social system that shares fairly, or one that focuses all the improvement on the 1% and treats the rest of the people like dirt. That’s nothing to do with technology. . . . I hope the rewards will outweigh the downsides, but I don’t know whether they will, and that’s an issue of social systems, not with the technology.”

 

Surpassing human intelligence

Ray Kurzweil

THE DISTINCTION BETWEEN AI AND HUMANITY IS ALREADY BLURRING

Futurist and inventor Ray Kurzweil’s work focuses on what he calls “the singularity” — the point at which artificial superintelligence (ASI) will surpass the human brain and allow people to live forever. He says the merging of man and machine is inevitable:

“We’re merging with these non-biological technologies. We’re already on that path. I mean, this little mobile phone I’m carrying on my belt is not yet inside my physical body, but that’s an arbitrary distinction. It is part of who I am—not necessarily the phone itself, but the connection to the cloud and all the resources I can access there.”

Yann LeCun

OVERSTATING THE DANGERS OF AI’S CAPABILITIES

To Yann LeCun, chief artificial intelligence scientist at Facebook AI Research, the biggest problem with AI isn’t its potentially nefarious applications, but rather a profound misunderstanding of the technology itself:

“We’re very far from having machines that can learn the most basic things about the world in the way humans and animals can do. Like, yes, in particular areas machines have superhuman performance, but in terms of general intelligence we’re not even close to a rat. This makes a lot of questions people are asking themselves premature. That’s not to say we shouldn’t think about them, but there’s no danger in the immediate or even medium term. There are real dangers in the department of AI, real risks, but they’re not Terminator scenarios.”

Clive Sinclair

HUMANITY MAY NOT SURVIVE AI

British inventor Sir Clive Sinclair, creator of the ZX Spectrum home computer, believes AI’s rise to dominance is inevitable, though not imminent:

“Once you start to make machines that are rivaling and surpassing humans with intelligence it’s going to be very difficult for us to survive…But it’s not imminent and I can’t go round worrying about it.”

 

Reshaping the workforce

Steve Wozniak

AI COULD REPLACE ‘SLOW’ HUMANS ALTOGETHER

Like many of Silicon Valley’s earliest pioneers, Apple co-founder Steve “Woz” Wozniak has expressed cautious optimism about the disruptive potential of AI. But in Wozniak’s view, AI also represents a profound danger to the future of mankind, and may ultimately replace human beings altogether:

“Like people including Stephen Hawking and Elon Musk have predicted, I agree that the future is scary and very bad for people. If we build these devices to take care of everything for us, eventually they’ll think faster than us and they’ll get rid of the slow humans to run companies more efficiently.”

Brian Chesky

AI AUTOMATION OFFERS BENEFITS TO SOME, RISKS FOR OTHERS

Some executives believe that no job will be safe from the efficiencies promised by a tireless robotic workforce. Brian Chesky, co-founder and CEO of Airbnb, has voiced concern about the negative impact that robotic automation will have on the lives of working people:

“I’m concerned about the concept of automation. Many jobs will be automated; a lot will be. This will have benefits for people but it also has a huge cost. I worry that ‘Made in America’ will become ‘Made by robots in America.’”

Reed Hastings

DEVELOPING ENTERTAINMENT FOR AI

The world of entertainment has been fascinated with the notion of intelligent computers for more than 30 years. However, while many people see AI as an exciting new frontier in home entertainment, Netflix co-founder and CEO Reed Hastings has a somewhat less optimistic outlook on AI’s future role in how we spend our leisure time. He has even gone so far as to speculate that AI itself may become part of Netflix’s audience in the coming decades:

“Over twenty to fifty years, you get into some serious debate over humans. I don’t know if you can really talk about entertaining at that point. I’m not sure if in twenty to fifty years we are going to be entertaining you, or entertaining AIs.”

Sir Tim Berners-Lee

AI CANNOT BE TRUSTED TO ACT FAIRLY

Some experts, including Sir Tim Berners-Lee, creator of the World Wide Web, worry that the wide-scale adoption of AI in the financial sector could have disastrous consequences that would be nearly impossible to mitigate:

“So when AI starts to make decisions such as who gets a mortgage, that’s a big one. Or which companies to acquire and when AI starts creating its own companies, creating holding companies, generating new versions of itself to run these companies. So you have survival of the fittest going on between these AI companies until you reach the point where you wonder if it becomes possible to understand how to ensure they are being fair, and how do you describe to a computer what that means anyway?”

James Vincent

UNSEEN ALGORITHMS ARE ALREADY RESHAPING SOCIETY

To some experts, the most urgent AI-related issue is how widely the technology is being used in education, healthcare, and the criminal justice system in ways that we may not necessarily understand.

Technology writer James Vincent believes that while we must “future-proof” AI from becoming too powerful, society’s growing reliance on algorithms that we only vaguely understand is just as problematic:

“If a computer can do one-third of your job, what happens next? Do you get trained to take on new tasks, or does your boss fire you, or some of your colleagues? What if you just get a pay cut instead? Do you have the money to retrain, or will you be forced to take the hit in living standards? It’s easy to see that finding answers to these questions is incredibly challenging. And it mirrors the difficulties we have understanding other complex threats from artificial intelligence. For example, while we don’t need to worry about super-intelligent AI running amok any time soon, we do need to think about how machine learning algorithms used today in healthcare, education, and criminal justice, are making biased judgements.”

 

So, what do you think? AI has improved many services and experiences, but where is the limit? Leave your comments below.

 


Source: CB Insights, How AI Will Go Out Of Control According To 52 Experts
