Responses to a White House request for information about the future of artificial intelligence show a continued divide between those who are ready to embrace intelligent machines and those who worry about a future in which robots run the world.
The responses were made public this month after the White House Office of Science and Technology Policy issued a call for input about how artificial intelligence technology is currently shaping the world, how AI is likely to develop in the future and what role the government should play in either encouraging or regulating development.
The request for information drew responses from large corporations, such as IBM, Google and Microsoft, as well as from academia and private citizens. The responses show there is still little agreement about the future of AI.
Some comments reflected unease with a future in which machines make many of our decisions:
Mark Finlayson, assistant professor, School of Computing and Information Sciences at Florida International University
"The danger is not machines run amok, as suggested by some, like [Elon] Musk or [Stephen] Hawking (who know nothing about AI). The danger is, like nuclear weapons, what AI will allow us to do to ourselves. And it is not a remote possibility, but already happening: Uber, for example, is proposing a fleet of driverless cars. What happens when the profits associated with whole industries are not distributed across the whole world, but flow into the coffers of a single company or person?"
Lisa Hayes, vice president of programs and strategy at the Center for Democracy & Technology
"The Center for Democracy & Technology is optimistic about the future of artificial intelligence, and confident the technology will have widespread positive impacts. However, the rapidly developing technology will have significant effects on jobs, education and policy, as well as ethical and regulatory implications for the federal government. It takes time for processes to change, standards to emerge and people to learn new skills. In the case of AI, the government must act quickly to prepare for these changes, as the technology will diffuse rapidly."
Mary Wareham, advocacy director for the arms division at Human Rights Watch
"Artificial intelligence and robotic autonomy have already had a major impact on our lives, from simple processes like vacuuming to complex ones like self-driving cars and Google's DeepMind project. However, no field of artificial intelligence raises urgent and serious human-rights concerns more than the research and development of fully autonomous weapons. While none currently exist, these weapons raise serious moral and legal concerns, because they would possess the ability to select and engage their targets without meaningful human control."
David Heiner, vice president and deputy general counsel of regulatory affairs at Microsoft
"We are enthusiastic about the development of usable tools, languages, components and platforms that empower people to harness the best technologies available. However, we understand that there are many who are concerned about the economic disruptions that may come with the fast-paced automation and the displacement of different kinds of jobs. Such disruptions could initially most impact those who are struggling to survive. We also understand and share concerns that AI technologies could amplify and entrench biases that already exist in society, or may create new biases, based on the use of biased data sets and algorithms."
Other commenters pointed out that the world is already reaping the benefits from artificial intelligence technology and suggested there is no reason to fear further development.
Sean Legassick, policy adviser at DeepMind Technologies Ltd.
"We envisage machine learning systems being designed as tools that complement and empower the smart and highly motivated experts working in such fields by enabling efficient analysis of large volumes of data, extracting insights and providing humans with recommendations to take action. This could be in areas ranging from early diagnosis of disease, discovery of new medicines, advances in materials science or optimizing use of energy and resources."
James Hairston, manager of public policy at Facebook
"People are beginning to reap the benefits of AI -- from healthcare and astronomy to the tasks we do every day. Machine learning is helping us map new objects in space and detect diseases with new accuracy that will save lives. AI-powered tools, like digital assistants and instant language translation, are engendering more commerce and communication, making people more productive in the process."
Many commenters focused on the potential benefits that artificial intelligence technology could deliver in the future. They dismissed concerns about the dangers of the technology as overblown and argued that continued development is essential to advancing society.
Tim Day, senior vice president of the Center for Advanced Technology and Innovation at the U.S. Chamber of Commerce
"For AI to reach its full potential, there must be an open environment to allow for continuing research. Creating responsible AI that is programmed to work from strong data is one of the open challenges. There have been numerous reports on cases of discrimination in connection with machine learning. This demonstrates how biased data begets discriminatory results with machine learning algorithms. To avoid these failures, there is a need to address data gaps. Going forward, the federal government can contribute to enhancing this technology by releasing quality, robust data sets used in publicly deployed systems and lead efforts to determine how to solve these data gaps."
Henry Lieberman, research scientist at the MIT Computer Science and Artificial Intelligence Laboratory
"Recent dire warnings by well-known figures, such as Elon Musk and Stephen Hawking, of the 'dangers of runaway AI' are overblown. While research into AI safety makes sense, government should not view AI as an existential threat, in the same category as things like climate change."
Andrew Kim, public policy and government affairs at Google
"Many discussions about the potential benefits and consequences of machine learning remain speculative and focused on potential long-term implications and theoretical edge cases. Many research questions need to be addressed before society comes to confront these hypothetical questions."
Guru Banavar, vice president of cognitive computing at IBM Research
"AI systems are augmenting human intelligence and will ultimately transform our personal and professional lives. Its benefits far outweigh its risks. And with the right policies and support, those benefits can be realized sooner."