Next Topic for AI to Master? Ethics.

No one doubts the potential power of artificial intelligence, but some do question whether that power will ultimately be used for good or ill.

Researchers announce new, if often incremental, advances in AI capabilities about every other day, touting its ability to improve everything from medical care and public safety to space exploration and repairing our crumbling infrastructure. On the other side of the coin, however, are worries about what AI could do if it becomes too autonomous — “killer robots” are one fear — or too powerful. Billionaire entrepreneur Elon Musk has embodied both sides of the debate, praising the potential of AI systems while warning that AI poses a “fundamental risk to the existence of human civilization,” and that an AI arms race will likely be the cause of World War III.

“AI will be the best or worst thing ever for humanity,” he said last summer.

At the core of the debate is ethics, an increasingly hot topic in government and technology circles. As with ethics in other fields, it’s less a question of if than of how. “Just because you can doesn’t mean you should” is a trusty old maxim that’s always good to keep in mind, but it never stopped nuclear proliferation or countless predatory business practices. For developers and users of AI, the challenge is ensuring that AI systems can be used responsibly.

Ethics Under the Hood

A starting point could be with the programming. The Government Accountability Office in March issued a report, “Artificial Intelligence: Emerging Opportunities, Challenges, and Implications,” drawn from a forum of participants from industry, government, academia and nonprofits, convened by the comptroller general of the U.S. Among the challenges discussed was the need for ethics at a fundamental computing level.

“We’re going to need to have some kind of computational ethics system,” said one participant, according to the report. “We’re not going to be able to anticipate, in advance, all the crazy situations that you’re going to have to make complicated decisions.”
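
It’s easy to imagine, in miniature, what such a layer might look like. The sketch below is purely illustrative: a hypothetical rule layer that vets a system’s proposed actions against explicitly encoded constraints. The `Action` fields and the single constraint are invented for this example; a real computational ethics system would be vastly more complex.

```python
# Illustrative sketch only: a hypothetical rule layer that vets an AI
# system's proposed actions against explicitly encoded constraints.
# The Action fields and the single constraint here are invented.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Action:
    description: str
    affects_humans: bool
    reversible: bool

def check_irreversible_harm(action: Action) -> Optional[str]:
    # Each constraint returns a human-readable reason when violated.
    if action.affects_humans and not action.reversible:
        return "irreversible effect on humans"
    return None

CONSTRAINTS: List[Callable[[Action], Optional[str]]] = [check_irreversible_harm]

def vet(action: Action):
    """Return (allowed, reasons) for a proposed action."""
    reasons = [r for check in CONSTRAINTS if (r := check(action)) is not None]
    return (not reasons, reasons)

print(vet(Action("reroute power from hospital", affects_humans=True, reversible=False)))
# (False, ['irreversible effect on humans'])
```

The hard part, as the forum participant suggests, isn’t the plumbing; it’s anticipating the situations the rules have to cover.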

One way to get there is to have AI systems that can tell you why they reached a certain conclusion, something they currently can’t do. Fast-working AI systems are much better at giving answers than they are at showing their work.

Research projects such as the Defense Advanced Research Projects Agency’s Explainable AI program aim to create machine learning techniques that let a machine explain its reasoning to a human in a plain-language conversation, which could go a long way toward building trust in human-machine teams and spotting potential problems with machine-based decisions.
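
As a rough illustration of the idea, consider a toy model that pairs its answer with the factors that drove it. Everything here (the feature names, weights and threshold) is invented for the example; real explainable-AI research tackles far more opaque models, but the goal of making a system show its work is the same.

```python
# Illustrative sketch only: a toy risk model that "shows its work" by
# reporting each feature's contribution to its decision in plain language.
# The feature names, weights and threshold are invented for this example.
WEIGHTS = {"age": 0.01, "blood_pressure": 0.005, "smoker": 0.8}
THRESHOLD = 1.0

def predict_with_explanation(patient: dict):
    # The score is a weighted sum; each term is one feature's contribution.
    contributions = {name: w * patient[name] for name, w in WEIGHTS.items()}
    score = sum(contributions.values())
    top = max(contributions, key=contributions.get)
    explanation = (
        f"score {score:.2f} vs. threshold {THRESHOLD}; "
        f"largest factor: '{top}' ({contributions[top]:+.2f})"
    )
    return score > THRESHOLD, explanation

flagged, why = predict_with_explanation(
    {"age": 60, "blood_pressure": 120, "smoker": 1}
)
print(flagged, why)
# True score 2.00 vs. threshold 1.0; largest factor: 'smoker' (+0.80)
```

A system that can volunteer that kind of rationale, the thinking goes, is one a human teammate can meaningfully question.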

A Feature, Not a Bug?

There are efforts to get a handle on AI ethics, although specific strategies are hard to come by.

A recent report by the Center for Strategic and International Studies and Booz Allen Hamilton, which focused mostly on AI innovation, said government needs to play a significant role in the “safety, ethics and transparency issues surrounding artificial intelligence.” Aside from recommending that the National Institute of Standards and Technology develop standards, it didn’t get into specifics.

Technology giant Microsoft, meanwhile, seems to be taking an I’ll-know-it-when-I-see-it approach, saying this month it has scuttled some potential deals over ethical concerns about AI being misused.

Some leaders in the U.K., however, want to make ethics a selling point, which could raise the value of ethical standards overall. The House of Lords’ Select Committee on Artificial Intelligence last week published a report suggesting that a focus on ethics could help the country become a leader in AI. The report acknowledges that the U.K. can’t match the government investments made by the U.S. and China, but says it could “forge a distinctive role for itself as a pioneer in ethical AI.”

The report also puts forth five basic principles as an ethical guide:

  1. AI should be developed for the common good and benefit of humanity.

  2. AI should operate on principles of intelligibility and fairness.

  3. AI should not be used to diminish the data rights or privacy of individuals, families or communities.

  4. All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside AI.

  5. The autonomous power to hurt, destroy or deceive human beings should never be vested in AI.

As with other outlines, those five principles don’t go into much detail. But as AI systems grow more powerful and the ethical questions more pressing, they might be a good place to start.
