The potential of artificial intelligence and data analytics spans a variety of sectors, and progress has already been made in areas like health care, transportation and national security. But the future of AI will depend on how policy, ethics and legal issues are handled.
A recent Brookings Institution report, “How Artificial Intelligence is Transforming the World,” outlines the most prominent areas for AI applications, the obstacles standing in the way of AI development, and recommendations for getting the most out of AI while putting citizens first. AI is more than a buzzword, so it’s important for policymakers, opinion leaders and the interested general public to understand how it works, where it can best be used and how it should be regulated.
So, Where Can AI Help?
AI is already making a splash across sectors. PricewaterhouseCoopers estimates AI technologies could increase global gross domestic product by $15.7 trillion, or 14 percent, by 2030. But where is it most helpful?
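For a sense of scale, those two figures can be sanity-checked with simple division: a $15.7 trillion boost that equals 14 percent of output implies a projected 2030 baseline of roughly $112 trillion. A back-of-the-envelope sketch, assuming (my reading, not the report’s) that the percentage is measured against GDP without AI:

```python
# Back-of-the-envelope check of the PwC figures. Assumption (mine, not the
# report's): the 14 percent is measured against projected 2030 GDP without AI.
ai_boost_trillions = 15.7
share_of_baseline = 0.14

implied_baseline = ai_boost_trillions / share_of_baseline
print(f"Implied 2030 baseline GDP: ~${implied_baseline:.0f} trillion")
# -> ~$112 trillion
```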
National Security: Take the Defense Department’s Project Maven, for example, which uses AI and machine learning to sort through the extensive data, imagery and video captured by surveillance to quickly surface actionable intelligence and suspicious patterns of activity. AI’s ability to analyze big data in real time will greatly help command and control, improve decision-making and defend critical cyber networks. But there is still ethical debate over whether to couple this capability with automated decisions to launch weapons, in preparation for “hyperwar.”
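Maven’s internals aren’t public, but the basic pattern described here — run a detector over surveillance frames and surface only high-confidence hits for human analysts — might look something like this rough Python sketch, with a stand-in detector:

```python
# Illustrative triage loop in the spirit of Project Maven: run a detector
# over surveillance frames and keep only high-confidence hits for analysts.
# The detector below is a stand-in; a real system would use a trained
# object-detection model, which is not shown here.
from dataclasses import dataclass
from typing import Callable, Iterable, List, Tuple

@dataclass
class Hit:
    frame_id: int
    label: str
    confidence: float

def triage(frames: Iterable[Tuple[int, dict]],
           detector: Callable[[dict], List[Tuple[str, float]]],
           threshold: float = 0.8) -> List[Hit]:
    """Surface only detections confident enough to merit human review."""
    return [Hit(frame_id, label, conf)
            for frame_id, frame in frames
            for label, conf in detector(frame)
            if conf >= threshold]

def toy_detector(frame: dict) -> List[Tuple[str, float]]:
    # Pretend detector: flags frames with motion as containing a vehicle.
    return [("vehicle", 0.91)] if frame.get("motion") else [("background", 0.30)]

frames = [(0, {"motion": False}), (1, {"motion": True}), (2, {"motion": True})]
print(triage(frames, toy_detector))  # only the high-confidence hits survive
```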
Health Care: AI tools train computers on data sets to learn what a normal scan looks like compared with irregular-appearing cells or tissue. They can help with early detection, prevention and prediction by labeling abnormalities in medical images, so radiologists and clinicians can determine a patient’s risk and decide on treatment. Essentially, AI can help keep patients out of the hospital.
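To make that train-then-flag workflow concrete, here is a toy sketch using scikit-learn on synthetic feature vectors rather than real medical images:

```python
# Toy version of the train-then-flag workflow described above, using
# scikit-learn on synthetic feature vectors. Real tools train deep networks
# on actual medical images; everything here is made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 10))    # features from "normal" scans
abnormal = rng.normal(1.5, 1.0, size=(500, 10))  # features from "abnormal" scans
X = np.vstack([normal, abnormal])
y = np.array([0] * 500 + [1] * 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Output is a risk estimate, not a verdict: high-risk cases go to a clinician.
risk = model.predict_proba(X_test)[:, 1]
print(f"Flagged for review: {(risk > 0.5).sum()} of {len(risk)} cases")
```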
Criminal Justice: Chicago developed an AI-driven “strategic subject list” that analyzes people who have been arrested and scores their risk of committing future crimes. Systems like these could reduce human bias in law enforcement, but there’s also worry that they punish citizens for crimes they haven’t yet committed.
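Chicago’s actual model isn’t public, but a deliberately simplified sketch shows both how such a score works and where the worry comes from (every weight below is invented for illustration):

```python
# Hypothetical, simplified risk score in the spirit of a "strategic subject
# list." Chicago's actual model is not public; every weight here is invented.
from dataclasses import dataclass

@dataclass
class Arrestee:
    name: str
    prior_arrests: int
    age: int

def risk_score(a: Arrestee) -> float:
    # Invented weights for illustration. Note the worry in the text: inputs
    # like arrest counts can encode past enforcement bias, so the score can
    # penalize people for patterns in policing rather than in their behavior.
    score = 2.0 * a.prior_arrests
    score += max(0, 30 - a.age) * 0.5  # youth raises the score in this toy model
    return score

people = [Arrestee("A", 4, 22), Arrestee("B", 1, 45)]
for p in sorted(people, key=risk_score, reverse=True):
    print(p.name, risk_score(p))
```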
Transportation: The report found more than $80 billion was invested in autonomous vehicle technology between August 2014 and June 2017. That includes applications for autonomous driving and the underlying technologies, like automated lane-changing systems, cameras and sensors for collision avoidance, high-performance computing to adapt to changing circumstances, and light detection and ranging (lidar) systems.
Smart Cities: Local governments use AI to improve urban service delivery, environmental planning, energy utilization, crime prevention and resource management. The Cincinnati Fire Department is using data analytics to optimize medical emergency response, and Boston deployed cameras and inductive-loop traffic detectors to manage traffic, as well as sensors that identify gunshots.
Finance: Loan decisions are increasingly made by software that can analyze far more data about an individual than just a credit score or background check. Machines conduct high-frequency trading on stock exchanges, and AI is used to spot abnormalities, outliers or deviant cases for fraud detection.
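As a rough sketch of that fraud-detection idea, an off-the-shelf anomaly detector can flag transactions that look unlike the rest. The snippet below uses synthetic data and a single amount feature, purely for illustration:

```python
# Sketch of the anomaly-detection idea behind fraud screening, using
# scikit-learn's IsolationForest on synthetic one-feature transactions.
# Real systems use far richer features (merchant, location, timing) and
# route flags to human review rather than blocking automatically.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
typical = rng.normal(50, 15, size=(1000, 1))      # everyday purchase amounts
outliers = np.array([[900.0], [1200.0], [0.01]])  # unusual amounts
X = np.vstack([typical, outliers])

model = IsolationForest(contamination=0.01, random_state=1).fit(X)
flags = model.predict(X)  # -1 = anomaly, 1 = normal
print(f"Flagged {int((flags == -1).sum())} of {len(X)} transactions for review")
```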
But What’s the Holdup?
There are still a number of policy, regulatory and ethical issues posed by AI and its use. AI depends on data that can be analyzed in real time, so problems with data access need to be sorted out. Biases in data and algorithms can cause AI systems to discriminate, and facial recognition software has raised concerns about racial bias.
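One simple way practitioners screen for that kind of discrimination is to compare outcome rates across groups, as in this illustrative snippet (the data is invented):

```python
# One common screening heuristic for the bias problem above: compare
# favorable-outcome rates across groups. The "80 percent rule" ratio is a
# rough red flag, not a legal determination; the data here is made up.
def selection_rate(decisions):
    return sum(decisions) / len(decisions)

group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # 1 = favorable outcome (e.g., approval)
group_b = [1, 0, 0, 0, 1, 0, 0, 1]

ratio = selection_rate(group_b) / selection_rate(group_a)
print(f"Disparate impact ratio: {ratio:.2f}")  # below 0.8 warrants scrutiny
```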
Similarly, AI ethics and transparency are a concern, because it’s often unclear how automated systems and algorithms make their decisions, or how they use citizens’ data. For these reasons, the European Union is implementing the General Data Protection Regulation in May, which allows people to opt out of personally tailored ads and to contest legal decisions made by algorithms by appealing for human intervention.
And there’s also legal liability. As the report stated, “If there are harms or infractions (or fatalities in the case of driverless cars), the operators of the algorithm likely will fall under product liability rules.” Liability will depend on the facts of each situation, but many legal questions remain unresolved.
What Do We Do Next?
Well, the report offered a number of recommendations: improve data access; increase government investment in AI (which the House IT subcommittee has been discussing); promote digital education and workforce development; create a federal AI advisory committee (which has also been discussed, though it may take too long); engage with state and local officials; regulate broad objectives rather than specific algorithms; take biases seriously; maintain mechanisms for human oversight and control of AI; penalize malicious behavior; and promote cybersecurity.