There are two schools of thought when it comes to the future of artificial intelligence (AI):
- The utopian view: Intelligent systems will usher in a new age of enlightenment where humans are free from work to pursue more noble goals. AI systems will be programmed to cure disease, settle disputes fairly and augment our human existence only in ways that benefit us.
- The apocalyptic view: Intelligent systems will steal our jobs, surpass humans in evolution, become war machines and prioritize a distant future over current needs. Our dubious efforts to control them will only reveal our own shortcomings and inferior ability to apply morality to technology we cannot control.
As with most things, the truth is probably somewhere in the middle.
Regardless of where you fall on this spectrum, it’s important to consider how humans might influence AI as the technology evolves. One idea is that humans will largely form the conscience or moral fabric of AI. But how would we do that? And how can we apply ethics to AI to help prevent the worst from happening?
The human-AI relationship
The power of deep learning systems is that they determine their own parameters or features. Just give them a task or purpose, point them at the data, and they handle the rest. For example, the autotune capability in SAS® Visual Data Mining and Machine Learning can figure out the best result for itself. But people are still the most critical part of the process.
“Humans solve problems, not machines,” explains Mary Beth Ainsworth, an AI specialist at SAS. “Machines can surface the information needed to solve problems and then be programmed to address that problem in an automated way – based on the human solution provided for the problem.”
While future AI systems might someday gather their own data, most current systems rely on humans to provide the inputs: the data, the desired result, and the learning method – such as reinforcement learning – used to reach it. When you ask the algorithm to figure out the best way to achieve that result, you don't know how the machine will solve the problem. You just know it will be more efficient than you are.
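As a hypothetical illustration of that division of labor, the sketch below uses tabular Q-learning on a toy corridor. The human supplies only the reward definition – the "best result" – and the algorithm discovers its own policy for reaching it. All names, states and values here are invented for the example, not taken from any SAS product.

```python
import random

random.seed(0)  # for reproducibility of this toy example

# A tiny 1-D corridor: states 0..4, with the goal at state 4.
N_STATES = 5
ACTIONS = [-1, +1]  # move left or move right


def reward(state):
    """Human-defined goal: reaching the last cell pays off."""
    return 1.0 if state == N_STATES - 1 else 0.0


def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1):
    """Q-learning: the agent works out how to earn the reward."""
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # Epsilon-greedy: mostly exploit, occasionally explore.
            if random.random() < epsilon:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            best_next = max(q[(s2, act)] for act in ACTIONS)
            q[(s, a)] += alpha * (reward(s2) + gamma * best_next - q[(s, a)])
            s = s2
    return q


q = train()
# The learned policy: the preferred direction in each non-goal state.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

The human never tells the agent to "move right" – only that reaching the end is good. The policy the machine settles on follows from the reward it was given, which is exactly why the human-supplied definition of "best result" carries so much ethical weight.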
Given this current relationship between humans and AI, we can take a number of steps to more ethically control the outcome of AI projects. Let’s start with these three.
Step 1 for AI ethics: Provide the best data
AI algorithms are trained on a set of data used to build the model. If your algorithm is identifying a whale as a horse, clearly you need to provide more data about whales (and horses). Likewise, if your algorithm is identifying animals as humans, you need to provide data about a more diverse set of humans. If your algorithm is making inaccurate or unethical decisions, it may mean there wasn't sufficient data to train the model, or that the reinforcement learning wasn't appropriate for the desired result.
Of course, it’s also possible that humans have, perhaps unwittingly, injected their unethical values into the system via biased data selection or badly assigned reinforcement values. Overall, we have to make sure the data and inputs we provide are painting a complete and correct picture for the algorithms.
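One simple, illustrative check – an assumption of ours, not a method from the article – is to measure how each class is represented in the training data before training begins, so gaps like the whale/horse confusion above are caught early:

```python
from collections import Counter


def class_balance(labels):
    """Report each label's share of the training data so obvious
    representation gaps can be caught before training."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: count / total for label, count in counts.items()}


# Hypothetical labels for an animal classifier: whales are badly
# under-represented, which helps explain whale/horse confusion.
labels = ["horse"] * 900 + ["whale"] * 100
shares = class_balance(labels)
print(shares)  # {'horse': 0.9, 'whale': 0.1}
```

A skew this severe is a signal to collect more whale examples (or rebalance the training set) before trusting the model's decisions.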
Step 2 for AI ethics: Provide the proper oversight
Establish a system of governance with clear owners and stakeholders for all AI projects. Define which decisions you’ll automate with AI and which ones will require human input. Assign responsibility for all parts of the process with accountability for AI errors, and set clear boundaries for AI system development. This includes monitoring and auditing algorithms regularly to ensure bias is not creeping in and the models are still operating as intended.
Whether it's the data scientist or a dedicated hands-on ethicist, someone should be responsible for AI policies and protocols, including compliance. Perhaps one day all organizations will establish a chief AI ethicist role. But regardless of the title, somebody has to be responsible for determining whether the output and performance fall within a given ethical framework.
Just as we've always needed governance, traceability, monitoring and refinement for standard analytics, we need them for AI. The stakes are higher with AI, though, because the machines can start to ask the questions and define the answers themselves.
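As a hypothetical example of the kind of routine audit such governance might require, the sketch below compares a model's outcome rates across groups and flags any group whose rate falls below the widely cited four-fifths rule of thumb for disparate impact. The data, group names and threshold are illustrative assumptions:

```python
def approval_rates(decisions):
    """decisions: list of (group, approved) pairs from a model's output."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}


def disparate_impact(rates, reference):
    """Ratio of each group's approval rate to the reference group's.
    The common four-fifths rule of thumb flags ratios below 0.8."""
    return {g: r / rates[reference] for g, r in rates.items()}


# Invented audit data: group A is approved 80% of the time, group B 50%.
decisions = ([("A", True)] * 80 + [("A", False)] * 20
             + [("B", True)] * 50 + [("B", False)] * 50)
rates = approval_rates(decisions)
ratios = disparate_impact(rates, reference="A")
flagged = [g for g, ratio in ratios.items() if ratio < 0.8]
print(rates, flagged)
```

Running a check like this on a schedule – and assigning an owner to act on what it flags – is one concrete way to keep bias from creeping in unnoticed.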
Step 3 for AI ethics: Consider ramifications of new technologies
In order for individuals to enforce policies, the technology must allow humans to make adjustments. Humans must be able to select and adjust the training data, control the data sources and choose how the data is transformed. Likewise, AI technologies should support robust governance, including data access and the ability to guide the algorithms when they are incorrect or operating outside of ethically defined boundaries.
There’s no way to anticipate all potential scenarios with AI, but it’s important to consider the possibilities and put controls in place for positive and negative reinforcement. For example, introducing new, even competing, goals can reward decisions that are ethical and identify unethical decisions as wrong or misguided. An AI system designed to place equal weight on quality and efficiency would produce different results than a system focused entirely on efficiency. Further, designing an AI system with several independent and conflicting goals could add additional accountability to the system.
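A minimal sketch of the equal-weighting idea, using invented numbers: blending quality and efficiency into a single reward changes which action the system prefers, while an efficiency-only weighting flips the ranking.

```python
def combined_reward(quality, efficiency, w_quality=0.5, w_efficiency=0.5):
    """Blend two competing objectives into one training signal.
    Equal weights value quality and efficiency the same; a system
    tuned only for efficiency would set w_quality to 0."""
    return w_quality * quality + w_efficiency * efficiency


# A fast-but-sloppy action vs. a slower, careful one (made-up scores):
fast_sloppy = combined_reward(quality=0.2, efficiency=0.9)
slow_careful = combined_reward(quality=0.9, efficiency=0.4)

# With efficiency-only weights, the ranking flips:
fast_only = combined_reward(0.2, 0.9, w_quality=0.0, w_efficiency=1.0)
careful_only = combined_reward(0.9, 0.4, w_quality=0.0, w_efficiency=1.0)

print(slow_careful > fast_sloppy, fast_only > careful_only)  # True True
```

The same two actions, scored under two weightings, produce opposite preferences – which is why the choice of goals and weights is itself an ethical decision.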
Don’t avoid AI ethics
AI can enhance automobile safety and diagnose cancer – but it can also choose targets for cruise missiles. All AI capabilities have considerable ethical ramifications that need to be discussed from multiple points of view. How can we ensure ethical systems for AI aren’t abused?
The three steps above are just a beginning. They’ll help you start the hard conversations about developing ethical AI guidelines for your organization. You may be hesitant to draw these ethical lines, but we can’t avoid the conversation. So don’t wait. Start the discussion now so you can identify the boundaries, how to enforce them and even how to change them, if necessary.