There are many perceptions of the impact artificial intelligence will have on society, and most of them revolve around the myth-versus-reality nature of AI. People tend to focus on the science fiction version: a sentient robot helping with our daily lives, or self-driving cars. These are the sexiest applications of AI, the ones that capture the public’s imagination. But the majority of AI applications are more pragmatic, addressing mundane tasks like detecting fraud, assessing risk, and predicting or prescribing cause-and-effect relationships in the business.
Ironically, there’s much more at stake in the latter. The danger of machines malfunctioning and running rampant à la Terminator or Maximum Overdrive is so remote that it is effectively zero. Meanwhile, real-world consequences such as denied credit, altered immigration standing or lack of access to health care are genuine risks today, with a far more direct impact.
It’s the so-called “black box” problem. How can we trust and validate decisions made by a machine if we don’t understand the algorithms and modeling that make them?
The Canadian government is taking the lead in setting governance standards for the application of AI, prescribing a risk-based framework that can serve as a model for creating an AI-powered organization. The Directive on Automated Decision-Making classifies AI decisions based on the potential impact of their outcomes as well as on the sustainability of ecosystems. The directive makes it clear that AI governance is not a one-size-fits-all problem. If an automated decision is going to directly affect the rights, health or economic interests of individuals, communities and entities, the AI application needs to be managed by rules that match the potential harm it could cause. In many cases, these rules call for human intervention and review of the decision to ensure appropriate oversight.
These governance standards ensure the Canadian government is doing the right thing for citizens. Level I decisions have minor, easily reversible and brief impacts; Level IV decisions – the most serious – have major, irreversible and perpetual effects. Each level has correspondingly rigorous requirements for notification, explanation, peer review and human intervention. Level I decisions can be made without human intervention and explained by an FAQ page. Level IV systems must be approved by the head of the Treasury Board and require extensive peer review and human intervention at every step of the decision-making process. For more specific information on these classifications and requirements, you can consult Appendix B and Appendix C, respectively, of the directive.
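The escalating levels described above can be sketched in code. The following is a minimal, illustrative model only: the level descriptions paraphrase the directive, while the field names and the simplified requirement strings are this sketch’s own assumptions; Appendix B and Appendix C of the directive remain the authoritative source.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ImpactLevel:
    """Hypothetical, simplified model of one of the directive's four levels."""
    level: int
    impact: str          # reversibility and duration of the decision's effects
    notice: str          # illustrative paraphrase of the notification requirement
    human_in_loop: bool  # whether a human must intervene in the decision

# Level descriptions paraphrased from the directive; requirement details
# are simplified placeholders, not the official Appendix C text.
LEVELS = {
    1: ImpactLevel(1, "minor, easily reversible, brief", "plain-language FAQ", False),
    2: ImpactLevel(2, "moderate, likely reversible, short-term", "published notice", False),
    3: ImpactLevel(3, "serious, difficult to reverse, ongoing", "meaningful explanation", True),
    4: ImpactLevel(4, "major, irreversible, perpetual", "explanation at every step", True),
}

def requires_human_review(level: int) -> bool:
    """Return True when this simplified model mandates human intervention."""
    return LEVELS[level].human_in_loop
```

A structure like this makes the directive’s central idea concrete: the obligations attached to a decision are looked up from its impact classification, rather than being negotiated case by case.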
The directive is the first of its kind at a national level, and it reflects a commitment by the Canadian public service to ensure data-driven policy with appropriate human intervention. It also mirrors the strategy at SAS. We call it “Believe in Humans.” It isn’t man versus machine; it is about playing to the strengths of people and technology, augmenting human intelligence and ingenuity with the speed, automation and insight-creation power of AI.
NEED FOR TRANSPARENCY
Transparency is a cornerstone of any customer-facing AI implementation. Users of a service are entitled to understand the process that has an impact on them, whether it’s the denial of a service, selection for re-assessment, or a potentially disruptive land-use decision. Processes must not only be fair, they must be seen to be fair.
Not all decisions are equal in impact. The directive’s escalating notification scale provides more visibility into the decision-making process according to its impact. And greater visibility into the algorithms and models on which decisions are based reveals another paradox of artificial intelligence: algorithmic decisions are more transparent than human ones. Intuitive decisions are inherently influenced by acquired biases, procedural experience, and fickleness born of convenience or complacency. At a recent conference on AI in healthcare, one researcher noted that the human brain is, in fact, the black box.
Algorithms can be secure, transparent, free of bias and designed to respect human rights, democratic values and diversity. But they depend on humans providing data sets that are comprehensive, accurate and clean. Users need tools that allow them access across multiple data sets without complex and time-consuming search routines, while maintaining the integrity of that data.
At the same time, that data must be subject to the rigorous privacy standards for which the Canadian government enjoys a well-deserved reputation for leadership. Those data access tools and procedures must have the principles of Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA) embedded within them.
The framework established by the directive does more than shield Canadians from the arbitrary impact of automated decision-making. It provides a platform for innovation, refinement and bold policy initiatives.
At the lower end of the scale, Level I and Level II decisions can be made with little or no human intervention. This is not to say they are unimportant decisions; the framework ensures that the mechanism in place is appropriate to the task at hand. But once those parameters are honed, the more mundane decisions that make up much of our current workload don’t demand the attention of a person who can and should be doing more complex work. This frees up program managers and data scientists to ask more keenly crafted questions, prioritize evidence-based policy decisions, and explore possible courses of action in a predictive and prescriptive fashion. In a sense, the human is then able to act on the decision rather than crafting the decision in the first place.
This requirement to offload mundane tasks is mirrored in the tools required for such data-intensive innovation. By many estimates, data scientists spend as much as three-quarters of their time cleaning, scrubbing, and preparing data for use. Tools that ease the effort and time consumed making data ready to use free up the human, who has the capacity for curiosity, ingenuity and adaptation for more valuable tasks.
Another driver for innovation is increased AI literacy. To that end, SAS Canada is collaborating with the non-profit Institute on Governance (IOG), the Telfer School of Management at the University of Ottawa, the Sprott School of Business at Carleton University, and the Université du Québec en Outaouais (UQO), to create the Government Analytics Research Institute (GARI) / Institut de recherche en analytique gouvernementale, to explore and better understand the role of analytics in public sector applications. Public sector managers and students will be able to test AI applications in a laboratory environment without compromising government data and infrastructure. The framework established by the Directive on Automated Decision-Making provides an apt jumping-off point for that exploration.
PRIVATE SECTOR APPLICATIONS
While the directive is designed for government decision-making, it can also serve as guidance for the private sector. Private sector firms can likewise benefit from appropriate guidelines for applications of AI, machine learning, natural language processing, neural networks and other automated decision-making technologies. As in the public sector, business applications of AI have a range of consequences: an inappropriate purchase suggestion from an online storefront does not have the same impact on a user’s life as the denial of a mortgage. The tiered system of the directive provides ample room for application regardless of the use case.
It’s a useful thought experiment to envision a near-future AI-enabled application, say, real-time insurance rate adjustment, and categorize it according to the directive’s tiers. What constitutes adequate notification: a referral to an FAQ page, or a real-time email notification? What is the duration of the impact? How difficult would it be to remediate the impact of a faulty outcome? The tiered system accommodates interpretation and a variety of appetites for risk, and is open to evolution as new applications are conceived and real-world environments change.
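The thought experiment above can be sketched as a toy scoring exercise. The dimensions, the one-to-four scoring scale, and the rule of keying the overall level to the most serious dimension are all assumptions of this sketch, not the directive’s official assessment questionnaire.

```python
# Toy version of the thought experiment: rate a hypothetical AI application
# along three illustrative dimensions, each scored 1 (low) to 4 (high).
# Taking the maximum is a conservative assumption of this sketch, not a
# rule taken from the directive itself.

def suggest_level(reversibility: int, duration: int, severity: int) -> int:
    """Suggest an impact level from the most serious of the three scores."""
    return max(reversibility, duration, severity)

# Hypothetical scoring for real-time insurance rate adjustment: a financial
# impact that is largely reversible (refunds are possible), but ongoing
# for as long as the policy is active.
level = suggest_level(reversibility=2, duration=3, severity=2)
print(f"Suggested impact level: {level}")  # prints "Suggested impact level: 3"
```

Even a crude exercise like this is useful: it forces the discussion of a new application to start from its consequences for the people affected, rather than from the sophistication of the technology.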
As the public sector embraces evidence-based policy and businesses deliver new models for serving their customers and shareholders, AI-augmented decision-making will serve an ever-growing role. The Directive on Automated Decision-Making provides opportunity for Canadian organizations, public and private, to take bold steps forward in this emerging frontier.
About the Authors
As the Head of Strategy and Innovation at SAS Canada, Steve Holder is responsible for growing revenue and building the long-term vision for SAS in the Canadian market. As a member of the SAS Canada leadership team, Steve owns the SAS solution strategy, modernizing SAS’ customer base and managing the innovation ecosystem, including higher education alignment. Steve is SAS Canada’s evangelist for technology including artificial intelligence, cloud and other emerging technologies. Steve’s passion is making technology make sense for everyone, regardless of their technical skillset. Steve can be emailed at firstname.lastname@example.org.
Tara Holland currently leads the Canadian Federal Government Specialist team for SAS Canada, spearheading public sector efforts by providing data and analytics expertise and business guidance to government and healthcare organizations across the country. With over 20 years of experience supporting government customers, her passion is bridging the gap between business and IT and enabling organizations to deliver on the value of AI. Prior to joining SAS, Tara led the data mining team at the Canada Revenue Agency and lived the real-life challenges faced by organizations wanting to be more data- and analytics-driven. Tara can be reached at email@example.com.