Part 1: AI versus ai

Why Traditional IT Governance Won’t Work For AI

Welcome to Combatting Artificial Intelligence (AI) Hype in Government with actual intelligence (ai). This week we look at how the Government of Canada might use artificial intelligence (AI) and explore how existing and emerging Information Management and Information Technology (IM-IT) governance frameworks are ill-equipped for the challenges presented by AI.

Potential Canadian Government AI Uses

There is no shortage of opportunities for the use of AI by the Canadian government. We will reference the excellent and evolving “The Government of Canada Digital Disruption White Paper on Artificial Intelligence” by Michael Karlin to identify high-level categories for AI deployment:

  1. Service Delivery to the Public
  2. Service and Support Internally
  3. Policy Development
  4. Risk Response

For the balance of this article we will focus on the first category as the basis for establishing a case for a new kind of AI governance framework.

Service Delivery to the Public

This is the category most visible to the Canadian public and initially will be the most contentious. There are, of course, innocuous uses such as chatbots and automated assistants that should improve most user experiences. As the Canadian Digital Service pushes the Government of Canada forward into more contemporary methods of service delivery, AI user support capabilities will emerge as the standard frontline model.

AI deployed to make automated program/service eligibility and prioritization decisions is a more complicated and potentially controversial matter. If the public is not happy about decisions the AI has made, what is their recourse for algorithmic explanation and review? This is not a question reserved for the Canadian experience; there is much debate over whether aspects of the European Union’s General Data Protection Regulation (GDPR) enshrine a “right to explanation” for automated decision-making or whether practical considerations render it unenforceable.

Let’s look at a hypothetical example and a fictitious government program. Jane loses her job and wants to apply for a new Employment Insurance (EI) needs-based program. She goes to the Government of Canada website and navigates to the EI page. Jane first uses a chatbot assistant to see if she is eligible. She answers the chatbot’s questions and the chatbot says she should be eligible for the new EI program. Encouraged, Jane completes the more comprehensive EI program application process, reads a notification that the process uses AI to determine eligibility and submits her application. Almost instantaneously, Jane receives notification that her application has been rejected. What can Jane do?

This is a hypothetical, over-simplified scenario, but let’s look at how difficult it might be to give Jane an explanation of why her application was rejected. First we have to understand how the hypothetical EI application AI decision-making capability might work (seriously simplified), with a rough code sketch following the list:

  1. Collect and preprocess raw training data (raw training data consists of a collection of existing approved & rejected machine-readable applications).
  2. Feature engineering (more on this in Part 2 of this series).
  3. Use machine learning & training data to create a model.
  4. Deploy model.
[Figure: Jane’s Experience, Simple AI Model Process]
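
To make the pipeline above concrete, here is a minimal, hypothetical sketch in Python using scikit-learn. The file name, feature columns, and model choice are illustrative assumptions, not a description of any real Government of Canada system.

```python
# A minimal, hypothetical sketch of the four steps above, using scikit-learn.
# The file name, feature columns, and model choice are illustrative assumptions,
# not a description of any real Government of Canada system.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# 1. Collect and preprocess raw training data: past applications labelled
#    approved (1) or rejected (0).
raw = pd.read_csv("past_ei_applications.csv")   # hypothetical extract
raw = raw.dropna(subset=["outcome"])            # drop unlabelled rows

# 2. Feature engineering: turn application fields into model inputs.
features = raw[[
    "weeks_worked_last_year",
    "prior_claims",
    "household_income",
    "region_unemployment_rate",
]]
labels = raw["outcome"]                         # 1 = approved, 0 = rejected

# 3. Use machine learning and the training data to create a model.
X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.2)
model = GradientBoostingClassifier()
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# 4. Deploy the model: score a new application such as Jane's.
janes_application = X_test.iloc[[0]]            # stand-in for Jane's submitted data
decision = model.predict(janes_application)[0]
print("approved" if decision == 1 else "rejected")
```

Even in this toy version, Jane’s answer comes out of a trained model rather than a readable set of rules, which is exactly where the explanation problem begins.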

So why does this make it difficult for Jane to get answers about why her application was rejected? It is not just that the algorithmic model is essentially a black box. Here are some of the additional considerations:

  • Does the raw data cover all possible permutations of accepted and rejected applications (accepted on first pass and unchanged; accepted on first pass but rescinded after further review; rejected on first pass and unchanged; rejected on first pass but granted after further review or appeal)? The sketch after this list shows one way to check this.
  • Does the preprocessing introduce unexpected bias into the model?
  • Does the feature engineering accidentally diminish a significant but unique aspect of Jane’s application (e.g., a single mom living in an economically depressed community)?
  • Do the configuration parameters for the model learning introduce unexpected bias?
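
At least the first of these questions can be checked mechanically before any training happens. Below is a rough sketch of such a coverage check; the file name and the column names (“first_pass”, “final_outcome”) are assumptions made for illustration.

```python
# A rough sketch of the coverage question in the first bullet: does the raw
# training data contain all four outcome permutations, and in what proportions?
# The file name and the column names "first_pass" and "final_outcome" are
# assumptions made for illustration only.
import pandas as pd

raw = pd.read_csv("past_ei_applications.csv")   # hypothetical extract of past applications

# Count applications by (first-pass decision, final decision).
counts = raw.groupby(["first_pass", "final_outcome"]).size().to_dict()
print(counts)

# Flag permutations that are missing or too rare for a model to learn from.
expected = [
    ("approved", "approved"),            # accepted on first pass, unchanged
    ("approved", "rescinded"),           # accepted, then rescinded after review
    ("rejected", "rejected"),            # rejected on first pass, unchanged
    ("rejected", "granted_on_appeal"),   # rejected, then granted after review/appeal
]
for combo in expected:
    n = counts.get(combo, 0)
    if n < 100:                          # arbitrary illustrative threshold
        print(f"warning: only {n} examples of {combo}")
```

The other questions (preprocessing bias, feature engineering, training parameters) are much harder to check mechanically, which is part of the governance problem.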

All of this makes it very difficult to find out why Jane’s application was rejected. Add in the fact that the model can be updated or replaced, producing different results for Jane from one day to the next, and an explanation can become impossible to nail down.
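
One partial mitigation is to record, with every decision, exactly which model version produced it and what inputs it saw, so that a later review can at least reproduce the decision. The sketch below assumes a simple JSON-lines audit log and a hypothetical DecisionRecord structure; a production system would need something far more robust.

```python
# A minimal sketch of one mitigation for the versioning problem: record which
# model version produced each decision, along with the exact inputs it saw.
# The record structure, identifiers, and log format are assumptions for
# illustration, not an existing Government of Canada practice.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    application_id: str
    decision: str            # "approved" or "rejected"
    model_version: str       # e.g. a git tag or artifact hash of the deployed model
    feature_snapshot: dict   # the exact inputs the model scored
    decided_at: str          # UTC timestamp of the decision

def log_decision(application_id, decision, model_version, features):
    """Append one decision to a simple JSON-lines audit log."""
    record = DecisionRecord(
        application_id=application_id,
        decision=decision,
        model_version=model_version,
        feature_snapshot=features,
        decided_at=datetime.now(timezone.utc).isoformat(),
    )
    with open("decision_audit.log", "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# A later review can then match Jane's rejection to the specific model version.
log_decision("JANE-2018-001", "rejected", "ei-model-v1.3.0",
             {"weeks_worked_last_year": 12, "prior_claims": 0})
```

Logging of this kind makes a decision reproducible, but it still does not make the model itself explainable.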

Although the above scenario is hypothetical, AI’s impact on people can be very real. A thought-provoking new book by Virginia Eubanks, “Automating Inequality”, chronicles the harm automated decision tools can inflict on the poor and disenfranchised. New York City (NYC) recently passed a bill that creates a task force to inventory the city’s algorithms and determine whether they are biased in any way against the residents of NYC.

Existing IM-IT Governance Frameworks

Traditional IM-IT governance frameworks involve oversight committees and project steering committees that review and approve IM-IT initiatives. Problems are identified reactively from user complaints. If a deployed technology has issues, an audit may be conducted. The application of this model to AI use has two important problems:

  1. The lack of technical AI knowledge and experience in the Canadian Public Service. This may be a point-in-time issue, but the likelihood that this expertise will make its way to the committee level any time soon is low.
  2. It will be difficult for auditors/investigators to reverse-engineer decisions made by AI.

The second problem cannot be overstated. A current traditional IM-IT initiative, the Canadian federal public service pay system (Phoenix), has had its own troubles and can be used to illustrate the point. A recent story (Julie Ireton, CBC News, Jan. 12, 2018) suggests that a number of the pay system issues stem from a customization that defaulted a new employee’s pay to the lowest level of their classification, regardless of their contractual specifics.

Here is how this Phoenix software issue differs from the AI case. For Phoenix, the auditors/investigators could look at the customization’s source code and determine why it was underpaying new employees. In the case of Jane’s hypothetical AI issue, there is no way to open up the black-box model and determine why Jane’s application was rejected. It may be difficult to even determine the version of the model that processed Jane’s application. Traditional IM-IT governance models do not fit AI. There is a move by the Government of Canada toward more agile and user-centric IM-IT initiatives, but this does not address the AI issue.
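
The contrast can be made concrete with a purely illustrative snippet; this is not actual Phoenix code, and the classification, pay steps, and function are invented for the example.

```python
# A purely illustrative contrast, not actual Phoenix code. In a rule-based
# customization the defect is visible in the source: the function below ignores
# the employee's contract and always returns the lowest step of the classification.
PAY_STEPS = {"AS-01": [51_000, 53_000, 55_000]}   # hypothetical classification and pay steps

def starting_pay(classification: str, contract_step: int) -> int:
    steps = PAY_STEPS[classification]
    return steps[0]          # the bug an auditor can read: contract_step is ignored

# With a machine-learned model, the equivalent "why" is not in any source code;
# the decision lives in learned parameters that an auditor cannot simply read.
# decision = model.predict(janes_features)   # opaque to inspection
```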

Closing Thoughts

It is difficult to estimate just how big an impact AI will have on Canadian government operations and services. Thousands of programs and services delivered through over a hundred federal institutions offer many opportunities and contexts to leverage AI. Throw in the complexity of Shared Services Canada and the planned “Cloud first” philosophy, and the need for an agile, adaptable and comprehensive AI governance framework becomes self-evident.
