The Canadian government’s proposed use of artificial intelligence to assess refugee claims and immigration applications could jeopardize applicants’ human rights, says a group of privacy experts.

The warnings are raised in a new report from the Citizen Lab, a group of technology and privacy-policy researchers at the University of Toronto’s Munk School of Global Affairs, and the University of Toronto Faculty of Law’s International Human Rights Program.

The federal government is already taking steps to make use of artificial intelligence. In May, The Globe and Mail reported that Justice Canada and Immigration, Refugees and Citizenship Canada (IRCC) were piloting an artificial-intelligence program to assist with pre-removal risk assessments and immigration applications on humanitarian grounds.

The report being released Wednesday cautions that experimenting with these technologies in the immigration and refugee system amounts to a “high-risk laboratory,” as many of these applications come from some of the world’s most vulnerable people, including those fleeing persecution and war zones. Artificial intelligence could be used, according to the analysis, to assess an applicant’s risk factor, predict the likelihood that an application is fraudulent, and gauge whether a marriage is genuine or whether children are biological offspring.

Citizen Lab is calling on Ottawa to freeze development of all artificial intelligence-based systems until a government standard and oversight bodies have been established.

Canada processed more than 50,000 refugee claims in 2017, and projects it will admit 310,000 new permanent residents in 2018. Canada also processes millions of work, study, and travel applications each year as part of an array of immigration and visa programs.

“IRCC is committed to implementing this type of technology in a responsible manner and working collaboratively with the Treasury Board of Canada Secretariat,” spokeswoman Nancy Caron said in an e-mailed statement. “IRCC is exploring tools that can support case law and legal research, facilitate trend analysis in litigation, predict litigation outcomes, and help develop legal advice and assessments.”

Ottawa has recognized that government branches will need guidance when undertaking artificial-intelligence projects. The Treasury Board is currently developing a federal standard for automated decision-making systems, and recently published a draft “algorithmic impact assessment,” which will help departments grapple with the expected and unexpected implications of algorithmic systems.

In procurement documents, IRCC stated it hoped the system could help assess the “merits” of an immigrant or refugee’s application before a final decision is rendered.

“AI is not neutral. If the recipe of your algorithm is biased, the results in the decision are also going to be biased,” said Petra Molnar, an immigration lawyer with the International Human Rights Program and co-author of the report.

These biases have sometimes become painfully evident. In 2015, Google’s photo-tagging algorithm suggested the label “gorilla” when presented with an image of a black person. To fix the problem, Google eventually removed “gorilla” from the list of possible labels.

Prejudiced algorithms are making their way into essential civic processes, too: In 2016, the U.S. publication ProPublica revealed that COMPAS, a risk-assessment algorithm used in sentencing and parole determinations in Florida, was biased against black people.

“We’re heartened by the fact that the government is thinking through the use of these technologies,” Ms. Molnar said, “but what we want to do is bring everyone to the table and look at it from a human-rights lens as the starting point.”
