Throughout university, I held down a part-time job as a child-care worker at the YMCA, the same job my mom had when I was growing up. I needed the gig to be able to afford to work unpaid media internships, but I quickly realized that it also bolstered my capacity to make quick, empathetic decisions. And it introduced me to a group of mostly immigrant women who told me stories about their past lives as project managers and head nurses—stories that coloured in the gaps left by my academic education. I’ve never included that position on a job application, though; career counsellors and job-advice sites made it clear that, given the limited real estate on a resumé, the position wouldn’t further my career trajectory.

The selective erasure of that seminal YMCA job never sat well with me. It felt like I was pandering to the slanted views of a human-resources department, and it bothered me that I was reinforcing unspoken assumptions about which experiences and occupations our deeply unequal society considers valuable. But, I reasoned with myself, as the child of immigrants, I owed it to my parents to do everything in my power to land a job that paved a path toward generational wealth. It turns out that I was performing for an altogether different gatekeeper, however—one that is steadily accumulating more power.

Amazon came under fire a few years ago for its resumé-screening software, which was driven by an algorithm that penalized CVs that included the word “women” (as in “women’s basketball team captain” or “women and gender studies”) because it had been trained on 10 years’ worth of mostly male resumés. I was horrified but not surprised. According to Shauna Goldenberg, a human-resources consultant based in Toronto who often advises companies that use software based on artificial intelligence (AI) in their workplaces, resumé-screening algorithms became ubiquitous because of their early promise to shorten the hiring process without bias. But they can easily be coded with information that’s far from neutral, standardizing the ways applicants are further disadvantaged across the intersections of race, gender and class and cementing tacit human prejudices into yet more structures that are hard to discern and even harder to demolish. “When you use technology to streamline the recruiting process, you must acknowledge that the people coding the technology will put their unconscious bias into it,” she explains.
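To make that mechanism concrete, here is a deliberately tiny sketch of my own: a text classifier trained to imitate skewed historical hiring decisions ends up assigning a negative weight to the word “women” on its own. The resumés, labels and library choices below are invented for illustration and have nothing to do with Amazon’s actual system.

```python
# A toy, invented example of how screening bias gets learned.
# The resumés and "hired" labels stand in for years of skewed historical
# decisions; none of this is Amazon's data, model or code.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "chess club captain, java developer",             # historically hired
    "rugby team captain, python developer",           # historically hired
    "debate team captain, java developer",            # historically hired
    "women's chess club captain, java developer",     # historically rejected
    "women's rugby team captain, python developer",   # historically rejected
]
hired = [1, 1, 1, 0, 0]  # past outcomes the model is told to imitate

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Because "women" appears only on rejected resumés, the model assigns it a
# negative weight: the bias was in the labels, and the algorithm encodes it.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print(f"weight for 'women': {weights['women']:.2f}")  # prints a negative number
```

The point of the sketch is that nothing in the code mentions gender; the disparity arrives entirely through the historical outcomes the model is asked to reproduce.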

At its most basic, artificial intelligence refers to computer-processing systems designed to perform the functions of human cognition. And humans, consciously or not, built early AI that contained the most abhorrent parts of our cognition—systems that replicated and formalized the silent mechanisms of systemic oppression that protect those with power by dehumanizing those without; systems like predictive policing (used by police departments in Vancouver, Edmonton, Saskatoon and London, Ont.) that assess who might commit a crime based on automated decision-making; and systems, like those now being used in Canadian immigration and refugee processes, that pose a threat to domestic and international human-rights laws.

As flaws are uncovered, the tech world is enthusiastically seizing upon “ethical AI” as the newest frontier of innovation—a move urged forward by a cultural zeitgeist that demands the rebuke of racism and sexism in all their forms. So the promise of virtuous yet efficient tech has swooped in on a white horse, once again pledging software-based solutions capable of shoring up an ever-shrinking bottom line by streamlining labour-intensive tasks like large-scale recruitment and by-the-minute performance tracking while also, in this iteration, fervently parading better-than-ever inclusivity mandates.

Ethical AI’s possibilities form a labyrinth, and the Canadian tech landscape is uniquely well positioned to navigate it. Since the turn of the millennium, Canada has become an international hub of machine learning, producing the most AI patents per million people among the G7 countries and China, and Toronto ranks ahead of New York in tech talent. In 2017, Canada became the first country in the world to announce a national AI strategy, with the aim of “develop[ing] global thought leadership on the economic, ethical, policy and legal implications of advances in artificial intelligence,” and renowned Canadian AI institutes like Vector, located in the Toronto-Waterloo Innovation Corridor, have devoted substantial research to developing ethical best practices.


Of course, setting the international benchmark for ethical innovation is a boon for national pride, but it’s also especially lucrative territory, considering that employers in Canada’s biggest cities often have to contend with an interlocked set of factors when hiring: highly qualified and diverse applicant pools, a competitive job market that consistently seeks out top-tier talent and a near-universal incentive to build diversity initiatives into a company’s public-facing image. And so across Canada, a growing number of start-ups, many of which are based in the Greater Toronto Area, make a similar promise: to use AI-based tools to remove discriminatory bias from the process of finding, getting and keeping a job.

Fintros, a finance-career discovery platform that scrubs resumés of identifying information to render all applicants anonymous, is one of the Toronto outfits looking to satiate the corporate hunger to lean on Canada’s marketable brand as an inclusive cultural mosaic while neatly maximizing productivity. Another one is Plum, a one-stop-shop skills-based platform that uses organizational psychology (rather than job history) to inform decisions on employee hiring, growth and retention. And Knockri—which to date has raised $3.4 million in funding, has co-published reports with LinkedIn and has a seat at the World Economic Forum’s Global Council on Equality and Inclusion—has been a unique trailblazer in the field, using “evidence-based” machine learning to do away with bias in hiring.
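As a rough illustration of what resumé anonymization can look like under the hood, here is a minimal sketch. The redaction rules, placeholder tokens and sample resumé are my own assumptions, not Fintros’s actual pipeline; real systems handle far more signals and typically rely on trained entity recognizers rather than hand-written patterns.

```python
import re

# Hypothetical redaction rules for illustration only -- production anonymization
# pipelines cover far more signals (photos, addresses, graduation years,
# club and school names) and are built on trained entity recognizers.
IDENTIFYING_PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"), "[EMAIL]"),
    (re.compile(r"\b(?:\+?1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b(he|she|him|her|his|hers)\b", re.IGNORECASE), "[PRONOUN]"),
]

def anonymize(resume_text: str, applicant_name: str) -> str:
    """Strip details a reviewer could use to infer gender, ethnicity or identity."""
    redacted = resume_text.replace(applicant_name, "[CANDIDATE]")
    for pattern, placeholder in IDENTIFYING_PATTERNS:
        redacted = pattern.sub(placeholder, redacted)
    return redacted

sample = "Amina Hassan, amina@example.com, 416-555-0199. She led a team of five baristas."
print(anonymize(sample, "Amina Hassan"))
# -> "[CANDIDATE], [EMAIL], [PHONE]. [PRONOUN] led a team of five baristas."
```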

Inspired by co-founder Jahanzaib Ansari’s observation that an anglicized spelling of his name received a better response when he applied for jobs, Knockri’s leadership team has gone to significant lengths to ensure that they don’t succumb to the pitfalls of other similarly intentioned software. The company uses proprietary data sets (the raw material used to train algorithms) that are inclusive of the full spectrum of cultures, races, genders and accents rather than historical data from scientific studies or census tracts, which can contain bias. Yet COO Maaz Rana believes that there’s still a staggering amount of work to be done. “We talk about all the investment that’s happening in AI within Canada, but prior to doing so, we need to make sure that the foundation on which it’s built is solid,” he explains. “That has yet to be accomplished because there’s no universal standard that companies are expected to follow.”
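One modest example of what checking that “foundation” might involve is a pre-training audit of the historical data itself. The toy records, field names and 80 percent threshold below (borrowed from the four-fifths rule of thumb used in some U.S. hiring audits) are assumptions for illustration, not Knockri’s methodology or any proposed Canadian standard.

```python
from collections import Counter

# Invented toy records standing in for a hiring-history training set.
records = [
    {"group": "A", "hired": 1}, {"group": "A", "hired": 0}, {"group": "A", "hired": 1},
    {"group": "B", "hired": 1}, {"group": "B", "hired": 1}, {"group": "B", "hired": 1},
    {"group": "C", "hired": 0}, {"group": "C", "hired": 1}, {"group": "C", "hired": 0},
]

counts = Counter(r["group"] for r in records)
hire_rate = {g: sum(r["hired"] for r in records if r["group"] == g) / counts[g]
             for g in counts}

# Flag any group whose historical hire rate falls below 80% of the best-off
# group's rate -- one crude signal that a model trained on this data would
# inherit the disparity rather than correct it.
best_rate = max(hire_rate.values())
for group in sorted(hire_rate):
    rate = hire_rate[group]
    status = "FLAG" if rate < 0.8 * best_rate else "ok"
    print(f"group {group}: n={counts[group]}, hire rate={rate:.2f} -> {status}")
```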

A few months ago, I applied for a job at Amazon, and my resumé—scrubbed of my child-care job but including a mention of an internship at a feminist and anti-racist publishing house—made it through to the interview stage. The mega-corp scrapped its biased resumé-screening platform when the tool came under fire a few years ago, citing its ethical shortcomings, but since most companies keep the black boxes of their algorithms tightly under wraps, it’s hard to say how much has changed since then. What is clear, however, is that the work required to enact a national framework for ethical AI built in good faith remains overwhelming.

The meaningful application of ethical AI is still in its infancy; right now, it consists largely of a crop of experimental software, extensive speculative research, a disparate set of national guidelines and vague platitudes from tech giants like Microsoft and Google. Even so, it has been touted as having the capacity to drive legitimate transformative change in the workplace. But therein lies the risk: Declaring that a foundational problem is solved without any probing inquiry quickly shifts resources elsewhere and, in the process, neatly conceals the oppression that remains.

In 2019, the Ontario Human Rights Commission released a report acknowledging the need for more research into the potential impacts of replacing human judgment with crime-prediction AI, especially when policing Black and Indigenous communities. In July 2020, just weeks after the police killing of George Floyd sparked international protests and demands to defund the police, the controversial facial-recognition company Clearview AI stopped offering its services in Canada, where they had been used by a number of law-enforcement agencies, including the RCMP, after the Office of the Privacy Commissioner of Canada opened an investigation. The U.S.-based company, which became a “viral hit” with law-enforcement agencies in just a few years, had come under fire for populating its database with billions of unregulated images scraped from social media in order to help identify suspects and victims—a practice that also poses a particular risk to darker-skinned people, since facial-recognition software has a proven history of misidentifying them.

Safiya Umoja Noble, co-director of the UCLA Center for Critical Internet Inquiry and author of Algorithms of Oppression, is one of the researchers on the front lines who are revealing the violent repercussions of unaudited AI. She sounded the alarm in 2010 about a fundamental flaw in Google’s algorithm that produced racist and pornographic results when the terms “Black girls” and “Black women” were fed into its search engine. Today, she has a disconcerting question about the rapid, unregulated deployment of predictive analytics in every sector of the economy: Who, exactly, is leading the charge?

“There’s now mainstream public understanding that these technologies can be harmful,” explains Noble. “But the resources for researching and studying ethics have gone right back to the original epicentres that sold us the bill of goods. Similar to when big tobacco funded all of its own favourite researchers, that’s kind of what big tech is doing.”

It’s a pivot that has resulted in companies like Google and Facebook—deflecting attention from the role their loosely controlled algorithms played in the outcome of the 2016 U.S. election and Brexit—repositioning themselves as cutting-edge thought leaders. In 2014, Google acquired DeepMind, a renowned AI company with a laser focus on research in ethics, and since then has created a sleek blog that touts the company’s social-good initiatives, like a collaboration with the LGBTQ+ organization The Trevor Project that’s intended to build a virtual counsellor-training program. “It’s like ethics has become an industry,” adds Noble.

Last March, the federal government attempted to curb the unbridled, potentially adverse use of AI by announcing a directive that sought to hold AI-driven decision-making to some degree of “transparency, accountability, legality and procedural fairness.” However, innovation often quickly outpaces the development of legislation, and the implementation of these regulations by governing bodies remains spotty at best.

The reality, though, is that ethical AI will never be anything other than a buzzword until it’s capable of moving beyond the perception that only some workers are worthy of its benefits. Often, people in low-status, low-wage positions, like migrant farm workers and essential-care staff, are left out of the conversation—which means that, once again, women and people of colour are being disproportionately silenced. “We need to be very mindful of the types of voices that aren’t being heard—and [those] that are being catered to,” says Rana.

Last year, the Canadian Agri-Food Automation and Intelligence Network announced a $108.5-million project that promised to create a network of private partners that would use Canada’s strengths in AI to “change the face of agriculture.” While the decision to digitize gave lip service to potentially improving working conditions, in practice, it has resulted in some of southern Ontario’s migrant farm workers being subjected to performance tracking through smartwatches and fingerprinting (a harrowing reality when coupled with the insufficient safety protocols that led to outbreaks of COVID-19 at a number of farms in Leamington, Ont.). “The increased use of automation will have negative consequences—from wage theft to heightened surveillance at work and at home—on the predominantly racialized labour force in the agricultural industry,” explains Chris Ramsaroop, one of the founding members of Justice for Migrant Workers. “While the industry will claim that AI and automation is being implemented to enhance productivity and improve efficiency, from our perspective, it’s based on exerting further control on workers.”

It’s no longer possible to believe that software created in the spirit of techno-optimism can promote social good through its mere existence. Rather, we must place those lofty expectations on the gatekeepers of AI—the people at the top who know there is more on the line than access to jobs that grant upward social mobility. “It’s about abolishing harmful digital systems that are fundamentally exploitative, by virtue of their existence, and dangerous to vulnerable people who are already oppressed,” says Noble. Technology on its own will never birth radical innovation. Change can only be delivered when living, breathing people imagine a new way forward, when we learn to scrutinize the ways we interact with these coded expressions of power and when we begin to demand transparency and accountability of the AI we allow into our lives. And perhaps when we can reclaim some agency by reinserting line items—barista, child-care worker, cashier—into our resumés, we can begin the process of rewriting the artificial definition of valuable work experience and unlearning the insidious prejudices that have long plagued our inartificial human experience.
