Canada’s House of Commons Standing Committee on Industry and Technology is currently inviting expert witnesses to testify on Bill C-27, which includes the Artificial Intelligence & Data Act (AIDA).
Drawing on AIGS Canada’s white paper and initial recommendations on C-27, L’Allié made a moving case for why AI legislation like C-27 is needed, and how it can be improved to meet the needs of Canadians.
The full translated hearing and testimony can be viewed on ParlVu (starting at 11:20:35).
L’Allié’s opening statement (English version) can be viewed on YouTube.
Full text of the opening statement:
“Mr. Chair, members of the committee, thank you for the honour of your invitation.
AI Governance and Safety Canada is a multi-stakeholder non-profit organization and a community of people across the country. Our starting point is the question: “What can we do in Canada, and from Canada, to ensure that the future created by AI is beneficial?”
In November we submitted a brief with detailed recommendations for the Artificial Intelligence and Data Act, and we are currently preparing a second brief in response to the Minister’s amendments.
Witnesses at previous sessions have already discussed the risks posed by current systems, so I will focus my remarks today on the safety and economic challenges of the AI to come, the time constraints we face in preparing for it, and what all of this means for AIDA.
Let me start by stating the obvious: with human intelligence staying roughly the same, and AI getting better by the day, it is only a matter of time before AI outperforms us in all domains. This includes domains like reasoning, caring for people, and navigating real-world complexity, where we currently hold a clear advantage. Building this level of AI is the explicit goal of frontier labs like OpenAI, Google DeepMind, and more recently, Meta.
The first major implication of smarter-than-human AI is for public safety, due to the weaponisation and control problems.
The weaponisation problem is straightforward. If a human being can design or use weapons of mass destruction, then a smarter-than-human AI system can too. This means that in the hands of the wrong people, smarter-than-human AI systems could be used for unprecedented harm.
The control problem comes from the fact that a system that is smarter than us is by definition one that can outcompete us. This means that if an advanced AI system, through accident or poor design, starts to interpret human beings as a threat and take actions against us, we will not be able to stop it. Moreover, there is a growing body of evidence, backed by research at the world’s top AI labs, suggesting that without proper safety precautions, AI systems above a certain threshold of intelligence may behave adversarially by default. This is why hundreds of leading AI experts signed a statement last year saying that mitigating the risk of extinction from AI should be a global priority.
The second major implication is for labour. As AI approaches the point where it can do everything we can, only better, including designing robots that can outperform us physically, our labour will become less and less useful. The economic pressures are such that a company that doesn’t eventually replace its CEO, board and employees with smarter-than-human AI systems and robotics will likely be a company that loses out to others that do. If we don’t manage these developments wisely, increasing numbers of people will get left behind. I want to be clear, however, that AI is also a very positive force. The world we create with advanced AI could be a far more peaceful, prosperous and equitable one than we currently have. It’s just that, as discussed so far, AI – and in particular smarter-than-human AI – represents a tsunami of change, and there’s a lot we need to get right.
How much time do we have? The reality is we’re already late in the game. Even the rudimentary AI that we have today is causing issues with everything from biased employment decisions, to enabling cybercrime, to spreading misinformation. But the greatest risks come from AI that is reliably smarter than us, and that AI could be coming soon. Many leading experts expect human-level AI in as little as 2 to 5 years, and the engineers at the frontier labs that we’ve talked to say there’s even a 5-10% chance of it being built in 2024. While accurate predictions about the future are impossible, the trends are clear enough that a responsible government needs to be ready.
So what can we do? In our white paper, Governing AI: A plan for Canada, we outline five categories of action needed from government, including establishing a central AI agency, investing in AI governance and safety research, championing global talks, and launching a national conversation on AI. Legislative action is the fifth, and essential, pillar.
The main reasons Canada needs the AI & Data Act are:
- To limit current and future harms by banning or regulating high risk use cases and capabilities
- To create a culture of ethics, safety, and accountability in the public and private sectors that can scale up as AI technology advances
- To provide government with the capacity, agility, and oversight to adequately protect Canadians and respond to developments in the field as they arise
The Minister’s amendments are a good step in the right direction, and I’d be happy to provide feedback on them.
To conclude, while the challenges we face with AI are daunting and the timelines to address them very tight, constructive action to govern the risks and harness the opportunities is possible, and bills like C-27 are an essential piece of the puzzle.
As the wheels of history turn around us, one thing is clear: success on this global issue will require every country to step up to the challenge – and Canada’s part is on us.
Thank you. Merci. I would be pleased to answer your questions.”