AIGS’s rebuttal to The Logic’s article “Doom Inc.”


AI Governance & Safety Canada, a cross-partisan not-for-profit and community of people across the country working to improve AI governance, was recently profiled in a front-page article in The Logic questioning those working on AI safety.

AIGS and its founder Wyatt Tessari L’Allié came out relatively unscathed, and we were happy to see the Oct 21st AI Ethics and Safety rally on Parliament Hill get covered. However, the article does a disservice to the growing concerns about AI safety by insinuating that those who care about the issue are malevolently motivated. In particular, it takes aim at the Effective Altruism movement and the affiliated grantmaker Open Philanthropy, which has been a major source of funding for important AI governance and technical safety research.

These are our counterpoints:

  1. The article doesn’t address the AI risks fuelling the safety concerns. Leading scientists such as Yoshua Bengio and Stuart Russell have gone to great lengths to explain the weaponisation and control problems of advanced AI systems. AIGS has too. In his interview for the article, L’Allié repeatedly pointed to these technical concerns as the reason for founding AIGS. Attacking the charities that fund AI safety efforts without mentioning the stated reasons they’re doing so is deeply misleading.

  2. Funding to boost AI capabilities is 100x greater than funding to make it safe. If The Logic really cared about following the money, it would look at the hundreds of billions of dollars being invested to make AI more capable, not the tiny portion trying to make it safer for humanity.

  3. Effective Altruism is not the reason people worry about AI. EA is a movement dedicated to doing as much good as it can in the world, and it attracts many caring, thoughtful people. EAs work on pressing issues such as global poverty, pandemic preparedness, and animal welfare. They have taken a particular interest in AI safety because it’s a high-impact but relatively neglected cause area. Due to these shared interests, AIGS’s founders have often interacted with Effective Altruists, but EA does not control or represent AIGS or the broader AI safety movement. If EA had never existed, AI safety would still be a growing global concern.

  4. The article misses the growing global consensus about AI risks. The article makes it appear that only a narrow group of people care about AI safety. It fails to mention that hundreds of leading AI experts signed the CAIS statement on extinction risk, that 28 nations including Canada signed a declaration recognising potential catastrophic risks from AI, and that in the US 83% of people believe AI could accidentally cause a catastrophic event.

  5. Finally, caring about AI safety doesn’t make you a ‘doomer’. AIGS’s mission isn’t to make you fear AI, but to make Canada a leader in its responsible governance and safety. If we thought we were doomed, we’d simply give up – not advocate for better policy.

We thank you for reading our rebuttal, and hope that these points help correct the misleading narrative crafted in the article. We invite everyone to research the concerns about AI safety and join the movement calling for action to reduce the risks.


The AIGS team

