AIGS’s rebuttal to The Logic’s article “Doom Inc.”


AI Governance & Safety Canada, a cross-partisan not-for-profit and community of people across the country working to improve AI governance, was recently profiled in a front-page article in The Logic questioning those working on AI safety.

AIGS and its founder Wyatt Tessari L’Allié came out relatively unscathed, and we were happy to see the Oct 21st AI Ethics and Safety rally on Parliament Hill get covered. However, the article does a disservice to the growing concerns about AI safety by insinuating that those who care about the issue are malevolently motivated. In particular, it takes aim at the Effective Altruism movement and its associated grantmaker Open Philanthropy, which has been a major source of funding for important AI governance and technical safety research.

These are our counterpoints:

  1. The article doesn’t address the AI risks fuelling the safety concerns. Leading scientists such as Yoshua Bengio and Stuart Russell have gone to great lengths to explain the weaponisation and control problems of advanced AI systems. AIGS has too. In his interview for the article, L’Allié repeatedly pointed to these technical concerns as the reason for founding AIGS. Attacking the charities that fund AI safety efforts without mentioning the stated reasons they’re doing so is deeply misleading.

  2. Funding to boost AI capabilities is 100x greater than funding to make it safe. If The Logic really cared about following the money, it would look at the hundreds of billions of dollars being invested to make AI more capable, not the tiny portion trying to make it safer for humanity.

  3. Effective Altruism is not the reason people worry about AI. EA is a movement dedicated to doing as much good as it can in the world, and it attracts many caring, thoughtful people. EAs work on pressing issues such as global poverty, pandemic preparedness, and animal welfare. They have taken a particular interest in AI safety because it’s a high-impact but relatively neglected cause area. Due to these shared interests, AIGS’s founders have interacted with Effective Altruists (as disclosed in full below), but EA does not control or represent AIGS or the broader AI safety movement. If EA never existed, AI safety would still be a growing global concern.

  4. The article misses the growing global consensus about AI risks. The article makes it appear that only a narrow group of people care about AI safety, and fails to point out that hundreds of leading AI experts signed the CAIS statement on extinction risk, that 28 nations including Canada signed a statement recognising potential catastrophic risks from AI, or that 83% of people in the US believe AI could accidentally cause a catastrophic event.

  5. Finally, caring about AI safety doesn’t make you a ‘doomer’. AIGS’s mission isn’t to make you fear AI, but to make Canada a leader in its responsible governance and safety. If we thought we were doomed, we’d simply give up – not advocate for better policy.

We thank you for reading our rebuttal, and hope that these points will correct the misleading narrative crafted in the article. We invite everyone to research the concerns about AI safety and join the movement calling for action to reduce the risks.


The AIGS team

Media Contact:

General inquiries:


AIGS’s connection to the Effective Altruism movement:

After campaigning for action on climate from 2010-2015, AIGS founder Wyatt Tessari L’Allié took four years to research 21st-century trends and came to his own conclusions about the world’s most pressing issues, which included AI safety. When he later met some EAs in Toronto, he found that they mostly agreed on the issues, and he has been friends with EAs at various points since. In 2022, when AI safety wasn’t on any Canadian charity’s radar, he applied for a grant from the EA Long Term Future Fund to cover his salary and expenses for a year so that he could begin organising the AI safety movement in Canada. A top-up was received in Feb 2023 to pay for Mario Gibney (AIGS co-founder) to help him. AIGS Canada was founded in April 2023 and both grants expired in May.

L’Allié and Gibney are grateful for their initial EA funding, but decided not to pursue further EA funding for AIGS when they realised that AIGS would need to do direct advocacy with the Canadian government (EA Funds are global in scope but based in the US). The Canadian government’s role in shaping positive AI outcomes should be determined by Canadians, so since its founding in April, AIGS has been proudly 100% funded by small and medium donations from individual Canadians concerned about AI safety.

If you are a Canadian citizen or resident, we invite you to support our work to make AI safer.
