New to the idea of advanced AI risk? Here are some beginner-friendly explanations:

AI experts are increasingly afraid of what they’re creating - Vox (written article, estimated reading time 18 minutes)

“For a long time, AI safety faced the difficulty of being a research field about a far-off problem, which is why only a small number of researchers were even trying to figure out how to make it safe. Now, it has the opposite problem: The challenge is here, and it’s just not clear if we’ll solve it in time.”

An executive primer on artificial general intelligence - McKinsey & Company (written article, estimated reading time 15 minutes)

“Even a small probability of achieving AGI in the next decade justifies paying attention to developments in the field, given the potentially dramatic inflection point that AGI could bring about in society.”

The AI Revolution: The Road to Superintelligence - Wait But Why (written article, estimated reading time 35 minutes)

“There are three reasons a lot of people are confused about the term AI:

  • We associate AI with movies.
  • AI is a broad topic.
  • We use AI all the time in our daily lives, but we often don’t realize it’s AI.

So let’s clear things up.”

Artificial Intelligence - Our World in Data (resource page, including several medium-length articles and graphs)

“How exactly such powerful AI systems are built and used will be very important for the future of our world, and our own lives. All technologies have both positive and negative consequences, but with AI the range of these consequences is extraordinarily large: the technology has an immense potential for good, but also comes with large downsides and high risks.”

AI Ethics & AI Safety

AI Ethics and AI Safety are two interconnected aspects of the responsible development and deployment of AI systems.

AI Ethics focuses on the moral principles and guidelines that govern the design, implementation, and impact of AI on society, addressing issues such as fairness, transparency, accountability, and privacy. AI Safety, on the other hand, involves the technical aspects of developing AI systems that are robust, reliable, and secure, aiming to prevent unintended consequences and harmful behaviour.

While these two domains may seem distinct, efforts in AI Ethics and AI Safety can mutually reinforce one another. By ensuring that AI systems align with human values, ethical considerations help guide the development of safe AI systems that can be trusted and understood by users. Meanwhile, advances in AI Safety can help avoid accidents and misuse that would violate basic human rights. In summary, addressing AI Ethics and AI Safety in tandem creates a strong foundation for the responsible development and deployment of AI technologies.

Deeper Dive: The Development of Advanced AI

By Anson Ho

The past, present, and future of AI

Developments in artificial intelligence (AI) have been truly astonishing. In 2012, the field of deep learning was just beginning to take off, and AI systems struggled with fairly simple tasks like object recognition. The story today is dramatically different: AI systems can outperform most high-school students on subjects ranging from biology to economics, have made state-of-the-art progress on protein folding, are wildly superhuman at games like Chess and Go, and are poised to impact large fractions of the economy.

Progress over the last decade has been extremely rapid, and there are reasons to expect these trends to continue. All three main ingredients of AI progress (compute, data, and algorithms) have been growing exponentially and may continue to do so for some time. For instance, between 2010 and 2022, the total amount of computation used to train notable AI systems grew by a factor of around 100 million, several orders of magnitude more than Moore’s Law alone would account for. Global investment in AI is rising significantly, and empirical evidence from deep learning does not show signs of diminishing returns to scale.
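
To see how stark this difference is, here is a rough back-of-the-envelope comparison (a sketch assuming Moore’s Law corresponds to a doubling roughly every two years):

\[
\text{Moore's Law, 2010--2022:} \quad 2^{12/2} = 64 \approx 10^{1.8}
\]
\[
\text{Training compute, 2010--2022:} \quad \sim 10^{8}, \qquad \text{implied doubling time} \approx \frac{12 \text{ years}}{\log_2 10^{8}} \approx 0.45 \text{ years}
\]

In other words, training compute has doubled roughly every five to six months, versus roughly every two years under Moore’s Law, a gap of more than six orders of magnitude over the twelve-year period.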

While the future is difficult to predict, our best economic models and forecasts suggest that AI will have a tremendous impact on society over the coming decades.

Large language models like GPT-4 may lead to the development of AI systems more capable than humans on a wide range of tasks, creating significant disruption to the job market. If not developed safely, these systems could lead to catastrophic risks, such as the use of AI in autonomous weapons and warfare, the misuse of AI systems to develop weapons or control high-stakes infrastructure, and the loss of control of AI systems, which could threaten humanity itself.

Risks from advanced AI

The potential for rapid progress and massive impact raises serious concerns, ranging from drastic increases in inequality to the use of highly advanced AI in war. At AIGS Canada, we are interested in three main categories of risk from advanced AI systems, each of which could cause significant harm at both the national and global scale.

Misuse risk: AI systems are quickly becoming both more powerful and easier to use. If developed safely, this can bring many benefits to society, but a failure to do so could massively increase the destructive capacity of malicious actors. As a real-life example, an AI system was used to design candidate chemical weapons predicted to be more toxic than known chemical warfare agents. While this was a proof-of-concept performed by well-meaning scientists, we should expect more such risks to emerge in the coming years as AI systems become more widely accessible and integrated into society. Autonomous weapons have already been used on the battlefield, and AI systems may be used to target political opponents or to write code for cyberattacks.

Misalignment risk: The risks from AI, unfortunately, go far beyond those posed by malicious actors. There are serious challenges in ensuring that AI systems are aligned with human values and do what their designers want them to do. For instance, AI systems may only demonstrate good behaviour while they are closely overseen during training, and this behaviour may fail to generalize once they are deployed in the real world. This may allow unaligned AI systems to overcome attempted safety precautions, an extremely serious issue if highly intelligent and autonomous AI systems are deployed in the real world or in high-stakes scenarios.

Structural risk: Misuse and misalignment risks exist within a broader ecosystem of risk. For instance, competitive pressures between actors building advanced AI systems could lead to cutting corners on safety, perhaps through weakened safety measures and a heightened risk of malicious use. On top of this, the widespread deployment of AI systems could drastically amplify inequality, and misaligned systems could lead to a drift in societal values that threatens existing institutions. Systems today can already outperform many humans on a wide range of tasks, and they are getting better by the minute. Are our institutions and society really able to adapt fast enough to these changes?

These risks can unfold and impact society extremely quickly. ChatGPT, for instance, reached 100 million monthly active users within two months of launch, making it the fastest-growing consumer application to date. AI systems are likely to become increasingly integrated into society, and if we do not take action today, or if we do not take the right action, this could lead to major problems down the line.

The urgent need for action

Concern about the impacts of AI is growing, and not just about minor risks. For instance, according to a survey of 738 researchers at NeurIPS and ICML, two top machine learning conferences, 48% of respondents believed that there was at least a 10% chance of an “extremely bad outcome (e.g., human extinction)” due to AI. At the same time, 69% of respondents believed that AI safety should be prioritized “more” or “much more” than it currently is.

These concerns are not confined to the field of machine learning, either: a recent Monmouth University survey of the US public found that 56% of respondents were worried that artificially intelligent machines would hurt humans’ overall quality of life, and 55% were at least somewhat worried that AI could one day pose a risk to the human race’s existence.

Recently, an open letter by the Future of Life Institute calling for a pause on large-scale AI experiments attracted over 25,000 signatories, including top AI researchers like Yoshua Bengio and Stuart Russell.

What should we do to tackle these issues? We need to adopt strategies that can cope with the pace of change, as well as technical and social interventions that help us with the challenges of today, and have the ability to scale to future systems. For example, this could include regulating organizations to ensure that they carefully audit their systems prior to deployment, or investing in technical AI safety research (such as to make systems more human-interpretable). Work in this area has yet to mature, and ideas and discussion are constantly evolving – what happens over the coming months and years may therefore be pivotal in shaping the future of AI, Canada, and the world.