“We’re really hoping to get that conversation started as to what are the ground rules around using AI, how do we make sure they’re used for things we want and not for others, and help society catch up.” - AIGS Canada Founder and Executive Director Wyatt Tessari L’Allié
AIGS Canada Founder and Executive Director Wyatt Tessari L’Allié appeared on The Kelly Cutrara Show on Global News Radio AM640 to discuss current developments in artificial intelligence, including an AI application that can detect whether employees calling in sick are actually sick.
Full Interview Transcript:
Host: AI might mess with your plans, because there’s a group of scientists, I believe in India right now, working on AI that would detect if you were sick or not. Here to talk about it is Wyatt Tessari, Co-Founder of AIGS Canada, an organization dedicated to AI governance and safety. Welcome to the show, good to have you on, Wyatt.
Wyatt: Thanks so much, thanks for having me.
Host: So talk about this, this was originally (cough) - go figure, my voice is legitimately cracking - this was legitimately put together to find out if you are sick, so you don’t have to go into doctors’ offices, right? But businesses might use it in another way. Explain how the AI would work.
Wyatt: Basically, what they’re doing is trying to recognize the patterns of someone who is sick. They had the system listen to hundreds of people, some who were sick and some who weren’t, and trained the AI to recognize: OK, this person is sick, this one isn’t. It’s still very much experimental. But it does speak to the bigger picture: these systems are getting better all the time, and these are the kinds of things you can do with AI.
Host: I was thinking about it, you know, one of the things that I heard is that, as part of the study, you have to read an Aesop fable. And so your employer, if you wanted to use this for work, would have to have your voice recorded in normal times reading that fable, and then when you called into work wanting to take a sick day, you’d have to re-read it.
Wyatt: Exactly. This is why it’s nowhere near ready for prime time. There are a lot of steps to go, but voice recognition has been getting really good, really fast. And we’re already seeing, for example with music, that you can write some lyrics and then have an AI-generated voice of your favourite artist sing them, as if they were singing them for real. Voice recognition and voice imitation technology is getting really, really good. In the particular case of this sick-note situation, it’s probably not ready yet. But the thing about AI, and about technology in general, is that it’s getting better and better every day. And what comes next can be a lot more capable than what we currently have.
Host: Is AI hackable at all? I mean, how advanced is the system when it comes to hackers?
Wyatt: This is a really interesting point because AI systems can be used both ways. For example, if you were a cunning employee you could train an AI system to speak like you and use that AI system to answer the call for the test. So you could try to fool the other AI by using AI.
Host: So it would be a battle of the AIs? Whoever’s savvier.
Wyatt: Exactly, yes. AI can be used for so many things, and it’s really one small aspect of the bigger picture, which is: what happens to sick days, and how do you manage that? What we’re concerned about on our end is, what kind of world do we want to create with these technologies? Because obviously it’s going to impact not just sick days but also the job situation, the democracy piece and the misinformation piece - there are a bunch of ways in which these systems can be used for good and bad, and because they’re being developed so fast, we’re really not ready for what’s to come in the coming years. We’re really hoping to get that conversation started: what are the ground rules around using AI, how do we make sure these systems are used for the things we want and not for others, and how do we help society catch up and preempt what’s coming?
Host: I was talking to Carmi Levy, who’s a tech expert, earlier on in the week about someone asking AI to basically end humanity, and to get working on that problem. And it was asking other AI programs to help it. Didn’t get too far, because we’re still here, but I asked if maybe we should think about bringing in the Three Laws of Robotics from Isaac Asimov. They are: a robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey orders given to it by human beings except where such orders would conflict with the First Law; and a robot must protect its own existence as long as such protection does not conflict with the First or Second Law. Is that being built into AI systems, and should it have to be, by law?
Wyatt: I think it’s one classic example of the things you want AI systems to be aware of and act on. The challenge right now is that the more advanced AI systems really are black boxes. What they did, for example, in the case of the famous ChatGPT, is basically train it on all the data on the internet and ask it to predict the next word. So it has some type of internal map of the world via what it’s learned from the internet, but we don’t know what it’s learned, and we don’t know how it’s thinking. It can write some pretty impressive essays and do a bunch of fancy stuff, but we don’t know exactly how it’s doing it. So for one, we do need to make sure we have rules - that we know what rules we want it to follow. But the second piece is the technical safety piece: how do we make sure these systems understand those rules the way we want them to, and aren’t going to find really smart ways to get around them?
Host: Alright, well, a lot of questions left unanswered but that’s coming. Wyatt, thank you very much for joining us, really appreciate your time today.
Wyatt: Thank you, I appreciate it.
Host: Have a great day. Wyatt Tessari is Co-Founder of AIGS Canada, an organization dedicated to AI governance and safety.