Watch: An interview with Janelle Shane, AI Weirdness

25 October 2023
  • Software Engineering

Janelle Shane is a Research Scientist and the creative mind behind AI Weirdness, a blog that uses humour to delve into the stranger side of AI.

We invited Janelle to talk at our latest Distinguished Speaker Series event, held in Dallas on 17 October, where she spoke about what AI can do and what it can't.

In keeping with her blog, which uses accessible and humorous examples of the issues AI can run into, watch Janelle's talk showcasing several of AI's shortcomings.

We also took the opportunity to talk to Janelle one-to-one, to learn more about her background, the inspiration behind her blog, and why humour is so key to the way she communicates about AI.

The Distinguished Speaker Series

Throughout the year, we host a number of speakers as part of G-Research’s Distinguished Speaker Series.

We pride ourselves on our learning environment, which gives people the opportunity to develop personally and professionally within their roles, and our Distinguished Speaker Series is central to that.

We invite global experts in their fields to discuss their cutting-edge work with an audience of G-Research employees and guests, giving attendees the chance to learn from the best.

An Interview with Janelle Shane

I am Janelle Shane and I run an AI humour blog called AI Weirdness. What I find interesting about AI is how the answers you get back from it are sometimes things that no human would have thought of or written, things that are very surprising, things that make you stop and think about what you just asked for and what you just got back.

I'm going to be talking about what AI can and cannot do, and how we can approach using AI in a productive way to solve a problem while avoiding some of the pitfalls of giving AI a problem that's just too broad for it to handle.

The AI Weirdness blog was originally a blog I set up for sharing research results, but then I saw an experiment where somebody had posted a bunch of AI-generated recipes. They had taken a very early text-generating algorithm, given it a bunch of cookbook recipes as examples, and had it try to generate more of them. The recipes it produced asked for things like folding water and rolling it into cubes; one recipe asked for shredded bourbon. By the time I got to the end of reading all of these, I could barely see the screen, I was laughing so hard, and so I said, well, I need to learn how to generate more of these.

What I think is important to say about AI is to have a clear picture of the kinds of things it does that maybe don't make it into the press releases and maybe don't make it into the marketing material. These are the cases in which it will fall down, or do something weird, or do something unexpected. When we're talking about putting AI into a bunch of different applications, some of which are very important, even life and death, I think it's important that we, collectively, the people who are building, using and buying these algorithms, know what we can expect from them, and know where the marketing material ends and where the reality begins. And I think an important way to do that is through stories, through examples, through intuitive things that stick in your mind and say, aha, this was like the time the algorithm deleted the list of numbers rather than sorting it, so that there would be no sorting errors left.

The most important thing I want people to take from my talk is that today's AI is a narrow artificial intelligence: it does best on a very narrow problem, and a lot of the problems we give it are too broad for it, and that's when we start to see mistakes. So the narrower the problem, the better the results we're going to get.
