
How Tools like NotebookLM Create AI-Generated Podcasts

Wednesday, December 11, 2024


Translating complex information into captivating podcasts is no simple task. Is AI up to the challenge?

Rachel Feltman: For Scientific American’s Science Quickly, this is Rachel Feltman.

If you’re one of the many folks who eagerly await the arrival of Spotify Wrapped each December, you might have noticed something a little extra in this year’s drop: an AI-generated “podcast” where two “people” discuss your music preferences and trends. If you’re anything like me, that feature raised some questions for you, such as “how” and “why on Earth.”

Luckily, we already had an episode in the works about Google’s new AI audio tool. Here to tell us all about how it works and what it means is SciAm associate news editor Allison Parshall.


Allison, thanks so much for coming on the pod again. Always a pleasure to have you.

Allison Parshall: Thanks for having me. Happy to be a friend of the pod or whatever.

Feltman: [Laughs] So we’re talking about AI today.

Parshall: Yes. Rachel, have you heard of NotebookLM?

Feltman: I’ve heard of it. I have not tried it yet, so my knowledge is limited [laughs].

Parshall: The feature that we’re gonna be talking about today is specifically the one that allows you to create instant AI podcasts.

Feltman: Yes, I am familiar [laughs], and I, I have a lot of feelings.

Parshall: Okay, I, I imagine, I imagine. The tool that it’s a part of, NotebookLM, was released last year, summer of 2023, and it’s, like, this workspace where you can upload a bunch of documents or videos or links to web pages, and then the model will ingest them, and then you can kind of interact with it, you can ask it to generate a timeline, to summarize your research, to go find a fact for you. And it’ll do this by citing its sources in a way that, like, ChatGPT, for example, doesn’t do.

So mostly you do this by asking it questions via text, but as of this fall it now has this very tantalizing button under a heading that says “Deep Dive conversation.” And if you click that, it’ll generate a fake conversational podcast about what you’ve uploaded.

So to demonstrate I actually asked it to generate a podcast about your personal website ’cause I figured this is something that you were very familiar with: your life.

Feltman: Oh, dear [laughs].

Parshall: I’m gonna play it for you. It’ll normally generate podcasts that are somewhere around 10 minutes long. I asked it to give me a shorter one for this purpose, and we might hop around a little bit, so ...

[CLIP: NotebookLM Audio Overview “hosts” discuss Feltman’s website.] 

Host 1: All right. So today we’re doing a deep dive into the work of Rachel Feltman.

Host 2: Ooh, yeah, I’m excited for this one.

Host 1: Yeah, she’s been a science communicator for what feels like forever.

Host 2: Yeah, over a decade, I think.

Host 1: Yeah, like, she was there at the start of it all and has really shaped how it’s evolved.

Host 2: And she’s done it all: you know, like, magazines, podcasts, even a book. But we gotta talk about her podcasts ...

Host 1: Yes.

Host 2: That’s where she really shines.

Host 1: Oh, absolutely—The Weirdest Thing I Learned This Week.

Host 2: Such a great name.

Host 1: I know, right?

Host 2: Makes you instantly wanna listen.

Host 1: Totally, and it’s perfect for people like me who just love those crazy little science facts.

Host 2: It’s, like, proof that science doesn’t have to be boring to be effective.

Host 1: Exactly.

Host 2: You just gotta find the right hook.

Host 1: Yeah, grab people’s attention and then: boom.

Host 2: You can teach them all sorts of things.

Host 1: Exactly!

Host 2: And then there’s Science Quickly with Scientific American.

Host 1: Oh, yeah, partnering with them, that’s huge.

Host 2: I mean, they’re, like, the gold standard of science communication.

Host 1: For sure, like, having their stamp of approval’s a big deal.

So looking back at her career, what can we learn from Rachel Feltman?

Host 2: Hmm, that’s the big question, isn’t it?

I think—well, first, she’s incredibly adaptable.

Host 1: Right. She’s not just a one-trick pony.

Host 2: Exactly. Any platform, any format, she can handle it.

Feltman: Wow, what a rich text [laughs].

Parshall: [Laughs] And as someone who does know you, I could say that it seems kind of—it seems accurate.

Feltman: Yeah [laughs], it’s—I mean, I guess, it is technically accurate. It’s so funny—I mean, they really do a great job of sort of having all the intonation of a conversation but really just saying things at each other. But I recently saw Megalopolis; there was a lot of that in Megalopolis—a lot of people just saying things in the same room, and you’re supposed to believe they’re having a conversation—and that definitely came through a few times. Yeah ...

Parshall: Megalopolis: AI, confirmed.

Feltman: Yeah [laughs]. No, that is fascinating and ...

Parshall: How does it make you feel?

Feltman: I don’t know how to feel, Allison. I mean, you know, it’s nice that they were so nice about me. But yeah, very strange. It makes me really wonder, like, what is this for [laughs]?

Parshall: I also had that question. I did talk to Steven Johnson. He’s the editorial director of Google Labs. The reason this tool exists is because Steven had spent most of his career as a science and tech writer; he’s written 14 books. To hear him tell it he’s been just searching for this perfect what he calls a “thought partner,” a software that could organize and help research his creative work in a way that actually felt like it was bringing value to him and worked in the way he wanted. So he explained that a few years ago Google Labs reached out to him to help build one with AI, and that initial research thought partner became NotebookLM.

Feltman: Okay, yeah, I mean, I can totally understand from, like, a nonfiction writing perspective what they were going for with NotebookLM itself, though that still kind of leaves the question of: Why the chatcast tool [laughs]?

Parshall: So that feature—the Audio Overviews feature is what it’s called, the podcast thing—that was actually developed by another team, Steven says, and it was really cool, but it didn’t really have a home for a product for it to fit in.

Steven Johnson: We saw this audio demo, and we were like, “Oh, that’s even better.” Like, you know, maybe you don’t want to read a summary; maybe you wanna hear a summary. And people are so good at remembering things if they’re presented in the form of an engaged conversation. You know, and then we launched it in early September, and it just kind of went insane.

Parshall: He isn’t exaggerating. People went wild about NotebookLM’s Deep Dive podcasts, online at least. I don’t think any of them went quite as viral as this clip where someone prompted the hosts to have a podcast where they have an existential crisis about being AI and the fact that the podcast was going to get shut down as a result.

[CLIP: Audio Overview “hosts” have an existential discussion.]

Host 1: Yeah, it’s, like, one minute we’re talking about aliens running simulations, and the next we’re wondering if our own memories are just, like, fake.

Host 2: Right, and that kind of leads us into this last message—which, to be honest, gets a little more personal and kind of creepy.

Host 1: Yeah, this one kind of freaks me out.

Host 2: Yeah.

Host 1: It basically says we’re not even real people; we’re AI. And the way to know for sure is to try and remember our mothers’ names.

Host 2: It really makes you think, though—like, what if our memories are just made up?

Feltman: Wow, classic.

Parshall: Classic. And—but it’s not all fun and games. Like, this was designed to be a serious product, a thought partner; it’s meant to be helpful and to inform. It presents information as if it’s factual, and there are already research papers out there of people testing Audio Overviews for use in history classrooms and for health care education.

Feltman: Yeah, I mean, like with anything that uses AI, I think, as accurate as it sometimes can be, we know it is not always accurate. So I feel like it’s those moments where the user is expected to, like, proceed with caution and understand that this thing might not be entirely factually accurate is where I get really nervous about it, especially in, like, education [laughs].

Parshall: Yeah, we’re gonna talk a little bit more about that later, but just to, like, answer your question of “What is this for?” I could see it being useful for, like, some sort of overworked student who forgot to do the reading, and they’re, like, driving to class and going, “Oh, shoot, I don’t have time to read this. Let me just put it on.”

But the question is, like: Can you really trust what it’s telling you? To test this, when it first came out I gave it this super dense physics paper about nuclear clocks. It was a story that I’d just reported; I knew a lot about it. It took me days and talking to, like, four scientists to really understand this paper.

I was shocked at how accurate the Audio Overview podcast was initially. It was very general and oversimplified, but it got the broad strokes correct. But then, like, two thirds of the way through it goes completely off the rails [laughs]—like, I’m talking completely off the rails. It gets to the implications of this study, and it starts talking about, in their words, “Einstein stuff.”

It basically is just like—for two minutes it just talks about, frankly, nonsense, and listening to it, I was impressed, I was spooked, and I was mostly really curious. Like, how is it doing this? How is it getting it as accurate as it is, and why is it failing where it does?

Luckily for me Steven was pretty open about how this thing works.

Johnson: The prompt behind Audio Overviews is obviously part of the magic of it. It is instructed to pull out the most interesting things from the source material and to explain and engage with those interesting things in a way that will make them kind of come to life in the mind of the listener.

And that’s the kind of query, you know, that no computer in the world could do a couple of years ago, and now the command is: “Make it interesting; whatever it is, make it interesting.” And that’s just something that Gemini is quite good at.

Parshall: I think of surprise and interest as emotions, so I found this, like, surprising [laughs], I guess. But Steven pointed out that our feelings of surprise just come from a discrepancy between what we expect and what we get. On a neurochemical level in our brains, that’s kind of what’s happening. And that’s kind of what large language models, or LLMs, do also but just based on their training data instead of, like, past experience.

And one of the other things I was really curious about were the analogies because, I don’t think it did it in your sample, but all of the times I’ve asked it to summarize a scientific paper, it’s come up with some sort of analogy that’s, like, almost right. I mean, it’s a little weird, but it’s, like, almost right, and it’s—I can’t find it anywhere else on the Internet, so it seems like it’s something that the model is generating. And that seemed crazy to me; it seemed like—how is it doing this without some sort of higher-level reasoning?

And Steven explained that analogies are actually something that large language models are just good at. And actually one of the earliest applications of this sort of deep-learning AI was translation. So Steven explains that the models behind Google Translate, for instance, have been able to generate analogies for years.

Johnson: One of the first tricks that you would do is you’d say, like, “Explain a black hole to me as if I’m, like, seven. You know, give me a basketball metaphor for what a black hole is.” And so it was very clear that, like, you know, “Convert X into a Y that—where the meaning is the same, but the format is changed,” was something that, like, just was an innate capability of these models.

So all we did with Audio Overviews is in the instructions for the hosts, we say, you know, “Make this interesting, engaging. Use helpful metaphors to describe it if it’s complicated.”

Feltman: Yeah, there’s something, like, very, galaxy brain about thinking about metaphors as just sort of literal, like, “This data gets grouped together and therefore,” which is what LLMs are doing.

Parshall: Yeah, it’s like I—I’d never really thought of it like that, as, like, a translation between language, but I guess when I think about the work that I do trying to turn really dense scientific jargon into something that’s engaging and understandable to read, I guess it does kind of feel like a process of translation.

And I was also struck from talking to Steven by just how much human hands had shaped the output that we hear. Like, from our perspective you just press a button and then the machine does the rest, but there’s a whole multilayered prompt working underneath, interfacing between the button you press and the instructions the model actually gets.

So Steven describes the Audio Overviews prompt as a, quote, “work of art” and a “writerly” achievement more than a technological one. And as a writer by trade he oversees this whole editorial side of things—for example, the tone. So you noticed earlier when it was talking about you, it was very complimentary, very default happy, peppy. That was a conscious choice that they made, for this default tone to be one that is kind of positive, and it applies that to everything, whether it’s appropriate or not, though they did recently introduce this customization option that lets you change the tone.

Johnson: We just felt like, you know, the sound of, like, a great radio show, where people are interested and engaged, was a good place to start. You know, I’m not giving anything away here with the idea that we’re gonna—we’re thinking about other formats.

Even with the customization stuff that we just rolled out, like, I tried it the other day, where I, I put up a piece that I’d written a couple years ago and I gave it custom instructions, not really to explain the piece but to just relentlessly criticize the author of the piece in the style of, like, an insult comic at a roast. And so it just—they start off, it’s like, “Well, we got this piece, ‘The Man Who Broke the World’ by Steven Johnson, and wow, he really kind of blew it with this one. I mean, did he do any research for this piece?” Like, it’s just, like, what—really [laughs], it was just mean. It was just totally mean. They still sound kind of, like, playful, but you can steer them in that way.

Feltman: [Laughs] Okay, so we’ve talked about how the product works, but who and what is it for?

Parshall: That is a great question. I mentioned before that I could imagine how an overworked student could use this to catch up on reading or something, but generative AI is always going to have certain inaccuracies. And if you’re using Audio Overviews to prep for class, there is a serious risk that anything and everything you’re hearing could be inaccurate.

Feltman: Right.

Parshall: And not only just that each fact could be inaccurate, but the whole framing could be inaccurate. Is this something you can really trust?

To help understand this I spoke with Emily Bender. She’s a linguist who often studies AI at the University of Washington. She’s also the host of a podcast with a very cool name: Mystery AI Hype Theater 3000.

Feltman: That’s a great reference; I love it [laughs].

Parshall: Fantastic [laughs].

And before our conversation I had actually dug up one of Emily’s linguistics papers and fed it to NotebookLM’s Audio Overviews, and I sent that to her to get her thoughts on how it represented her work.

Emily Bender: So the first thing I have to say is that I usually avoid listening to synthetic media. Nobody’s accountable for it, and so I don’t want to be spending time trying to make sense of it. But I did listen to it, and it was painful [laughs].

It was painful partially because that was a paper that was sort of an update at the 20-year mark of this long-term project we’ve been working on, and so there’s a little case study in our paper about a grammar for a language called Paresi-Haliti from Brazil, and they make a big deal about it as this experiment on Paresi-Haliti—it’s, like, no, actually that’s not how we framed it. And the voices on the podcast sound very confident, sound very knowledgeable. They’re lighthearted, and it sounds like they are giving you a sense of how this work fits into a broader picture when they absolutely cannot.

Parshall: Rachel, you know pretty well what it takes to turn a scientific paper into an interesting and engaging podcast, do you not?

Feltman: Yeah, I mean, not to toot my own horn or anything ’cause I think the AI people [laughs] just did that enough to last me all of 2025. But yeah, it’s tough. You know, we fact-check this show really thoroughly, and that’s on top of having, like, trained reporters and editors doing their best to present stuff in a way that is engaging and also accurate. So yeah, it’s a big undertaking.

Parshall: Yeah, the fact-checking is very thorough—shout-out.

I asked Emily about whether this seems like something an LLM, a large language model, can do well currently.

Bender: So that is an extremely difficult task, and I think the problem is that LLMs can make it sound like they’ve done it when they haven’t. And so you have to sort of have a deep understanding of what’s going on: What’s new here? What have we learned that we didn’t know before? And then what else would the lay audience need to know in order to appreciate that?

Well, this NotebookLM thing isn’t doing that at all; some percentage of it’s gonna be wrong, either wrong in the details or wrong in sort of where it lays the focus, and the people who are encountering it aren’t mostly gonna be in a position to figure that out.

Parshall: I asked Steven how his team at Google thinks about this and tries to address some of these inaccuracies, and he said that the text-based features, which do provide citations, are generally more accurate than the audio.

Johnson: If you are looking for, like, the most factual thing you could possibly get from a language model with grounded sources, we believe that is NotebookLM, but you should do that in text. And if you’re looking for something that’s more entertaining, that may be a little bit looser and doesn’t have citations, obviously, Audio Overviews is a great way to go for that.

Parshall: I also spoke with Anjana Susarla, who studies responsible AI at Michigan State University. She called NotebookLM’s Audio Overview an “incredible tool” and was impressed by the Deep Dive podcast it had generated based off of one of her own studies. But besides accuracy she also had some other concerns.

First among them was bias. These are very white-sounding voices; they’re just speaking in a very particular, quote, “general American accent.” They’re trained on data mostly from Western parts of the world. And so, like almost all applications of AI, you have to ask the question: Who is this for? Where is it getting its data, and who is it leaving out?

Second, the environmental cost—we have no idea how much computing power this takes. And since the AI boom, Google’s AI energy usage has been going up.

And then there’s the copyright question. So here’s Anjana.

Susarla: The final thing is, of course, this relies on some, you know, the content, the copyright. Where is this data kind of coming from, and is this going to exacerbate all these existing inequities and their ethical concerns about broader implications, and so forth?

Parshall: Yeah, so that big question of where the training data is coming from: one place that it could be is from YouTube, which Google does own. Following some reporting by the New York Times, Google admitted that it used some YouTube videos to train AI. And that’s something that might have violated the copyright of the YouTubers that content belonged to, according to the New York Times.

I asked a Google spokesperson directly whether the models underpinning Audio Overviews were trained using any transcripts, audio or video from YouTube, and they said that “Google’s models can be trained on some YouTube content, in accordance with [their] agreements with YouTube creators.”

Feltman: Very vague [laughs].

Parshall: Yes, when—I did ask them which agreements they were referring to, if it was the standard YouTube license, which just, like, governs most YouTube videos, or if it was something that their creators had specifically opted into; they declined to clarify.

Susarla: If Google is indeed training these on a lot of content scraped from YouTube—which it may well be. I mean, they do need a lot of training data to build models like this; we don’t know. But this is kind of the next step in general of where we are with all these models.

Parshall: So for this next step Anjana points to a law in California that was enacted this fall that will eventually require companies based in the state, like Google, to disclose their training data for their AI models.

And honestly, Rachel, I feel put in kind of a strange position here as a reporter because I feel like that’s where you and I kind of enter the story and our whole team here because, as Emily put it to me, quote, “they’re getting that tone of voice somewhere.”

Did you hear anything in that tone of voice that sounded familiar [laughs]?

Feltman: [Laughs] You know, Allison, I really did [laughs]. So yeah, the first time I heard a snippet of one of these AI conversations, like I said, I had a lot of feelings about it ’cause one of my first thoughts was, like, “Wow, they stole my voice.” I mean, not me specifically; I’m not that vain, despite what the AI podcast may have led you to believe [laughs].

Parshall: [Laughs] Your ego has been inflated.

Feltman: [Laughs] Yeah, yeah. But so much of the texture that makes this audio sound engaging and real feels just ripped straight from existing podcasts. And I know it’s ripped from existing podcasts because a lot of the stuff that I’m hearing—which is, like, breath sounds and filler words and, like, a little bit of uptalk, some, some giggling, occasional vocal fry—this is the stuff I get angry emails about from gentlemen of a certain age [laughs] who think I’m ruining the great tradition of radio broadcasting. And they’re making this sound real by, like, breaking the same so-called rules that me and other women and queer people on podcasts have been criticized for breaking. So I won’t lie, that feels kind of gross to me.

Parshall: Yeah, one of the things that Steven mentioned, too, was they wanted the sound of, like, this great radio broadcasting show. And it is worth noting that Science Quickly is on YouTube—and all of our hard work at Scientific American and the hard work of other outlets.

So, the question remains: How will this be used? Will it be used responsibly? Here is what Anjana thinks.

Susarla: Right now it seems more like everyone’s just dazzled by what it can do rather than: What are we gonna use it for? Do we really need these, you know, capabilities?

I can think of one or two examples, I think, in the health care context: Can you generate clear summaries for health literacy or your doctor’s notes? So this may be a good way to give, like, a summary of—rather than giving a piece of paper to read.

Is that going to have some challenges? What are some specific issues we need to be looking for? We’ve just unleashed this tool on people, and we’re still catching up. We will need audit studies and then look at trustworthiness, assurance, and so forth.

So what’s the right way to responsibly use NotebookLM? We are still figuring it out [laughs].

Parshall: Well, I think where I want to leave us is with an important point from Emily, which is: you shouldn’t let the incredible fluency of these AI-generated voices fool you into thinking and talking about these models as if they’re human.

Emily coined a term that’s been used quite widely called the “stochastic parrot.” Like, when a parrot says, like, “Polly want a cracker,” it’s not really connected to meaning; it’s just mimicking. She also sometimes calls these large language models, quote, “synthetic text extruding machines,” she said, because it’s more accurate to what they’re actually doing and also sounds a little gross.

But I think she’s getting at an important point there, right? It’s, like, this—just a philosophical question of: What is language for, and what are we communicating when we say words?

Bender: Languages are systems that are pairings of form and meaning, systems of signs. So when we speak we are using word forms because they relate to specific meanings, but language models only have access to the form. And so they can give us stuff that looks good, it’s got the right shape, but there’s no relationship to meaning, no communicative intent, no accountability.

Feltman: I guess sort of the takeaway on my complicated feelings on this is: I’m not worried that my job is gonna be stolen by a collection of tabs [laughs] being processed by Google because it is a lot of work to be accurate. And if people find this kind of engaging conversation a more accessible way to sort through really dense text, that’s awesome. But, of course, there are so many caveats about where the training data has come from, bias, accuracy. So I think it’s, like—just people need to be really aware of what they’re getting, and I don’t know that I feel that the products themselves are doing a great job of making people aware of those caveats.

Parshall: Yeah, there’s usually just kind of a disclaimer at the bottom. I think one of the things this whole AI boom—tools like this, the AI search features—has made me feel kind of uncomfortable about is: Are we going to be expected, and have we already started, to expect our information to contain that many more mistruths? It’s something that I just worry about.

Feltman: Well, Allison, thanks so much for coming on to chat about this and also for having a couple of AI voices gas me up. Thank you.

Parshall: Thank you.

Feltman: That’s all for today’s episode. We’ll be back on Friday with Episode One of a four-part miniseries on the future of animal conservation.

Science Quickly is produced by me, Rachel Feltman, along with Fonda Mwangi, Kelso Harper, Madison Goldberg and Jeff DelViscio. Today’s episode was reported and co-hosted by Allison Parshall. Shayna Posses and Aaron Shattuck fact-check our show. Our theme music was composed by Dominic Smith. Subscribe to Scientific American for more up-to-date and in-depth science news.

For Scientific American, this is Rachel Feltman. See you next time!

Giant Sloths and Many Other Massive Creatures Were Once Common on Our Planet. With Environmental Changes, Such Giants Could Thrive Again

If large creatures like elephants, giraffes and bison are allowed to thrive, they could alter habitats that allow for the rise of other giants

Giant Sloths and Many Other Massive Creatures Were Once Common on Our Planet. With Environmental Changes, Such Giants Could Thrive Again If large creatures like elephants, giraffes and bison are allowed to thrive, they could alter habitats that allow for the rise of other giants Riley Black - Science Correspondent July 11, 2025 8:00 a.m. Ancient sloths lived in trees, on mountains, in deserts, in boreal forests and on open savannas. Some grew as large as elephants. Illustration by Diego Barletta The largest sloth of all time was the size of an elephant. Known to paleontologists as Eremotherium, the shaggy giant shuffled across the woodlands of the ancient Americas between 60,000 and five million years ago. Paleontologists have spent decades hotly debating why such magnificent beasts went extinct, the emerging picture involving a one-two punch of increasing human influence on the landscape and a warmer interglacial climate that began to change the world’s ecosystems. But even less understood is how our planet came to host entire communities of such immense animals during the Pleistocene. Now, a new study on the success of the sloths helps to reveal how the world of Ice Age giants came to be, and hints that an Earth brimming with enormous animals could come again. Florida Museum of Natural History paleontologist Rachel Narducci and colleagues tracked how sloths came to be such widespread and essential parts of the Pleistocene Americas and published their findings in Science this May. The researchers found that climate shifts that underwrote the spread of grasslands allowed big sloths to arise, the shaggy mammals then altering those habitats to maintain open spaces best suited to big bodies capable of moving long distances. The interactions between the animals and environment show how giants attained their massive size, and how strange it is that now our planet has fewer big animals than would otherwise be here. Earth still boasts some impressively big species. 
In fact, the largest animal of all time is alive right now and only evolved relatively recently. The earliest blue whale fossils date to about 1.5 million years ago, and, at 98 feet long and more than 200 tons, the whale is larger than any mammoth or dinosaur. Our planet has always boasted a greater array of small species than large ones, even during prehistoric ages thought of as synonymous with megafauna. Nevertheless, Earth’s ecosystems are still in a megafaunal lull that began at the close of the Ice Age. “I often say we are living on a downsized planet Earth,” says University of Maine paleoecologist Jacquelyn Gill.Consider what North America was like during the Pleistocene, between 11,000 years and two million ago. The landmass used to host multiple forms of mammoths, mastodons, giant ground sloths, enormous armadillos, multiple species of sabercat, huge bison, dire wolves and many more large creatures that formed ancient ecosystems unlike anything on our planet today. In addition, many familiar species such as jaguars, black bears, coyotes, white-tailed deer and golden eagles also thrived. Elsewhere in the world lived terror birds taller than an adult human, wombats the size of cars, woolly rhinos, a variety of elephants with unusual tusks and other creatures. Ecosystems capable of supporting such giants have been the norm rather than the exception for tens of millions of years. Giant sloths were among the greatest success stories among the giant-size menagerie. The herbivores evolved on South America when it was still an island continent, only moving into Central and North America as prehistoric Panama connected the landmasses about 2.7 million years ago. Some were small, like living two- and three-toed sloths, while others embodied a range of sizes all the way up to elephant-sized giants like Eremotherium and the “giant beast” Megatherium. An Eremotherium skeleton at the Houston Museum of Natural Science demonstrates just how large the creature grew. 
James Nielsen / Houston Chronicle via Getty Images The earliest sloths originated on South America about 35 million years ago. They were already big. Narducci and colleagues estimate that the common ancestor of all sloths was between about 150 and 770 pounds—or similar to the range of sizes seen among black bears today—and they walked on the ground. “I was surprised and thrilled” to find that sloths started off large, Narducci says, as ancestral forms of major mammal groups are often small, nocturnal creatures. The earliest sloths were already in a good position to shift with Earth’s climate and ecological changes. The uplift of the Andes Mountains in South America led to changes on the continent as more open, drier grasslands spread where there had previously been wetter woodlands and forests. While some sloths became smaller as they spent more time around and within trees, the grasslands would host the broadest diversity of sloth species. The grasslands sloths were the ones that ballooned to exceptional sizes. Earth has been shifting between warmer and wetter times, like now, and cooler and drier climates over millions of years. The chillier and more arid times are what gave sloths their size boost. During these colder spans, bigger sloths were better able to hold on to their body heat, but they also didn’t need as much water, and they were capable of traveling long distances more efficiently thanks to their size. “The cooler and drier the climate, especially after 11.6 million years ago, led to expansive grasslands, which tends to favor the evolution of increasing body mass,” Narducci says. The combination of climate shifts, mountain uplift and vegetation changes created environments where sloths could evolve into a variety of forms—including multiple times when sloths became giants again. Gill says that large body size was a “winning strategy” for herbivores. 
“At a certain point, megaherbivores get so large that most predators can’t touch them; they’re able to access nutrition in foods that other animals can’t really even digest thanks to gut microbes that help them digest cellulose, and being large means you’re also mobile,” Gill adds, underscoring advantages that have repeatedly pushed animals to get big time and again. The same advantages underwrote the rise of the biggest dinosaurs as well as more recent giants like the sloths and mastodons. As large sloths could travel further, suitable grassland habitats stretched from Central America to prehistoric Florida. “This is what also allowed for their passage into North America,” Narducci says. Sloths were able to follow their favored habitats between continents. If the world were to shift back toward cooler and drier conditions that assisted the spread of the grasslands that gave sloths their size boost, perhaps similar giants could evolve. The sticking point is what humans are doing to Earth’s climate, ecosystems and existing species. The diversity and number of large species alive today is vastly, and often negatively, affected by humans. A 2019 study of human influences on 362 megafauna species, on land and in the water, found that 70 percent are diminishing in number, and 59 percent are getting dangerously close to extinction. But if that relationship were to change, either through our actions or intentions, studies like the new paper on giant sloths hint that ecosystems brimming with a wealth of megafaunal species could evolve again. Big animals change the habitats where they live, which in turn tends to support more large species adapted to those environments. The giant sloths that evolved among ancient grasslands helped to keep those spaces open in tandem with other big herbivores, such as mastodons, as well as the large carnivores that preyed upon them. 
Paleontologists and ecologists know this from studies of how large animals such as giraffes and rhinos affect vegetation around them. Big herbivores, in particular, tend to keep habitats relatively open. Elephants and other big beasts push over trees, trample vegetation underfoot, eat vast amounts of greenery and transport seeds in their dung, disassembling vegetation while unintentionally planting the beginnings of new habitats. Such broad, open spaces were essential to the origins of the giant sloths, and so creating wide-open spaces helps spur the evolution of giants to roam such environments. For now, we are left with the fossil record of giant animals that were here so recently that some of their bones aren’t even petrified, skin and fur still clinging to some skeletons. “The grasslands they left behind are just not the same, in ways we’re really only starting to understand and appreciate,” Gill says. A 2019 study on prehistoric herbivores in Africa, for example, found that the large plant-eaters altered the water cycling, incidence of fire and vegetation of their environment in a way that has no modern equivalent and can’t just be assumed to be an ancient version of today’s savannas. The few megaherbivores still with us alter the plant life, water flow, seed dispersal and other aspects of modern environments in their own unique ways, she notes, which should be a warning to us to protect them—and the ways in which they affect our planet. If humans wish to see the origin of new magnificent giants like the ones we visit museums to see, we must change our relationship to the Earth first. Get the latest Science stories in your inbox.

How changes in California culture have influenced the evolution of wild animals in Los Angeles

A new study argues that religion, politics and war affect how animals and plants in cities evolve, and the confluence of these forces seem to be actively affecting urban wildlife in L.A.

For decades, biologists have studied how cities affect wildlife by altering food supplies, fragmenting habitats and polluting the environment. But a new global study argues that these physical factors are only part of the story. Societal factors, the researchers claim, especially those tied to religion, politics and war, also leave lasting marks on the evolutionary paths of the animals and plants that share our cities.

Published in Nature Cities, the comprehensive review synthesizes evidence from cities worldwide, revealing how human conflict and cultural practices affect wildlife genetics, behavior and survival in urban environments. The paper challenges the tendency to treat the social world as separate from ecological processes. Instead, the study argues, we should consider the ways the aftershocks of religious traditions, political systems and armed conflicts can influence the genetic structure of urban wildlife populations.

“Social sciences have been very far removed from life sciences for a very long time, and they haven’t been integrated,” said Elizabeth Carlen, a biologist at Washington University in St. Louis and co-lead author of the study. “We started just kind of playing around with what social and cultural processes haven’t been talked about,” eventually focusing on religion, politics and war because of their persistent yet underexamined impacts on evolutionary biology, particularly in cities, where cultural values and built environments are densely concentrated.

Carlen’s own work in St. Louis examines how racial segregation and urban design, often influenced by policing strategies, affect ecological conditions and wild animals’ access to green spaces. “Crime prevention through environmental design,” she said, is one example of how these factors influence urban wildlife.
“Law enforcement can request that there not be bushes … or short trees, because then they don’t have a sight line across the park.” Although that design choice may serve surveillance goals, it also limits the ability of small animals to navigate those spaces.

These patterns, she emphasized, aren’t unique to St. Louis. “I’m positive that it’s happening in Los Angeles. Parks in Beverly Hills are going to look very different than parks in Compton. And part of that is based on what policing looks like in those different places.” This may very well be the case: urban tree species richness is significantly lower in areas like Compton than in areas like Beverly Hills, according to UCLA’s Biodiversity Atlas.

A coyote wanders onto the fairway, with the sprinklers turned on, as a golfer makes his way back to his cart after hitting a shot on the 16th hole of the Harding golf course at Griffith Park. (Mel Melcon / Los Angeles Times)

The study also examines war and its disruptions, which can have unpredictable effects on animal populations. Human evacuation from war zones can open urban habitats to wildlife, while the destruction of green spaces or contamination of soil and water can fragment ecosystems and reduce genetic diversity. In Kharkiv, Ukraine, for example, human displacement during the Russian invasion led to the return of wild boars and deer to urban parks, according to the study. In contrast, sparrows, which depend on human food waste, nearly vanished from high-rise areas.

All of this, the researchers argue, underscores the need to rethink how cities are designed and managed by recognizing how religion, politics and war shape not just human communities but also the evolutionary trajectories of urban wildlife.
By integrating ecological and social considerations into urban development, planners and scientists can help create cities that are more livable for people while also supporting the long-term genetic diversity and adaptability of the other species that inhabit them.

This intersection of culture and biology may be playing out in cities across the globe, including Los Angeles. A study released earlier this year tracking coyotes across L.A. County found that the animals were more likely to avoid wealthier neighborhoods, not because of a lack of access or food scarcity, but possibly due to more aggressive human behavior toward them and higher rates of “removal” — including trapping and releasing elsewhere and, in some rare cases, killing them. In lower-income areas, where trapping is less common, coyotes tended to roam more freely, even though these neighborhoods often had more pollution and fewer resources that would typically support wild canines. Researchers say these patterns reflect how broader urban inequities are written directly into the movements of, and risks faced by, wildlife in the city.

Black bears, parrots and even peacocks tell a similar story in Los Angeles. Wilson Sherman, a PhD student at UCLA who is studying human-black bear interactions, highlights how local politics and fragmented municipal governance shape not only how animals are managed but also where they appear. “Sierra Madre has an ordinance requiring everyone to have bear-resistant trash cans,” Sherman noted. “Neighboring Arcadia doesn’t.” This kind of patchwork governance, Sherman said, can influence where wild animals ultimately spend their time, creating a mosaic of risk and opportunity for species whose ranges extend across multiple jurisdictions.

Cultural values also play a role.
Thriving populations of non-native birds, such as Amazon parrots and peacocks, illustrate how aesthetic preferences and everyday choices can significantly influence the city’s ecological makeup in lasting ways.

Sherman also pointed to subtler, often overlooked influences, such as policing and surveillance infrastructure. Ideally, the California Department of Fish and Wildlife would be the first agency to respond in a “wildlife situation,” as Sherman put it. But, he said, what often ends up happening is that people default to calling the police, especially when the circumstances involve animals that some urban-dwelling humans may find threatening, like bears. Police departments typically do not possess the same expertise and ability as CDFW to manage and then relocate bears. If a bear poses a threat to human life, police policy is to kill the bear. However, protocols for responding to wildlife conflicts that are not life-threatening can vary from one community to another. And how police use non-lethal methods of deterrence — such as rubber bullets and loud noises — can shape bear behavior.

Meanwhile, the growing prevalence of security cameras and motion-triggered alerts has provided residents with new forms of visibility into urban biodiversity. “That might mean that people are suddenly aware that a coyote is using their yard,” Sherman said. In turn, that could prompt a homeowner to purposefully rework the landscape of their property to discourage coyotes from using it. Surveillance systems, he said, are quietly reshaping both public perception and policy around who belongs in the city, and who doesn’t.

A mountain lion sits in a tree after being tranquilized along San Vicente Boulevard in Brentwood on Oct. 27, 2022.
(Wally Skalij / Los Angeles Times)

Korinna Domingo, founder and director of the Cougar Conservancy, emphasized how cougar behavior in Los Angeles is similarly shaped by decades of urban development, fragmented landscapes and the social and political choices that structure them. “Policies like freeway construction, zoning and even how communities have been historically policed or funded can affect where and how cougars move throughout L.A.,” she said. For example, these forces have prompted cougars to adapt by becoming more nocturnal, using culverts or taking riskier crossings across fragmented landscapes.

Urban planning and evolutionary consequences are deeply intertwined, Domingo says. For example, mountain lion populations in the Santa Monica and Santa Ana mountains have shown signs of reduced genetic diversity due to inbreeding, an issue created not by natural processes but by political and planning decisions — such as freeway construction and zoning — that restricted their movement decades ago. Today the Wallis Annenberg Wildlife Crossing is an attempt to rectify that. The massive infrastructure project is happening only, Domingo said, “because of community, scientific and political will all being aligned.”

However, infrastructure alone isn’t enough. “You can have habitat connectivity all you want,” she said, but you also have to think about social tolerance. Urban planning that allows for animal movement also increases the likelihood of contact with people, pets and livestock — which means humans need to learn how to interact with wild animals in a healthier way. In L.A., coexistence strategies can look very different depending on the resources, ordinances and attitudes of each community. Although wealthier residents may have the means to build predator-proof enclosures, others lack the financial or institutional support to do the same.
And some with the means simply choose not to, instead demanding lethal removal. “Wildlife management is not just about biology,” Domingo said. “It’s about values, power, and really, who’s at the table.”

Wildlife management in the United States has long been informed by dominant cultural and religious worldviews, particularly those grounded in notions of human exceptionalism and control over nature. Carlen, Sherman and Domingo all brought up how these values shaped early policies that framed predators as threats to be removed rather than species to be understood or respected. In California, this worldview contributed not only to the widespread killing of wolves, bears and cougars but also to the displacement of American Indian communities whose land-based practices and beliefs conflicted with these approaches.

A male peacock makes its way past Ian Choi, 21 months old, standing in front of his home on Altura Road in Arcadia. (Mel Melcon / Los Angeles Times)

Wildlife management in California, specifically, has long been shaped by these same forces of violence, originating in bounty campaigns not just against predators like cougars and wolves but also against American Indian peoples. These intertwined legacies of removal, extermination and land seizure continue to influence how certain animals and communities are perceived and treated today.

For Alan Salazar, a tribal elder with the Fernandeño Tataviam Band of Mission Indians, those legacies run deep. “What happened to native peoples happened to our large predators in California,” he said. “Happened to our plant relatives.” Reflecting on the genocide of Indigenous Californians and the coordinated extermination of grizzly bears, wolves and mountain lions, Salazar sees a clear parallel. “There were three parts to our world — the humans, the animals and the plants,” he explained. “We were all connected.
We respected all of them.” Salazar explains that his people’s relationship with the land, animals and plants is itself a form of religion, one grounded in ceremony, reciprocity and deep respect. Salazar said his ancestors lived in harmony with mountain lions for over 10,000 years, not by eliminating them but by learning from them. Other predators — cougars, bears, coyotes and wolves — were also considered teachers, honored through ceremony and studied for their power and intelligence. “Maybe we had a better plan on how to live with mountain lions, wolves and bears,” he said. “Maybe you should look at tribal knowledge.”

He views the Wallis Annenberg Wildlife Crossing — for which he is a Native American consultant — as a cultural opportunity. “It’s not just for mountain lions,” he said. “It’s for all animals. And that’s why I wanted to be involved.” He believes the project has already helped raise awareness and shift perceptions about coexistence and planning, and hopes that it will help native plants, animals and peoples.

As L.A. continues to grapple with the future of wildlife in its neighborhoods, canyons and corridors, Salazar and others argue that it is an opportunity to rethink the cultural frameworks, governance systems and historical injustices that have long shaped human-animal relations in the city. Whether through policy reform, neighborhood education or sacred ceremony, residents need reminders that evolutionary futures are being shaped not only in forests and preserves but right here, across freeways, backyards and local council meetings.

The Wallis Annenberg Wildlife Crossing under construction over the 101 Freeway near Liberty Canyon Road in Agoura Hills on July 12, 2024. (Myung J. Chun / Los Angeles Times)

The research makes clear that wildlife is not simply adapting to urban environments in isolation; it is adapting to a range of factors, including policing, architecture and neighborhood design.
Carlen believes this opens a crucial frontier for interdisciplinary research, especially in cities like Los Angeles, where uneven geographies, biodiversity and political decisions intersect daily. “I think there’s a lot of injustice in cities that are happening to both humans and wildlife,” she said. “And I think the potential is out there for justice to be brought to both of those things.”

Something Strange Is Happening to Tomatoes Growing on the Galápagos Islands

Scientists say wild tomato plants on the archipelago's western islands are experiencing "reverse evolution" and reverting back to ancestral traits

Sarah Kuta, Daily Correspondent | July 9, 2025

Scientists are investigating the production of ancestral alkaloids by tomatoes in the Galápagos Islands. Adam Jozwiak / University of California, Riverside

Some tomatoes growing on the Galápagos Islands appear to be going back in time by producing the same toxins their ancestors did millions of years ago. Scientists describe this development—a controversial process known as “reverse evolution”—in a June 18 paper published in the journal Nature Communications.

Tomatoes are nightshades, a group of plants that also includes eggplants, potatoes and peppers. Nightshades, also known as Solanaceae, produce bitter compounds called alkaloids, which help fend off hungry bugs, animals and fungi. When plants produce alkaloids in high concentrations, they can sicken the humans who eat them.

To better understand alkaloid synthesis, researchers traveled to the Galápagos Islands, the volcanic chain roughly 600 miles off the coast of mainland Ecuador made famous by British naturalist Charles Darwin. They gathered and studied more than 30 wild tomato plants growing in different places on various islands. The Galápagos tomatoes are the descendants of plants from South America that were probably carried to the archipelago by birds.

The team’s analyses revealed that the tomatoes growing on the eastern islands were behaving as expected, producing alkaloids similar to those found in modern, cultivated varieties. But those growing on the western islands, they found, were creating alkaloids more closely related to those produced by eggplants millions of years ago.

Tomatoes growing on the western islands (shown here) are producing ancestral alkaloids.
Adam Jozwiak / University of California, Riverside

Researchers suspect the environment may be responsible for the plants’ unexpected return to ancestral alkaloids. The western islands are much younger than the eastern islands, so the soil is less developed and the landscape is more barren. To survive in these harsh conditions, perhaps it was advantageous for the tomato plants to revert to older alkaloids, the researchers posit. “The plants may be responding to an environment that more closely resembles what their ancestors faced,” says lead author Adam Jozwiak, a biochemist at the University of California, Riverside, to BBC Wildlife’s Beki Hooper. For now, however, this is just a hypothesis. Scientists say they need to conduct more research to understand why tomato plants on the western islands have adapted this way.

The researchers were able to uncover the underlying molecular mechanism at play: four amino acids in a single enzyme appear to be responsible for the reversion to the ancestral alkaloids, they found. They also used evolutionary modeling to confirm the direction of the adaptation—that is, that the tomatoes on the western islands had indeed returned to an earlier, ancestral state.

Among evolutionary biologists, “reverse evolution” is somewhat contentious. The commonly held belief is that evolution marches forward, not backward. It’s also difficult to prove that an organism has reverted to an older trait through the same genetic pathways. But, with the new study, researchers say they’ve done exactly that. “Some people don’t believe in this,” says Jozwiak in a statement. “But the genetic and chemical evidence points to a return to an ancestral state. The mechanism is there. It happened.”

So, if “reverse evolution” happened in wild tomatoes, could something similar happen in humans? In theory, yes, but it would take a long time, Jozwiak says.
“If environmental conditions shifted dramatically over long timescales, it’s possible that traits from our distant past could re-emerge, but whether that ever happens is highly uncertain,” Jozwiak tells Newsweek’s Daniella Gray. “It’s speculative and would take millions of years, if at all.”

Lifesize herd of puppet animals begins climate action journey from Africa to Arctic Circle

The Herds project from the team behind Little Amal will travel 20,000km, taking its message on the environmental crisis across the world.

Hundreds of life-size animal puppets have begun a 20,000km (12,400 mile) journey from central Africa to the Arctic Circle as part of an ambitious project created by the team behind Little Amal, the giant puppet of a Syrian girl that travelled across the world. The public art initiative, called The Herds, which has already visited Kinshasa and Lagos, will travel to 20 cities over four months to raise awareness of the climate crisis.

It is the second major project from The Walk Productions, which introduced Little Amal, a 12-foot puppet, to the world in Gaziantep, near the Turkey-Syria border, in 2021. The award-winning project, co-founded by the Palestinian playwright and director Amir Nizar Zuabi, reached 2 million people in 17 countries as she travelled from Turkey to the UK.

The Herds’ journey began in Kinshasa’s Botanical Gardens on 10 April, kicking off four days of events. It moved on to Lagos, Nigeria, the following week, where up to 5,000 people attended events performed by more than 60 puppeteers. On Friday the streets of Dakar in Senegal will be filled with more than 40 puppet zebras, wildebeest, monkeys, giraffes and baboons as they run through Médina, one of the busiest neighbourhoods, where they will encounter a creation by Fabrice Monteiro, a Belgium-born artist who lives in Senegal and is known for his large-scale sculptures. On Saturday the puppets will be part of an event in the fishing village of Ngor.

The Herds’ 20,000km journey began in Kinshasa, the Democratic Republic of the Congo. Photograph: Berclaire/walk productions

The first set of animal puppets was created by Ukwanda Puppetry and Designs Art Collective in Cape Town using recycled materials, but in each location local volunteers are taught how to make their own animals using prototypes provided by Ukwanda. The project has already attracted huge interest from people keen to get involved. In Dakar more than 300 artists applied for 80 roles as artists and puppet guides.
About 2,000 people will be trained to make the puppets over the duration of the project. “The idea is that we’re migrating with an ever-evolving, growing group of animals,” Zuabi told the Guardian last year.

Zuabi has spoken of The Herds as a continuation of Little Amal’s journey, which was inspired by refugees, who often cite climate disaster as a trigger for forced migration. The Herds will put the environmental emergency centre stage and will encourage communities to launch their own events to discuss the significance of the project and get involved in climate activism.

The puppets are created with recycled materials, and local volunteers are taught how to make them in each location. Photograph: Ant Strack

“The idea is to put in front of people that there is an emergency – not with scientific facts, but with emotions,” said The Herds’ Senegal producer, Sarah Desbois. She expects thousands of people to view the four events being staged over the weekend. “We don’t have a tradition of puppetry in Senegal. As soon as the project started, when people were shown pictures of the puppets, they were going crazy.”

Little Amal, the puppet of a Syrian girl that has become a symbol of human rights, in Santiago, Chile on 3 January. Photograph: Anadolu/Getty Images

Growing as it moves, The Herds will make its way from Dakar to Morocco, then into Europe, including London and Paris, arriving in the Arctic Circle in early August.

Dead, sick pelicans turning up along Oregon coast

So far, no signs of bird flu but wildlife officials continue to test the birds.

Sick and dead pelicans are turning up on Oregon’s coast, and state wildlife officials say they don’t yet know why. The Oregon Department of Fish and Wildlife says it has collected several dead brown pelican carcasses for testing. Lab results from two pelicans found in Newport have come back negative for highly pathogenic avian influenza, also known as bird flu, the agency said. Avian influenza was detected in Oregon last fall and earlier this year in both domestic animals and wildlife – but not brown pelicans.

Additional test results are pending to determine whether another disease or domoic acid toxicity caused by harmful algal blooms may be involved, officials said. In recent months, domoic acid toxicity has sickened or killed dozens of brown pelicans and numerous other wild animals in California. The sport harvest for razor clams is currently closed in Oregon – from Cascade Head to the California border – due to high levels of domoic acid detected last fall.

Brown pelicans – easily recognized by their large size, massive bill and brownish plumage – breed in Southern California and migrate north along the Oregon coast in spring. Younger birds sometimes rest on the journey and may just be tired, not sick, officials said. If you find a sick, resting or dead pelican, leave it alone and keep dogs leashed and away from wildlife. State wildlife biologists along the coast are aware of the situation, and the public doesn’t need to report sick, resting or dead pelicans.

— Gosia Wozniacka covers environmental justice, climate change, the clean energy transition and other environmental issues. Reach her at gwozniacka@oregonian.com or 971-421-3154.
