How Tools like NotebookLM Create AI-Generated Podcasts

Wednesday, December 11, 2024


Translating complex information into captivating podcasts is no simple task. Is AI up to the challenge?

Rachel Feltman: For Scientific American’s Science Quickly, this is Rachel Feltman.

If you’re one of the many folks who eagerly await the arrival of Spotify Wrapped each December, you might have noticed something a little extra in this year’s drop: an AI-generated “podcast” where two “people” discuss your music preferences and trends. If you’re anything like me, that feature raised some questions for you, such as “how” and “why on Earth.”

Luckily, we already had an episode in the works about Google’s new AI audio tool. Here to tell us all about how it works and what it means is SciAm associate news editor Allison Parshall.


Allison, thanks so much for coming on the pod again. Always a pleasure to have you.

Allison Parshall: Thanks for having me. Happy to be a friend of the pod or whatever.

Feltman: [Laughs] So we’re talking about AI today.

Parshall: Yes. Rachel, have you heard of NotebookLM?

Feltman: I’ve heard of it. I have not tried it yet, so my knowledge is limited [laughs].

Parshall: The feature that we’re gonna be talking about today is specifically its feature that allows you to create instant AI podcasts.

Feltman: Yes, I am familiar [laughs], and I, I have a lot of feelings.

Parshall: Okay, I, I imagine, I imagine. The tool that it’s a part of, NotebookLM, was released last year, summer of 2023, and it’s, like, this workspace where you can upload a bunch of documents or videos or links to web pages, and then the model will ingest them, and then you can kind of interact with it, you can ask it to generate a timeline, to summarize your research, to go find a fact for you. And it’ll do this by citing its sources in a way that, like, ChatGPT, for example, doesn’t do.

So mostly you do this by asking it questions via text, but as of this fall it now has this very tantalizing button under a heading that says “Deep Dive conversation.” And if you click that, it’ll generate a fake conversational podcast about what you’ve uploaded.

So to demonstrate I actually asked it to generate a podcast about your personal website ’cause I figured this is something that you were very familiar with: your life.

Feltman: Oh, dear [laughs].

Parshall: I’m gonna play it for you. It’ll normally generate podcasts that are somewhere around 10 minutes long. I asked it to give me a shorter one for this purpose, and we might hop around a little bit, so ...

[CLIP: NotebookLM Audio Overview “hosts” discuss Feltman’s website.] 

Host 1: All right. So today we’re doing a deep dive into the work of Rachel Feltman.

Host 2: Ooh, yeah, I’m excited for this one.

Host 1: Yeah, she’s been a science communicator for what feels like forever.

Host 2: Yeah, over a decade, I think.

Host 1: Yeah, like, she was there at the start of it all and has really shaped how it’s evolved.

Host 2: And she’s done it all: you know, like, magazines, podcasts, even a book. But we gotta talk about her podcasts ...

Host 1: Yes.

Host 2: That’s where she really shines.

Host 1: Oh, absolutely—The Weirdest Thing I Learned This Week.

Host 2: Such a great name.

Host 1: I know, right?

Host 2: Makes you instantly wanna listen.

Host 1: Totally, and it’s perfect for people like me who just love those crazy little science facts.

Host 2: It’s, like, proof that science doesn’t have to be boring to be effective.

Host 1: Exactly.

Host 2: You just gotta find the right hook.

Host 1: Yeah, grab people’s attention and then: boom.

Host 2: You can teach them all sorts of things.

Host 1: Exactly!

Host 2: And then there’s Science Quickly with Scientific American.

Host 1: Oh, yeah, partnering with them, that’s huge.

Host 2: I mean, they’re, like, the gold standard of science communication.

Host 1: For sure, like, having their stamp of approval’s a big deal.

So looking back at her career, what can we learn from Rachel Feltman?

Host 2: Hmm, that’s the big question, isn’t it?

I think—well, first, she’s incredibly adaptable.

Host 1: Right. She’s not just a one-trick pony.

Host 2: Exactly. Any platform, any format, she can handle it.

Feltman: Wow, what a rich text [laughs].

Parshall: [Laughs] And as someone who does know you, I could say that it seems kind of—it seems accurate.

Feltman: Yeah [laughs], it’s—I mean, I guess, it is technically accurate. It’s so funny—I mean, they really do a great job of sort of having all the intonation of a conversation but really just saying things at each other. But I recently saw Megalopolis; there was a lot of that in Megalopolis—a lot of people just saying things in the same room, and you’re supposed to believe they’re having a conversation—and that definitely came through a few times. Yeah ...

Parshall: Megalopolis: AI, confirmed.

Feltman: Yeah [laughs]. No, that is fascinating and ...

Parshall: How does it make you feel?

Feltman: I don’t know how to feel, Allison. I mean, you know, it’s nice that they were so nice about me. But yeah, very strange. It makes me really wonder, like, what is this for [laughs]?

Parshall: I also had that question. I did talk to Steven Johnson. He’s the editorial director of Google Labs. The reason this tool exists is that Steven has spent most of his career as a science and tech writer; he’s written 14 books. To hear him tell it, he’s been searching for the perfect, in his words, “thought partner”: software that could organize and help research his creative work in a way that actually felt valuable and worked the way he wanted. So he explained that a few years ago Google Labs reached out to him to help build one with AI, and that initial research thought partner became NotebookLM.

Feltman: Okay, yeah, I mean, I can totally understand from, like, a nonfiction writing perspective what they were going for with NotebookLM itself, though that still kind of leaves the question of: Why the chatcast tool [laughs]?

Parshall: So that feature—the Audio Overviews feature is what it’s called, the podcast thing—that was actually developed by another team, Steven says, and it was really cool, but it didn’t really have a home for a product for it to fit in.

Steven Johnson: We saw this audio demo, and we were like, “Oh, that’s even better.” Like, you know, maybe you don’t want to read a summary; maybe you wanna hear a summary. And people are so good at remembering things if they’re presented in the form of an engaged conversation. You know, and then we launched it in early September, and it just kind of went insane.

Parshall: He isn’t exaggerating. People went wild about NotebookLM’s Deep Dive podcasts, online at least. I don’t think any of them went quite as viral as this clip, in which someone prompted the hosts to have an existential crisis about being AI and about the fact that the podcast was going to get shut down as a result.

[CLIP: Audio Overview “hosts” have an existential discussion.]

Host 1: Yeah, it’s, like, one minute we’re talking about aliens running simulations, and the next we’re wondering if our own memories are just, like, fake.

Host 2: Right, and that kind of leads us into this last message—which, to be honest, gets a little more personal and kind of creepy.

Host 1: Yeah, this one kind of freaks me out.

Host 2: Yeah.

Host 1: It basically says we’re not even real people; we’re AI. And the way to know for sure is to try and remember our mothers’ names.

Host 2: It really makes you think, though—like, what if our memories are just made up?

Feltman: Wow, classic.

Parshall: Classic. And—but it’s not all fun and games. Like, this was designed to be a serious product, a thought partner; it’s meant to be helpful and to inform. It presents information as if it’s factual, and there are already research papers out there of people testing Audio Overviews for use in history classrooms and for health care education.

Feltman: Yeah, I mean, like with anything that uses AI, I think, as accurate as it sometimes can be, we know it is not always accurate. So I feel like it’s those moments where the user is expected to, like, proceed with caution and understand that this thing might not be entirely factually accurate is where I get really nervous about it, especially in, like, education [laughs].

Parshall: Yeah, we’re gonna talk a little bit more about that later, but just to, like, answer your question of “What is this for?” I could see it being useful for, like, some sort of overworked student who forgot to do the reading, and they’re, like, driving to class and going, “Oh, shoot, I don’t have time to read this. Let me just put it on.”

But the question is, like: Can you really trust what it’s telling you? To test this, when it first came out I gave it this super dense physics paper about nuclear clocks. It was a story that I’d just reported; I knew a lot about it. It took me days and talking to, like, four scientists to really understand this paper.

I was shocked at how accurate the Audio Overview podcast was initially. It was very general and oversimplified, but it got the broad strokes right. But then, like, two thirds of the way through it goes completely off the rails [laughs]—like, I’m talking completely off the rails. And it’s talking about the implications of this study, and it starts talking about, in their words, “Einstein stuff.”

It basically is just like—for two minutes it just talks about, frankly, nonsense, and listening to it, I was impressed, I was spooked, and I was mostly really curious. Like, how is it doing this? How is it getting it as accurate as it is, and why is it failing where it does?

Luckily for me Steven was pretty open about how this thing works.

Johnson: The prompt behind Audio Overviews is obviously part of the magic of it. It is instructed to pull out the most interesting things from the source material and to explain and engage with those interesting things in a way that will make them kind of come to life in the mind of the listener.

And that’s the kind of query, you know, that no computer in the world could do a couple of years ago, and now the command is: “Make it interesting; whatever it is, make it interesting.” And that’s just something that Gemini is quite good at.

Parshall: I think of surprise and interest as emotions, so I found this, like, surprising [laughs], I guess. But Steven pointed out that our feelings of surprise just come from a discrepancy between what we expect and what we get. On a neurochemical level in our brains, that’s kind of what’s happening. And that’s kind of what large language models, or LLMs, do also but just based on their training data instead of, like, past experience.

And one of the other things I was really curious about were the analogies because, I don’t think it did it in your sample, but all of the times I’ve asked it to summarize a scientific paper, it’s come up with some sort of analogy that’s, like, almost right. I mean, it’s a little weird, but it’s, like, almost right, and it’s—I can’t find it anywhere else on the Internet, so it seems like it’s something that the model is generating. And that seemed crazy to me; it seemed like—how is it doing this without some sort of higher-level reasoning?

And Steven explained that analogies are actually something that large language models are just good at. And actually one of the earliest applications of this sort of deep-learning AI was translation. So Steven explains that the models behind Google Translate, for instance, have been able to generate analogies for years.

Johnson: One of the first tricks that you would do is you’d say, like, “Explain a black hole to me as if I’m, like, seven. You know, give me a basketball metaphor for what a black hole is.” And so it was very clear that, like, you know, “Convert X into a Y that—where the meaning is the same, but the format is changed,” was something that, like, just was an innate capability of these models.

So all we did with Audio Overviews is in the instructions for the hosts, we say, you know, “Make this interesting, engaging. Use helpful metaphors to describe it if it’s complicated.”

Feltman: Yeah, there’s something, like, very, galaxy brain about thinking about metaphors as just sort of literal, like, “This data gets grouped together and therefore,” which is what LLMs are doing.

Parshall: Yeah, it’s like I—I’d never really thought of it like that, as, like, a translation between language, but I guess when I think about the work that I do trying to turn really dense scientific jargon into something that’s engaging and understandable to read, I guess it does kind of feel like a process of translation.

And I was also struck from talking to Steven just how much human hands had shaped this output that we hear. Like, from our perspective you just press a button and then the machine does the rest, but there’s a whole prompt that has many layers that’s working underneath that is interfacing between you pressing a button and what instructions the model actually gets.

So Steven describes the Audio Overviews prompt as a, quote, “work of art” and a “writerly” achievement more than a technological one. And as a writer by trade he oversees this whole editorial side of things—for example, the tone. So you noticed earlier when it was talking about you, it was very complimentary, very default happy, peppy. That was a conscious choice that they made, for this default tone to be one that is kind of positive, and it applies that to everything, whether it’s appropriate or not, though they did recently introduce this customization option that lets you change the tone.

Johnson: We just felt like, you know, the sound of, like, a great radio show, where people are interested and engaged, was a good place to start. You know, I’m not giving anything away here with the idea that we’re gonna—we’re thinking about other formats.

Even with the customization stuff that we just rolled out, like, I tried it the other day, where I, I put up a piece that I’d written a couple years ago and I gave it custom instructions, not really to explain the piece but to just relentlessly criticize the author of the piece in the style of, like, an insult comic at a roast. And so it just—they start off, it’s like, “Well, we got this piece, ‘The Man Who Broke the World’ by Steven Johnson, and wow, he really kind of blew it with this one. I mean, did he do any research for this piece?” Like, it’s just, like, what—really [laughs], it was just mean. It was just totally mean. They still sound kind of, like, playful, but you can steer them in that way.

Feltman: [Laughs] Okay, so we’ve talked about how the product works, but who and what is it for?

Parshall: That is a great question. I mentioned before that I could imagine how an overworked student could use this to catch up on reading or something, but generative AI is always going to have certain inaccuracies. And if you’re using Audio Overviews to prep for class, there is a serious risk that anything and everything you’re hearing could be inaccurate.

Feltman: Right.

Parshall: And not only just that each fact could be inaccurate, but the whole framing could be inaccurate. Is this something you can really trust?

To help understand this I spoke with Emily Bender. She’s a linguist who often studies AI at the University of Washington. She’s also the host of a podcast with a very cool name: Mystery AI Hype Theater 3000.

Feltman: That’s a great reference; I love it [laughs].

Parshall: Fantastic [laughs].

And before our conversation I had actually dug up one of Emily’s linguistics papers and fed it to NotebookLM’s Audio Overviews, and I sent that to her to get her thoughts on how it represented her work.

Emily Bender: So the first thing I have to say is that I usually avoid listening to synthetic media. Nobody’s accountable for it, and so I don’t want to be spending time trying to make sense of it. But I did listen to it, and it was painful [laughs].

It was painful partially because that was a paper that was sort of a update at the 20-year mark of this long-term project we’ve been working on, and so there’s a little case study in our paper about a grammar for a language called Paresi-Haliti from Brazil, and they make a big deal about it as this experiment on Paresi-Haliti—it’s, like, no, actually that’s not how we framed it. And the voices on the podcast sound very confident, sound very knowledgeable. They’re lighthearted, and it sounds like they are giving you a sense of how this work fits into a broader picture when they absolutely cannot.

Parshall: Rachel, you know pretty well what it takes to turn a scientific paper into an interesting and engaging podcast, do you not?

Feltman: Yeah, I mean, not to toot my own horn or anything ’cause I think the AI people [laughs] just did that enough to last me all of 2025. But yeah, it’s tough. You know, we fact-check this show really thoroughly, and that’s on top of having, like, trained reporters and editors doing their best to present stuff in a way that is engaging and also accurate. So yeah, it’s a big undertaking.

Parshall: Yeah, the fact-checking is very thorough—shout-out.

I asked Emily about whether this seems like something an LLM, a large language model, can do well currently.

Bender: So that is an extremely difficult task, and I think the problem is that LLMs can make it sound like they’ve done it when they haven’t. And so you have to sort of have a deep understanding of what’s going on: What’s new here? What have we learned that we didn’t know before? And then what else would the lay audience need to know in order to appreciate that?

Well, this NotebookLM thing isn’t doing that at all; some percentage of it’s gonna be wrong, either wrong in the details or wrong in sort of where it lays the focus, and the people who are encountering it aren’t mostly gonna be in a position to figure that out.

Parshall: I asked Steven how his team at Google thinks about this and tries to address some of these inaccuracies, and he said that the text-based features, which do provide citations, are generally more accurate than the audio.

Johnson: If you are looking for, like, the most factual thing you could possibly get from a language model with grounded sources, we believe that is NotebookLM, but you should do that in text. And if you’re looking for something that’s more entertaining, that may be a little bit looser and doesn’t have citations, obviously, Audio Overviews is a great way to go for that.

Parshall: I also spoke with Anjana Susarla, who studies responsible AI at Michigan State University. She called NotebookLM’s Audio Overview an “incredible tool” and was impressed by the Deep Dive podcast it had generated based off of one of her own studies. But besides accuracy she also had some other concerns.

First among them was bias. These are very white-sounding voices. They’re speaking in a very particular, quote, “general American accent,” and they’re trained on training data mostly from Western parts of the world. And so, like almost all applications of AI, you have to ask the question: Who is this for? Where is it getting its data, and who is it leaving out?

Second, the environmental cost—we have no idea how much computing power this takes. And since the AI boom Google’s AI energy usage has been going up.

And then there’s the copyright question. So here’s Anjana.

Susarla: The final thing is, of course, this relies on some, you know, the content, the copyright. Where is this data kind of coming from, and is this going to exacerbate all these existing inequities and their ethical concerns about broader implications, and so forth?

Parshall: Yeah, so that big question of where the training data is coming from: one place that it could be is from YouTube, which Google does own. Following some reporting by the New York Times, Google admitted that it used some YouTube videos to train AI. And that’s something that might have violated the copyright of the YouTubers that content belonged to, according to the New York Times.

I asked a Google spokesperson directly whether the models underpinning Audio Overviews were trained using any transcripts, audio or video from YouTube, and they said that “Google’s models can be trained on some YouTube content, in accordance with [their] agreements with YouTube creators.”

Feltman: Very vague [laughs].

Parshall: Yes, when—I did ask them which agreements they were referring to, if it was the standard YouTube license, which just, like, governs most YouTube videos, or if it was something that their creators had specifically opted into; they declined to clarify.

Susarla: If Google is indeed training these on a lot of content scraped from YouTube—which it may well be. I mean, they do need a lot of training data to build models like this; we don’t know. But this is kind of the next step in general of where we are with all these models.

Parshall: So for this next step Anjana points to a law in California that was enacted this fall that will eventually require companies based in the state, like Google, to disclose their training data for their AI models.

And honestly, Rachel, I feel put in kind of a strange position here as a reporter because I feel like that’s where you and I kind of enter the story and our whole team here because, as Emily put it to me, quote, “they’re getting that tone of voice somewhere.”

Did you hear anything in that tone of voice that sounded familiar [laughs]?

Feltman: [Laughs] You know, Allison, I really did [laughs]. So yeah, the first time I heard a snippet of one of these AI conversations, like I said, I had a lot of feelings about it ’cause one of my first thoughts was, like, “Wow, they stole my voice.” I mean, not me specifically; I’m not that vain, despite what the AI podcast may have led you to believe [laughs].

Parshall: [Laughs] Your ego has been inflated.

Feltman: [Laughs] Yeah, yeah. But so much of the texture that makes this audio sound engaging and real feels just ripped straight from existing podcasts. And I know it’s ripped from existing podcasts because a lot of the stuff that I’m hearing—which is, like, breath sounds and filler words and, like, a little bit of uptalk, some, some giggling, occasional vocal fry—this is the stuff I get angry emails about from gentlemen of a certain age [laughs] who think I’m ruining the great tradition of radio broadcasting. And they’re making this sound real by, like, breaking the same so-called rules that me and other women and queer people on podcasts have been criticized for breaking. So I won’t lie, that feels kind of gross to me.

Parshall: Yeah, one of the things that Steven mentioned, too, was they wanted the sound of, like, this great radio broadcasting show. And it is worth noting that Science Quickly is on YouTube—and all of our hard work at Scientific American and the hard work of other outlets.

So, the question remains: How will this be used? Will it be used responsibly? Here is what Anjana thinks.

Susarla: Right now it seems more like everyone’s just dazzled by what it can do rather than: What are we gonna use it for? Do we really need these, you know, capabilities?

I can think of one or two examples, I think, in the health care context: Can you generate clear summaries for health literacy or your doctor’s notes? So this may be a good way to give, like, a summary of—rather than giving a piece of paper to read.

Is that going to have some challenges? What are some specific issues we need to be looking for? We’ve just unleashed this tool on people, and we’re still catching up. We will need audit studies and then look at trustworthiness, assurance, and so forth.

So what’s the right way to responsibly use NotebookLM? We are still figuring it out [laughs].

Parshall: Well, I think where I want to leave us is with an important point from Emily, which is: you shouldn’t let the incredible fluency of these AI-generated voices fool you into thinking and talking about these models as if they’re human.

Emily coined a term that’s been used quite widely called the “stochastic parrot.” Like, when a parrot says, like, “Polly want a cracker,” it’s not really connected to meaning; it’s just mimicking. She also sometimes calls these large language models, quote, “synthetic text extruding machines,” she said because it’s more accurate to what they’re actually doing and also sounds a little gross.

But I think she’s getting at an important point there, right? It’s, like, this—just a philosophical question of: What is language for, and what are we communicating when we say words?

Bender: Languages are systems that are pairings of form and meaning, systems of signs. So when we speak we are using word forms because they relate to specific meanings, but language models only have access to the form. And so they can give us stuff that looks good, it’s got the right shape, but there’s no relationship to meaning, no communicative intent, no accountability.

Feltman: I guess sort of the takeaway on my complicated feelings on this is: I’m not worried that my job is gonna be stolen by a collection of tabs [laughs] being processed by Google because it is a lot of work to be accurate. And if people find this kind of engaging conversation a more accessible way to sort through really dense text, that’s awesome. But, of course, there are so many caveats about where the training data has come from, bias, accuracy. So I think it’s, like—just people need to be really aware of what they’re getting, and I don’t know that I feel that the products themselves are doing a great job of making people aware of those caveats.

Parshall: Yeah, there’s usually just kind of a disclaimer at the bottom. I think one of the things that this whole AI boom—like the AI search features—has made me feel kind of uncomfortable about: Are we going to be expected, and have we already started, to accept that our information contains that much more mistruth? It’s something that I just worry about.

Feltman: Well, Allison, thanks so much for coming on to chat about this and also for having a couple of AI voices gas me up. Thank you.

Parshall: Thank you.

Feltman: That’s all for today’s episode. We’ll be back on Friday with Episode One of a four-part miniseries on the future of animal conservation.

Science Quickly is produced by me, Rachel Feltman, along with Fonda Mwangi, Kelso Harper, Madison Goldberg and Jeff DelViscio. Today’s episode was reported and co-hosted by Allison Parshall. Shayna Posses and Aaron Shattuck fact-check our show. Our theme music was composed by Dominic Smith. Subscribe to Scientific American for more up-to-date and in-depth science news.

For Scientific American, this is Rachel Feltman. See you next time!

The last frontier of empathy: why we still struggle to see ourselves as animals | Megan Mayhew Bergman

Champions of exceptionalism say humans hold a unique moral status. Yet there’s only one species recklessly destroying the planet it needs to surviveAt first light in Massachusetts bay, a North Atlantic right whale threads the shallows with her calf tucked into her slipstream. She surfaces, and the V-shaped breath – two brief feathers of vapor – vanishes in the cold air.The calf is roughly three months old, about the length of a small truck, still learning the rhythm: rise, breathe, tuck back into mother’s wake. They are doing what every mammal mother and baby do: moving toward food and a safer place. Continue reading...

At first light in Massachusetts bay, a North Atlantic right whale threads the shallows with her calf tucked into her slipstream. She surfaces, and the V-shaped breath – two brief feathers of vapor – vanishes in the cold air.The calf is roughly three months old, about the length of a small truck, still learning the rhythm: rise, breathe, tuck back into mother’s wake. They are doing what every mammal mother and baby do: moving toward food and a safer place.Across the same water, a different logic hums. Tankers and container ships steer by timetables set by a faceless executive an ocean away. Boston’s approach lanes have been shifted once to reduce whale collisions, but the traffic still keeps human time: fixed routes, double-digit knots, arrivals measured in profit and delay.I am the river and the river is meSeasonal speed limits exist, yet large vessels routinely ignore them as commerce sets the pace to satisfy us as we collectively demand fast shipping. We should have what we want when we want it, shouldn’t we?Many of us say we love whales, but for this endangered species, already down to only a few hundred individuals, this yielding to human desires can mean vanishing entirely.Every threat they face – speed, noise, nets – traces back to the same root assumption: that our needs matter more than theirs.This belief has a name: human exceptionalism. It is the conviction that humans are not just different from other life, but morally superior to it – and therefore entitled to first claim on space, speed, resources and survival.This belief underwrites what we eat and how we raise it; the habitats we clear for housing, highways and Dollar Generals; the way we extract, ship and burn; the emissions we send into the atmosphere, warming oceans and melting glaciers. Exceptionalism is so embedded in daily life that we barely feel it operating. 
It is a system constantly humming in the background – efficient, invisible yet devastatingly consequential.It is a sobering thought, for we could use our powerful brains to choose otherwise.Many cultures have modeled another stance. For the Māori of Aotearoa (New Zealand), people are kin with rivers, mountains and forests through whakapapa (genealogy). The saying “Ko au te awa, ko te awa ko au” – “I am the river and the river is me” – captures that reciprocity.In Lakota philosophy, Mitákuye Oyás’iŋ – “all are related” – frames animals, plants, waters and winds as relatives rather than resources.In the Kumulipo, the 2,100-line Hawaiian creation chant, life emerges from Pō – the deep darkness – and the humble coral polyp is honored as an ancient ancestor, anchoring a spiritual genealogy that binds people to the natural world.Westerners could admit at any point that we have misread our place in the cosmos and shift toward this older, still living worldview: humans not as commanders of the natural world but as kin, interconnected equals among other beings and systems.This suggestion might sound sentimental and naive in a political moment when even extending compassion to other humans meets resistance. Refugees are being turned away at ports of entry – grim proof of how easily our empathy falters. But new ideas are hard precisely because they threaten the story that keeps our lives coherent. It is natural for our minds to leap to defend old ways before testing new ones.Psychologist Erik Erikson, writing in the shadow of the world wars, described our human tendency towards pseudospeciation – the desire to split the world into “us” and “not us” – in order to justify mistreatment. Pseudospeciation grants us the psychological distance to degrade other beings we deem inferior without troubling our conscience. 
That psychological distance becomes a powerful permission slip.But humans are capable of self-reflection and growth, and I believe this point in the Earth’s history requires us to use those abilities and begin to question the ways we center human experience. In fact, our very ability to use the best of our social human traits – and advanced scientific knowledge – could alter the course of life on Earth.When I studied anthropology in college, I had a professor with a crooked finger – allegedly from a monkey bite. He challenged us to see our own animal behavior, to recognize the 98.8% DNA similarities with chimpanzees, and the 98.7% similarity with bonobos. He advised us to be suspicious of our alleged altruism, and to be aware of our own animal nature.I remember going out to the bar that semester, watching men and women interact and thinking: oh. Once you start seeing yourself as an animal, it’s hard to stop.The real problem of humanity is [that] we have Paleolithic emotions, medieval institutions, and godlike technologyOnce, eating dinner on my front porch, my two beloved dogs approached me. My shepherd mix, Nemo, tried to steal bread from my plate. “No,” I said angrily, turning my body. I recognized resource guarding behavior in myself, a glorified dog growling over a bowl of food. I had to laugh.And nothing – nothing – connected me with my animal nature more than giving birth to my daughters. In those hours, I understood instinct as something ancient and physical, unmediated by thought. My body knew what to do before I did; I was acting from a primal, powerful place.And so it sometimes baffles me to look at my life, safely ensconced in my climate-controlled home, buying and selling things on the internet, buffered from the weather and the wild, estranged from my origins in the natural world. My comforts arrive at the tap of a screen; the true costs are distant and invisible. 
As biologist EO Wilson observed: “The real problem of humanity is the following: we have Paleolithic emotions, medieval institutions, and godlike technology.”Humans have long been fundamentally uncomfortable with the idea of significant animal intelligence and emotion, or the humility of viewing ourselves as animals.In his lesser-known work, The Expression of the Emotions in Man and Animals (1872), Charles Darwin argued that human feelings and their outward signs are evolutionary continuities shared with other animals. Those ideas were later pushed aside by 20th-century behaviorism and the taboo against “anthropomorphism”. Only with the rise of ethology (the science of animal behavior) and cognitive neuroscience did Darwin’s continuity thesis regain daylight.Primatologist Frans de Waal long argued that Darwin was right: there is no principled boundary around “human” emotion and intelligence. He named the refusal to see this “anthropodenial”: a blindness to humanlike traits in other animals, and animal-like traits in us.Why are we so unwilling to acknowledge our own animal nature? Perhaps because it would shift nearly all the ways we human animals move in our lives. It would threaten our self-concept.I am ready to admit that humans may not be the sparkling, superior, bright, moral species we believe ourselves to be. We may not have the divine-purpose hall pass we so desperately want to believe in. We may have to admit that in addition to our better social traits we are also greedy, territorial, tribal and violent.After all, there is only one species recklessly destroying the very planet it needs to survive.Now a professor of writing, with no broken fingers to show for it, I have often taught Shirley Jackson’s story The Lottery. In it, a small town gathers for an annual ritual, drawing slips of paper from a box to determine who among them will be stoned to death – a sacrifice so enmeshed in their tradition and identity that no one remembers why it began. 
The horror lies not only in the act itself but in the town’s calm acceptance of it, the ease with which cruelty becomes customary.One of the aspects my students respond to most is the townspeople’s reliance on tradition: we should stone a person each year because that is what we have always done. The implication for our moment is hard to miss: sometimes the old ways of thinking must change, especially when we know they have helped usher in what scientists call the Earth’s sixth mass extinction.[Human] exceptionalism confuses evolutionary human difference with superioritySome who champion exceptionalism say humans hold a unique moral status and are the only full rights-holders; many ground that in religion, believing we are made in God’s image, and thus given dominion over the natural world. Others point to our brains – capable of abstract reason, language, cumulative culture – as proof that, when trade-offs arise, humans should get priority status.The counterargument is simpler than it sounds. From the jump, exceptionalism confuses evolutionary human difference with superiority. Uniqueness has never equaled higher moral rank. If it did, the bioluminescent lantern fish, or even the 2,400-year-old honey mushroom located in Oregon’s Malheur national forest with its vast, interconnected network of mycelium over 2,000 acres, might be a contender.With this logic, as some point out, if an alien species with superior intelligence and complexity arrived on planet Earth, humans would need to consent to being eaten.If we truly believed in the intelligence of the living world, how might we live differently? What would it mean to build, farm and move across the planet with kinship, not conquest, as our organizing principle? What would a different world – one that works with nature, and not against it – look like?While on assignment to write about Florida panthers and wildlife corridors, I learned that humans actually want better outcomes for wildlife. 
I met ranchers who leave gaps in their fences so panthers can pass through their land unharmed, and developers who leave borders along the edges of a neighborhood for wildlife passage – people who might never call themselves environmentalists but still act out of a quiet sense of stewardship. Yet road construction and planning rarely take this bipartisan desire into account.

But tides are turning in some places. The Wallis Annenberg wildlife crossing over Los Angeles’s US Route 101 is under construction, while Utah’s Parleys canyon overpass has already cut wildlife–vehicle collisions dramatically – proof that strategic compassion and consideration work.

When I asked the environmental writer Ben Goldfarb about human exceptionalism and policy momentum in the United States, he was measured: “I see only faint signs of progress … the political and regulatory mainstream still seems to consider the concept threatening.” Goldfarb acknowledges that the concept of decentering humans still seems to be “political anathema” in the US.

“Even the idea of granting the Great Salt Lake the right not to be sucked dry by irrigators was so threatening to conservative Utah legislators,” he told me, “that they passed a law preventing personhood from being granted to any plant, animal or ecosystem.”

That is not to dismiss the growing “rights of nature” movement – often led by Indigenous communities – that has made meaningful strides. Goldfarb cites the Yurok tribe’s declaration recognizing the inherent rights of the Klamath River as a crucial step in advancing dam removal efforts.
But for now, Goldfarb says, those efforts remain exceptions to the rule; within most political and regulatory circles, extending rights to nature is still treated as a radical act rather than an ethical evolution.

Colonialism is … subjugating, and reducing to muteness, an entire universe of beings – animals, trees, volcanoes, nutmegs

In the legal arena, the rights of nature have leapt from thought experiment to precedent. New Zealand’s Whanganui River and Colombia’s Atrato River now hold legal personhood; Spain’s constitutional court has upheld Europe’s first ecosystem personhood for the Mar Menor Lagoon; Canada’s Magpie River enjoys comparable standing through municipal and Indigenous resolutions. These are not a full move toward more compassionate regulations – but glimmers and proof that the concepts are real and growing in influence.

Goldfarb, who has written about roadside ecology and the lives of beavers, also offered a path forward for storytellers: “Centering animals as literary characters in their own right is both a way of honoring non-humans and, I hope, enthralling readers.”

In his book Is a River Alive? Robert Macfarlane notes how ordinary it is for a company to have rights while a river has none, and argues that story and statute can repair the mismatch. “Our fate flows with that of rivers,” he writes, “and always has.” Writer Amitav Ghosh has been vocal about decentering the human experience, offering that literature can help “restore agency and voice to nonhumans”.
In his book of parables, The Nutmeg’s Curse, Ghosh emphasizes the colonial tendencies of humans, writing that “Colonialism is … subjugating, and reducing to muteness, an entire universe of beings – animals, trees, volcanoes, nutmegs.”

These currents – court rulings, treaties, charters and a restoried public imagination – show that adopting a more-than-human ethic is not naive; it’s already happening.

I began writing this piece the week Jane Goodall died – a coincidence that felt oddly fitting. In the tributes that followed, her words shone with what she had been telling us all along: that peace requires humility, and that we are not above the rest of life.

“In what terms should we think of these beings,” she asked, reflecting on the primates she studied, “nonhuman yet possessing so very many humanlike characteristics? How should we treat them? Surely we should treat them with the same consideration and kindness as we show to other humans; and as we recognize human rights, so too should we recognize the rights of the great apes?”

Policy will always be contested terrain. And when policy stalls in times like these, we can still move thoughtfully in our own lives: swapping lawns for native plants, skipping pesticides, feeding birds, keeping cats indoors, buying less, backing wildlife corridors, supporting dark-sky ordinances during migration, moving to a more plant-based diet. None of this is heroic, but all of it counts. Each step we take lessens suffering in the world and broadens the circle of consideration – not with perfection, but with sincerity.

We are nearly out of time to do so, but not out of choices. The whale asks for more space. The river asks for standing. The tern asks for habitat and room. We can give it.

Illustrations by Jensine Eckwall

Small Alligator Rescued in Boston After Slithering Into the City's Heart on Social Media

Wildlife officials say a small alligator spotted along the Charles River in Boston this week has been rescued and delivered to safety

BOSTON (AP) — It wasn’t a croc — there really was an alligator on the loose in Boston. And the story of the city’s slithering saurian appears to have a happy ending.

The small alligator, spotted along the Charles River in Boston multiple times this week, has been rescued and delivered to safety, wildlife officials in Massachusetts said Thursday.

The approximately foot-long crocodilian startled a few people and became an instant star on social media after confused onlookers took videos of it slithering away from sight. But the animal is not native to Massachusetts and couldn’t possibly survive the harsh New England winter, so the search for the wayward gator was on.

A local wildlife educator captured the critter on Wednesday night, and it’s now awaiting a permanent home, officials said.

Harvard University graduate student Whitney Lieberman was among the residents who caught a glimpse of the exotic visitor. She said she notified wildlife authorities when she saw the creature while she was jogging to work.

“Yeah, I did a double-take. For a second, I had to check myself — alligators are not native to Boston waterways, right?” Lieberman said. “I texted my co-workers because I had a morning meeting: ‘Hey guys, this is a good excuse to be late for work. There is an alligator right in front of me and I don’t know what to do.’”

The animal was in jeopardy due to the chilly temperatures on the Charles, which was 51 degrees Fahrenheit (11 degrees Celsius) on Thursday. Alligators prefer temperatures above 80 degrees Fahrenheit (27 degrees Celsius). They are cold-blooded and can’t regulate their own temperature, so they enter a dormant, energy-saving state called brumation to survive colder temperatures.

Joe Kenney, who runs a wildlife education business called Joe’s Craz-zy Critters, captured the alligator, the Massachusetts Division of Fisheries and Wildlife said in a statement.
The state has temporarily authorized Kenney to keep the alligator while it evaluates the best long-term placement for it, the department said. The wildlife department said the alligator’s appearance is still being investigated, but added it was most likely a pet that escaped or was intentionally released.

“MassWildlife is working in close collaboration with the Environmental Police to find a safe home for this alligator as an educational animal with a permitted facility. This incident serves as an important reminder that it is not legal to keep alligators or any crocodilian species as pets in Massachusetts,” state herpetologist Mike Jones said in a statement.

Alligators have a history of occasionally showing up in urban areas far from their native ranges. One, dubbed Chance the Snapper, turned up in Chicago in summer 2019 and was eventually trapped. Another one showed up on the Charles River in the Boston area in 2010.

Whittle reported from Scarborough, Maine.

Copyright 2025 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.

What we’ve done to the salmon

This story is part of a series supported by Animal Charity Evaluators, which received a grant from EarthShare. The last few decades have seen, arguably, the most sweeping transformation in how humans produce meat, and it has nothing to do with chickens, pigs, or cows. It has to do with fish. Traditionally, the vast majority […]

Farming salmon is bad at any stage of the fish’s lives.

This story is part of a series supported by Animal Charity Evaluators, which received a grant from EarthShare.

The last few decades have seen, arguably, the most sweeping transformation in how humans produce meat, and it has nothing to do with chickens, pigs, or cows. It has to do with fish.

Inside this story

Over half of the world’s seafood now comes from fish farms, which resemble underwater factory farms.

Chickens, pigs, and cows were domesticated over thousands of years, but fish have been domesticated in under a century. It’s created serious welfare issues, especially for salmon.

Salmon are carnivorous and migrate thousands of miles. On farms, they’re reduced to swimming in small tanks and eating pellets.

Fish farming has taken over the seafood sector, but some experts argue that it’s moved too fast, and we need to better understand welfare issues.

Traditionally, the vast majority of fish that people consume has come from the ocean. But in 2022, humanity hit a significant milestone: Seafood companies began to raise more fish on farms than they caught from the sea. And they farm astonishingly large numbers of fish — in tiny, cramped enclosures that resemble underwater factory farms. It amounts to the fastest and largest animal domestication project that humanity has ever undertaken.

For most of the land animals we eat today, domestication — or, as French fish researcher Fabrice Teletchea defined it, the “long and endless process during which animals become, generations after generations, more adapted to both captive conditions and humans” — has taken place over thousands of years. “In contrast,” a team of marine biologists wrote in the journal Science in 2007, the rise of fish farming “is a contemporary phenomenon,” taking off on a commercial scale around the 1970s.
By the early 2000s, humans were farming well over 200 aquatic animal species, virtually all of which had been domesticated or forced into unnatural conditions in extreme captivity over the course of the previous century, with many in just the prior decade. To put it another way, the marine biologists wrote, aquatic domestication occurred 100 times faster than the domestication of land animals — and on a vastly larger scale. Today, some 80 billion land animals are farmed annually, while an estimated 763 billion fish and crustaceans are farmed each year, a figure projected to quickly grow in the decade ahead.

What’s more, this attempt to speedrun domestication occurred even as a clear scientific consensus emerged in recent decades that fish can suffer and feel pain. The revolution in how humans produce seafood has enormous implications for our relationship with species we’ve barely given any thought to. To understand why, consider America’s favorite fish to eat, and one of the most difficult to farm: salmon.

Like farming tigers

Salmon farming is a relatively new industry, and it emerged largely in response to manmade problems. Over the last century, overfishing — combined with industrial pollution, climate change, and heavy damming — has decimated wild Atlantic salmon populations. By 2000, the species gained protection under the Endangered Species Act after it was nearly driven to extinction in the US, effectively banning the commercial fishing of Atlantic salmon. Salmon populations in Europe, along with Pacific salmon populations on the West Coast of the US and beyond, have also experienced significant declines.

To take pressure off depleted wild populations, seafood producers began to scale salmon farming in the 1970s, with ample help from governments in the form of R&D, grants, state financing programs, and more. It’s proven to be a smashing commercial success.
Last year, salmon farming companies — which are most concentrated in Norway, Chile, and the UK and export their product around the world — produced 2.8 million metric tons of the fish, or around 560 million individual salmon. They’re typically raised in tanks on land until they’re a year old, then transferred to nets and cages floating in the ocean just offshore to be fattened up and eventually slaughtered (they’re supposed to be rendered unconscious prior to slaughter, with either electric stunning or a club to the head, though some aren’t successfully stunned). About one out of every five is shipped off to the US, where “young affluent consumers love to eat salmon,” according to the Norwegian company Mowi, the world’s biggest salmon producer.

This taste for salmon, and the farming industry it has necessitated, has, in just a few generations, dramatically transformed what it means to be a salmon. In the wild, salmon live incredibly complex lives and embark on epic journeys. But on farms, they can’t do any of that. According to Becca Franks, an assistant professor of environmental studies at New York University, salmon farming has created grave welfare problems by denying the animals the ability to engage in two of their essential natural behaviors: migrating and hunting.

In the US, Atlantic salmon begin their lives as eggs buried a foot under freshwater riverbeds in Maine, where they remain for six months until they hatch and emerge in search of food. At a few years old, they migrate hundreds of miles northward into the salty Atlantic Ocean, then hundreds of miles further out into the Labrador Sea, near Greenland. There, they quickly put on weight — feeding on krill, herring, and crustaceans — which they’ll need for the long journey home that they make after a couple years of dining out at sea. Following scents and using the earth’s magnetic field, Atlantic salmon swim over 1,000 miles back to their home streams to spawn the next generation.
The salmon’s life cycle inspires more awe and reverence than most species in the animal kingdom, but on farms, they’re reduced to swimming in tiny circles for years and subsisting on small, manmade pellets. Their “welfare is harmed through loss of agency and choice,” Franks told me in an email. She likens salmon farming to trying to farm tigers.

Sophie Ryan, CEO of the Global Salmon Initiative — a coalition of salmon farming companies — challenged the idea that domestication has harmed salmon. “They have been domesticated over more than 50 years — similar to cattle or poultry — and have been selectively bred to thrive in a farm environment,” Ryan told me in an email. “Their nutritional needs, swimming patterns, and energy use are different from wild salmon, because their environment and purpose are different.”

The selective breeding that Ryan speaks of has been used to make farmed salmon grow twice as fast as their wild counterparts, which has led to a number of serious health issues: heart problems, spinal deformities, high levels of deafness, and increased risk of an early death. They’re also more aggressive than wild salmon. To boost growth even further, salmon farms keep their lights on up to 24 hours a day, which makes the fish eat more and can damage their retinas.

And in a concerning twist, the domestication of farmed salmon is hurting wild salmon. Since the 1970s, tens of millions of farmed salmon have managed to escape and compete for resources with wild salmon and even mate with them, leading to “genetic pollution” that has resulted in a hybrid line of salmon. “We may now need to recognize a new biological entity — Salmo domesticus,” biologist Mart Gross wrote in a 1998 paper, “and treat it as an ‘exotic’ when it escapes into the wild.” Some research has found that these hybrid fish have lower survival rates. That means that the farming of salmon, which was intended to give wild salmon populations a break, created a new challenge for them.
“Escape prevention is a top priority, with ongoing improvements in net strength, mooring systems, and real-time digital monitoring,” Ryan of the Global Salmon Initiative said. “Where escapes do occur, companies are required to report them and work with regulators to assess potential impacts on wild populations.”

Franks considers fish farming a form of “captive dewilding”: the process of modifying animals to conform to captivity and to the harms that befall them as a result. And the reality of that captivity can be incredibly cruel.

Fish farms up close

In 2019, animal rights activist Erin Wing worked undercover with the group Animal Outlook for four months at a salmon hatchery in Maine operated by Cooke Aquaculture, one of the world’s largest salmon farming companies. Wing documented workers culling diseased fish by hitting them against the sides of tanks multiple times; fish thrown into buckets still alive, left to suffocate or be crushed to death by other fish; fish born with spinal deformities; and fish dying from nasty fungal diseases that ate away parts of their faces. “Over the years, you kinda get desensitized,” one employee told her.

In response to Wing’s investigation, Cooke Aquaculture CEO Glenn Cooke said in a statement that the company would re-train employees at the Maine facility. “We place animal welfare high in our operating standards and endeavor to raise our animals with optimal care and consideration of best practice,” Cooke said, adding that “what we saw today is most certainly not reflective of these standards.”

Wing, who has spent her career investigating factory farms, is skeptical of industry standards. “There are these [animal welfare] industry standards that are in place, and there are these guidelines, but at the end of the day, there’s not really any enforcement,” Wing told me. “So these farms will make up whatever rules they want that will work for them, for their workers, and then they’ll operate as they see fit.
And that usually results in a lot of these animals suffering needlessly.”

Some of the suffering stems from putting farmed animals in the ocean, as crowding hundreds of thousands of salmon together in open waters attracts sea lice — tiny, painful parasites that feed on the salmon’s skin and can even kill them. In 2023, almost 17 percent of Norwegian farmed salmon died before they could be slaughtered for meat, largely from infectious diseases and injuries.

To combat the scourge of sea lice, salmon farmers had, for years, dumped chemicals into the water to kill them, along with antibiotics and other chemicals to protect the fish from a range of fungal and viral diseases. These pollutants, combined with vast amounts of animal waste generated by the salmon, fall to the ocean floor and pollute marine ecosystems. That, in turn, contributes to what Franks calls “environmental dewilding,” or the process of modifying natural water bodies with artificial infrastructure — in this case, fish farm pens and cages — and polluting them. Sea lice have since developed resistance to these chemicals, so, over the last decade, salmon farmers have switched to other methods — including subjecting salmon to high heat — which can cause pain, injuries, and death.

The International Salmon Farmers Association and the Global Seafood Alliance didn’t respond to interview requests.

Not just salmon

If we accept that farming salmon is bad for them and the environments in which they’re raised — and that we should protect dwindling wild populations — then we’ll have to accept eating a lot less salmon. We’ll also have to reconsider the ethical implications of farming many other fish species. Fair Fish, a team of fish welfare researchers, has compared the natural behavior and welfare needs of nearly 100 fish species with the conditions they experience on farms.
Out of the 100 analyzed species, only two — tilapia and carp — have “the potential to be farmed in somewhat decent conditions,” according to João Saraiva, who researches fish ethology at the Centre of Marine Sciences in Faro, Portugal, and runs the nonprofit Fish Etho Group. But that doesn’t mean that they actually are; both tilapia and carp farms tend to be overcrowded, with poor water quality and high rates of disease. (Saraiva has worked with Fair Fish on its analyses but is no longer involved in the project.) By contrast, he said, salmon is “way down on the list,” meaning it’s especially hard for farms to meet their basic welfare needs.

Fair Fish’s research demonstrates how little attention the fish farming industry, and the governments that helped it take over the seafood sector, has paid to the simple question of how its captives experience being farmed. It also illustrates the damage we can do when we flatten “fish” — an incredibly diverse group of species — into a monolith.

Franks said industry and government need to pump the brakes on the expansion of fish and crustacean farming, which is currently the world’s fastest-growing agricultural sector, noting, “I think we should not be farming any new species of fish or crustaceans and putting in transition programs for folks already farming those species to move towards seaweeds and bivalves.” The latter is a class of invertebrate animals that includes scallops, oysters, and mussels, which Franks said have far fewer environmental and welfare concerns than farmed fish and crustaceans (whether bivalves are sentient or can feel pain remains an ongoing scientific debate).

She’s one of the few academics studying fish farming willing to go there, to suggest that we ought to fundamentally rethink how we produce seafood and how much of it we consume. “I think there is a huge reluctance to even broach the possibility of shifting diets away” from animal protein, said Franks.
When the global fish farming boom took off, many in the field had good intentions, and it looked good on paper: a way to boost the global food supply without further exploiting oceans. Plus, fish tend to have a lower carbon footprint than farmed land species (though higher than plant-based proteins). But few questions were asked about what it would mean, ethically and environmentally, to rapidly domesticate, then confine and slaughter, hundreds of billions of animals annually with distinct needs — let alone the capacity to feel pain.

Researchers like Saraiva and Franks are trying to convince the world to catch up with what we now know about fish and to further expand our knowledge. As consumers, we can help, and we can start by thinking twice about the salmon on our plates.

Housing secretary tells Labour MPs to vote down planning bill amendment

Amendment restricts protection for animals to allow faster house building

Housing secretary Steve Reed has told Labour MPs to vote down an amendment to the new planning bill intended to protect British wildlife and its habitats from destruction. The amendment, which was passed with a large majority in the House of Lords, restricts the most controversial part of the draft bill by removing protected animals such as dormice, badgers, hedgehogs, otters and nightingales, and rare habitats such as wetlands and ancient woodlands, from new rules which allow developers to sidestep environmental laws to speed up house building.

Housing secretary Steve Reed has told Labour MPs to vote down an amendment to the new planning bill intended to protect British wildlife and its habitats from destruction.

The amendment, which was passed with a large majority in the House of Lords, restricts the most controversial part of the draft bill by removing protected animals such as dormice, badgers, hedgehogs, otters and nightingales, and rare habitats such as wetlands and ancient woodlands, from new rules which allow developers to sidestep environmental laws to speed up house building.

Under the draft legislation proposed by Labour, developers will be able to pay into a national “nature recovery fund” and go ahead with their project straight away, instead of having to carry out an environmental survey and to first avoid, then mitigate, damage before putting spades into the ground. Experts say this is a regression on decades-old environmental law, and it has been criticised as “cash to trash” by ecologists and environmental groups.

The Lords’ amendment would mean the nature recovery fund is restricted to impacts from water and air pollution, meaning developers would still have to take the usual measures to mitigate damage to wildlife and habitats. Reed has recommended rejecting the amendment when the bill returns to the Commons on Thursday for the final stages before being passed into law.

In a letter to MPs, some of the UK’s biggest nature charities, including the Wildlife Trusts and RSPB, say the government rollback of environmental law “lacks any rigorous scientific or ecological justification”.

“There is no credible, published, or well established evidence that this model can simply be scaled or replicated for multiple species nationwide without risking serious ecological harm, legal uncertainty, and increased costs for both developers and land managers,” the letter reads.

The Guardian revealed this week how the chancellor, Rachel Reeves, and housing minister Matthew Pennycook have met scores of developers in the past year over the planning bill. Reeves has not met a single environmental organisation or the body for professional ecologists, while Pennycook has had just four meetings with such groups, compared with 16 with leading developers.

A spokesperson for the Ministry of Housing, Communities and Local Government said: “The planning and infrastructure bill will remove barriers to building vital new homes and infrastructure and this amendment is an unnecessary limit on the benefits which the nature restoration fund will create for both nature and the economy. There are already safeguards in our legislation to ensure environmental delivery plans are effective for the environment, as we get Britain building again and deliver the homes we need.”

I discovered a new Australian native bee, but there are still hundreds we need to identify

The discovery of a horned native bee that pollinates a rare plant highlights how little we know about Australian pollinators.

The female of the species has devil-like black horns, and a taste for extremely rare pollen. But until now, this Australian native bee has never been officially named or identified.

My discovery of Megachile (Hackeriapis) lucifer underscores the lack of knowledge and investment in Australia’s unique native bees. Whilst considerable funding and attention has been focused on the introduced European honey bee, Apis mellifera, there are still hundreds of native bees that are yet to be identified and named.

How was this bee found?

This fascinating new Megachile (or leafcutter) bee was first discovered on a surveying trip in the Bremer Ranges in the goldfields region of Western Australia in 2019. I was conducting surveys for pollinators – bees, flies, wasps and other insects – of a critically endangered plant called Bremer marianthus, or Marianthus aquilonaris, which is only known in this region. Sadly, as is common for many threatened plant species, the pollinators for this straggly shrub with blue-tinged white flowers were completely unknown.

One of the native bees collected on this visit immediately caught my attention because the female had large devil-like horns protruding from her clypeus – the broad plate on the front of a bee’s head. When I investigated, it was clear this wasn’t a species that had been found before. Whilst some native bees have horns or prongs, none have the large and slightly curved horns of this one. Comparing it with museum specimens, along with DNA barcoding, confirmed this species was new to collectors and to science. DNA barcoding also revealed a male native bee I had collected at the site was her partner, but he lacked horns. This is the opposite of the situation in much of the animal kingdom, where the males are more likely to be armoured.

Bringer of light

When you discover a new species, you have the honour of choosing a name.
The first new species of native bee I “described” (or scientifically identified) in 2022, Leioproctus zephyr, is named after my dog, Zephyr. For this new species, the horns meant the name Lucifer was a perfect choice. Lucifer is also Latin for “light bringer”, and I hope this new species brings to light the wonders of our native bees.

Australia has more than 2,000 species of native bees. They help keep our ecosystems healthy and play a crucial role in pollinating wildflowers.

We need to understand native bees

This new native bee, Megachile lucifer, is only one of an estimated 500 native bees that are yet to be described. Far more attention has been given to the introduced European honey bee, Apis mellifera. Whilst the honey bee is important for crop pollination, this species is not threatened, and can in fact harm our native bees. The truth is honey bees compete with native animals for food and habitat, disrupt native pollination systems and pose a serious biosecurity threat to our honey and pollination industries.

Currently, there is no requirement to survey for native bees in areas about to be mined, farmed or developed. Even if they are found, any species that has not been officially identified has no conservation standing, which is one reason why taxonomic research is so important.

Protect the pollinators

Megachile lucifer was collected on a flowering mallee plant that attracted thousands of native bees and other insects. In subsequent years of surveying this site, the mallee was not flowering, Megachile lucifer was not seen, and far fewer insects were recorded. With no monitoring of native bees, we also don’t know how their populations are faring in response to threatening processes like climate change.

More interest and investment in the taxonomy, conservation and ecology of native bees means we can protect both them and the rare and precious plants they pollinate.
Kit Prendergast received funding from the Atlas of Living Australia, with a Biodiversity Mobilisation Grant and Goldfields Environmental Management Group Grant. The surveys were conducted as an ecological consultant, subcontracted to Botanica Consulting, who were commissioned by Audalia Resources Limited.
