
GoGreenNation News


Making clean energy investments more successful

Tools for forecasting technological improvements and modeling the impacts of policy decisions can lead to more effective, better-informed decision-making.

Governments and companies constantly face decisions about how to allocate finite amounts of money to clean energy technologies that can make a difference to the world’s climate, its economies, and to society as a whole. The process is inherently uncertain, but research can help predict which technologies will be most successful. Grounding such decisions in data can lead to more informed choices that produce the desired results.

The role of these predictive tools, and the areas where further research is needed, are addressed in a perspective article published Nov. 24 in Nature Energy by professor Jessika Trancik of MIT’s Sociotechnical Systems Research Center and Institute for Data, Systems, and Society, and 13 co-authors from institutions around the world.

She and her co-authors span engineering and social science and share “a common interest in understanding how to best use data and models to inform decisions that influence how technology evolves,” Trancik says. They are interested in “analyzing many evolving technologies — rather than focusing on developing only one particular technology — to understand which ones can deliver.” Their paper is aimed at companies and governments, as well as researchers. “Increasingly, companies have as much agency as governments over these technology portfolio decisions,” she says, “although government policy can still do a lot because it can provide a sort of signal across the market.”

The study looked at three stages of the process: first, forecasting the technological changes that are likely to play important roles in coming years; next, examining how those changes could affect economic, social, and environmental conditions; and finally, applying these insights to actual decision-making processes as they occur.

Forecasting usually falls into two categories, data-driven or expert-driven, or a combination of the two. That provides an estimate of how technologies may be improving, as well as an estimate of the uncertainties in those predictions. In the next step, a variety of models are applied that are “very wide ranging,” Trancik says, “different models that cover energy systems, transportation systems, electricity, and also integrated assessment models that look at the impact of technology on the environment and on the economy.”

The third step is “finding structured ways to use the information from predictive models to interact with people that may be using that information to inform their decision-making process,” she says. “In all three of these steps, you need to recognize the vast uncertainty and tease out the predictive aspects. How you deal with uncertainty is really important.”

In the implementation of these decisions, “people may have different objectives, or they may have the same objective but different beliefs about how to get there. And so, part of the research is bringing in this quantitative analysis, these research results, into that process,” Trancik says. A very important aspect of that third step, she adds, is “recognizing that it’s not just about presenting the model results and saying, ‘here you go, this is the right answer.’ Rather, you have to bring people into the process of designing the studies and interacting with the modeling results.”

She adds that “the role of research is to provide information to, in this case, the decision-making processes. It’s not the role of the researchers to push for one outcome or another, in terms of balancing the trade-offs,” such as between economic, environmental, and social equity concerns. It’s about providing information, not just for the decision-makers themselves, but also for the public who may influence those decisions. “I do think it’s relevant for the public to think about this, and to think about the agency that actually they could have over how technology is evolving.”

In the study, the team highlighted priorities for further research. Those priorities, Trancik says, include “streamlining and validating models, and also streamlining data collection,” because these days “we often have more data than we need, just tons of data,” and yet “there’s often a scarcity of data in certain key areas like technology performance and evolution. How technologies evolve is just so important in influencing our daily lives, yet it’s hard sometimes to access good representative data on what’s actually happening with this technology.” But she sees opportunities for concerted efforts to assemble large, comprehensive data on technology from publicly available sources.

Trancik points out that many models are developed to represent some real-world process, and “it’s very important to test how well that model does against reality,” for example by using the model to “predict” some event whose outcome is already known and then “seeing how far off you are.” That’s easier to do with a more streamlined model, she says.

“It’s tempting to develop a model that includes many, many parameters and lots of different detail. But often what you need to do is only include detail that’s relevant for the particular question you’re asking, and that allows you to make your model simpler.” Sometimes that means the decision can be simplified down to solving an equation; other times, “you need to simulate things, but you can still validate the model against real-world data that you have.”

“The scale of energy and climate problems means there is much more to do,” says Gregory Nemet, faculty chair in business and regulation at the University of Wisconsin at Madison, who was a co-author of the paper. He adds, “while we can’t accurately forecast individual technologies on their own, a variety of methods have been developed that in conjunction can enable decision-makers to make public dollars go much further, and enhance the likelihood that future investments create strong public benefits.”

This work is perhaps particularly relevant now, Trancik says, in helping to address global challenges including climate change and meeting energy demand, which were in focus at the global climate conference COP 30 that just took place in Brazil. “I think with big societal challenges like climate change, always a key question is, ‘how do you make progress with limited time and limited financial resources?’” This research, she stresses, “is all about that. It’s about using data, using knowledge that’s out there, expertise that’s out there, drawing out the relevant parts of all of that, to allow people and society to be more deliberate and successful about how they’re making decisions about investing in technology.”

As with other areas such as epidemiology, where the power of analytical forecasting may be more widely appreciated, she says, “in other areas of technology as well, there’s a lot we can do to anticipate where things are going, how technology is evolving at the global or at the national scale … There are these macro-level trends that you can steer in certain directions, that we actually have more agency over as a society than we might recognize.”

The study included researchers in Massachusetts, Wisconsin, Colorado, Maryland, Maine, California, Austria, Norway, Mexico, Finland, Italy, the U.K., and the Netherlands.
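As a rough illustration of the backtesting Trancik describes, one can hold out the most recent years of a technology cost series, fit a simple trend to the earlier years, and then measure how far the “predictions” land from the already-known outcomes. The sketch below uses a synthetic cost series and an assumed log-linear decline; it is illustrative only, not the models from the Nature Energy paper.

# Toy backtest sketch: fit a simple cost-decline trend on early data, then
# "predict" years whose outcomes are already known and measure the error.
# The series is synthetic (made up for illustration); a real analysis would
# use observed technology cost data.
import numpy as np

rng = np.random.default_rng(1)

years = np.arange(2000, 2021)
# Synthetic costs: a roughly 10%-per-year exponential decline plus noise.
costs = 100.0 * 0.9 ** (years - 2000) * np.exp(0.05 * rng.normal(size=years.size))

# Hold out the last 5 years and fit a log-linear trend to the rest.
train_years, test_years = years[:-5], years[-5:]
coef = np.polyfit(train_years, np.log(costs[:-5]), 1)
pred = np.exp(np.polyval(coef, test_years))

# "Seeing how far off you are": compare predictions with the known outcomes.
for y, p, a in zip(test_years, pred, costs[-5:]):
    print(f"{y}: predicted {p:6.1f}, actual {a:6.1f}, error {100*(p-a)/a:+5.1f}%")

A streamlined model like this is easy to score against reality; the same hold-out logic applies, in principle, however sophisticated the forecasting model becomes.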

Coalmine expansions would breach climate targets, NSW government warned in ‘game-changer’ report

Environmental advocates welcome the Net Zero Commission’s report, which found the fossil fuel was “not consistent” with emissions reductions commitments.

The New South Wales government has been warned it can no longer approve coalmine developments after the state’s climate agency found new expansions would be inconsistent with its legislated emissions targets.

In what climate advocates described as a significant turning point in campaigns against new fossil fuel programs, the NSW Net Zero Commission said coalmine expansions were “not consistent” with the state’s legal emissions reductions commitments of a 50% cut (compared with 2005 levels) by 2030, a 70% cut by 2035, and reaching net zero by 2050.

The commission’s Coal Mining Emissions Spotlight Report said the government should consider the climate impact – including from the “scope 3” emissions released into the atmosphere when most of the state’s coal is exported and burned overseas – in all coalmine planning decisions.

Environmental lawyer Elaine Johnson said the report was a “game-changer” as it argued coalmining was the state’s biggest contribution to the climate crisis and that new coal proposals were inconsistent with the legislated targets.

She said it also found demand for coal was declining – consistent with recent analyses by federal Treasury and the advisory firm Climate Resource – and the state government must support affected communities to transition to new industries.

“What all this means is that it is no longer lawful to keep approving more coalmine expansions in NSW,” Johnson wrote on social media site LinkedIn. “Let’s hope the Department of Planning takes careful note when it’s looking at the next coalmine expansion proposal.”

The Lock the Gate Alliance, a community organisation that campaigns against fossil fuel developments, said the report showed changes were required to the state’s planning framework to make authorities assess emissions and climate damage when considering mine applications.

It said this should apply to 18 mine expansions that have been proposed but not yet approved, including two “mega-coalmine expansions” at the Hunter Valley Operations and Maules Creek mines. Eight coalmine expansions have been approved since the Minns Labor government was elected in 2023.

Lock the Gate’s Nic Clyde said NSW already had 37 coalmines and “we can’t keep expanding them indefinitely”. He called for an immediate moratorium on approving coal expansions until the commission’s findings had been implemented.

“This week, multiple NSW communities have been battling dangerous bushfires, which are becoming increasingly severe due to climate change fuelled by coalmining and burning. Our safety and our survival depends on how the NSW government responds to this report,” he said.

Net zero emissions is a target that has been adopted by governments, companies and other organisations to eliminate their contribution to the climate crisis. It is sometimes called “carbon neutrality”.

The climate crisis is caused by carbon dioxide and other greenhouse gases being pumped into the atmosphere, where they trap heat. They have already caused a significant increase in average global temperatures above pre-industrial levels, recorded since the mid-20th century. Countries and others that set net zero emissions targets are pledging to stop their role in worsening this by cutting their climate pollution and balancing out whatever emissions remain by sucking an equivalent amount of CO2 out of the atmosphere.

This could happen through nature projects – tree planting, for example – or using carbon dioxide removal technology. CO2 removal from the atmosphere is the “net” part in net zero.
Scientists say some emissions will be hard to stop and will need to be offset. But they also say net zero targets will be effective only if carbon removal is limited to offsetting “hard to abate” emissions. Fossil fuel use will still need to be dramatically reduced.

After signing the 2015 Paris agreement, the global community asked the Intergovernmental Panel on Climate Change (IPCC) to assess what would be necessary to give the world a chance of limiting global heating to 1.5C. The IPCC found it would require deep cuts in global CO2 emissions: to about 45% below 2010 levels by 2030, and to net zero by about 2050.

The Climate Action Tracker has found more than 145 countries have set or are considering setting net zero emissions targets.

The alliance’s national coordinator, Carmel Flint, added: “It’s not just history that will judge the government harshly if they continue approving such projects following this report. Our courts are likely to as well.”

The NSW Minerals Council criticised the commission’s report. Its chief executive, Stephen Galilee, said it was a “flawed and superficial analysis” that put thousands of coalmining jobs at risk. He said some coalmines would close in the years ahead but that was “no reason” not to approve outstanding applications to extend the operating life of about 10 mines.

Galilee said emissions from coal in NSW were falling faster than the average rate of emission reduction across the state and were “almost fully covered” by the federal government’s safeguard mechanism policy, which required mine owners to either make annual direct emissions cuts or buy offsets.

He said the NSW government should “reflect on why it provides nearly $7m annually” for the commission to “campaign against thousands of NSW mining jobs”.

But the state’s main environment organisation, the Nature Conservation Council of NSW, said the commission report showed coalmining was “incompatible with a safe climate future”.

“The Net Zero Commission has shone a spotlight. Now the free ride for coalmine pollution has to end,” the council’s chief executive, Jacqui Mumford, said.

The state climate change and energy minister, Penny Sharpe, said the commission was established to monitor, report and provide independent advice on how the state was meeting its legislated emissions targets, and the government would consider its advice “along with advice from other groups and agencies”.

Nope, Billionaire Tom Steyer Is Not a Bellwether of Climate Politics

What should we make of billionaire Tom Steyer’s reinvention as a populist candidate for California governor, four years after garnering only 0.72 percent of the popular vote in the 2020 Democratic presidential primary, despite obscene spending from his personal fortune? Is it evidence that he’s a hard man to discourage? (In that race, he dropped almost $24 million on South Carolina alone.) Is it evidence that billionaires get to do a lot of things the rest of us don’t? Or is it evidence that talking about climate change is for losers and Democrats need to abandon it?

Politico seems to think it’s the third one: Steyer running a populist gubernatorial campaign means voters don’t care about global warming.

“The billionaire environmental activist who built his political profile on climate change—and who wrote in his book last year that ‘climate is what matters most right now, and nothing else comes close’—didn’t mention the issue once in the video launching his campaign for California governor,” reporter Noah Baustin wrote recently. “That was no oversight.” Instead, “it reflects a political reality confronting Democrats ahead of the midterms, where onetime climate evangelists are running into an electorate more worried about the climbing cost of electricity bills and home insurance than a warming atmosphere.”

It’s hard to know how to parse a sentence like this. The “climbing cost of electricity bills and home insurance” is, indisputably, a climate issue. Renewable energy is cheaper than fossil fuels, and home insurance is spiking because increasingly frequent and increasingly severe weather events—driven by climate change—are making large swaths of the country expensive or impossible to insure. The fact that voters are struggling to pay for utilities and insurance, therefore, is not evidence that they don’t care about climate change. Instead, it’s evidence that climate change is a kitchen table issue, and politicians are, disadvantageously, failing to embrace the obviously populist message that accompanies robust climate policy. This is a problem with Democratic messaging, not a problem with climate as a topic.

The piece goes on: “Climate concern has fallen in the state over time. In 2018, when Gov. Gavin Newsom was running for office, polling found that 57 percent of likely California voters considered climate change a very serious threat to the economy and quality of life for the state’s future. Now, that figure is 50 percent.”

This may sound persuasive to you. But in fact, it’s a highly selective reading of the PPIC survey data linked above. What the poll actually found is that the proportion of Californians calling climate change a “very serious” threat peaked at 57 percent in 2019, fell slightly in subsequent years, then fell precipitously by 11 points between July 2022 and July 2023, before rising similarly precipitously from July 2024 to July 2025. Why did it fall so quickly from 2022 to 2023? Sure, maybe people stopped caring about climate change. Or maybe instead, the month after the 2022 poll, Congress passed the Inflation Reduction Act, the most significant climate policy in U.S. history, and people stopped being quite so worried. Why did concern then rise rapidly between July 2024 and July 2025? Well, between those two dates, Trump won the presidential election and proceeded, along with Republicans in Congress, to dismantle anything remotely resembling climate policy. The Inflation Reduction Act fell apart. I’m not saying this is the only way to read this data.
But consider this: The percentage of respondents saying they were somewhat or very worried about members of their household being affected by natural disasters actually went up over the same period. The percentage saying air pollution was “a more serious health threat in lower-income areas” nearby went up. Those saying flooding, heat waves, and wildfires should be considered “a great deal” when siting new affordable housing rose a striking 12 percentage points from 2024 to 2025, and those “very concerned” about rising insurance costs “due to climate risks” rose 14 percentage points.

This is not a portrait of an electorate that doesn’t care about climate change. It’s a portrait of an electorate that may actually be very ready to hear a politician convincingly embrace climate populism—championing affordability and better material conditions for working people, in part by protecting them from the predatory industries driving a cost-of-living crisis while poisoning people.

This is part of a broader problem. Currently, there’s a big push from centrist Democratic institutions to argue that the party should abandon climate issues in order to win elections. The evidence for this is mixed, at best. As TNR’s Liza Featherstone recently pointed out, Democrats’ striking victories last month showed that candidates fusing climate policy with an energy affordability message did very well. Aaron Regunberg went into further detail on why talking about climate change is a smart strategy: “Right now,” he wrote, “neither party has a significant trust advantage on ‘electric utility bills’ (D+1) or ‘the cost of living’ (R+1). But Democrats do have major trust advantages on ‘climate change’ (D+14) and ‘renewable energy development’ (D+6). By articulating how their climate and clean energy agenda can address these bread-and-butter concerns, Democrats can leverage their advantage on climate to win voters’ trust on what will likely be the most significant issues in 2026 and 2028.”

One of the troubles with climate change in political discourse is that some people’s understanding of environmental politics begins and ends with the spotted owl logging battles in the 1990s. This is the sort of attitude that drives the assumption that affordability policy and climate policy are not only distinct but actually opposed. But that’s wildly disconnected from present reality. Maybe Tom Steyer isn’t the guy to illustrate that! But his political fortunes, either way, don’t say much at all about climate messaging more broadly.

Stat of the Week

3x as many infant deaths

A new study finds that babies of mothers “whose drinking water wells were downstream of PFAS releases” died at almost three times the rate in their first year of life as babies of mothers who did not live downstream of PFAS contamination. Read The Washington Post’s report on the study here.

What I’m Reading

More than 200 environmental groups demand halt to new US datacenters

An open letter calls on Congress to pause all approvals of new data centers until regulation catches up, due to problems such as data centers’ voracious energy consumption, greenhouse gas emissions, and water use. From The Guardian’s report:

The push comes amid a growing revolt against moves by companies such as Meta, Google and Open AI to plow hundreds of billions of dollars into new datacenters, primarily to meet the huge computing demands of AI. At least 16 datacenter projects, worth a combined $64bn, have been blocked or delayed due to local opposition to rising electricity costs. The facilities’ need for huge amounts of water to cool down equipment has also proved controversial, particularly in drier areas where supplies are scarce. These seemingly parochial concerns have now multiplied to become a potent political force, helping propel Democrats to a series of emphatic recent electoral successes in governor elections in Virginia and New Jersey as well as a stunning upset win in a special public service commission poll in Georgia, with candidates campaigning on lowering power bill costs and curbing datacenters.

Read Oliver Milman’s full report at The Guardian.

This article first appeared in Life in a Warming World, a weekly TNR newsletter authored by deputy editor Heather Souvaine Horn. Sign up here.


‘Soil is more important than oil’: inside the perennial grain revolution

Scientists in Kansas believe Kernza could cut emissions, restore degraded soils and reshape the future of agriculture.

On the concrete floor of a greenhouse in rural Kansas stands a neat grid of 100 plastic plant pots, each holding a straggly crown of strappy, grass-like leaves. These plants are perennials – they keep growing, year after year. That single characteristic separates them from soya beans, wheat, maize, rice and every other major grain crop, all of which are annuals: plants that live and die within a single growing season.

“These plants are the winners, the ones that get to pass their genes on [to future generations],” says Lee DeHaan of the Land Institute, an agricultural non-profit based in Salina, Kansas. If DeHaan’s breeding programme maintains its current progress, the descendants of these young perennial crop plants could one day usher in a wholesale revolution in agriculture.

The plants are intermediate wheatgrass. Since 2010, DeHaan has been transforming this small-seeded, wild species into a high-yielding, domesticated grain crop called Kernza. He believes it will eventually be a viable – and far more sustainable – alternative to annual wheat, the world’s most widely grown crop and the source of one in five of all calories consumed by humanity.

Elite Kernza plants selected from 4,000 seedlings in the Land Institute’s perennial grain breeding programme. Photograph: Ben Martynoga

Annual plants thrive in bare ground. Growing them requires fields to be prepared, usually by ploughing or intensive herbicide treatment, and new seeds to be planted each year. For this reason, Tim Crews, chief scientist at the Land Institute, describes existing agricultural systems as “the greatest disturbance on the planet”. “There’s nothing like it,” he says.

The damage inflicted by today’s food system is clear: one-third of global greenhouse gas emissions; ocean dead zones covering thousands of square miles; and 25bn-40bn tonnes of fertile topsoil lost each year.

Replacing annual plants with perennial varieties would massively reduce agriculture’s environmental impact. Soil erosion would drop; perennials would instead build soil health, limiting runoff of nutrients and toxic farm chemicals, cutting fertiliser and pesticide use, and storing climate-heating carbon within farm soils.

There is just one problem. Reliable, high-yielding perennial grain crops barely exist.

The inspiration for the Land Institute’s push to develop perennial grains came from its founder, Wes Jackson, 89. For Jackson, the health of soils that generate 95% of human calories should be a primary concern for all civilisations. “Soil is more important than oil,” he says in a recent documentary. “Soil is as much of a non-renewable resource as oil. Start there, and ask: ‘What does that require of us?’”

Lee DeHaan at the Land Institute in Salina, Kansas. Photograph: Ben Martynoga

Jackson hit upon an answer during a visit to a native prairie reserve in Kansas in the late 1970s. Prairies are highly productive and biodiverse perennial grassland ecosystems. They don’t erode soils; they build them. Indeed, the rich soils that make much of the US midwest and Great Plains such prime agricultural lands were formed, over thousands of years, by prairie plants working with underground microbes.

Why is it that we cannot have perennial grains that grow like prairie plants, Jackson wondered. “That was the epiphany that set me off,” he said in a recent interview.

DeHaan, 52, learned about Jackson’s mission while he was a teenager in the early 1990s. Having grown up on a Minnesota farm, he was immediately inspired.
“I would love to try to create the first perennial grain crop,” he resolved. “That became my dream.”

Though still under development, Kernza is already a viable crop, grown at modest scale in 15 US states. Kernza seeds and flour are used in a range of products, from beers to breakfast cereals.

The key challenge is yields. In Kansas, the best Kernza yields are about one-quarter those of annual wheat. But DeHaan says this is changing rapidly. “My best current extrapolation is that some Kernza plants could have wheat-like yields within about 15 years.”

“We have to go fast,” he says. To hit this target, his breeding scheme deploys DNA profiling, computer modelling and far-red LED lighting to push the experimental plants through two full breeding cycles each year.

But yields are just one metric of success. Whereas annual wheat roots are about half a metre long and temporary, Kernza’s roots are permanent and can plunge 3 metres deep. Such roots unlock a whole suite of environmental and agricultural benefits: stabilising and enriching soils, gathering nutrients and providing water, even during droughts.

A comparison of wheatgrass (left) and wheat roots at the Land Institute. Photograph: Ben Martynoga/The Land Institute

Perennial plants also tend to have far stronger in-built resistance to pests, diseases and weeds than annual plants, especially when grown in mixed plant polycultures.

The Land Institute is working with collaborators across 30 countries to develop many new perennial crops: oil seeds, wheat, pulses, quinoa and several other grains.

The potential applications are diverse. In Uganda, researchers are developing perennial sorghum for drought tolerance. In war-torn Ukraine, where supply chains are disrupted and rich soils are degrading, Kernza is being tested as a low-input crop. As DeHaan, Crews and colleagues write in a recent scientific paper, perennial grains represent “a farmer’s dream … a cultivar that is planted once and then harvested every season for several years with a minimum of land management.”

Success is far from guaranteed. But perennial rice, grown in China since 2018, provides crucial proof of concept. Led by Yunnan University with Land Institute support, the work took just 20 years. Perennial rice now matches the yields of elite annual varieties, with research demonstrating significant greenhouse gas reductions.

Perennial rice grown in a research trial in Yunnan. Photograph: Ben Martynoga/The Land Institute

DeHaan believes perennial grains are uniquely capable of rebalancing what he calls the “three-legged stool” of agricultural sustainability, whereby productivity, farm economics and environmental impact must be in balance.

This metaphor is not abstract for DeHaan – he has lived it. During the 1980s, his family’s Minnesota farm produced plenty of grain but the economics failed. Spiking interest rates forced them to sell, along with thousands of other midwest farms. The environmental costs – eroding soil, contaminated water – did not appear on any ledger, but they were visible in the landscape.

Current agriculture, DeHaan argues, is supported by $600bn in annual subsidies worldwide, which too often prop up production, while farming communities struggle and ecological damage mounts.

Perennial grains could eventually deliver on all three fronts simultaneously. But formidable challenges must still be solved to achieve that.

Kernza growing on the Land Institute’s research fields. Lee DeHaan estimates the crop’s yields could match wheat within 15 years. Photograph: Ben Martynoga

Yields must improve substantially. The problem of harvests tapering off, year by year, must also be solved. Farmers will have to develop new methods for growing and harvesting these crops. Markets present another hurdle. Current supply chains are optimised for a narrow range of staple crops, grown in monoculture, making processing costs prohibitive for new crops with different properties.

Kernza grain – smaller than wheat – ready for milling. Photograph: The Land Institute

For all these reasons, DeHaan firmly rejects the idea that perennials are a “silver bullet”. “The reason is that it’s difficult,” he says. “The trade-off is time and investment. That’s why they don’t exist yet. It’s going to take decades of work and millions of dollars.”

Remarkably, DeHaan does not paint the current agricultural-industrial complex as the enemy. “Every disruptive technology is always opposed by those being disrupted,” he says. “But if the companies [that make up] the current system can adjust to the disruption, they can play in that new world just the same.”

The Land Institute’s strategy is redirection rather than replacement. “Our trajectory is to eventually get the resources that are currently dedicated to annual grain crops directed to developing varieties of perennials,” says DeHaan. “That’s our [route to] success.”

There are signs that this is already working, with the food firm General Mills now incorporating Kernza into its breakfast cereals.

Back in the Kansas greenhouse, DeHaan strikes a reflective note. “When I started working here in 2001, these ideas were regarded as very radical. It was embarrassing to even bring up the ideas we were working on. It was laughable.”

That, he says, is no longer true. Major research institutions, businesses and an expanding network of global partners are now engaging with perennial grain development.

DeHaan points to his “winners” – the 100 young Kernza plants before us. Within a human generation, their descendants could be feeding millions while repairing soils that took millennia to form. “We don’t just have our head in the clouds,” he says. “We’re not just dreaming of this impossible future.”

This moss survived in space for 9 months

In an experiment on the outside of the International Space Station, a species of moss survived in space for 9 months. And it could have lasted much longer.

Meet a spreading earthmoss known as Physcomitrium patens. It’s frequently used as a model organism for studies on plant evolution, development, and physiology. In this image, a reddish-brown sporophyte sits at the top center of a leafy gametophore. This capsule contains numerous spores inside. Scientists tested samples like these on the outside of the International Space Station (ISS) to see if they could tolerate the extreme airless environment. And they did. The moss survived in space for 9 months and could have lasted even longer. Image via Tomomichi Fujita/ EurekAlert! (CC BY-SA).

Space is a deadly environment, with no air, extreme temperature swings and harsh radiation. Could any life survive there? Researchers in Japan tested a type of moss called spreading earthmoss on the exterior of the International Space Station. The moss survived for nine months, and the spores were still able to reproduce when brought back to Earth.

Moss survived in space for 9 months

Can life exist in space? Not simply on other planets or moons, but in the cold, dark, airless void of space itself? Most organisms would perish almost immediately, to be sure. But researchers in Japan recently experimented with moss, with surprising results. They said on November 20, 2025, that more than 80% of their moss spores survived nine months on the outside of the International Space Station. Not only that, but when brought back to Earth, they were still capable of reproducing. Nature, it seems, is even tougher than we thought!

Amazingly, the results show that some primitive plants – not even just microorganisms – can survive long-term exposure to the extreme space environment. The researchers published their peer-reviewed findings in the journal iScience on November 20, 2025.

A deadly environment for life

Space is a horrible place for life. The lack of air, radiation and extreme cold make it pretty much unsurvivable for life as we know it. As lead author Tomomichi Fujita at Hokkaido University in Japan stated:

Most living organisms, including humans, cannot survive even briefly in the vacuum of space. However, the moss spores retained their vitality after nine months of direct exposure. This provides striking evidence that the life that has evolved on Earth possesses, at the cellular level, intrinsic mechanisms to endure the conditions of space.

What about moss?

Researchers wanted to see if any Earthly life could survive in space’s deadly environment for the long term. To find out, they decided to do some experiments with a type of moss called spreading earthmoss, or Physcomitrium patens. The researchers sent hundreds of sporophytes – encapsulated moss spores – to the International Space Station in March 2022, aboard the Cygnus NG-17 spacecraft. They attached the sporophyte samples to the outside of the ISS, where they were exposed to the vacuum of space for 283 days. By doing so, the samples were subjected to high levels of UV (ultraviolet) radiation and extreme swings of temperature. The samples later returned to Earth in January 2023.

The researchers tested three parts of the moss. These were the protonemata, or juvenile moss; brood cells, or specialized stem cells that emerge under stress conditions; and the sporophytes.
Fujita said:

We anticipated that the combined stresses of space, including vacuum, cosmic radiation, extreme temperature fluctuations and microgravity, would cause far greater damage than any single stress alone.

Astronauts placed the moss samples on the outside of the International Space Station for the 9-month-long experiment. Incredibly, more than 80% of the encapsulated spores survived the trip to space and back to Earth. Image via NASA/ Roscosmos.

The moss survived!

So, how did the moss do? The results were mixed, but overall showed that the moss could survive in space. The radiation was the most difficult aspect of the space environment to withstand. The sporophytes were the most resilient. Incredibly, they were able to survive and germinate after being exposed to -196 degrees Celsius (-320 degrees Fahrenheit) for more than a week. At the other extreme, they also survived in 55 degrees C (131 degrees F) heat for a month. Some brood cells survived as well, but the encased spores were about 1,000 times more tolerant to the UV radiation. On the other hand, none of the juvenile moss survived the high UV levels or the extreme temperatures.

Samples of moss spores that germinated after their 9-month exposure to space. Image via Dr. Chang-hyun Maeng/ Maika Kobayashi/ EurekAlert! (CC BY-SA).

How did the spores survive?

So why did the encapsulated spores do so well? The researchers said the natural structure surrounding the spore itself helps to protect the spore. Essentially, it absorbs the UV radiation and surrounds the inner spore both physically and chemically to prevent damage. As it turns out, this might be associated with the evolution of mosses. This is an adaptation that helped bryophytes – the group of plants to which mosses belong – to make the transition from aquatic to terrestrial plants 500 million years ago.

Overall, more than 80% of the spores survived the journey to space and then back to Earth. And only 11% were unable to germinate after being brought back to the lab on Earth. That’s impressive!

In addition, the researchers also tested the levels of chlorophyll in the spores. After the exposure to space, the spores still had normal amounts of chlorophyll, except for chlorophyll a specifically. In that case, there was a 20% reduction. Chlorophyll a is used in oxygenic photosynthesis. It absorbs the most energy from wavelengths of violet-blue and orange-red light.

Tomomichi Fujita at Hokkaido University in Japan is the lead author of the new study about moss in space. Image via Hokkaido University.

Spores could have survived for 15 years

The time available for the experiment was limited to several months. However, the researchers wondered if the moss spores could have survived even longer. Using mathematical models, they determined the spores would likely have continued to live in space for about 15 years, or 5,600 days, altogether. The researchers note this prediction is a rough estimate. More data would still be needed to make that assessment even more accurate.

So the results show just how resilient moss is, and perhaps some other kinds of life, too. Fujita said:

This study demonstrates the astonishing resilience of life that originated on Earth. Ultimately, we hope this work opens a new frontier toward constructing ecosystems in extraterrestrial environments such as the moon and Mars. I hope that our moss research will serve as a starting point.
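The article does not spell out the extrapolation model, so the following is only a generic sketch of how such an estimate can work: fit a simple decay curve to survival fractions measured at a few exposure times, then solve for when survival would fall below a chosen viability threshold. The exponential model and every number below are invented for illustration; they are not the study’s data.

# Generic survival-extrapolation sketch (not the study's actual model).
# All numbers are hypothetical, for illustration only.
import numpy as np

# Hypothetical survival fractions after increasing exposure times (days).
t = np.array([30.0, 90.0, 283.0])
s = np.array([0.95, 0.90, 0.80])

# Assume exponential decay s(t) = exp(-k * t); fit k in log space.
k = -np.polyfit(t, np.log(s), 1)[0]

# Extrapolate: time until survival drops below a chosen viability threshold.
threshold = 0.01
t_limit = np.log(1.0 / threshold) / k
print(f"fitted decay rate k = {k:.2e} per day")
print(f"projected time to {threshold:.0%} survival: {t_limit:,.0f} days")

As the researchers caution about their own, more careful estimate, an extrapolation like this is only as good as the model and the handful of data points behind it.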
Bottom line: In an experiment on the outside of the International Space Station, a species of moss survived in space for nine months. And it could have lasted much longer.

Source: Extreme environmental tolerance and space survivability of the moss, Physcomitrium patens

Via EurekAlert!

Read more: This desert moss could grow on Mars, no greenhouse needed

Read more: Colorful life on exoplanets might be lurking in clouds

New method improves the reliability of statistical estimations

The technique can help scientists in economics, public health, and other fields understand whether to trust the results of their experiments.

Let’s say an environmental scientist is studying whether exposure to air pollution is associated with lower birth weights in a particular county. They might train a machine-learning model to estimate the magnitude of this association, since machine-learning methods are especially good at learning complex relationships.

Standard machine-learning methods excel at making predictions and sometimes provide uncertainties, like confidence intervals, for these predictions. However, they generally don’t provide estimates or confidence intervals when determining whether two variables are related. Other methods have been developed specifically to address this association problem and provide confidence intervals. But, in spatial settings, MIT researchers found these confidence intervals can be completely off the mark.

When variables like air pollution levels or precipitation change across different locations, common methods for generating confidence intervals may claim a high level of confidence when, in fact, the estimation completely failed to capture the actual value. These faulty confidence intervals can mislead the user into trusting a model that failed.

After identifying this shortfall, the researchers developed a new method designed to generate valid confidence intervals for problems involving data that vary across space. In simulations and experiments with real data, their method was the only technique that consistently generated accurate confidence intervals.

This work could help researchers in fields like environmental science, economics, and epidemiology better understand when to trust the results of certain experiments.

“There are so many problems where people are interested in understanding phenomena over space, like weather or forest management. We’ve shown that, for this broad class of problems, there are more appropriate methods that can get us better performance, a better understanding of what is going on, and results that are more trustworthy,” says Tamara Broderick, an associate professor in MIT’s Department of Electrical Engineering and Computer Science (EECS), a member of the Laboratory for Information and Decision Systems (LIDS) and the Institute for Data, Systems, and Society, an affiliate of the Computer Science and Artificial Intelligence Laboratory (CSAIL), and senior author of this study.

Broderick is joined on the paper by co-lead authors David R. Burt, a postdoc, and Renato Berlinghieri, an EECS graduate student; and Stephen Bates, an assistant professor in EECS and member of LIDS. The research was recently presented at the Conference on Neural Information Processing Systems.

Invalid assumptions

Spatial association involves studying how a variable and a certain outcome are related over a geographic area. For instance, one might want to study how tree cover in the United States relates to elevation.

To solve this type of problem, a scientist could gather observational data from many locations and use it to estimate the association at a different location where they do not have data.

The MIT researchers realized that, in this case, existing methods often generate confidence intervals that are completely wrong.
A model might say it is 95 percent confident its estimation captures the true relationship between tree cover and elevation, when it didn’t capture that relationship at all.

After exploring this problem, the researchers determined that the assumptions these confidence interval methods rely on don’t hold up when data vary spatially.

Assumptions are like rules that must be followed to ensure results of a statistical analysis are valid. Common methods for generating confidence intervals operate under various assumptions.

First, they assume that the source data, which is the observational data one gathered to train the model, are independent and identically distributed. This assumption implies that the chance of including one location in the data has no bearing on whether another is included. But, for example, U.S. Environmental Protection Agency (EPA) air sensors are placed with other air sensor locations in mind.

Second, existing methods often assume that the model is perfectly correct, but this assumption is never true in practice. Finally, they assume the source data are similar to the target data where one wants to estimate.

But in spatial settings, the source data can be fundamentally different from the target data because the target data are in a different location than where the source data were gathered.

For instance, a scientist might use data from EPA pollution monitors to train a machine-learning model that can predict health outcomes in a rural area where there are no monitors. But the EPA pollution monitors are likely placed in urban areas, where there is more traffic and heavy industry, so the air quality data will be much different than the air quality data in the rural area.

In this case, estimates of association using the urban data suffer from bias because the target data are systematically different from the source data.

A smooth solution

The new method for generating confidence intervals explicitly accounts for this potential bias. Instead of assuming the source and target data are similar, the researchers assume the data vary smoothly over space.

For instance, with fine particulate air pollution, one wouldn’t expect the pollution level on one city block to be starkly different than the pollution level on the next city block. Instead, pollution levels would smoothly taper off as one moves away from a pollution source.

“For these types of problems, this spatial smoothness assumption is more appropriate. It is a better match for what is actually going on in the data,” Broderick says.

When they compared their method to other common techniques, they found it was the only one that could consistently produce reliable confidence intervals for spatial analyses. In addition, their method remains reliable even when the observational data are distorted by random errors.

In the future, the researchers want to apply this analysis to different types of variables and explore other applications where it could provide more reliable results.

This research was funded, in part, by an MIT Social and Ethical Responsibilities of Computing (SERC) seed grant, the Office of Naval Research, Generali, Microsoft, and the National Science Foundation (NSF).
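The failure mode described above is easy to reproduce in a toy simulation. The sketch below is not the MIT method; the pollution function, the clustered monitor locations, and the noise level are all invented for illustration. It shows a nominal 95 percent confidence interval, built on the i.i.d. assumption, essentially never covering the true value at a distant target site.

# Toy simulation (not the paper's method) of how i.i.d.-style confidence
# intervals can fail when data vary over space.
import numpy as np

rng = np.random.default_rng(0)

def pollution(loc):
    # True spatially varying quantity: pollution tapering off smoothly
    # with distance from a source at loc = 0 (an assumed toy model).
    return np.exp(-2.0 * loc)

target_loc = 0.9              # a rural site with no monitors nearby
truth = pollution(target_loc)

trials, covered = 1000, 0
for _ in range(trials):
    # Monitors are clustered near the source (loc in [0, 0.3]), much as
    # EPA sensors concentrate in urban areas.
    locs = rng.uniform(0.0, 0.3, size=200)
    obs = pollution(locs) + 0.05 * rng.normal(size=200)

    # Naive 95% CI: treat the sample as i.i.d. draws of the target quantity.
    m = obs.mean()
    se = obs.std(ddof=1) / np.sqrt(obs.size)
    if m - 1.96 * se <= truth <= m + 1.96 * se:
        covered += 1

# Nominal coverage is 95%; actual coverage here is essentially zero, because
# the source sites are systematically different from the target site.
print(f"true value at target: {truth:.3f}")
print(f"naive CI coverage over {trials} trials: {covered / trials:.1%}")

A method built on the spatial smoothness assumption instead borrows strength from nearby sites and widens its intervals when, as here, almost no data sit near the target.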

Government reveals taxpayer-funded deal to keep Australia’s largest aluminium smelter open. How long will we pay?

The federal government has done a deal - underwritten by the taxpayer - to keep Australia’s largest aluminium smelter open. What’s the exit strategy if it doesn’t go to plan?

It seemed inevitable – politically at least – that the federal government would step in to save Tomago Aluminium in New South Wales, Australia’s largest aluminium smelter.

Rio Tinto, the majority owner of Tomago, has enjoyed attractively priced electricity for a long time, most recently with AGL. But this contract ends in 2028. Unable to find a replacement at a price it could accept, Rio Tinto warned that Tomago was facing closure. Tomago produces more than one-third of Australia’s aluminium and accounts for 12% of NSW’s energy consumption.

On Friday, Prime Minister Anthony Albanese announced a Commonwealth-led deal for electricity supply beyond 2028. This deal will provide the smelter with billions of dollars in subsidised power from the Commonwealth-owned Snowy Hydro through a portfolio of renewables, backed by storage and gas. This follows months of negotiation to avoid the smelter closing and sacking its roughly 1,000 workers.

The government has provided funding to support other struggling manufacturers such as the Whyalla steelworks and the Mount Isa copper smelter, and wants to see aluminium production continue in Australia. About 30–40% of the cost of making aluminium is the energy, so it’s a huge input. Electricity from the market would have been considerably more expensive, so the government is subsidising the commercial price.

The deal may have been a necessary and immediate solution to a political problem with local economic and social impacts. However, it raises several important questions about the risks involved and the longevity of the plant.

Risks and benefits

First, to what risk is the federal government exposed? Commodity markets such as aluminium are prone to difficult cycles, and there’s a chance Tomago might not survive at all, in which case the government is off the hook.

Not only are we looking to subsidise Tomago’s electricity, but we are looking for Snowy Hydro to invest in renewable energy projects and build more renewable energy in NSW. The history of building renewable energy and its supporting transmission infrastructure suggests that both cost and time constraints become problematic. The NSW government may have a role in supporting this side of the deal.

The Commonwealth’s case for making this deal is presumably underpinned by its Future Made in Australia policy. This says we should be supporting industries where there’s a national interest in a low-emissions world. So if, for example, we can see a future where subsidising Tomago’s electricity for five or ten years would mean it can produce low-emission aluminium the world wants to buy, that would be a success.

But what happens if, after five or ten years, the world hasn’t sufficiently changed to provide enough renewable energy to make our electricity cost less? What if the rest of the world wants green, low-emissions aluminium, but that’s not what Australia produces? If the risks the government is underwriting crystallise in a bad way, does the government have an exit strategy?

We’ve been here before

In 1984, under the leadership of John Cain, the Victorian Labor government signed a joint venture agreement with Alcoa to build an aluminium smelter at Portland, including a deal to subsidise electricity until 2016. Forty years later, we’re still paying for it.

With Tomago, we don’t want Australian taxpayers exposed to something over which we have no control – the global price of aluminium.
If the price of aluminium collapses, or Snowy Hydro is permanently uncompetitive, or China dominates the world market, the hypothesis that Tomago can be competitive in the long term falls apart.

Interestingly, this deal is very different to the one the Commonwealth and Queensland governments have done to support Rio Tinto's Boyne smelter in Gladstone. In October, Rio Tinto announced plans to possibly bring forward the closure of Gladstone Power Station to 2029, six years ahead of the current schedule, and supply the smelter with predominantly renewable electricity. The move was welcomed by environmental groups, as Gladstone is Queensland's oldest and largest coal-fired station. But some commentators have said closing the plant in four years' time is unrealistic, and a staged phase-out would be better.

The announcement this week, welcomed by the business and its workers, is probably unsurprising. But we haven't seen the detail. The government may very well have a case for this deal, but the future of the plant and its power supply remain unknowable. The risks with taxpayer funds may have been worth taking, but they should be clearly explained and justified.

Tony Wood does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

California Coastal Commission approves land deal to extend last nuclear plant through 2030

A landmark deal with Pacific Gas & Electric will extend the life of the state's remaining nuclear power plant in exchange for thousands of acres of new land conservation in San Luis Obispo County.

California environmental regulators on Thursday struck a landmark deal with Pacific Gas & Electric to extend the life of the state's last remaining nuclear power plant in exchange for thousands of acres of new land conservation in San Luis Obispo County.

PG&E's agreement with the California Coastal Commission clears a key hurdle for the Diablo Canyon nuclear plant to remain online until at least 2030. The plant was slated to close this year, largely due to concerns over seismic safety, but state officials pushed to delay it, saying the plant remains essential for the reliable operation of California's electrical grid. Diablo Canyon provides nearly 9% of the electricity generated in the state, making it California's single largest source of power.

The Coastal Commission voted 9-3 to approve the plan, settling the fate of some 12,000 acres that surround the power plant as compensation for environmental harm caused by its continued operation. Nuclear power does not emit greenhouse gases. But Diablo Canyon uses an estimated 2.5 billion gallons of ocean water each day to absorb heat in a process known as "once-through cooling," which kills an estimated 2 billion or more marine organisms each year.

Some stakeholders in the region celebrated the conservation deal, while others were disappointed by the decision to trade land for marine impacts — including a Native tribe that had hoped the land would be returned to them. Diablo Canyon sits along one of the most rugged and ecologically rich stretches of the California coast.

Under the agreement, PG&E will immediately place a 4,500-acre parcel on the north side of the property known as the "North Ranch" into a conservation easement and pursue transfer of its ownership to a public agency such as the California Department of Parks and Recreation, a nonprofit land conservation organization, or a tribe. A purchase by State Parks would expand the existing Montaña de Oro State Park by more than 50%. PG&E will also offer a 2,200-acre parcel on the southern part of the property known as "Wild Cherry Canyon" for purchase by a government agency, nonprofit land conservation organization or tribe. In addition, the utility will provide $10 million to plan and manage roughly 25 miles of new public access trails across the entire property.

"It's going to be something that changes lives on the Central Coast in perpetuity," Commissioner Christopher Lopez said at the meeting. "This matters to generations that have yet to exist on this planet ... this is going to be a place that so many people mark in their minds as a place that transforms their lives as they visit and recreate and love it in a way most of us can't even imagine today."

Critically, the plan could see Diablo Canyon remain operational much longer than the five years dictated by Thursday's agreement. While the state Legislature only authorized the plant to operate through 2030, PG&E's federal license renewal would cover 20 years of operations, potentially keeping it online until 2045. Should that happen, the utility would need to make additional land concessions, including expanding an existing conservation area on the southern part of the property known as the "South Ranch" to 2,500 acres. The plan also includes rights of first refusal for a government agency or a land conservation group to purchase the entirety of the 5,000-acre South Ranch, along with Wild Cherry Canyon, after 2030.
Pelicans along the concrete breakwater at Pacific Gas and Electric's Diablo Canyon Power Plant. (Brian van der Brug / Los Angeles Times)

Many stakeholders were frustrated by the carve-out for the South Ranch, but still saw the agreement as an overall victory for Californians. "It is a once in a lifetime opportunity," Sen. John Laird (D-Santa Cruz) said in a phone call ahead of Thursday's vote. "I have not been out there where it has not been breathtakingly beautiful, where it is not this incredible, unique location, where you're not seeing, for much of it, a human structure anywhere. It is just one of those last unique opportunities to protect very special land near the California coast."

Others, however, described the deal as disappointing and inadequate. That includes many of the region's Native Americans, who said they felt sidelined by the agreement. The deal does not preclude tribal groups from purchasing the land in the future, but it doesn't guarantee that or give them priority.

The yak titʸu titʸu yak tiłhini Northern Chumash Tribe of San Luis Obispo County and Region, which met with the Coastal Commission several times in the lead-up to Thursday's vote, had hoped to see the land returned to them. Scott Lanthrop is a member of the tribe's board and has worked on the issue for several years. "The sad part is our group is not being recognized as the ultimate conservationist," he told The Times. "Any normal person, if you ask the question, would you rather have a tribal group that is totally connected to earth and wind and water, or would you like to have some state agency or gigantic NGO manage this land, I think the answer would be, 'Hey, you probably should give it back to the tribe.'"

Tribe chair Mona Tucker said she fears that free public access to the land could end up harming it instead of helping it, as the Coastal Commission intends. "In my mind, I'm not understanding how taking the land ... is mitigation for marine life," Tucker said. "It doesn't change anything as far as impacts to the water. It changes a lot as far as impacts to the land."

The deal has been complicated by jurisdictional questions, including who can determine what happens to the land. While PG&E owns the North Ranch parcel that could be transferred to State Parks, the South Ranch and Wild Cherry Canyon are owned by its subsidiary, Eureka Energy Company. What's more, the California Public Utilities Commission, which regulates utilities such as PG&E, has a Tribal Land Transfer Policy that calls for investor-owned power companies to transfer land they no longer want to Native American tribes.

In the case of Diablo Canyon, the Coastal Commission became the decision maker because it is responsible for ensuring compensation for the environmental harm from the facility's continued operation. Since the commission determined Diablo's use of ocean water can't be avoided, it looked to land conservation as the next best method. This "out-of-kind" trade-off is a rare, but not unheard-of, way of making up for the loss of marine life. It's an approach that is "feasible and more likely to succeed" than several other methods considered, according to the commission's staff report.

"This plan supports the continued operation of a major source of reliable electricity for California, and is in alignment with our state's clean energy goals and focus on coastal protection," Paula Gerfen, Diablo Canyon's senior vice president and chief nuclear officer, said in a statement.
But Assemblymember Dawn Addis (D-Morro Bay) said the deal was "not the best we can do" — particularly because the fate of the South Ranch now depends on the plant staying in operation beyond 2030. "I believe the time really is now for the immediate full conservation of the 12,000 [acres], and to bring accountability and trust back for the voters of San Luis Obispo County," Addis said during the meeting.

There are also concerns about the safety of continuing to operate a nuclear plant in California, with its radioactive waste stored in concrete casks on the site. Diablo Canyon is subject to ground shaking and earthquake hazards, including from the nearby Hosgri and Shoreline faults, about 2.5 miles and 1 mile from the facility, respectively. PG&E says the plant was built to withstand such hazards. It completed a seismic hazard assessment in 2024 and determined Diablo Canyon is safe to continue operating through 2030. The Coastal Commission, however, found that if the plant operates longer, further seismic study would be warranted.

A key development for continuing Diablo Canyon's operation came in 2022 with Senate Bill 846, which delayed closure by up to five additional years. At the time, California was plagued by rolling blackouts driven by extreme heat waves, and state officials were growing wary of taking such a major source of power offline.

But California has made great gains in the last several years — including massive investments in solar energy and battery storage — and some questioned whether the facility is still needed at all. Others said conserving thousands of acres of land still won't make up for the harms to the ocean.

"It is unmitigatable," said David Weisman, executive director of the nonprofit Alliance for Nuclear Responsibility. He noted that the Coastal Commission's staff report says it would take about 99 years for the benefits of conserving 4,500 acres of land to balance the loss of marine life; 20 more years of operation would take about 305 years to strike that same balance.

But some pointed out that neither the commission's analysis nor fisheries data show that Diablo's operations cause declines in marine life. Ocean harm may be overestimated, said Seaver Wang, an oceanographer and the climate and energy director at the Breakthrough Institute, a Berkeley-based research center. In California's push to transition to clean energy, every option comes with downsides, Wang said. In the case of nuclear power — which produces no greenhouse gas emissions — it's all part of the trade-off, he said. "There's no such thing as impacts-free energy," he said.

The Coastal Commission's vote clears one of the last remaining obstacles to keeping the plant online. PG&E will also need a final nod from the Regional Water Quality Control Board, which decides on a pollution discharge permit in February. The federal Nuclear Regulatory Commission will also have to sign off on Diablo's extension.

Changes to polar bear DNA could help them adapt to global heating, study finds

Scientists say bears in southern Greenland differ genetically from those in the north, suggesting they could adjust to warmer conditions.

Changes in polar bear DNA that could help the animals adapt to warmer climates have been detected by researchers, in a study thought to be the first to find a statistically significant link between rising temperatures and changing DNA in a wild mammal species.

Climate breakdown is threatening the survival of polar bears: two-thirds of them are expected to have disappeared by 2050 as their icy habitat melts and the weather becomes hotter.

Now scientists at the University of East Anglia have found that some genes related to heat stress, ageing and metabolism are behaving differently in polar bears living in south-east Greenland, suggesting the animals may be adjusting to warmer conditions.

The researchers analysed blood samples taken from polar bears in two regions of Greenland and compared "jumping genes": small, mobile pieces of the genome that can influence how other genes work. They looked at these genes in relation to temperatures in the two regions and at the associated changes in gene expression.

"DNA is the instruction book inside every cell, guiding how an organism grows and develops," said the lead researcher, Dr Alice Godden. "By comparing these bears' active genes to local climate data, we found that rising temperatures appear to be driving a dramatic increase in the activity of jumping genes within the south-east Greenland bears' DNA."

As local climates and diets evolve as a result of changes in habitat and prey forced by global heating, the bears' genetics appear to be adapting, with the group in the warmest part of the country showing more changes than the communities farther north. The authors say these changes could help us understand how polar bears might survive in a warming world, inform understanding of which populations are most at risk, and guide future conservation efforts. This is because the findings, published on Friday in the journal Mobile DNA, suggest the genes that are changing play a crucial role in how different polar bear populations are evolving.

Godden said: "This finding is important because it shows, for the first time, that a unique group of polar bears in the warmest part of Greenland are using 'jumping genes' to rapidly rewrite their own DNA, which might be a desperate survival mechanism against melting sea ice."

Temperatures in north-east Greenland are colder and less variable, while the south-east is a much warmer and less icy environment with steep temperature fluctuations. DNA sequences in animals change over time, but this process can be accelerated by environmental stress such as a rapidly heating climate.

There were some interesting DNA changes, such as in regions linked to fat processing, that could help polar bears survive when food is scarce. Bears in warmer regions had rougher, more plant-based diets compared with the fatty, seal-based diets of northern bears, and the DNA of south-eastern bears seemed to be adapting to this.

Godden said: "We identified several genetic hotspots where these jumping genes were highly active, with some located in the protein-coding regions of the genome, suggesting that the bears are undergoing rapid, fundamental genetic changes as they adapt to their disappearing sea ice habitat."

The next step will be to look at other polar bear populations, of which there are about 20 around the world, to see if similar changes are happening to their DNA. This research could help protect the bears from extinction.
But the scientists said it was crucial to stop temperature rises from accelerating by reducing the burning of fossil fuels. Godden said: "We cannot be complacent; this offers some hope but does not mean that polar bears are at any less risk of extinction. We still need to be doing everything we can to reduce global carbon emissions and slow temperature increases."

