
Op-ed: When it comes to food chemicals, Europe’s food safety agency and the FDA are oceans apart

News Feed
Thursday, May 2, 2024


The U.S. Food and Drug Administration (FDA) and the European Food Safety Authority (EFSA) are two major global agencies in charge of food chemical safety.


It is common to hear that food chemical regulations in the EU are more protective of human health than in the U.S. The latest example is the recent ban of four food additives in California. The state’s Governor, Gavin Newsom, noted that the chemicals were already banned in the EU, implying that the lack of action by the FDA was putting the health of Californians at risk.

We examined the FDA and EFSA’s responsibilities on food chemical safety to better understand why EFSA decisions are in general more protective of health. We specifically looked at the agencies’ approach to the safety of bisphenol-A (BPA) as an example of disparate decision-making.

We found that in the EU the risk assessment and risk management of food chemicals are handled by different entities: EFSA focuses on the science, and the European Commission decides how the risk is managed. This independence leaves EFSA free to follow the science; on BPA, for example, it has resulted in three risk assessments, the latest showing greater harm to human health. In contrast, the FDA conducts both risk assessment and management, and it is unclear how its decisions are made. Over the years, the FDA has reviewed BPA studies but continued to maintain that its uses are safe.

As the FDA undergoes a reorganization, the agency has a prime opportunity to increase transparency, expand collaborations and update its approach to evaluating food chemical safety.

Separation of risk assessment and management 


Both in the EU and the U.S., the safety of chemicals allowed in food is based on the chemical’s inherent hazard and the level of exposure. If the risk is such that public health must be protected, a risk management decision is made, often via regulation. These decisions could range from banning chemicals to establishing a consumption level that would not increase health risks.
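
To make that hazard-and-exposure logic concrete, here is a minimal sketch of the screening comparison both agencies ultimately perform: an estimated daily intake is set against a health-based guidance value such as a tolerable daily intake (TDI). All numbers below are hypothetical placeholders for illustration, not values used by EFSA or the FDA.

# Minimal sketch of a screening-level comparison of exposure against a guidance value.
# All inputs are hypothetical placeholders, not agency values.

body_weight_kg = 20.0                      # assumed body weight, e.g. a young child
estimated_intake_ug_per_day = 0.5          # assumed estimated daily intake, micrograms/day
tolerable_daily_intake_ug_per_kg = 0.05    # hypothetical TDI, micrograms per kg body weight per day

exposure_ug_per_kg = estimated_intake_ug_per_day / body_weight_kg
risk_quotient = exposure_ug_per_kg / tolerable_daily_intake_ug_per_kg

print(f"estimated exposure: {exposure_ug_per_kg:.3f} ug/kg bw/day")
print(f"risk quotient (exposure / TDI): {risk_quotient:.2f}")

if risk_quotient > 1:
    print("exposure exceeds the guidance value, so a risk management decision would be triggered")
else:
    print("exposure is below the guidance value at these assumed inputs")

In practice both agencies work with far more elaborate exposure models and uncertainty factors, but the decision point is the same: whether estimated exposure approaches or exceeds the health-based guidance value.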

"EFSA focuses on science and the European Commission decides on how the risk is managed ... In contrast, the FDA conducts both risk assessment and management and it is unclear how decisions are made."

In the EU, the risk assessment and the risk management decisions are made by different entities. EFSA conducts risk assessments and the European Commission then makes the risk management decision based on EFSA’s findings. This separation allows the risk assessment to be grounded in science and the risk management to consider not only the science but also social, political, technological and economic factors, as well as the precautionary principle.

In the U.S., the FDA conducts both risk assessment and management.

Striking differences in assessing and managing risk



The EFSA relies on scientific panels composed of independent experts with high standards to limit conflicts of interest and bias. There are ten permanent panels and a scientific committee that supports their work. The scientific opinions are often unanimous, but when they’re not, minority reports are published in the EFSA Journal and also inform the European Commission’s risk management decisions.

Unlike the EFSA, the FDA relies on its own staff to review safety assessments and information provided by manufacturers. A safety assessment usually has four sections: toxicology, chemistry, environmental impact and policy; it is unclear whether an epidemiologist is among the reviewers.

One FDA staff member from each section writes a memo with a summary of information and the conclusions. These memos inform the risk management decision about the use of a substance. The scientific evaluation is not always publicly available. It is also unclear how and by whom risk management decisions are made and whether the risk assessors are also involved in risk management.

Prioritization of chemicals for reassessment


The EFSA is mandated by law to re-evaluate all food additives authorized for use before 2009. The EFSA also identifies emerging risks and collects data about things like consumption, exposure and biological risk and responds to similar requests from member states.

In the U.S., there is no legal mandate for the FDA to re-evaluate the use of the approximately 10,000 chemicals allowed in food, many of them authorized decades ago with little or no safety data. It is unclear whether there is a process to identify emerging risks. The first re-evaluation of chemicals came in response to President Nixon’s 1969 directive to reassess hundreds of substances the FDA had determined to be generally recognized as safe. Only recently has the FDA taken the initiative to re-evaluate the safety of partially hydrogenated oils, Irgafos 168 and brominated vegetable oil. Other re-evaluations have come in response to petitions from public interest organizations.

BPA: A tale of two agencies


The risk assessment of BPA in food-contact materials is a good example of how two science-based agencies have made very different risk management decisions. BPA has been linked to myriad health problems, including cancer, diabetes, obesity, and reproductive, immune, nervous system and behavioral problems.

EFSA conducted risk assessments of BPA in 2006, 2015 and 2023, each time at the request of the European Commission in response to new science. The second and third re-evaluations lowered the allowable daily exposure to BPA in light of new evidence showing greater harm to human health. To complete the process, the Commission recently published its proposed regulation of BPA, which includes a ban on its most common uses in polycarbonate plastics and metal can coatings.

The FDA’s assessment of BPA has been riddled with missteps and a lack of transparency. The FDA approved BPA for use in food-contact applications in the early 1960s. It did not release a draft safety assessment until 2008, at the request of its commissioner in light of findings by the National Toxicology Program and ongoing evaluations in Europe. The FDA then asked its Science Board to review the draft and establish a subcommittee; there was also a public meeting and a report.

The subcommittee, which included some members of the board and external experts, had several concerns about the FDA’s assessment. In 2014, the FDA published a memo summarizing an updated safety assessment of BPA. The five-page memo cites the toxicology evaluation conducted in previous years and an exposure assessment based on an unpublished model. The agency concluded that the estimated dietary exposure to BPA was low enough to protect children and adults. This was the FDA's last safety assessment. Compared with EFSA’s process, the FDA’s is far less structured and less open.

At the FDA "it is also unclear how and by whom risk management decisions are made and whether the risk assessors are also involved in risk management."

The FDA has conducted its own studies on BPA at different life stages and in different species. The agency was a member of the Consortium Linking Academic and Regulatory Insights on BPA Toxicity (CLARITY-BPA). Launched in 2012 by the National Institute of Environmental Health Sciences, the National Toxicology Program and the FDA, CLARITY aimed to combine a traditional regulatory toxicology study from the government with investigational studies from academics using more modern techniques. As part of CLARITY, the FDA also conducted a two-year, guideline-compliant study on BPA toxicity.

In 2018, the FDA concluded that “currently authorized uses of BPA continue to be safe for consumers.” This statement was based on results from only the first year of the FDA’s two-year CLARITY study, conducted according to its toxicity guideline, and did not include analysis of data produced by the multiple academic laboratories involved in the project. Furthermore, it was not based on an assessment of risk, which also requires exposure data.

Meanwhile, the results of CLARITY, including the academic studies largely ignored by the FDA, played an important role in EFSA’s latest BPA risk assessment.

Unlike EFSA, the FDA has not made public the criteria applied to select the data, to evaluate and appraise the studies included in the hazard assessment, or the weight of evidence methodology used in its current reassessment of BPA. The lack of transparency was a concern previously expressed by FDA’s Science Board subcommittee in 2008.

A “deep misunderstanding” of the risk assessment and management distinction



EFSA’s independence from risk management decisions and recruitment of independent experts to conduct risk assessments gives the agency the freedom to follow the science. By comparison, the FDA has stagnated.

One explanation for such a difference is the FDA’s strong adherence to its historical decisions rather than to more recent science. This bias toward its own work is not conducive to change.

Another explanation is that FDA scientists conflate risk assessment and risk management. In 2013, the FDA conducted a review of its chemical safety program, and an external consultant noted that there appeared to be a “deep misunderstanding of the risk assessment – risk management distinction” among the staff. That misunderstanding is apparent in a 2010 commentary in Nature, in which FDA toxicologists wrote that dismissing “out of hand” risk management factors such as economics, benefits of existing technologies, cost of replacing banned technologies and the toxic risk of any replacement “is, to say the least, insular, and surely imprudent in a regulatory setting.”

The consultant added that FDA staff suggested that the agency “should not be too quick to adopt new scientific approaches.” Such an approach has likely deterred its scientists from acting on new evidence.

FDA is undergoing a reorganization, including the creation of a new Human Food Program. Almost a year ago, the agency announced it was “embarking on a more modernized, systematic reassessment of chemicals with a focus on post-market review.” For this to be successful, the FDA should adopt updated processes and methods, include outside experts when it encounters challenging scientific or technical issues, increase collaboration with other agencies, and engage with stakeholders including consumers, academic institutions, public interest organizations and industry.

But above all, the FDA must restore the public’s trust in the agency with a strong commitment to transparency in decision-making and clear separation between risk assessment and risk management.

For more information, check these summary tables.

Read the full story here.

Like Many Holiday Traditions, Lighting Candles and Fireplaces Is Best Done in Moderation

The warm scents of gingerbread and pine are holiday favorites, but experts warn they can affect indoor air quality

The warm spices in gingerbread, the woodsy aroma of pine and fir trees, and the fruity tang of mulled wine are smells synonymous with the holiday season. Many people enjoy lighting candles, incense and fireplaces in their homes to evoke the moods associated with these festive fragrances.

Burning scented products may create a cozy ambiance, and in the case of fireplaces, provide light and heat, but some experts want people to consider how doing so contributes to the quality of the air indoors. All flames release chemicals that may cause allergy-like symptoms or contribute to long-term respiratory problems if they are inhaled in sufficient quantities.

However, people don't have to stop sitting by the hearth or get rid of products like perfumed candles and essential oil diffusers, said Dr. Meredith McCormack, director of the pulmonary and critical care medicine division at Johns Hopkins University’s medical school. Instead, she recommends taking precautions to control the pollutants in their homes.

“Clean air is fragrance free,” said McCormack, who has studied air quality and lung health for more than 20 years. “If having seasonal scents is part of your tradition or evokes feelings of nostalgia, maybe think about it in moderation.”

What to know about indoor air quality

People in the Northern Hemisphere tend to spend more time indoors during the end-of-year holidays, when temperatures are colder. Indoor air can be significantly more polluted than outdoor air because pollutants get trapped inside and concentrated without proper ventilation or filtration, according to the American Lung Association.

For example, active fireplaces and gas appliances release tiny airborne particles that can get into the lungs, as well as chemicals like nitrogen dioxide, a major component of smog, according to the U.S. Environmental Protection Agency. Cleaning products, air fresheners and candles also emit air pollutants at varying concentrations.

The risk fragrances and other air pollutants may pose to respiratory health depends on the source, the length and intensity of a person’s exposure, and individual health, McCormack said.

It is also important to note that some pollutants have no smell, so unscented products can still affect indoor air quality, experts say.

Some people are more vulnerable

Polluted air affects everyone, but not equally. Children, older adults, minority populations and people of low socioeconomic status are more likely to be affected by poor air quality because of either physiological vulnerabilities or higher exposure, according to the environmental agency.

Children are more susceptible to air pollution because of their lung size, which means they get a greater dose of exposure relative to their body size, McCormack said. Pollutants inside the home also pose a greater hazard to people with heart or lung conditions, including asthma, she said.

Signs of respiratory irritation include coughing, shortness of breath, headaches, a runny nose and sneezing. Experts advise stopping use of pollutant-releasing products or immediately ventilating rooms if symptoms occur.

“The more risk factors you have, the more harmful air pollution or poor air quality indoors can be,” McCormack said.

Practical precautions to take

Ellen Wilkowe burns candles with scents like vanilla and cinnamon when she does yoga, writes or showers at her home in New Jersey. Her teenage daughter, on the other hand, likes more seasonally scented candles like gingerbread.

“The candle has a calming presence. They are also very symbolic and used in rituals and many religions,” she said.

Wilkowe said she leans toward candles made with soy-based waxes instead of petroleum-based paraffin. Experts note that all lit candles give off air pollutants regardless of what they are made of.

Buying products with fewer ingredients, opening windows if the temperatures allow, and using air purifiers with HEPA filters are ways to reduce exposure to pollutants from indoor fireplaces, appliances and candle displays, McCormack said. She also recommends switching on kitchen exhaust fans before starting a gas-powered stovetop and using the back burners so the vent can more easily suck up pollutants.

Setting polite boundaries with guests who smoke cigarettes or other tobacco products is also a good idea, she said.

“Small improvements in air quality can have measurable health benefits,” McCormack said. “Similarly to if we exercise and eat a little better, we can be healthier.”

Rachael Lewis-Abbott, a member of the Indoor Air Quality Association, an organization for professionals who identify and address air quality problems, said people don't usually notice what they are breathing in until problems like gas leaks or mold develop.

“It is out of sight, out of mind,” she said.

Copyright 2025 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.

This moss survived in space for 9 months

In an experiment on the outside of the International Space Station, a species of moss survived in space for 9 months. And it could have lasted much longer.

Meet a spreading earthmoss known as Physcomitrella patens. It’s frequently used as a model organism for studies on plant evolution, development and physiology. In this image, a reddish-brown sporophyte sits at the top center of a leafy gametophore. This capsule contains numerous spores inside. Scientists tested samples like these on the outside of the International Space Station (ISS) to see if they could tolerate the extreme airless environment. And they did. The moss survived in space for 9 months and could have lasted even longer. Image via Tomomichi Fujita/ EurekAlert! (CC BY-SA).

Space is a deadly environment, with no air, extreme temperature swings and harsh radiation. Could any life survive there? Researchers in Japan tested a type of moss called spreading earthmoss on the exterior of the International Space Station. The moss survived for nine months, and the spores were still able to reproduce when brought back to Earth.

Moss survived in space for 9 months

Can life exist in space? Not simply on other planets or moons, but in the cold, dark, airless void of space itself? Most organisms would perish almost immediately, to be sure. But researchers in Japan recently experimented with moss, with surprising results. They said on November 20, 2025, that more than 80% of their moss spores survived nine months on the outside of the International Space Station. Not only that, but when brought back to Earth, they were still capable of reproducing. Nature, it seems, is even tougher than we thought!

Amazingly, the results show that some primitive plants, not just microorganisms, can survive long-term exposure to the extreme space environment. The researchers published their peer-reviewed findings in the journal iScience on November 20, 2025.

A deadly environment for life

Space is a horrible place for life. The lack of air, radiation and extreme cold make it pretty much unsurvivable for life as we know it. As lead author Tomomichi Fujita at Hokkaido University in Japan stated:

Most living organisms, including humans, cannot survive even briefly in the vacuum of space. However, the moss spores retained their vitality after nine months of direct exposure. This provides striking evidence that the life that has evolved on Earth possesses, at the cellular level, intrinsic mechanisms to endure the conditions of space.

What about moss?

Researchers wanted to see if any Earthly life could survive in space’s deadly environment for the long term. To find out, they decided to do some experiments with a type of moss called spreading earthmoss, or Physcomitrium patens. The researchers sent hundreds of sporophytes, encapsulated moss spores, to the International Space Station in March 2022, aboard the Cygnus NG-17 spacecraft. They attached the sporophyte samples to the outside of the ISS, where they were exposed to the vacuum of space for 283 days. By doing so, the samples were subjected to high levels of UV (ultraviolet) radiation and extreme swings of temperature. The samples later returned to Earth in January 2023.

The researchers tested three parts of the moss: the protonemata, or juvenile moss; brood cells, or specialized stem cells that emerge under stress conditions; and the sporophytes. Fujita said:

We anticipated that the combined stresses of space, including vacuum, cosmic radiation, extreme temperature fluctuations and microgravity, would cause far greater damage than any single stress alone.

Astronauts placed the moss samples on the outside of the International Space Station for the 9-month-long experiment. Incredibly, more than 80% of the encapsulated spores survived the trip to space and back to Earth. Image via NASA/ Roscosmos.

The moss survived!

So, how did the moss do? The results were mixed, but overall showed that the moss could survive in space. The radiation was the most difficult aspect of the space environment to withstand. The sporophytes were the most resilient. Incredibly, they were able to survive and germinate after being exposed to -196 degrees Celsius (-320 degrees Fahrenheit) for more than a week. At the other extreme, they also survived 55 degrees Celsius (131 degrees Fahrenheit) heat for a month. Some brood cells survived as well, but the encased spores were about 1,000 times more tolerant of the UV radiation. On the other hand, none of the juvenile moss survived the high UV levels or the extreme temperatures.

Samples of moss spores that germinated after their 9-month exposure to space. Image via Dr. Chang-hyun Maeng/ Maika Kobayashi/ EurekAlert! (CC BY-SA).

How did the spores survive?

So why did the encapsulated spores do so well? The researchers said the natural structure surrounding the spore itself helps to protect it. Essentially, it absorbs the UV radiation and surrounds the inner spore both physically and chemically to prevent damage. As it turns out, this might be associated with the evolution of mosses. This is an adaptation that helped bryophytes, the group of plants to which mosses belong, make the transition from aquatic to terrestrial plants 500 million years ago.

Overall, more than 80% of the spores survived the journey to space and then back to Earth. And only 11% were unable to germinate after being brought back to the lab on Earth. That’s impressive!

In addition, the researchers also tested the levels of chlorophyll in the spores. After the exposure to space, the spores still had normal amounts of chlorophyll, except for chlorophyll a specifically. In that case, there was a 20% reduction. Chlorophyll a is used in oxygenic photosynthesis. It absorbs the most energy from wavelengths of violet-blue and orange-red light.

Tomomichi Fujita at Hokkaido University in Japan is the lead author of the new study about moss in space. Image via Hokkaido University.

Spores could have survived for 15 years

The time available for the experiment was limited to several months. However, the researchers wondered if the moss spores could have survived even longer. Using mathematical models, they determined the spores would likely have continued to live in space for about 15 years, or 5,600 days, altogether. The researchers note this prediction is a rough estimate; more data would be needed to make the assessment more accurate.

So the results show just how resilient moss is, and perhaps some other kinds of life, too. Fujita said:

This study demonstrates the astonishing resilience of life that originated on Earth. Ultimately, we hope this work opens a new frontier toward constructing ecosystems in extraterrestrial environments such as the moon and Mars. I hope that our moss research will serve as a starting point.
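
The article does not say which model the researchers used for that 15-year estimate. As a rough illustration only, the sketch below assumes simple exponential (first-order) decay of spore viability, anchors it to the single reported data point (about 80% survival after 283 days) and asks how long viability would stay above an arbitrary 1% threshold; the model choice and threshold are assumptions, not the study's method.

import math

# Single data point reported in the article: roughly 80% of spores viable after 283 days.
observed_survival = 0.80
observed_days = 283

# Assumption: first-order (exponential) decay of viability, S(t) = exp(-k * t).
# The study's actual model is not described in the article.
k = -math.log(observed_survival) / observed_days   # decay constant per day

# How long until viability falls below an arbitrary 1% threshold?
threshold = 0.01
days_to_threshold = math.log(1.0 / threshold) / k

print(f"assumed decay constant: {k:.2e} per day")
print(f"days until viability drops below {threshold:.0%}: {days_to_threshold:,.0f} "
      f"(about {days_to_threshold / 365:.0f} years)")

# With these assumptions the back-of-envelope answer lands near the article's
# figure of roughly 5,600 days (about 15 years), but it is only an illustration.
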
Bottom line: In an experiment on the outside of the International Space Station, a species of moss survived in space for nine months. And it could have lasted much longer.

Source: Extreme environmental tolerance and space survivability of the moss, Physcomitrium patens

Via EurekAlert!

Read more: This desert moss could grow on Mars, no greenhouse needed

Read more: Colorful life on exoplanets might be lurking in clouds

Medical Imaging Contributing To Water Pollution, Experts Say

By Dennis Thompson, HealthDay Reporter. THURSDAY, Dec. 11, 2025 (HealthDay News) — Contrast chemicals injected into people for medical imaging scans...

By Dennis Thompson, HealthDay Reporter

THURSDAY, Dec. 11, 2025 (HealthDay News) — Contrast chemicals injected into people for medical imaging scans are likely contributing to water pollution, a new study says.

Medicare patients alone received 13.5 billion milliliters of contrast media between 2011 and 2024, and those chemicals wound up in waterways after people excreted them, researchers recently reported in JAMA Network Open.

“Contrast agents are necessary for effective imaging, but they don’t disappear after use,” said lead researcher Dr. Florence Doo, an assistant professor at the University of Maryland Medical Intelligent Imaging Center in Baltimore.

“Iodine and gadolinium are non-renewable resources that can enter wastewater and accumulate in rivers, oceans and even drinking water,” Doo said in a news release.

People undergoing X-ray or CT scans are sometimes given iodine or barium-sulfate compounds that cause certain tissues, blood vessels or organs to light up, allowing radiologists a better look at potential health problems. For MRI scans, radiologists use gadolinium, a substance that alters the magnetic properties of water molecules in the human body.

These are critical for diagnosing disease, but they are also persistent pollutants, researchers said in background notes. They aren’t biodegradable, and conventional wastewater treatment doesn’t fully remove them.

For the new study, researchers analyzed 169 million contrast-enhanced imaging procedures that Medicare covered over 13 years. Iodine-based contrast agents accounted for more than 95% of the total volume, or nearly 12.9 billion milliliters. Of those, agents used in CT scans of the abdomen and pelvis alone contributed 4.4 billion milliliters.

Gadolinium agents were less frequently used, but still contributed nearly 600 million milliliters, researchers said. Brain MRIs were the most common scan using these contrast materials. Overall, just a handful of procedures accounted for 80% of all contrast use, researchers concluded.

“Our study shows that a small number of imaging procedures drive the majority of contrast use. Focusing on those highest-use imaging types make meaningful changes tractable and could significantly reduce health care’s environmental footprint,” researcher Elizabeth Rula, executive director of the Harvey L. Neiman Health Policy Institute in Reston, Va., said in a news release.

Doctors can help by making sure their imaging orders are necessary, while radiologists can lower the doses of contrast agents by basing them on a patient’s weight, researchers said. Biodegradable contrast media are under development, researchers noted. Another solution could involve AI, which might be able to accurately analyze medical imaging scans even if less contrast media is used.

“We can’t ignore the environmental consequences of medical imaging,” Doo said. “Stewardship of contrast agents is a measurable and impactful way to align patient care with planetary health and should be an important part of broader health care sustainability efforts.”

SOURCES: Harvey L. Neiman Health Policy Institute, news release, Dec. 4, 2025; JAMA Network Open, Dec. 5, 2025

Copyright © 2025 HealthDay. All rights reserved.
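
As a rough consistency check on the figures above, the short sketch below adds up the reported contrast volumes and computes each category's share of the total; the inputs are the article's own numbers, and the rounding is approximate.

# Rough consistency check on the contrast-media volumes reported above
# (figures as stated in the article, in milliliters; rounding is approximate).

volumes_ml = {
    "iodine-based agents": 12.9e9,     # "nearly 12.9 billion milliliters"
    "gadolinium-based agents": 0.6e9,  # "nearly 600 million milliliters"
}

total_ml = sum(volumes_ml.values())
print(f"summed total: {total_ml / 1e9:.1f} billion mL (article total: 13.5 billion mL, 2011-2024)")

for name, ml in volumes_ml.items():
    print(f"{name}: {ml / 1e9:.1f} billion mL, {ml / total_ml:.0%} of the summed total")

# Abdomen/pelvis CT alone contributed about 4.4 billion mL of the iodine volume,
# roughly a third of it, consistent with a handful of high-volume procedures
# driving most contrast use.
print(f"abdomen/pelvis CT share of iodine volume: {4.4e9 / 12.9e9:.0%}")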

Cars to AI: How new tech drives demand for specialized materials

Generative artificial intelligence has become widely accepted as a tool that increases productivity. Yet the technology is far from mature. Large language models advance rapidly from one generation to the next, and experts can only speculate how AI will affect the workforce and people’s daily lives.

As a materials scientist, I am interested in how materials and the technologies that derive from them affect society. AI is one example of a technology driving global change—particularly through its demand for materials and rare minerals. But before AI evolved to its current level, two other technologies exemplified the process created by the demand for specialized materials: cars and smartphones.

Often, the mass adoption of a new invention changes human behavior, which leads to new technologies and infrastructures reliant upon the invention. In turn, these new technologies and infrastructures require new or improved materials—and these often contain critical minerals: those minerals that are both essential to the technology and strain the supply chain.

The unequal distribution of these minerals gives leverage to the nations that produce them. The resulting power shifts strain geopolitical relations and drive the search for new mineral sources. New technology nurtures the mining industry.

The car and the development of suburbs

At the beginning of the 20th century, only 5 out of 1,000 people owned a car, with annual production around a few thousand. Workers commuted on foot or by tram. Within a 2-mile radius, many people had all they needed: from groceries to hardware, from school to church, and from shoemakers to doctors.

Then, in 1913, Henry Ford transformed the industry by inventing the assembly line. Now, a middle class family could afford a car: Mass production cut the price of the Model T from US$850 in 1908 to $360 in 1916. While the Great Depression dampened the broad adoption of the car, sales began to increase again after the end of World War II.

With cars came more mobility, and many people moved farther away from work. In the 1940s and 1950s, a powerful highway lobby that included oil, automobile, and construction interests promoted federal highway and transportation policies, which increased automobile dependence. These policies helped change the landscape: Houses were spaced farther apart, and located farther away from the urban centers where many people worked. By the 1960s, two-thirds of American workers commuted by car, and the average commute had increased to 10 miles.

Public policy and investment favored suburbs, which meant less investment in city centers. The resulting decay made living in downtown areas of many cities undesirable and triggered urban renewal projects.

Long commutes added to pollution and expenses, which created a demand for lighter, more fuel-efficient cars. But building these required better materials. In 1970, the entire frame and body of a car was made from one steel type, but by 2017, 10 different, highly specialized steels constituted a vehicle’s lightweight form. Each steel contains different chemical elements, such as molybdenum and vanadium, which are mined only in a few countries.

While the car supply chain was mostly domestic until the 1970s, the car industry today relies heavily on imports. This dependence has created tension with international trade partners, as reflected by higher tariffs on steel.

The cellphone and American life

The cellphone presents another example of a technology creating a demand for minerals and affecting foreign policy.

In 1983, Motorola released the DynaTAC, the first commercial cellular phone. It was heavy, expensive, and its battery lasted for only half an hour, so few people had one. Then in 1996, Motorola introduced the flip phone, which was cheaper, lighter, and more convenient to use. The flip phone initiated the mass adoption of cellphones. However, it was still just a phone: Unlike today’s smartphones, all it did was send and receive calls and texts.

In 2007, Apple redefined communication with the iPhone, inventing the touchscreen and integrating an internet navigator. The phone became a digital hub for navigating, finding information, and building an online social identity. Before smartphones, mobile phones supplemented daily life. Now, they structure it.

In 2000, fewer than half of American adults owned a cellphone, and nearly all who did used it only sporadically. In 2024, 98% of Americans over the age of 18 reported owning a cellphone, and over 90% owned a smartphone. Without the smartphone, most people cannot fulfill their daily tasks. Many individuals now experience nomophobia: They feel anxious without a cellphone.

Around three-quarters of all stable elements are represented in the components of each smartphone. These elements are necessary for highly specialized materials that enable touchscreens, displays, batteries, speakers, microphones, and cameras. Many of these elements are essential for at least one function and have an unreliable supply chain, which makes them critical.

Critical materials and AI

Critical materials give leverage to countries that have a monopoly in mining and processing them. For example, China has gained increased power through its monopoly on rare earth elements. In April 2025, in response to U.S. tariffs, China stopped exporting rare earth magnets, which are used in cellphones. The geopolitical tensions that resulted demonstrate the power embodied in the control over critical minerals.

The mass adoption of AI technology will likely change human behavior and bring forth new technologies, industries, and infrastructure on which the U.S. economy will depend. All of these technologies will require more optimized and specialized materials and create new material dependencies. By exacerbating material dependencies, AI could affect geopolitical relations and reorganize global power.

America has rich deposits of many important minerals, but extraction of these minerals comes with challenges. Factors including slow and costly permitting, public opposition, environmental concerns, high investment costs, and an inadequate workforce all can prevent mining companies from accessing these resources. The mass adoption of AI is already adding pressure to overcome these factors and to increase responsible domestic mining.

While the path from innovation to material dependence spanned a century for cars and a couple of decades for cellphones, the rapid advancement of large language models suggests that the scale will be measured in years for AI. The heat is already on.

Peter Müllner is a distinguished professor in materials science and engineering at Boise State University. This article is republished from The Conversation under a Creative Commons license. Read the original article.

