BIOENGINEER.ORG

Certain AI Prompts Generate Up to 50 Times More CO2 Emissions Than Others, New Study Reveals

By Bioengineer | June 19, 2025 | Chemistry | 4 min read

In recent years, the rapid advancement of large language models (LLMs) has profoundly transformed the landscape of artificial intelligence, enabling machines to generate human-like text across countless domains. However, beneath this technological marvel lies a hidden environmental cost that has remained largely unaddressed outside academic circles. New research from Germany has quantified the carbon footprint associated with interacting with these AI systems, revealing a complex trade-off between AI’s reasoning capabilities and its environmental impact. This study presents the first comprehensive comparison of CO₂ emissions produced by different pre-trained LLMs when responding to standardized queries.

Language models process human questions by converting words into “tokens,” which are smaller units—sometimes parts of words—encoded numerically for machine comprehension. These tokens form the fundamental operational currency driving LLMs’ computation. However, every operation to generate or process tokens consumes energy, and given the scale of modern AI applications, this translates into meaningful quantities of carbon dioxide emissions. Despite increased awareness of AI’s impressive capabilities, the ecological footprint of simply interacting with these models remains poorly understood by most users.
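
To make the token idea concrete, here is a toy sketch. Real LLM tokenizers use learned subword schemes such as byte-pair encoding, so this regex split is only an illustration of how text becomes a countable sequence of units, each of which costs compute to process.

```python
import re

def toy_tokenize(text):
    """Naive illustration of tokenization: real LLM tokenizers split text
    into learned subword units, but a regex split conveys the idea that
    text becomes a sequence of discrete, countable units."""
    return re.findall(r"\w+|[^\w\s]", text)

tokens = toy_tokenize("How do language models work?")
print(tokens)        # each token is what the model processes numerically
print(len(tokens))   # energy use scales with the number of tokens handled
```

Because every token generated or processed consumes energy, the token count of a prompt and its response is a rough proxy for the computational, and hence carbon, cost of an interaction.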

The German research team examined fourteen large language models ranging from seven billion to more than seventy billion parameters, the learned values that determine how a model encodes and processes information. Their approach involved asking each model one thousand benchmark questions spanning diverse academic and practical subjects, allowing an apples-to-apples comparison of energy consumption normalized against performance and reasoning complexity. Crucially, the study distinguishes models that generate concise answers from those designed to engage in more elaborate, step-by-step reasoning.


The results demonstrate that reasoning-enabled LLMs, which produce extensive intermediate “thinking” tokens before delivering final answers, can be up to fifty times more carbon-intensive than their concise-answer counterparts. On average, reasoning models generated approximately 543 “thinking” tokens per question, whereas concise models needed just 38 tokens to produce their responses. These additional tokens add computational demand and thus raise CO₂ emissions. Yet this token density does not reliably yield greater accuracy; it often introduces verbose detail that is extraneous to the correctness of the answer.
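
A quick back-of-envelope check shows how the reported token counts relate to the headline emissions gap. If per-token energy were equal across models, emissions would scale with the thinking-token ratio alone; the remainder of the up-to-fifty-times figure reflects other differences (model size, answer length, hardware), which this simple ratio deliberately ignores.

```python
# Figures from the study: average "thinking" tokens per question.
reasoning_tokens = 543
concise_tokens = 38

# If per-token energy were equal, emissions would scale with this ratio.
token_ratio = reasoning_tokens / concise_tokens
print(f"Reasoning models emit roughly {token_ratio:.1f}x more thinking tokens per question")
```

The ratio comes out near fourteen, so token count alone explains a large share, but not all, of the emissions difference the study reports.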

Among the models tested, the Cogito model—a reasoning-enabled LLM with seventy billion parameters—emerged as the most accurate, attaining nearly 85% correctness over the thousand questions. However, this high performance came at a significant environmental cost: the Cogito model emitted three times as much CO₂ as similarly sized LLMs generating succinct answers. This finding highlights a critical accuracy-sustainability trade-off in current AI systems: models that maintain carbon emissions below approximately 500 grams of CO₂ equivalent struggle to surpass 80% accuracy on these benchmarks.

Another notable insight is the subject-dependent variation in emissions. Queries involving complex reasoning—such as abstract algebra problems or philosophical dilemmas—induced up to six times more CO₂ emissions than straightforward topics like high school history. This discrepancy is linked to the increased number of reasoning tokens required by the model when addressing conceptually demanding questions, further exacerbating energy expenditure and carbon footprint.

These findings underscore the broader implications for AI deployment and responsible usage. By selectively employing models optimized for concise responses when exhaustive reasoning is unnecessary, users can dramatically reduce their environmental impact without sacrificing meaningful accuracy. Researchers emphasize that awareness is paramount; knowing the carbon cost tied to specific AI tasks enables individuals and organizations to make more informed decisions regarding model choice and usage frequency.
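
The usage pattern the researchers recommend can be sketched as a simple routing decision: reserve reasoning-enabled models for prompts that plausibly need multi-step reasoning, and default to a concise model otherwise. The model names and the keyword heuristic below are purely illustrative, not from the study.

```python
# Hypothetical routing sketch. Model names and the keyword heuristic are
# illustrative assumptions; a real router might use a trained classifier.
REASONING_HINTS = ("prove", "derive", "step by step", "algebra", "why")

def pick_model(prompt: str) -> str:
    """Route a prompt to a (hypothetical) reasoning model only when the
    text suggests multi-step reasoning is actually required."""
    needs_reasoning = any(hint in prompt.lower() for hint in REASONING_HINTS)
    return "reasoning-70b" if needs_reasoning else "concise-7b"

print(pick_model("What year did WWII end?"))           # concise-7b
print(pick_model("Prove that sqrt(2) is irrational"))  # reasoning-70b
```

Even a crude router like this captures the study's point: many everyday queries do not benefit from hundreds of thinking tokens, so defaulting to the cheaper model avoids most of the avoidable emissions.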

The study also highlights how hardware differences and regional energy grid variations may influence these emission estimates. The carbon intensity of powering AI systems depends on local energy sources, data center efficiencies, and the underlying infrastructure, all of which introduce variability into sustainability assessments. Consequently, while the reported metrics provide a compelling benchmark, they should be considered within the context of these fluctuating parameters.

A striking calculation presented by the researchers compares the CO₂ emissions of question-answering at scale to familiar everyday activities. For instance, having the DeepSeek R1 model (with seventy billion parameters) answer 600,000 questions produces carbon emissions comparable to a round-trip transatlantic flight from London to New York. By contrast, the Qwen 2.5 model, slightly larger but more efficient, can deliver nearly twice as many answers at equivalent accuracy while generating the same carbon footprint. Such comparisons contextualize AI’s environmental costs alongside other human activities, making the impact more tangible.
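
The flight comparison implies a rough per-question figure. The calculation below assumes a round-trip London–New York flight emits about 1,000 kg of CO₂ per passenger; that figure is a commonly cited approximation supplied here for illustration, not a number taken from the study.

```python
# Back-of-envelope estimate. The flight emissions figure is an assumed
# round number (~1 tonne CO2 per passenger, round trip), not from the study.
flight_co2_g = 1_000_000   # grams of CO2, assumed
questions = 600_000        # DeepSeek R1 questions with comparable emissions

per_question_g = flight_co2_g / questions
print(f"~{per_question_g:.2f} g CO2 per question")
```

Under that assumption, each answered question corresponds to a gram or two of CO₂, which is small individually but adds up quickly at the scale modern AI services operate.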

Ultimately, this research calls for greater transparency in AI’s environmental footprint and suggests that integrating emission metrics into user interfaces could foster more sustainable AI consumption habits. By informing users of the carbon cost of specific interactions—whether that be generating a lengthy philosophical essay or transforming a casual photo into a stylized action figure—platforms can encourage prudent and environmentally mindful usage. This aligns with broader efforts across technology sectors to quantify and mitigate the ecological ramifications of burgeoning digital tools.

The energy expenditure linked to AI communication is an emerging but pressing consideration in the debate surrounding ethical and sustainable artificial intelligence. As AI becomes increasingly ubiquitous in research, education, industry, and entertainment, understanding and managing its energy demands is paramount to balancing innovation with planetary stewardship. This study provides a crucial empirical foundation and invites future work to further refine these assessments and develop greener AI architectures.

The environmental trade-offs highlighted by this analysis remind us that technological progress is inseparable from ecological responsibility. The choices developers and users make today, from model design to everyday prompts, collectively shape the carbon trajectory of AI’s future. Thoughtful stewardship, powered by rigorous data like that provided by this research, is essential to ensure that AI advancements serve humanity without compromising our planet’s health.

Subject of Research: Not applicable

Article Title: Energy Costs of Communicating with AI

News Publication Date: 19-Jun-2025

Web References: http://dx.doi.org/10.3389/fcomm.2025.1572947

References: Dauner, M., et al. (2025). Energy Costs of Communicating with AI. Frontiers in Communication. https://doi.org/10.3389/fcomm.2025.1572947


Keywords

Large Language Models, AI Energy Consumption, Carbon Footprint, Artificial Intelligence, Environmental Impact, Reasoning Models, Tokenization, AI Accuracy, Sustainability, Machine Learning, CO₂ Emissions, Green AI

