Novel Approach to AI Improves Sustainability and Data Protection

ICS researchers are working to ensure that progress in automation does not come at the expense of our planet or our privacy.

Headshots of Aaron Imani, Mohammad Moshirpour and Iftekhar Ahmed

One thing that may not immediately spring to mind when we think of AI is its high energy costs. Researchers from UC Irvine’s Donald Bren School of Information and Computer Sciences (ICS) are tackling this growing problem by developing a new approach for completing complex tasks with small language models.

The research focuses on commit messages — that is, messages that summarize changes in source code. Generating these messages is a vital task for software development, ensuring that code changes are transparent, traceable and comprehensible.

“While large language models (LLMs) like GPT-4 have shown promise in automating this task, their resource-intensive nature and reliance on external servers pose significant challenges regarding both environmental impact and data privacy,” says software engineering Ph.D. student Aaron Imani, who worked with ICS faculty Mohammad Moshirpour and Iftekhar Ahmed to address these challenges. “Our work introduces a novel approach that, with some context refinements, lets us use a small, open-source LLM to generate high-quality results.”
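The paper's actual pipeline isn't reproduced here, but the core idea of generating commit messages with a small, locally hosted model can be sketched in a few lines. In this illustration, the prompt is assembled from a git diff and sent to a local Ollama server; the server URL and the `codellama:7b` model name are hypothetical choices for the sketch, not details from the paper, and the single-sentence prompt stands in for the richer context refinements the researchers describe.

```python
import json
import urllib.request


def build_prompt(diff: str) -> str:
    """Assemble a commit-message prompt from a unified diff.

    The paper's approach refines this context further (this plain
    prompt is a simplified stand-in for that step).
    """
    return (
        "Summarize the following code change as a concise, "
        "imperative-mood commit message.\n\n"
        f"{diff}\n\nCommit message:"
    )


def generate_commit_message(
    diff: str,
    url: str = "http://localhost:11434/api/generate",  # assumed local Ollama server
    model: str = "codellama:7b",  # hypothetical small open model
) -> str:
    """Query a locally hosted model so the diff never leaves the machine."""
    payload = json.dumps(
        {"model": model, "prompt": build_prompt(diff), "stream": False}
    ).encode("utf-8")
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"].strip()


# Prompt assembly alone needs no server:
diff = "--- a/app.py\n+++ b/app.py\n@@ -1 +1 @@\n-x = 1\n+x = 2"
prompt = build_prompt(diff)
```

Because the model runs on the developer's own hardware, the source code in the diff is never transmitted to an external service, which is the privacy property the researchers emphasize.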

The approach is outlined in the paper “Context Conquers Parameters: Outperforming Proprietary LLM in Commit Message Generation,” which the researchers will present at the IEEE/ACM International Conference on Software Engineering (ICSE 2025) at the end of April. ICSE is a top international venue for sharing cutting-edge software engineering research.

“This work presents a sustainable approach in language models by showing complex tasks can be done with much smaller language models than large models such as ChatGPT,” says Moshirpour. “As such, this work has substantial benefits including sustainability and data privacy that go well beyond software engineering applications.”

Smaller models take far less energy to run, so accomplishing complex tasks with them dramatically cuts AI’s energy consumption. “This aligns with the growing global emphasis on reducing the carbon footprint of AI technologies,” says Imani, “ensuring that progress in automation does not come at the expense of our planet.”

Furthermore, because organizations can run their own language models, they no longer need to send their data to external services such as ChatGPT. “Unlike many large, cloud-based models that process sensitive code data on external servers, our lightweight, locally deployable model ensures that code repositories remain private and secure,” says Imani. He notes that this is particularly beneficial for organizations and developers handling proprietary or confidential code, because it eliminates the risks associated with transmitting data to external systems.

“Our work highlights the potential for AI solutions that are not only high-performing but also responsible, addressing the dual priorities of protecting user data and promoting environmental sustainability,” says Imani. “We hope this approach inspires further exploration of privacy-preserving, energy-efficient technologies across other domains in software engineering and beyond.”

Shani Murray
