Engineering

How Read AI decreased software emissions by up to 40%

August 7, 2025

On the heels of the White House releasing its AI Action Plan, we at Read AI are focused on continuing to lean into our four climate principles, which help us prioritize efficiency and reduce carbon emissions in our roadmap. We are encouraged by the progress we have made so far, and believe that as our platform scales, so too can our influence.

Our approach is straightforward: we use data-driven decision making, prioritize intentional efficiency and proactive innovation, and empower our customers, so that we can lead by example and continue to operate as sustainably as possible.

Last week, we presented our findings at the Pacific Northwest’s Climate Week, offering guidance on how software companies of any size can calculate and measure their emissions. To illustrate our principles, we reviewed our most recent milestones, which showcase the impact we’ve had in a short time and, we hope, will inspire other teams to do the same. We believe the onus is now squarely on all of us to efficiently and effectively lower our emissions as we unlock AI for the masses.

Prioritizing measurement first

In late 2024, our migration to AWS Graviton-based instances improved our throughput and performance while reducing our costs by about 20%. We took on the challenge of quantifying the change in carbon emissions from that migration, and found that an existing industry-standard methodology for measuring the carbon emissions of software and hardware could do the job.

For us, the answer was the Green Software Foundation’s Software Carbon Intensity (SCI) specification. We used the SCI to calculate scores for our previous instances and compare them with our new Graviton instances. The SCI combines the total energy consumed by the software, the carbon intensity of that energy, and the embedded carbon of the hardware it runs on to produce a rate of emissions per functional unit of software.
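The SCI formula itself is compact: SCI = ((E × I) + M) per R. As a minimal sketch, here is what that calculation looks like in code; the numbers are illustrative placeholders, not our actual measurements.

```python
def sci(energy_kwh: float, intensity_g_per_kwh: float,
        embedded_g: float, functional_units: float) -> float:
    """Software Carbon Intensity: ((E * I) + M) per R.

    E: energy consumed by the software (kWh)
    I: carbon intensity of that energy (gCO2e/kWh)
    M: embedded carbon of the hardware, amortized over the workload (gCO2e)
    R: functional unit (e.g. per API call, per user, per meeting hour)
    """
    return (energy_kwh * intensity_g_per_kwh + embedded_g) / functional_units

# Illustrative inputs only:
score = sci(energy_kwh=12.0, intensity_g_per_kwh=350.0,
            embedded_g=800.0, functional_units=10_000)
print(score)  # -> 0.5 gCO2e per functional unit
```

The choice of functional unit R is up to you; what matters is using the same unit when comparing two configurations.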

Overcoming calculation hurdles

Some of the data needed for this calculation is not easily accessible or shared publicly (e.g., the CPU power of Graviton processors, or the lifetime of a server in a public cloud), so we needed to estimate a portion of the numbers. Since we are comparing instances, we can focus on being precise (calculated values are consistent relative to each other) rather than accurate (calculated values are close to the true values). This lets us treat parts of the SCI equation, like energy intensity, as constants, since both instance types are deployed to the same public cloud region.

In our calculations, the only two variables we had to estimate were the CPU power of the Graviton processor and the lifecycle analysis (embedded carbon) of the Graviton instance. Through third-party benchmarking, we estimated Graviton’s CPU power to be about 33% lower than that of our previous instance types. For lifecycle analysis data, we assumed Graviton to be roughly equivalent to other ARM-based processors in the same compute group and averaged their embedded-carbon coefficients from the Cloud Carbon Footprint repository.
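To make the relative-comparison idea concrete, here is a sketch of the kind of before/after calculation involved. Every number below is made up for illustration; only the 33% energy estimate comes from the text, and the embedded-carbon figures stand in for the averaged Cloud Carbon Footprint coefficients.

```python
# Grid carbon intensity is a shared constant: both instance types run in
# the same public cloud region, so it cancels out of the relative comparison.
I = 350.0                 # gCO2e/kWh (illustrative)

old_energy_kwh = 10.0     # energy for the previous instances (illustrative)
new_energy_kwh = old_energy_kwh * (1 - 0.33)  # ~33% lower CPU power estimate

old_embedded_g = 900.0    # embedded carbon, amortized (illustrative)
new_embedded_g = 850.0    # averaged ARM coefficients (assumed)

old_score = old_energy_kwh * I + old_embedded_g
new_score = new_energy_kwh * I + new_embedded_g
reduction = 1 - new_score / old_score
print(f"{reduction:.0%}")  # -> 27% with these made-up inputs
```

Because only the two estimated variables differ between the scores, any error in the shared constants affects both sides equally and the relative reduction stays meaningful.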


Realizing our first set of goals

In the end, we calculated a 20% reduction in carbon emissions from switching our compute-optimized instance types to Graviton-based instances. Since this case study was originally published, we have identified similar cost and performance opportunities for our memory-optimized instance types and started migrating those to Graviton as well. About 60% of our overall workloads run on compute-optimized instances, while about 10% run on memory-optimized instance types. Using the same steps as above, we compared the emissions of our existing memory-optimized instances to those of the new memory-optimized Graviton instances and calculated a 40% reduction in carbon emissions.
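As a rough back-of-envelope, the workload shares and per-group reductions above can be combined into a fleet-level figure, under the assumption (ours, not stated in the case study) that workload share is a reasonable proxy for emissions share:

```python
# Workload shares and measured reductions from the case study.
# Assumption: workload share ~ emissions share (a simplification).
shares = {"compute-optimized": 0.60, "memory-optimized": 0.10}
reductions = {"compute-optimized": 0.20, "memory-optimized": 0.40}

overall = sum(shares[k] * reductions[k] for k in shares)
print(f"{overall:.0%}")  # -> 16% of total fleet emissions, under this assumption
```

The remaining 30% of workloads on other instance types are untouched in this estimate, which is why the fleet-level number is lower than either per-group reduction.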

Driving for additional efficiencies

Measuring our energy consumption and carbon emissions satisfies our first climate principle, but that is just the beginning. Our second climate principle is optimizing our software and hardware for energy efficiency and minimizing carbon emissions. To achieve this, we focus on three areas:

  • Using less hardware: Examples include increasing utilization of our hardware, shutting down unused hardware, right-sizing our instances, and storing and retaining less data.
  • Using less energy: Examples include using lower energy machines like Graviton, increasing performance, and increasing efficiency of our software.
  • Using energy more intelligently: Examples include doing less intense work when the energy grid is dirty and doing more when it is greener, an approach known as a carbon-aware system. Third-party providers like WattTime.org and Electricity Maps can provide live and forecasted energy grid data to support this type of system.
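The carbon-aware idea in the last bullet can be sketched in a few lines. This is not a real WattTime or Electricity Maps client; `get_grid_intensity` is a placeholder for whatever call returns the current grid intensity, and the threshold is an arbitrary example value.

```python
from typing import Callable

def should_run_now(get_grid_intensity: Callable[[], float],
                   threshold_g_per_kwh: float = 300.0) -> bool:
    """Gate carbon-intensive batch work on how clean the grid is right now.

    `get_grid_intensity` stands in for a provider API (e.g. WattTime or
    Electricity Maps); the threshold here is an illustrative assumption.
    """
    return get_grid_intensity() < threshold_g_per_kwh

# Usage with stubbed intensity readings (gCO2e/kWh):
print(should_run_now(lambda: 250.0))  # cleaner grid -> True, run the job
print(should_run_now(lambda: 480.0))  # dirtier grid -> False, defer it
```

A production version would also use the providers’ forecast data to pick the greenest window rather than a simple yes/no gate.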

In conclusion

Read AI believes it is our responsibility to operate sustainably and contribute to the global effort against climate change. AI is among the most energy-intensive technologies in the world, and we are committed to measuring and limiting the environmental impact of our organization wherever possible.

We will continue to drive these principles into our work and share our successes (and failures!) along the way. We challenge other companies, especially those using AI, to establish and share their own climate principles and the work they are doing to reduce their energy consumption and carbon emissions. Collectively, we can reduce the overall energy requirements of AI and the carbon emissions it produces.

Written by: Bill Johnson, Director of Engineering, Read AI
