Michael Mueller

Minimising the Carbon Footprint of Workloads

What if your SLOs were based on carbon emissions instead of latency? Learn to build carbon-aware software and reduce your workload's environmental impact.

#1 · about 2 minutes

The growing carbon footprint of the IT industry

The IT sector contributes 4-5% of global carbon emissions, a figure larger than the aviation industry and projected to triple.

#2 · about 3 minutes

How AI workloads accelerate energy consumption

Both training large models such as Llama 3 and serving inference at the scale of providers like OpenAI consume massive amounts of energy, driving up emissions for major tech companies.

#3 · about 3 minutes

Emerging regulations for data center efficiency

Governments are beginning to regulate data center energy use and grid strain, but a general lack of awareness and transparent data from providers hinders progress.

#4 · about 5 minutes

Key concepts for sustainable computing

Understanding server energy proportionality, Power Usage Effectiveness (PUE), and the embedded carbon from hardware manufacturing is foundational to reducing IT's environmental impact.
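
To make these concepts concrete, here is a minimal back-of-the-envelope sketch, with purely illustrative numbers rather than figures from the talk, showing how PUE scales the energy a server draws from the grid and how embedded manufacturing carbon is amortised over a server's lifetime.

```python
# Back-of-the-envelope estimate: PUE and embedded carbon.
# All numbers below are illustrative assumptions, not figures from the talk.

it_power_kw = 0.3          # average power drawn by one server (assumed)
pue = 1.5                  # Power Usage Effectiveness of the data center (assumed)
hours_per_year = 24 * 365

# PUE = total facility energy / IT equipment energy,
# so grid energy = IT energy * PUE (cooling, power distribution, etc.).
it_energy_kwh = it_power_kw * hours_per_year
facility_energy_kwh = it_energy_kwh * pue

grid_intensity = 400       # gCO2e per kWh, assumed grid average
operational_co2_kg = facility_energy_kwh * grid_intensity / 1000

embedded_co2_kg = 1200     # assumed manufacturing footprint of the server
lifetime_years = 4         # assumed service life over which it is amortised
amortised_embedded_kg = embedded_co2_kg / lifetime_years

print(f"Operational: {operational_co2_kg:.0f} kg CO2e/year, "
      f"embedded (amortised): {amortised_embedded_kg:.0f} kg CO2e/year")
```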

#5 · about 3 minutes

Practical strategies to reduce workload emissions

Simple but effective measures like eliminating zombie servers, right-sizing instances, using auto-scaling, and adopting ARM CPUs can significantly lower carbon emissions and costs.
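
As a rough illustration of why right-sizing pays off, the sketch below compares an over-provisioned instance running at low utilisation with a smaller, right-sized one. It relies on the fact that servers are not energy-proportional and draw a large share of peak power even when idle; every number and the simple linear power model are assumptions for the sake of the example.

```python
# Rough comparison of an over-provisioned vs. a right-sized instance.
# The power model and all numbers are illustrative assumptions.

def avg_power_watts(peak_w: float, utilisation: float,
                    idle_fraction: float = 0.5) -> float:
    """Simple linear power model: idle power plus a utilisation-dependent share.

    Servers are not energy-proportional; they often draw roughly half of
    peak power at idle, which is what idle_fraction approximates here.
    """
    idle_w = peak_w * idle_fraction
    return idle_w + (peak_w - idle_w) * utilisation

hours_per_year = 24 * 365

# Over-provisioned: big instance, mostly idle.
big = avg_power_watts(peak_w=400, utilisation=0.10)
# Right-sized: smaller instance, well utilised.
small = avg_power_watts(peak_w=150, utilisation=0.60)

saving_kwh = (big - small) * hours_per_year / 1000
print(f"Estimated saving: {saving_kwh:.0f} kWh/year per workload")
```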

#6 · about 1 minute

Tools for measuring energy and carbon emissions

Open-source tools like Kepler for Kubernetes and Scaphandre for Linux can measure energy consumption, which can then be converted to carbon emissions data.
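
Once a tool such as Kepler or Scaphandre gives you an energy figure, converting it to carbon is simple arithmetic: multiply by the carbon intensity of the electricity that powered it. A minimal sketch follows; the measured value and grid intensity are illustrative assumptions you would replace with real measurements and a real data source.

```python
# Convert a measured energy value (e.g. reported by Kepler or Scaphandre)
# into grams of CO2 equivalent. Input values here are illustrative assumptions.

JOULES_PER_KWH = 3_600_000

def joules_to_co2e_grams(energy_joules: float, grid_gco2e_per_kwh: float) -> float:
    """Energy (J) * carbon intensity (gCO2e/kWh) -> grams of CO2 equivalent."""
    return energy_joules / JOULES_PER_KWH * grid_gco2e_per_kwh

# Example: 54 MJ consumed by a workload over a day, on a ~300 gCO2e/kWh grid.
measured_joules = 54_000_000
print(f"{joules_to_co2e_grams(measured_joules, 300):.0f} g CO2e")
```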

#7 · about 2 minutes

Tracking emissions with Software Carbon Intensity (SCI)

The Software Carbon Intensity (SCI) ISO standard provides a formula for computing a carbon score for your application, which can be used as an SLO to prevent regressions in your CI/CD pipeline.
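
The SCI formula is ((E × I) + M) per R, where E is the energy consumed, I the carbon intensity of that energy, M the embodied (embedded) emissions attributed to the software, and R the functional unit, for example an API request. Below is a minimal sketch of the calculation with illustrative inputs; the budget value used as an SLO gate is likewise an assumption.

```python
# Software Carbon Intensity (SCI): ((E * I) + M) per R.
# E: energy consumed by the software (kWh)
# I: carbon intensity of that energy (gCO2e/kWh)
# M: embodied emissions attributed to the software (gCO2e)
# R: functional unit, e.g. number of API requests served
# All input values below are illustrative assumptions.

def sci_score(energy_kwh: float, intensity_gco2e_per_kwh: float,
              embodied_gco2e: float, functional_units: int) -> float:
    return (energy_kwh * intensity_gco2e_per_kwh + embodied_gco2e) / functional_units

score = sci_score(energy_kwh=12.0, intensity_gco2e_per_kwh=350,
                  embodied_gco2e=800, functional_units=1_000_000)
print(f"SCI: {score * 1000:.2f} mgCO2e per request")

# Used as an SLO: fail the CI/CD pipeline if the score exceeds a carbon budget.
SCI_BUDGET = 0.01  # gCO2e per request, assumed budget
assert score <= SCI_BUDGET, "SCI regression: score exceeds the carbon budget"
```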

#8 · about 4 minutes

Using carbon awareness to shift workloads

By understanding real-time grid carbon intensity, you can time-shift batch jobs to sunnier hours or region-shift development workloads to greener data centers.
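
A carbon-aware scheduler can be as simple as checking the current grid carbon intensity and delaying a deferrable batch job until it drops below a threshold. The sketch below assumes a hypothetical get_grid_intensity() helper; in practice you would back it with a data source such as Electricity Maps, WattTime, or a national grid API, and the threshold and deadline values are assumptions.

```python
import time

# Threshold below which the grid counts as "green enough" (assumed value).
GREEN_THRESHOLD_GCO2E_PER_KWH = 200
MAX_DELAY_SECONDS = 6 * 3600       # never defer the job by more than 6 hours
POLL_INTERVAL_SECONDS = 15 * 60

def get_grid_intensity() -> float:
    """Hypothetical helper: current grid carbon intensity in gCO2e/kWh.

    In a real setup this would query a provider such as Electricity Maps
    or WattTime for the region the workload runs in.
    """
    raise NotImplementedError("wire this up to a real carbon-intensity API")

def run_when_green(batch_job) -> None:
    """Time-shift a deferrable batch job into a lower-carbon window."""
    waited = 0
    while waited < MAX_DELAY_SECONDS:
        if get_grid_intensity() <= GREEN_THRESHOLD_GCO2E_PER_KWH:
            break
        time.sleep(POLL_INTERVAL_SECONDS)
        waited += POLL_INTERVAL_SECONDS
    batch_job()   # run now: either the grid is green or we hit the deadline
```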

#9 · about 3 minutes

Case study on optimizing a GKE cluster

An experiment deploying a microservices application on GKE demonstrates that tuning default resource requests and enabling auto-scaling improves server utilization and lowers energy use.
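
The tuning itself largely comes down to setting realistic resource requests so the scheduler can pack pods densely and the cluster autoscaler can shed idle nodes. A hedged sketch using the Kubernetes Python client follows; the deployment name, namespace, and request values are illustrative, not figures from the talk's experiment.

```python
from kubernetes import client, config

# Illustrative example: lower an over-generous default CPU/memory request so
# pods pack more densely and the cluster autoscaler can remove idle nodes.
# Deployment name, namespace, and values are assumptions for this sketch.
config.load_kube_config()
apps = client.AppsV1Api()

patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [{
                    "name": "frontend",   # container to right-size (assumed name)
                    "resources": {
                        "requests": {"cpu": "100m", "memory": "128Mi"},
                        "limits": {"memory": "256Mi"},
                    },
                }]
            }
        }
    }
}

apps.patch_namespaced_deployment(name="frontend", namespace="default", body=patch)
```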

#10 · about 2 minutes

Green coding and on-premises optimization strategies

In on-premises environments, consolidating workloads so unused nodes can be powered down saves energy; at the software level, profiling to find and fix inefficiencies matters more than simply switching programming languages.
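
For the "profile before you rewrite" advice, the standard library is often enough to locate the hot spots worth fixing. A minimal sketch using Python's built-in cProfile; the handle_request function is a stand-in for whatever code path you suspect is inefficient.

```python
import cProfile
import pstats

def handle_request() -> None:
    """Placeholder for the code path suspected of wasting CPU (assumed)."""
    sum(i * i for i in range(1_000_000))

# Profile the hot path and print the ten most expensive functions by
# cumulative time; fix those before reaching for a language rewrite.
profiler = cProfile.Profile()
profiler.enable()
for _ in range(10):
    handle_request()
profiler.disable()

pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
```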
