3 Ways to Incorporate Sustainability into IT Infrastructure Decisions

Sustainability in IT infrastructure is becoming increasingly crucial for businesses worldwide. This article explores practical ways to incorporate eco-friendly practices into technology decisions, drawing on insights from industry experts. From extending device lifecycles to implementing targeted green IT strategies, these approaches can help organizations reduce their environmental impact while potentially improving efficiency and cutting costs.

  • Extend Device Lifecycles to Reduce E-Waste
  • Virtualize Servers for Energy Efficiency
  • Implement Targeted Green IT Strategies

Extend Device Lifecycles to Reduce E-Waste

One sustainability decision that stands out for me was when we overhauled our device refresh policy. Instead of automatically cycling out laptops and desktops every three years, we shifted to a performance-based replacement model. We invested in upgrading memory and storage on existing machines where possible, and for devices that couldn't keep up, we partnered with a certified refurbisher to extend their usable life outside our organization. At first, some team members were skeptical, but once they saw their "old" machines running like new after an SSD upgrade, it clicked.

That single initiative ended up having the biggest environmental impact because it drastically cut down on e-waste. We weren't just reducing our carbon footprint from manufacturing and shipping new hardware—we were also keeping dozens of machines out of landfills each year. The bonus was financial: we lowered our refresh costs without sacrificing productivity. That experience taught me that sustainability in IT isn't always about big infrastructure moves; sometimes it's about rethinking long-standing habits that drive unnecessary waste.

Virtualize Servers for Energy Efficiency

A key sustainability initiative for us has been consolidating and virtualizing our server infrastructure. By moving from multiple physical servers to a cloud-based virtual environment, we dramatically reduced energy consumption, hardware waste, and cooling demands. It was both an environmental and operational win.

We partnered with providers that prioritize renewable energy and efficient data center management. This not only lowered our carbon footprint but also improved scalability and uptime, allowing us to serve clients more efficiently while reducing our overall resource usage.

The most significant impact came from recognizing that sustainability and performance aren't opposites; they reinforce each other. By designing IT infrastructure that's leaner and more efficient, we've aligned our technology strategy with long-term environmental goals and shown clients that responsible operations can also drive measurable business value.

Craig Bird, Managing Director, CloudTech24

Implement Targeted Green IT Strategies

From the full array of possible options, we picked those that balanced immediate environmental impact, low engineering overhead, and fast ROI. Each was implementable within normal sprints, so there was no need for a massive redesign or expensive new infrastructure. Here's what we implemented, with the approximate energy savings each change delivered:

1. Single Kubernetes cluster instead of multiple environments ~8-12%

We merged dozens of small, under-utilized clusters into one managed cluster (with separate namespaces and RBAC). This reduced control-plane overhead and idle node time. Alternatives like multi-cluster federation were heavier to maintain and didn't cut energy costs as much, which is why we didn't opt for them.
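
As a rough illustration of that pattern (not the team's actual configuration), per-team isolation inside one shared cluster comes down to a namespace plus a namespaced Role and RoleBinding; the team and group names below are hypothetical:

    # Hypothetical per-team isolation inside a single shared cluster.
    # The namespace scopes the team's resources; the Role/RoleBinding
    # limit the team's identity-provider group to that namespace only.
    apiVersion: v1
    kind: Namespace
    metadata:
      name: team-payments                  # assumed team name
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: team-payments-dev
      namespace: team-payments
    rules:
      - apiGroups: ["", "apps"]
        resources: ["pods", "services", "deployments"]
        verbs: ["get", "list", "watch", "create", "update", "delete"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: team-payments-dev-binding
      namespace: team-payments
    subjects:
      - kind: Group
        name: payments-developers          # assumed identity-provider group
        apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: Role
      name: team-payments-dev
      apiGroup: rbac.authorization.k8s.io

Pairing each namespace with a ResourceQuota (not shown) is a natural complement, so no single team quietly absorbs the capacity the consolidation freed up.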

2. CPU/memory limits (pretty obvious but still) ~10-15%

We set precise resource requests and limits to prevent over-provisioning; most services were reserving 2-3x what they actually used. This was simpler and more predictable than leaning on autoscaling alone, which can still leave waste during low-load periods.
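
As a minimal sketch of what that right-sizing looks like in a Kubernetes Deployment (the service name, image, and figures are illustrative, not the author's actual values):

    # Hypothetical Deployment with explicit requests and limits.
    # Requests are sized from observed usage rather than guesses,
    # so the scheduler packs nodes tightly instead of reserving 2-3x headroom.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: order-api                      # assumed service name
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: order-api
      template:
        metadata:
          labels:
            app: order-api
        spec:
          containers:
            - name: order-api
              image: registry.example.com/order-api:1.4.2   # placeholder image
              resources:
                requests:
                  cpu: "250m"              # derived from observed usage
                  memory: "256Mi"
                limits:
                  cpu: "500m"
                  memory: "512Mi"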

3. Autoscale-to-zero for stateless jobs ~18-25%

For microservices that don't need constant uptime (e.g., batch processors, webhooks), we used Knative-style scale-to-zero. Alternatives, like shutting down VMs manually or scheduling cron-based containers, required more manual ops effort and lacked elasticity, which is why this was the better fit.
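
For reference, that behavior can be expressed roughly like this with Knative Serving (the workload name, image, and scale bounds are assumptions; stock Knative already defaults the minimum scale to zero):

    # Hypothetical Knative Service for a webhook handler.
    # With min-scale 0, idle revisions are scaled down to zero pods
    # and spun back up on the next incoming request.
    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: webhook-processor              # assumed workload name
    spec:
      template:
        metadata:
          annotations:
            autoscaling.knative.dev/min-scale: "0"
            autoscaling.knative.dev/max-scale: "10"
            autoscaling.knative.dev/scale-down-delay: "2m"   # keep pods warm briefly after bursts
        spec:
          containers:
            - image: registry.example.com/webhook-processor:latest   # placeholder image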

4. Carbon-aware batch scheduling ~4-7%

Instead of running data jobs 24/7, we shifted them to nighttime and off-peak windows, when demand on the grid (and typically its carbon intensity) is lower. Alternatives like moving workloads across regions carried compliance and latency risks and are generally harder to implement.
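
A sketch of the off-peak pattern, assuming the jobs run as Kubernetes CronJobs (the job name, schedule, and time zone are assumptions; the timeZone field needs Kubernetes 1.27 or newer):

    # Hypothetical nightly batch job, shifted to off-peak hours
    # instead of running continuously throughout the day.
    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: analytics-rollup               # assumed job name
    spec:
      schedule: "0 2 * * *"                # 02:00 every night
      timeZone: "Europe/Warsaw"            # assumed region
      concurrencyPolicy: Forbid            # never overlap runs
      jobTemplate:
        spec:
          template:
            spec:
              restartPolicy: Never
              containers:
                - name: rollup
                  image: registry.example.com/analytics-rollup:latest   # placeholder image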

5. ARM nodes for heavy workloads ~6-9%

ARM processors (like Graviton on AWS) consume less power per compute unit and cost less than x86. Rewriting for GPUs wasn't worth it, and migrating to on-prem renewable-powered servers would've required capex we didn't control.
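
Steering work onto those nodes is mostly a scheduling concern. A minimal sketch, assuming an arm64 node pool already exists (e.g., Graviton instances) and the workload and image names are placeholders:

    # Hypothetical Deployment pinned to arm64 nodes
    # via the standard kubernetes.io/arch node label.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: media-transcoder               # assumed heavy workload
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: media-transcoder
      template:
        metadata:
          labels:
            app: media-transcoder
        spec:
          nodeSelector:
            kubernetes.io/arch: arm64      # standard architecture label
          containers:
            - name: transcoder
              image: registry.example.com/media-transcoder:arm64   # must be an arm64 or multi-arch build

The node selector only places the pod; it doesn't make an x86-only image runnable, so publishing arm64 or multi-arch images is the real prerequisite.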

6. Data retention tiers ~12-18%

We classified data into hot, warm, and cold storage tiers, cutting usage of expensive high-IO storage. Deleting data wasn't an option (compliance issues), but tiering reduced both storage energy use and costs.
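
The tiering itself can usually be delegated to the storage provider's lifecycle rules. A sketch, assuming AWS S3 expressed as a CloudFormation snippet (the bucket name, thresholds, and storage classes are assumptions; the write-up doesn't name the provider):

    # Hypothetical lifecycle rules: hot objects stay in STANDARD,
    # warm data moves to infrequent access after 30 days,
    # cold data moves to Glacier after 180 days. Nothing is deleted.
    Resources:
      RetentionTieredBucket:
        Type: AWS::S3::Bucket
        Properties:
          BucketName: example-analytics-archive   # placeholder name
          LifecycleConfiguration:
            Rules:
              - Id: warm-then-cold
                Status: Enabled
                Prefix: ""                         # applies to all objects
                Transitions:
                  - StorageClass: STANDARD_IA
                    TransitionInDays: 30
                  - StorageClass: GLACIER
                    TransitionInDays: 180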

How it went:

We began with a baseline (telemetry + cloud bills) and made a "green scorecard."

Then we selected low-effort/high-impact adjustments, A/B tested improvements, and tracked p95 latency and kWh side by side.
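
The scorecard format isn't shown here; as a purely illustrative sketch, one entry per initiative might pair the performance and energy figures like this (every number is an invented placeholder):

    # Hypothetical "green scorecard" entry; all figures are placeholders.
    initiative: autoscale-to-zero-for-stateless-jobs
    baseline:
      compute_hours_per_month: 7200
      estimated_kwh_per_month: 950
      p95_latency_ms: 210
    after_change:
      compute_hours_per_month: 5400
      estimated_kwh_per_month: 730
      p95_latency_ms: 215                  # latency tracked alongside energy
    notes: measured over a four-week A/B window against comparable traffic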

These improvements are not strictly cumulative, but combined they gave us around 45% less energy use, 50% fewer compute hours, and 40% lower costs.

Aleksa Baburska, Director of Solution Acceleration, Devox Software
