With the rise of the energy-hungry data centre and the emergence of its greedy cousin AI, soaring energy demand for IT services is already placing substantial strain on power networks worldwide. In some places this unprecedented load is becoming intolerable. In Ireland, for example, the TSO (transmission system operator) EirGrid effectively banned further development of grid-connected data centres in the Dublin area from 2022 until 2028, prompted by concerns that data centre demand would outstrip the grid’s ability to cope. It’s perhaps easy to see why: according to Ireland’s Central Statistics Office, data centres used more than a fifth of all metered electricity consumed in the country in 2023. The issue stretches far beyond Ireland. Estimates vary, but some suggest that data centres already consume around 3% of global electricity production, a figure that could rise to more than 10% by 2030.
While most individual data centres are relatively small in terms of energy consumption, with demand of around 5-10 MW, larger, so-called hyperscale, data centres are emerging with capacities of 100 MW or more. This is becoming a strong trend as the use of AI becomes more prevalent. A search using an AI platform like ChatGPT uses around 10 times more processing power and energy than a standard, simple Google search. Again, this is being reflected in supply challenges for power utilities. Late last year, for example, the Tennessee Valley Authority (TVA) had to grant special permission to supply 150 MW to a data centre in South Memphis developed by Elon Musk’s xAI company for its Grok AI platform.
At the same time, this prodigious energy demand naturally comes with a large carbon footprint. Consequently, major data centre players such as Amazon, Microsoft, Meta and Google are investing heavily in renewable energy as a way of offsetting their energy use. Indeed, IEA analysis notes that these data giants are also the four largest purchasers of corporate renewable energy, having contracted at least 50 GW, equivalent to the entire generation capacity of Sweden. While contracting for renewables achieves the low-carbon goal, these technologies come with their own set of challenges, chiefly their variability. Above all else, data centres require a reliable energy supply regardless of whether the wind blows or the sun shines.
In the light of these looming challenges, and the primary need to secure dependable 24/7 power, data centres are increasingly looking at ways of adding self-generated, dispatchable power on site, if not as the primary source then at least as backup. This approach avoids many of the issues associated with a lengthening connection queue to a congested grid, smooths the variability that is characteristic of renewable resources like wind and solar, and provides operational independence and self-sufficiency. However, while generating power on site may be a good choice for a growing number of data centres, reliability remains the key consideration.
The aeroderivative choice
One of the prime choices for data centre self-generation of power that can be controlled and adjusted on demand is the use of aeroderivative turbines such as the GE Vernova LM2500XPRESS, Siemens/Rolls-Royce SGT-A45TR or the Solar Titan family of machines. Highly efficient, with renowned reliability, and extremely quick to install and start from cold when compared with other solutions, aeroderivatives meet many of the core data centre power requirements. For these reasons, aeroderivatives have long been used to serve as backup power units for data centres when there are disruptions to grid-supplied power. In addition, they are deployed to smooth any major fluctuations in renewable energy supplies. In Texas, for example, where wind and solar farms are sometimes recruited as the primary electrical power source, gas turbines are often used to provide rapid response flexibility and act as backup when there is cloud cover or low windspeeds.
And, with outputs of the order of 25-60 MW, such machines are well matched to serving the data centre sector right up to hyperscale. Larger frame gas turbines are also finding use; however, employing multiple smaller aeroderivative units allows an N+1 capacity model to be applied as AI demand grows. By deploying multiple machines, including a reserve unit or two, it is relatively easy to ensure that sufficient generation capacity is always available to meet demand during routine maintenance outages, or even in the event of an unanticipated failure.
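The N+1 sizing logic described above can be sketched as a simple calculation. The unit rating, demand figure and function name below are illustrative assumptions for the sake of the example, not vendor data:

```python
import math

def units_required(peak_demand_mw: float, unit_rating_mw: float,
                   reserve_units: int = 1) -> int:
    """Turbines needed to cover peak demand (N), plus reserve units (+1, +2, ...).

    With a reserve unit in place, any single machine can be taken offline for
    maintenance, or fail unexpectedly, without capacity dropping below demand.
    """
    n = math.ceil(peak_demand_mw / unit_rating_mw)
    return n + reserve_units

# Illustrative example: a 100 MW hyperscale site served by 35 MW-class units.
print(units_required(100, 35))                   # N = 3 to meet demand, +1 reserve -> 4
print(units_required(100, 35, reserve_units=2))  # N+2 -> 5
```

As demand grows, the same calculation simply returns a larger N, which is why the modular approach scales more gracefully than a single large frame machine.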
With their origins as aircraft jet engines, aeroderivatives have for many years found applications offshore and in mobile generation, typically supplying emergency power after a natural disaster or temporary power for a large public event. These smaller machines also allow easier and quicker installation. When deployed as backup power for AI data centres, such units can be started quickly and turned down just as quickly. Given that many data centre sites in North America are leased rather than owned outright, this is a distinct advantage. And, as many mobile units can be classed as non-permanent structures, they lend themselves more easily to environmental permitting and operations licensing.
For all their advantages, though, extreme care is required to ensure that aeroderivative turbines can retain their high efficiency and reliability over the long term and when operating in harsh, challenging environments.
Reliability is paramount
When operating, even smaller turbines ingest huge volumes of air as part of their combustion process. This unfiltered ambient air will be laden with contaminants that can critically impact turbine performance.
Dust, grit, water, hydrocarbons and salt, as well as insects and pollen, can all find their way into the sophisticated turbine internals, where they can cause failures. Particulates like dust and grit erode the compressor section, degrading aerofoil profiles and gas turbine efficiency. Even where such impacts are minimised through regular online water washing, contamination can still adhere to the compressor blades, changing their geometry and again reducing operational efficiency.
Airborne salt found at coastal sites will cause corrosion, and salt can make its way deep into the engine to the hot gas path, where corrosive chemical reactions are accelerated. As salt cycles between solid and liquid phases and combines with dust, it forms hard-to-shift, corrosive, mud-like deposits.
In some locations, snow and ice can also impede airflow, and airborne pollutants like hydrocarbons present their own filtration issues. There is a world of ambient air challenges, and they vary with location and season. Removing these contaminants typically introduces pressure losses, which in turn cause a drop in operational efficiency, translating directly into lower output and more kg of CO2 per MWh produced.
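The link between intake pressure loss, lost output and higher specific emissions can be illustrated with a rough calculation. The sensitivity used below (about 1% output loss per kPa of inlet pressure drop) is a commonly quoted rule of thumb applied here only as an assumption; the fuel-burn figure and function names are likewise illustrative, and real values are turbine-specific and should come from the OEM.

```python
# Assumed output penalty per kPa of inlet pressure loss (rule of thumb).
SENSITIVITY_PCT_PER_KPA = 1.0

def derated_output_mw(rated_mw: float, inlet_loss_kpa: float) -> float:
    """Net output after applying the assumed inlet-loss penalty."""
    return rated_mw * (1 - SENSITIVITY_PCT_PER_KPA * inlet_loss_kpa / 100)

def specific_co2(fuel_co2_kg_per_h: float, output_mw: float) -> float:
    """kg CO2 per MWh: the same fuel burn spread over less electrical output."""
    return fuel_co2_kg_per_h / output_mw

# Example: a 35 MW unit with 1.2 kPa total intake pressure loss.
clean = derated_output_mw(35, 0.0)   # 35.0 MW
dirty = derated_output_mw(35, 1.2)   # ~34.58 MW
# At an illustrative fuel burn of 18,000 kg CO2/h, specific emissions rise
# from roughly 514 to roughly 521 kg CO2/MWh for the same fuel consumed.
```

The same arithmetic also explains why filter loading matters: as elements load up between washes or changes, the pressure loss, and with it the penalty, grows.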
While some of these contamination issues can be alleviated by regular offline washing, this requires a shutdown of the machine and thus a temporary loss of utility.
Often, these turbine units are not running at full capacity for 8000+ hours a year, so there are likely to be opportunities for offline washes that do not impact immediate operational requirements.
However, more significant are the implications of contaminants entering the turbine and the consequent potential reliability concerns. Digital services are such a central element in many people’s daily lives that the implications of a data centre going offline are profound, even life threatening.
Given the primacy of turbine reliability – and with it, assured data centre availability – effective air intake filtration becomes a critical factor.
Addressing the full range of potential contaminants under all kinds of ambient environmental conditions from hot dusty arid regions to frigid urban sites, air filtration is extremely challenging and relies on extensive research, development and testing. Parker Hannifin, for example, invests substantial funds in its comprehensive testing programmes, including mobile test rigs that can be deployed in appropriate locations. Backed by extensive testing and quality control, the performance of durable hydrophobic and oleophobic filtration media can be assured under all conditions.

However, intake filtration is much more than the filter media alone. The entire system between the air intake and the turbine plenum has a role to play in maintaining optimum performance and reliability. The intake system typically includes inertial separation vanes to prevent water ingress, and acoustic silencing – particularly important for data centres located in urban environments – and may include air inlet heating and cooling too. The whole system within a housing must fit perfectly to prevent bypass that might allow unfiltered air into the turbine.

All of these elements require an intimate understanding of aerodynamics, thermodynamics, mechanical design and acoustics in addition to a core expertise in air filtration. How all these vital air intake components interact affects flow dynamics and system pressure loss and impacts not only air cleanliness but airflow distortion at the gas turbine bellmouth. To work effectively the system must be designed as a single compact unit and fitted with filter elements designed for the task and specified for the specific environmental conditions encountered.
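The way the individual intake components combine into the system-level pressure loss seen at the bellmouth can be sketched as a simple sum. The component names and loss values below are purely illustrative assumptions; real figures depend on face velocity, filter loading and silencer design, and come from system-level testing.

```python
# Illustrative, order-of-magnitude pressure losses (Pa) for the intake
# components described above -- assumed values, not measured data.
component_losses_pa = {
    "weather hood / inertial vanes": 60,
    "filter stage (clean elements)": 250,
    "acoustic silencer": 120,
    "ducting and plenum": 80,
}

total_pa = sum(component_losses_pa.values())
print(f"Total system pressure loss: {total_pa} Pa ({total_pa / 1000:.2f} kPa)")
# The filter term grows as elements load with contaminant, so the total rises
# between washes or element changes; sizing must allow for the dirty state.
```

This is why the article stresses designing the intake as a single integrated unit: trimming loss in one component is of little value if another component, or bypass leakage, dominates the total.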
For data centres, reliability is the paramount consideration, and when choosing power turbines to meet that goal, air intake filtration is fundamental. Data giants like Amazon, Microsoft, Meta, Google and X understand that the real challenge is not access to power but securing dependable power. Poor filtration choices put that dependability at risk.