One of the biggest drivers in UPS development over the last 10-15 years has been improving efficiency. Aided and abetted by the introduction of the transformerless design, most manufacturers' claimed efficiency figures are now circa 97%. Importantly, this figure applies in on-line double conversion mode. This is, of course, good news in today's climate and carbon footprint conscious environment, and it also helps reduce the UPS's total cost of ownership (TCO). As continued technical development on efficiency followed the law of diminishing returns, various concepts were marketed as a means of further increasing a UPS system's overall efficiency, such as operating the UPS in static bypass mode, sold under various names such as Eco-Mode, High Efficiency Mode and so on.
Another of these concepts was "Variable Load Management", also marketed under various names. Briefly, this equates to switching off, or putting to "sleep", those UPS modules in a multi-module parallel N + x redundant system that are not required to maintain the overall system's resilience.
UPS systems are generally configured for a much greater load than is actually being protected and, of course, we want the UPS to operate at the best point on its efficiency curve. An older legacy-type system reached its optimum efficiency at the higher end of the load spectrum, so if it is used to support a very low load it operates lower down its efficiency curve, wasting energy and costing more to run. So scalability and flexibility became essential considerations when purchasing, to ensure the continual 'right sizing' of the UPS to maximise efficiency, minimise running costs and reduce carbon footprints.
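To put a rough scale on the energy wasted lower down the efficiency curve, here is a back-of-envelope sketch. The load and efficiency figures are hypothetical, chosen only to illustrate the arithmetic, not drawn from any particular product:

```python
# Hypothetical figures: a 100 kW protected load.
# UPS losses = input power - output power = load * (1/efficiency - 1).

def losses_kw(load_kw, efficiency):
    """UPS losses in kW at a given load for an efficiency between 0 and 1."""
    return load_kw * (1.0 / efficiency - 1.0)

load = 100.0  # kW, illustrative

# A modern UPS near the top of its curve (circa 97% efficient):
modern = losses_kw(load, 0.97)   # about 3.1 kW dissipated as heat

# A legacy UPS running far down its curve (say 90% at low load):
legacy = losses_kw(load, 0.90)   # about 11.1 kW dissipated as heat

# Running continuously, the difference compounds over a year:
extra_kwh_per_year = (legacy - modern) * 24 * 365  # roughly 70,000 kWh
```

The exact numbers will vary by product and load point, but the shape of the argument holds: a few percentage points of efficiency, sustained around the clock, translate into a substantial running-cost and carbon difference.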
Another way to address the challenge of oversized UPS parallel redundant systems was this idea of variable load management. Essentially, if five modules are sharing a load equally at 20% each, this may be inefficient for some UPS systems, so a number of modules can be switched off. The remaining modules then operate at a higher capacity point, further up their efficiency curve.
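The load-sharing arithmetic behind this can be sketched in a few lines. The module count and ratings below are illustrative assumptions, not taken from any real system:

```python
# Hypothetical sketch of variable load management arithmetic:
# equal load sharing across the modules that remain active.

def per_module_load(total_load_kw, module_rating_kw, active_modules):
    """Fraction of its rating each active module carries under equal sharing."""
    return total_load_kw / (active_modules * module_rating_kw)

# Five 100 kW modules protecting a 100 kW load share 20% each:
all_on = per_module_load(100, 100, 5)    # 0.20

# Putting two modules to sleep leaves three sharing the load at
# roughly 33% each, further up a typical efficiency curve:
three_on = per_module_load(100, 100, 3)  # about 0.33
```

In practice the controller must also keep enough modules awake to preserve the N + x redundancy margin, which is exactly the trade-off the rest of this article weighs up.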
Perhaps the ability to 'switch modules off automatically' under a variable load is desirable. Empirical evidence is sparse, but the engineer in me says that constantly turning electronic components on and off must naturally reduce their life expectancy. Modules take time to warm up, and the repeated warming and cooling must stress the components, potentially leading to higher failure rates and therefore a higher risk when it comes to power protection.
An alternative solution is to select a system whose modules can all continue to protect the load while still reaching the optimal point on their efficiency curve even at very low loads (<10%). These modules stay on all the time yet still work efficiently, so the risk to the load is minimised. For this reason, while in theory variable load management may sound like a good option for an oversized system, correct right-sizing from the outset, with an eye on easy scalability for future load changes, and selecting a UPS that can operate efficiently even at very low loads (<10%), must be a better option.
The good news is that increased efficiency and lower TCO are closely linked: the most efficient systems enjoy ongoing operating cost savings, not to mention a reduced environmental impact. Moving forwards, with the need to store data continuing to grow rapidly, data centres need to be working with trusted advisers who understand how much power a server rack consumes and can calculate the most efficient option for the facility both now and in the future.
Article featured in Data Centre Management Magazine Feb 2020