Nowadays, two main approaches are being pursued to reduce the energy consumption of networks: the use of sleep modes, in which devices enter a low-power state during inactivity periods, and the adoption of energy-proportional mechanisms, in which the device architecture is designed to make energy consumption proportional to the actual load. Common to all these proposals is the evaluation of energy-saving performance by means of simulation or experimental evidence, which typically considers a limited set of benchmarking scenarios.
In this paper, we do not focus on a particular algorithm or procedure to provide energy-saving capabilities in networks; rather, we formulate a theoretical model, based on random graph theory, that estimates the potential gains achievable by adopting sleep modes in networks where energy-proportional devices are deployed. Intuitively, when some devices enter sleep mode, some energy is saved. However, this saving could vanish because of the additional load (and hence power consumption) induced on the devices that remain active. The magnitude of this effect depends on the degree of load proportionality, so it is not simple to foresee in which scenarios sleep modes or energy proportionality is the more convenient choice.
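The tradeoff can be illustrated with a deliberately simplified sketch, not the random-graph model developed in this paper: assume a linear power profile P(load) = P_static + alpha * load, zero power for sleeping devices, and the traffic of sleeping devices redistributed uniformly over the active ones (all names and parameter values below are illustrative assumptions).

```python
def total_power(n_devices, load_per_device, p_static, alpha, sleep_fraction):
    """Total network power when a fraction of devices sleeps.

    Simplifying assumption: the traffic of sleeping devices is spread
    evenly over the remaining active devices, which all carry the same load.
    """
    active = n_devices * (1.0 - sleep_fraction)
    if active == 0:
        raise ValueError("at least one device must stay active")
    # Load conservation: the total traffic is rerouted to the active devices.
    load_active = n_devices * load_per_device / active
    return active * (p_static + alpha * load_active)

baseline = total_power(100, 0.3, p_static=10.0, alpha=5.0, sleep_fraction=0.0)
with_sleep = total_power(100, 0.3, p_static=10.0, alpha=5.0, sleep_fraction=0.4)
saving = 1.0 - with_sleep / baseline
```

Under these assumptions the load-proportional term is conserved (the same total traffic is served, just by fewer devices), so the absolute saving equals the static power of the sleeping devices; the energy balance therefore hinges on how large the static component is.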
Instead of conducting detailed simulations, we consider simple models of networks in which devices (i.e., nodes and links) consume energy proportionally to the traffic they handle, and in which a given fraction of nodes is put into sleep mode. Our model predicts how much energy can be saved in different scenarios. The results show that sleep modes can be successfully combined with load-proportional solutions; however, when the static power consumption component is one order of magnitude smaller than the load-proportional component, sleep modes are no longer convenient. Thanks to random graph theory, our model also gauges the impact of different properties of the network topology: for instance, highly connected networks tend to make the use of sleep modes more convenient.
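A back-of-the-envelope check, again under a simplified linear power profile rather than the paper's random-graph analysis, shows why a small static component erodes the gain: with the load-proportional term conserved under rerouting, the relative saving reduces to f * P_static / (P_static + alpha * rho), where f is the sleeping fraction and rho the per-device load (all symbols and values here are illustrative assumptions).

```python
def relative_saving(sleep_fraction, p_static, alpha, load):
    """Fractional energy saving when the proportional term is conserved."""
    per_device = p_static + alpha * load  # baseline power per device
    return sleep_fraction * p_static / per_device

# Static term comparable to the proportional term: sizeable saving.
comparable = relative_saving(0.4, p_static=5.0, alpha=5.0, load=1.0)
# Static term one order of magnitude smaller: the saving nearly vanishes.
small_static = relative_saving(0.4, p_static=0.5, alpha=5.0, load=1.0)
```

In the first case putting 40% of the devices to sleep saves 20% of the energy, while in the second it saves under 4%, consistent with the qualitative conclusion above.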