Dynamic Power Management Techniques
This section reviews various techniques for controlling the power state of a system and its components. One may treat components as black boxes whose behavior is abstracted by the PSM model and focus on how to design effective power management policies. Without loss of generality, consider the problem of controlling a single component (or, equivalently, the system as a whole). Furthermore, assume for the moment that transitions between states are instantaneous and carry no energy overhead. In such a system, DPM is trivial and the optimal policy is the greedy one: as soon as the system becomes idle, it is transitioned to the deepest sleep state available, and on the arrival of a request it is instantaneously reactivated. Unfortunately, most PMCs incur nonnegligible performance and power costs for state transitions. For instance, if entering a low-power state requires power-supply shutdown, returning from this state to the active state requires a (possibly long) time to (1) turn on and stabilize the power supply and the clock, (2) reinitialize the system, and (3) restore the context. When power state transitions have a cost, finding the optimal DPM policy becomes a difficult optimization problem. DPM policy optimization is then equivalent to a decision-making problem in which the PM must decide whether, when, and (if multiple low-power states are available) to which low-power state it is worthwhile, from a performance and power dissipation viewpoint, to transition.
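A standard way to quantify this trade-off is the break-even time: the shortest idle period for which entering a low-power state saves energy despite the transition overhead. The sketch below illustrates the computation under our own simplified assumptions (a single low-power state with known transition time and energy); the function and parameter names are ours, not from the text.

```python
def break_even_time(p_active, p_sleep, t_transition, e_transition):
    """Shortest idle period (s) for which shutting down saves energy.

    p_active     -- power while the component stays active but idle (W)
    p_sleep      -- power in the low-power state (W)
    t_transition -- total time to enter and exit the low-power state (s)
    e_transition -- total energy spent on the two transitions (J)
    """
    # Energy if the component stays on for an idle period T:  p_active * T
    # Energy if it shuts down for the same period:             e_transition + p_sleep * (T - t_transition)
    # The break-even point equates the two, and it can never be
    # shorter than the transition time itself.
    t_be = (e_transition - p_sleep * t_transition) / (p_active - p_sleep)
    return max(t_be, t_transition)
```

A component should be shut down only when the upcoming idle period is expected to exceed this break-even time; since the length of an idle period is not known when it begins, this is exactly where workload prediction, discussed below, comes into play.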
Example
Consider the StrongARM SA-1100 processor described in the previous example. Transition times between the RUN and STDBY states are very short, so the STDBY state can be optimally exploited by a greedy policy, possibly implemented by an embedded PM. In contrast, the wake-up time from the SLEEP state is much longer and must be compared with the time constants of the workload variations to determine whether the processor should be shut down. In the limiting case of a workload with no idle period longer than the time required to enter and exit the SLEEP state, a greedy shutdown policy (i.e., moving to the SLEEP state as soon as an STDBY period is detected) results in performance loss but no power saving, because the power consumption associated with the state transitions is of the same order of magnitude as that of the RUN state. An external PM that controls the power state transitions of the SA-1100 processor must therefore make online decisions based on the workload and the target performance constraints.
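To make the example quantitative, the snippet below plugs hypothetical figures, loosely in the spirit of the SA-1100 but not taken from its datasheet, into the break-even computation sketched above; it shows why idle periods shorter than the transition time yield no saving under a greedy SLEEP policy.

```python
# Hypothetical figures for illustration only (not SA-1100 datasheet values).
P_RUN, P_SLEEP = 0.4, 0.0002             # average power in RUN / SLEEP (W)
T_TRANSITION   = 0.2                     # time to enter and leave SLEEP (s)
E_TRANSITION   = P_RUN * T_TRANSITION    # transition power comparable to RUN power (J)

# Same break-even formula as in the sketch above.
t_be = max((E_TRANSITION - P_SLEEP * T_TRANSITION) / (P_RUN - P_SLEEP),
           T_TRANSITION)
print(f"break-even idle time: {t_be * 1000:.0f} ms")
# Any idle period shorter than this (about 200 ms with these numbers)
# pays the wake-up latency without saving energy, which is the limiting
# case described in the example.
```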
The aforementioned example is a simple DPM optimization problem that illustrates the two key tasks in designing a DPM solution. The first is policy optimization, i.e., solving a power optimization problem under performance constraints. The second is workload prediction, i.e., forecasting the near-future workload. In the following, different approaches to these two tasks are discussed.
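As a concrete, if deliberately simplified, illustration of how the two tasks fit together, the sketch below pairs an exponential-average predictor of the next idle period (one common workload prediction heuristic) with a threshold decision against the break-even time; the class and parameter names are ours, not taken from the literature cited below.

```python
class PredictiveShutdownPM:
    """Toy power manager: predict the next idle period from past ones
    (workload prediction) and shut down only if the prediction exceeds
    the break-even time (policy decision)."""

    def __init__(self, t_break_even, alpha=0.5):
        self.t_break_even = t_break_even   # from the component's PSM parameters
        self.alpha = alpha                 # smoothing factor of the predictor
        self.predicted_idle = 0.0

    def observe_idle_period(self, t_idle):
        # Exponential average of the idle periods seen so far.
        self.predicted_idle = (self.alpha * t_idle
                               + (1.0 - self.alpha) * self.predicted_idle)

    def should_shut_down(self):
        # Shut down only when the predicted idle period is long enough
        # to amortize the transition overhead.
        return self.predicted_idle > self.t_break_even
```

Mispredicting in either direction is costly: overestimating the idle period incurs the wake-up penalty, while underestimating it forgoes the saving; this tension is what the policies surveyed next try to manage.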
The early works on DPM focused on predictive shutdown approaches [7,8], which make use of “timeout”-based policies. A power management approach based on discrete-time Markovian decision processes was proposed in Ref. [9]. The discrete-time model requires policy evaluation at periodic time instances and may thereby consume a large amount of power even when no change in the system state has occurred. To surmount this shortcoming, a model based on continuous-time Markovian decision processes (CTMDP) was proposed in Ref. [10]. Policy changes under this model are asynchronous, which makes it more suitable for implementation as part of a real-time operating system environment. Ref. [11] proposed time-indexed semi-Markovian decision processes for system modeling. Other approaches, such as adaptive learning-based strategies [12], session clustering and prediction strategies [13], online strategies [14,15], and hierarchical system decomposition and modeling [34], have also been used to derive DPM policies for PMCs.
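For completeness, the fixed-timeout rule underlying the early predictive shutdown work can be sketched as follows; it uses the idle time already elapsed as an implicit predictor that the idle period will continue, and a common heuristic (stated here as our assumption, not prescribed by the cited papers) is to set the timeout equal to the break-even time.

```python
def timeout_policy(elapsed_idle, timeout):
    """Fixed-timeout rule: shut the component down once it has already
    been idle for `timeout` seconds, on the assumption that an idle
    period that has lasted this long is likely to continue long enough
    to amortize the transition cost."""
    return elapsed_idle >= timeout
```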
In the following sections, we describe various DPM techniques in more detail.