Dynamic Voltage Scaling for Low-Power Hard Real-Time Systems: Intertask Voltage Scaling for Hard Real-Time Systems
Where intratask dynamic voltage scheduling targets energy reduction by scheduling speeds inside a single task, without using knowledge about the other tasks that might run in the system, intertask scheduling addresses speed selection at the task-group level. The advantage of group-level speed scheduling comes from the possibility of distributing the workload more evenly across tasks and over the available execution time. As a result, more time is spent at lower speeds, leading to lower energy consumption.
As detailed in the previous chapter, intratask DVS means, in principle, selecting the right processor speed for specific task sections. At that level, timing is strictly determined by the speed-to-section assignment, and therefore intratask DVS scheduling and speed selection at the task level refer basically to the same notion. Moving up to groups of tasks, however, intertask DVS refers not only to assigning speeds to tasks, but also to deciding a start time for each task. Alternatively, one can view this as assigning to each task a time interval within which it has to start and complete its execution in a manner that is optimal from the energy point of view. Thus, in general, the intertask DVS problem is at least as hard as the classic scheduling problem. Occasionally, research in this area addresses intertask DVS in two steps: first deciding the order of task execution (scheduling), then assigning speeds to individual tasks (speed selection). However, we will use dynamic voltage scheduling or speed scheduling to refer to this process as a whole.
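To make group-level speed selection concrete, the following sketch computes the classic static speed for a periodic task set scheduled under earliest deadline first: running the processor at the WCET utilization U = Σ C_i/T_i preserves EDF feasibility while minimizing energy for a convex power function. This is a well-known textbook result rather than a method specific to this chapter; the function name and task values are illustrative only.

```python
def static_edf_speed(tasks):
    """tasks: list of (wcet_at_max_speed, period) pairs.
    Returns the minimal constant normalized speed in (0, 1] that keeps
    the task set EDF-schedulable, or None if the set is infeasible
    even at full speed."""
    utilization = sum(c / t for c, t in tasks)
    if utilization > 1.0:
        return None  # infeasible even at maximum speed
    return utilization

# Hypothetical task set: (WCET at full speed, period)
tasks = [(1.0, 8.0), (2.0, 10.0), (1.0, 20.0)]
print(static_edf_speed(tasks))  # utilization 1/8 + 2/10 + 1/20 = 0.375
```

Note that this is a purely static, WCET-based decision; the dynamic schemes discussed later in the chapter improve on it by exploiting run-time variations in actual execution times.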
Typically, at the task-group level one is not interested in how exactly each individual task makes use of its assigned time interval. In general, it is assumed that tasks can optimally select their execution speed inside the assigned interval. In this sense, intratask and intertask DVS are orthogonal and can, and should, be used to complement each other. Nevertheless, it may not always pay off to use the best, and usually most complex, techniques at both the individual-task and task-group level. For one, intertask strategies may compensate for poor intratask decisions, making complex task-level approaches superfluous. Second, the energy-optimization version of the law of diminishing returns says that less and less energy is gained for the same added effort. Consequently, the scheduling overhead might end up consuming more energy than it saves.
Given the extensive research and results in the area of hard real-time scheduling, it is understandable that the majority of intertask DVS strategies build on top of classic hard real-time scheduling approaches. In fact, because of the strong dependency between energy, processor speed, and timing, using DVS to reduce energy consumption while meeting hard deadlines is more of a challenge in this kind of real-time system than in any other. Although DVS techniques designed for soft real-time, Quality of Service, and user-perceived-latency-oriented systems do exist, this chapter focuses on hard real-time approaches, as these remain applicable to less constrained systems.
From the architectural point of view, multiprocessor systems appear to be at least as energy-efficient as uniprocessor systems, and DVS strategies especially designed for multiprocessor systems do exist [29–43]. However, given the restricted space, only techniques designed for uniprocessor systems are presented in what follows. Nevertheless, these can be extended to multiprocessor architectures in the same way classic real-time algorithms can be derived for multiprocessor systems.
A Taxonomy of InterDVS
A classification of the existing intertask DVS hard real-time approaches is by no means trivial. Besides the large number of InterDVS algorithms proposed in recent years (perhaps more than necessary in practice!), most InterDVS algorithms blend various techniques and methods, which makes them difficult to classify. Very few belong exclusively to one class, and therefore we will rather classify scheduling decisions than complete approaches. One can group intertask DVS scheduling methods according to:
• Their occurrence, or the moment they are employed. Approaches range from fully static, where all decisions regarding scheduling and speed selection are taken off-line, before the system becomes operational, to mostly dynamic, where run-time speed management and scheduling are employed.
• Their complexity, or the overhead required by the scheduling strategy. Since intertask DVS, just like classic scheduling, is generally a hard problem, optimal algorithms are expensive. Heuristics with lower overhead may be employed at the expense of efficiency.
• Their foundation, or the classic scheduling algorithm they build upon. To guarantee hard real-time requirements, most approaches employ a classic scheduling algorithm, such as rate-monotonic (RM) [44] or earliest deadline first (EDF) [45], which is then extended with various off-line and run-time speed selection strategies.
• Their flexibility, or the ability to accommodate and exploit run-time task execution variations. Some approaches use only the worst-case execution time (WCET) of tasks to make scheduling decisions, while others may employ profiling, statistical information, and even execution history, to adapt to run-time variations.
Notice that the criteria identified above are not necessarily independent: for example, high-overhead but accurate methods are more likely to be employed off-line than at run time. Furthermore, depending on its foundation, an intertask DVS method may be more or less flexible, employing predominantly run-time or off-line decisions. In this chapter, we classify existing techniques mainly according to occurrence and foundation, as shown in Table 18.1.
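As a concrete illustration of the flexibility criterion, the sketch below follows the spirit of cycle-conserving EDF scheduling: when a job finishes earlier than its WCET, its utilization contribution is temporarily lowered to the cycles actually used and the speed is recomputed, and on the task's next release its WCET-based share is restored. The class and task names here are hypothetical, chosen only to illustrate the idea.

```python
class Task:
    def __init__(self, name, wcet, period):
        self.name, self.wcet, self.period = name, wcet, period
        self.share = wcet / period  # current utilization contribution

class CycleConservingScheduler:
    def __init__(self, tasks):
        self.tasks = tasks

    def speed(self):
        # Minimal constant normalized speed for the current (possibly
        # reduced) load, capped at full speed.
        return min(1.0, sum(t.share for t in self.tasks))

    def on_release(self, task):
        # A new job must again be budgeted for its full WCET.
        task.share = task.wcet / task.period

    def on_completion(self, task, actual_time):
        # The job finished early: reclaim the unused worst-case budget.
        task.share = actual_time / task.period

tasks = [Task("sensor", 2.0, 10.0), Task("control", 3.0, 15.0)]
sched = CycleConservingScheduler(tasks)
print(sched.speed())               # 0.2 + 0.2 = 0.4 at release time
sched.on_completion(tasks[0], 1.0) # "sensor" used only half its WCET
print(sched.speed())               # lower speed after reclaiming slack
```

Such a scheme sits on the dynamic, flexible end of the taxonomy: its foundation is EDF, but its speed decisions adapt at every job completion rather than being fixed off-line.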