Research Areas

Predictable Scheduling of Multicore Processors for Real-Time Systems

While multiprocessor real-time scheduling is an old problem, multicore processors have brought renewed interest to it, along with new dimensions. For instance, there is a need to trade off different levels of migration cost, different degrees of inter-core hardware sharing (e.g., memory bandwidth), and so on. Our research aims to provide new knobs for performing these tradeoffs based on additional application information.

Beyond real-time systems, general-purpose systems now face the fact that they must parallelize their work in order to obtain the expected performance increase from the additional cores in new processors. However, partitioning the work into parallel pieces is a necessary but not sufficient condition. Equally important is the allocation of CPU cycles to these parallel pieces (tasks). In the extreme, if we run all the tasks of the parallel pieces on the same core, the parallelism is completely wiped out. Hence, the task-to-core allocation and the scheduling of hardware resources shared between cores (e.g., cache, memory bandwidth) can completely change the performance of these systems. We are working on new ways to take advantage of application knowledge as a parameter in the scheduling algorithms at all levels of the computer system.
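The effect described above can be illustrated with a toy makespan model; the task costs, core counts, and allocations below are made up for illustration only:

```python
# Toy model of task-to-core allocation: the makespan (time until all
# tasks finish) depends on how the tasks are distributed across cores.
# Costs and allocations are illustrative, not measurements.

def makespan(task_costs, allocation, num_cores):
    """Tasks placed on the same core run sequentially, so each core's
    finish time is the sum of its tasks' costs; the makespan is the
    maximum finish time over all cores."""
    per_core = [0.0] * num_cores
    for cost, core in zip(task_costs, allocation):
        per_core[core] += cost
    return max(per_core)

tasks = [4.0, 4.0, 4.0, 4.0]  # four parallel pieces, 4 time units each

# All pieces on core 0: the parallelism is completely wiped out.
serial = makespan(tasks, [0, 0, 0, 0], num_cores=4)   # 16.0

# One piece per core: the expected 4x speedup from four cores.
spread = makespan(tasks, [0, 1, 2, 3], num_cores=4)   # 4.0
```

This ignores inter-core resource contention (cache, memory bandwidth), which in practice narrows the gap between the two allocations and is precisely what the scheduling knobs above are meant to manage.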

Mixed-Criticality Scheduling for Real-Time Systems

Functional consolidation can create a resource-sharing problem between tasks of different criticality. Furthermore, when tasks of different criticalities have execution times with large variations that depend on environmental conditions, we end up in a situation where overloads are common and the scheduler needs to provide well-defined overload guarantees. For instance, the execution time of an obstacle-avoidance (OA) task in an autonomous vehicle depends on the number of obstacles the vehicle detects. As a result, when an excessive number of obstacles is detected, the scheduler should be able to provide more CPU cycles to the OA task by stealing them from other, less important tasks.
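One simple policy with this behavior can be sketched as budget reallocation in criticality order; the task names, budgets, and policy below are illustrative assumptions, not a description of any specific scheduler:

```python
# Sketch of criticality-aware overload handling: CPU budget is granted
# in criticality order, so when a high-criticality task (e.g., obstacle
# avoidance) demands more cycles than usual, the shortfall is absorbed
# by the least critical tasks. All names and numbers are made up.

def reallocate(tasks, total_budget):
    """tasks: list of (name, criticality, demand), where a lower
    criticality number means more critical. Returns the per-task grant."""
    grants = {}
    remaining = total_budget
    for name, crit, demand in sorted(tasks, key=lambda t: t[1]):
        grant = min(demand, remaining)
        grants[name] = grant
        remaining -= grant
    return grants

nominal = [("obstacle_avoidance", 0, 30), ("navigation", 1, 30), ("logging", 2, 30)]
overload = [("obstacle_avoidance", 0, 60), ("navigation", 1, 30), ("logging", 2, 30)]

print(reallocate(nominal, 90))   # every task receives its full demand
print(reallocate(overload, 90))  # logging loses all of its cycles to OA
```

The point of the sketch is the overload guarantee: the set of tasks that can lose cycles, and in which order, is fixed by criticality rather than decided ad hoc at run time.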

Analytic Integration of Cyber-Physical Systems

A 2003 NIST study found that between 50% and 80% of the cost of a software project is spent on testing. Furthermore, in multi-tier industries where the final product is composed of components provided by multiple suppliers, the bulk of the problems are found at integration time. This is the case in the avionics and automotive industries, where conflicting design decisions and assumptions made by the different suppliers can create costly integration errors.

To solve the integration problem, we are working on new approaches that replace the integration of physical parts with an integration of analytical models. Once the models are integrated, analysis algorithms replace integration testing in the discovery of potential errors. This replacement is not one-to-one; instead, it aims to raise the confidence of the validation to very high levels while keeping the cost of correcting errors as low as possible. Because this integration process does not involve a physical integration, it is called virtual integration.
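A minimal sketch of what such an analysis can look like, assuming an assume/guarantee style of component model (the components, property names, and values below are invented for illustration):

```python
# Virtual integration sketch: each component model publishes the
# properties it guarantees and the properties it assumes about the rest
# of the system. An analysis pass checks every assumption against the
# pooled guarantees *before* any physical integration takes place.

def virtual_integration_errors(components):
    """components: dict name -> {"guarantees": {prop: value},
    "assumptions": {prop: predicate}}. Returns (component, property)
    pairs whose assumption no guarantee satisfies."""
    guarantees = {}
    for model in components.values():
        guarantees.update(model["guarantees"])
    errors = []
    for name, model in components.items():
        for prop, required in model["assumptions"].items():
            if prop not in guarantees or not required(guarantees[prop]):
                errors.append((name, prop))
    return errors

components = {
    "sensor": {
        "guarantees": {"sensor.period_ms": 10},
        "assumptions": {},
    },
    "controller": {
        "guarantees": {"controller.latency_ms": 5},
        # The controller supplier designed for fresh data every 5 ms,
        # but the sensor supplier only guarantees a 10 ms period: a
        # conflicting assumption caught analytically, not at test time.
        "assumptions": {"sensor.period_ms": lambda p: p <= 5},
    },
}

print(virtual_integration_errors(components))
# [('controller', 'sensor.period_ms')]
```

A real virtual-integration analysis works over far richer models (timing, scheduling, behavior), but the structure is the same: mismatched supplier assumptions surface as analysis results rather than as integration-lab failures.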

The avionics industry has formed the Aerospace Vehicle Systems Institute (AVSI) to explore the use of model-based engineering approaches for virtual integration.

Two DARPA research initiatives have explored related topics. First, META-II explored a foundry manufacturing style in which a comprehensive model is the only interface between design and manufacturing. Such a model must contain a complete description of the desired behavior. The design stage is then responsible for both producing the model and testing it for correctness. Once the model leaves the design stage, it is the only and complete description of the system, and it can be used in a foundry to create the final, correct product. This character of the model implies the use of analysis algorithms that verify the behavior of the model, in the same spirit as the approach we are pursuing.

Second, the Systems 2020 initiative aims to increase the speed of development of cyber-physical systems and their foreseen changes while keeping them adaptable to unforeseen changes. Both model-based engineering and platform-based engineering (PBE) are recognized as key technologies for achieving these goals. We believe that analytic models can play a key role in this effort in at least two areas. First, analytic models can increase not only the speed of development of cyber-physical systems but also the confidence in their correctness. Second, these models can exploit analytic invariants of stable PBE layers (e.g., limited interaction between components) that simplify the analysis of the models, increasing its scalability and even enabling new types of analysis and properties.

Finally, we recognize that a significant number of research challenges still lie ahead of us. As a result, we organized the Analytic Virtual Integration of Cyber-Physical Systems Workshop (AVICPS) to lead a research-community effort on this front. AVICPS is currently in its third edition, co-located with the IEEE Real-Time Systems Symposium.