Strategic directions in real-time and embedded systems (Part 2)
3. FUTURE CHALLENGES AND RESEARCH

Real-time systems will become increasingly widespread as they support a wider range of applications. These systems have diverse time and resource requirements and must provide dependable and adaptable services with guaranteed temporal qualities. In addition, technical and economic factors make it necessary to build these systems from commodity computer networks and commonly used operating systems and communication protocols.

While not all the challenges and important research areas in real-time computing can be addressed in this text, a few key areas are emphasized. First, high-level challenges such as system evolution, open real-time systems, composability, and software engineering are discussed. Next, essential basic research is explored, including the science of performance guarantees, reliability and formal verification, general systems issues, real-time multimedia, and programming languages. Finally, the importance of education in real-time computing is highlighted. As these topics are closely related, there will unavoidably be some overlap, which serves to underscore the strong connections between them.

3.1 System Evolution

Computers have greatly transformed the manufacturing and service industries. However, the current real-time computing infrastructure poses challenges in terms of improving processes, upgrading equipment, and being adaptable to changing markets and increased global competition. Industry examples further illustrate these barriers.

A research department developed a process modification that greatly improved product yield and quality. The improvements were proven successful on a pilot plant by making minor changes to the processing sequence and adjusting key variables. However, they were never implemented in the actual plant because the line manager convinced management that doing so would not be cost-effective. The required software modifications to the process sequence, which is controlled by networked PLCs, are simple logic changes. These PLCs coordinate various valves, sensors, and PID loops using several hundred lines of ladder logic code, and the effect of the modifications on timing requirements is not known. The technician responsible for the PLC programs no longer works for the company, and the last attempt to modify the code resulted in a complete process shutdown with significant downtime costs. Consequently, the line manager is unwilling to install the process improvements developed by the research department.

Over time, the factory has replaced old equipment with new production technology. As a result, it now has a combination of old and new equipment from five different vendors, each with its own programming interface, data structures, data communication protocol, and timing characteristics.

Recently, one of the older process machines failed and was replaced with a new machine from a different vendor. The new machine had a shorter cycle time, improved reliability, and a more advanced control computer. However, installing the new equipment required developing and integrating it into yet another proprietary computing environment.

This process was costly, as custom interfaces had to be created for communication with other equipment in the system. Additionally, the timing effects were unpredictable, and extensive tests had to be carried out during factory downtime to ensure safe installation. In the end, the cost of integration was several times higher than that of the new equipment itself, and installation took much longer than initially estimated.

The real-time and embedded systems industry faces various issues. To tackle these issues, Chrysler, Ford, and GM collaborated on a white paper titled "Requirements of Open, Modular Architecture Controllers for Applications in the Automotive Industry" (Aug. 15, 1994). The paper outlines their requirements for next-generation real-time computing systems.

One crucial requirement is the ability of these systems to support quick changes while maintaining performance standards. Moreover, the controllers should allow users to easily enhance or add functionality without being dependent on technology vendors or controller suppliers.

A paradigm shift is needed in real-time computing: the focus should be on mitigating the risk and cost of introducing new technology into existing industrial systems, rather than solely on new installations. The industry requires a computing infrastructure that offers safe and predictable upgrades with minimal downtime.

The new real-time software architecture technology should incorporate several important features: extensive use of open, standard-based components such as backplane buses, operating systems, networks, and communication protocols; a cohesive set of interfaces for integrating process control, plant-wide scheduling, and plant management information systems; a convenient and safe environment for customization, optimization, and reconfiguration of plant operations; on-line development, testing, and integration of new technologies and products; and trouble-free replacement of obsolete subsystems. One research challenge in implementing off-the-shelf components for real-time applications is developing scheduling and resource management schemes whose predictability properties encompass functionality, timeliness, and fault tolerance.

A second challenge, essential to making real-time system components reusable, is developing schemes for composing subsystems with known predictability properties into larger systems while ensuring that those predictability properties remain intact. Composing systems to meet functionality requirements is already difficult; adding fault-tolerance and timeliness requirements escalates the difficulty further, although the potential benefits are immense.
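As a concrete illustration of the simplest form such a composition check could take, the sketch below treats each subsystem's predictability property as a processor-utilization contract and verifies that the composed workload still fits on one processor. It is a minimal sketch, assuming independent periodic tasks with deadlines equal to periods under preemptive EDF, for which total utilization not exceeding 1 is the exact schedulability condition; all task parameters are illustrative.

```c
/* Sketch: checking that two subsystems with known utilization
 * "contracts" can be composed on one processor without breaking
 * their timing guarantees.  Assumes independent periodic tasks,
 * deadlines equal to periods, and preemptive EDF scheduling,
 * for which total utilization <= 1 is the exact condition
 * (Liu & Layland 1973).  Names and numbers are illustrative. */
#include <stdio.h>

struct task { double wcet, period; };   /* worst-case execution time and period */

static double utilization(const struct task *t, int n)
{
    double u = 0.0;
    for (int i = 0; i < n; i++)
        u += t[i].wcet / t[i].period;
    return u;
}

int main(void)
{
    struct task subsystem_a[] = { { 2.0, 10.0 }, { 3.0, 20.0 } };   /* U = 0.35 */
    struct task subsystem_b[] = { { 5.0, 25.0 }, { 4.0, 40.0 } };   /* U = 0.30 */

    double u = utilization(subsystem_a, 2) + utilization(subsystem_b, 2);
    printf("composed utilization = %.2f -> %s\n",
           u, u <= 1.0 ? "timing guarantees preserved" : "composition rejected");
    return 0;
}
```

Real compositions must of course account for shared resources, fault-tolerance requirements, and end-to-end constraints, which is exactly what makes the research problem hard; the point of the sketch is only that the subsystems' predictability properties must be summarized in a form that a composition step can check.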

The next two sections discuss open systems and composability, both of which facilitate system evolution.

3.2 Open Real-Time Systems

Currently, real-time systems are created with specific goals in mind, where the system's tasks are clearly established during the design phase.

The real-time systems community needs to develop versatile and open real-time systems and applications that can allow multiple independently created real-time applications to run together on a single machine or group of machines. This architecture would enable users to purchase and use various applications, including those with real-time requirements, on their personal and professional computers, similar to how they currently use nonreal-time applications.

Creating an efficient real-time architecture that allows for open real-time computing can be challenging due to the uncertainty surrounding hardware characteristics. These characteristics, including processor speeds, caches, memory, buses, and I/O devices, may differ between machines and remain unknown until runtime.

The fundamental difficulty arises from the unknown blend of applications and their combined resource and timing demands, which can only be determined during runtime.

Perfect a priori schedulability analysis is therefore practically unachievable. Open real-time computing will require different and more adaptable approaches than the methods currently used to construct special-purpose real-time systems.
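One adaptable approach is to defer part of the analysis to runtime through on-line admission control. The sketch below is a minimal illustration rather than a prescribed mechanism: each application declares a per-period processor demand against a reference machine, the demand is rescaled by a speed factor calibrated on the actual machine at startup, and the application is admitted only if the running total stays within the schedulable utilization of a single preemptive EDF processor. All names and numbers are illustrative.

```c
/* Minimal sketch of on-line admission control for an open real-time
 * system.  Because the mix of applications and the speed of the host
 * machine are unknown until runtime, reference demands are rescaled by
 * a calibration factor measured at startup, and a new application is
 * admitted only if total utilization stays within the EDF bound of 1.0
 * on a single preemptive processor.  Names and numbers are illustrative. */
#include <stdbool.h>
#include <stdio.h>

static double speed_factor = 1.6;           /* this machine vs. reference, measured at boot */
static double admitted_utilization = 0.0;   /* demand of applications already admitted      */

bool admit(const char *name, double ref_wcet_ms, double period_ms)
{
    double u = (ref_wcet_ms / speed_factor) / period_ms;
    if (admitted_utilization + u > 1.0) {
        printf("reject %s (utilization would reach %.2f)\n", name, admitted_utilization + u);
        return false;                       /* caller might retry with a degraded QoS level */
    }
    admitted_utilization += u;
    printf("admit %s (utilization now %.2f)\n", name, admitted_utilization);
    return true;
}

int main(void)
{
    admit("video player", 20.0, 33.0);      /* ~0.38 on this machine */
    admit("audio mixer",   5.0, 10.0);      /* ~0.31 */
    admit("3-D renderer", 20.0, 33.0);      /* ~0.38: pushes the total past 1.0, rejected */
    return 0;
}
```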

Open, consumer real-time systems may differ from traditional ones in an important way: the most crucial factor may not be flawless execution, but rather the most cost-effective execution as perceived by the consumer. In other words, for non-safety-critical systems, the consumer may prefer a $50 video player application that occasionally drops frames over a $400 application that guarantees no frame drops. Both economic and correctness criteria are relevant for these systems.

3.3 Composability
Many real-time systems, such as defense systems like early warning aircraft, command and control systems, autonomous vehicles, missile (control) systems, and complex ship systems, are highly dynamic. These systems operate in fault-inducing and nondeterministic environments for long periods under strict time constraints. They need to be robust and deliver high real-time performance. They also need to evolve and utilize legacy components. The concept of composition has been recognized as crucial for these systems, but the focus has largely been on functional composition. Current research aims to develop the idea of composition across three interacting domains: function, time, and fault tolerance. Both offline and online solutions are necessary, with results that can be verified. Ultimately, this research will lead to adaptive, high-performance, fault-tolerant embedded systems that can dynamically address real-time constraints while providing guaranteed system-level performance and graceful degradation in the face of failures and time constraints. Any online composition must also adhere to time and fault-tolerance requirements and produce functional, timing, and fault-tolerant components that drive the system's actions.

In order to maintain reasonable costs in the short and long term, dynamic real-time systems should employ programming and operating environments that are vendor-neutral and portable. Additionally, they should utilize adaptive fault-tolerance techniques. The programming environment should possess effective analysis tools to ensure fault-tolerance and adherence to real-time constraints. The operating environment should support dynamic and flexible behavior within time constraints, and enable easy interoperability with commercially available products. Furthermore, it should facilitate porting to different platforms. The adaptive fault tolerance should be customizable by users and tailored for each application and function.

3.4 Software Engineering
Although software engineering has always been concerned with large-scale complex systems that have real-time requirements, most of the research and products in this field address only functional issues. Nonfunctional constraints such as timing and dependability are typically left to specialized versions that often do not make it into mainstream releases (as seen with the effort to extend CORBA for real-time systems). Real-time system engineers usually develop their own tools for each project because retrofitted tools are never fully satisfactory. Both communities rely on a static approach in which fixed requirements are mapped onto a known platform, without considering evolvability in either the hardware or the system requirements.

Software engineering needs a radical shift in perspective and approach to remain relevant in the new environment, given its limited success with real-time system problems so far. Time, dependability, and other QoS constraints should be integrated with functionality at all levels, from requirements specification to execution. Evolvability should be ensured by separating platform-dependent concerns from application concerns. Software should be structured into modules with interfaces that capture functionality, environmental assumptions, and conditional service guarantees. Software should also be designed to be adaptable and configurable, allowing tradeoffs among timeliness, precision, and accuracy as the environment changes. Additionally, timing constraints should be dynamically derived and imposed based on end-to-end requirements. The technologies needed to achieve this vision build upon previously isolated capabilities. In addition to language and metaprotocol issues, technologies such as formal methods, resource management algorithms, compilers, and schedulability analyzers, which today target static design environments, must be integrated and repackaged as runtime services so that QoS requirements can be ensured dynamically.
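To make the idea of interfaces that carry environmental assumptions and conditional service guarantees concrete, the sketch below attaches a simple timing contract to a component descriptor. The fields, the component, and the check are all hypothetical; the point is only that the contract becomes a first-class, machine-checkable part of the module's interface rather than living solely in off-line analysis documents.

```c
/* Sketch of a module interface that carries, alongside its function,
 * the environmental assumptions and the conditional service guarantee
 * under which its timing holds.  All names and numbers are illustrative. */
#include <stdbool.h>
#include <stdio.h>

struct timing_contract {
    double assumed_cpu_mhz;        /* environmental assumption                  */
    double max_input_rate_hz;      /* environmental assumption                  */
    double guaranteed_latency_ms;  /* guarantee, valid only if assumptions hold */
};

struct component {
    const char *name;
    struct timing_contract contract;
    void (*process)(const void *input, void *output);   /* functional interface */
};

/* A deployment or composition tool can check the contract against the
 * target platform before the component is installed. */
static bool contract_holds(const struct component *c, double cpu_mhz, double input_rate_hz)
{
    return cpu_mhz >= c->contract.assumed_cpu_mhz &&
           input_rate_hz <= c->contract.max_input_rate_hz;
}

static void filter_process(const void *in, void *out) { (void)in; (void)out; }

int main(void)
{
    struct component filter = { "noise filter", { 200.0, 1000.0, 2.0 }, filter_process };
    printf("%s: %.1f ms latency %s on this platform\n",
           filter.name, filter.contract.guaranteed_latency_ms,
           contract_holds(&filter, 400.0, 800.0) ? "guaranteed" : "not guaranteed");
    return 0;
}
```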

This research aims to enable the swift development, deployment, and evolution of intricate real-time software systems on any platforms while automatically adjusting to the platforms' traits and the operating environment.

3.5 The Science of Performance Guarantees

Classical real-time systems have offered offline, deterministic guarantees of meeting safety and real-time requirements, given assumptions about failures and the environment [Klein et al. 1993].

The set of algorithms and analysis available for these fixed, predictable assurances can be called a science of performance guarantees. As systems grow in size, become more dynamic, and are deployed in unpredictable and less safety-critical settings, an extended science of performance guarantees is necessary to aid the development of these novel systems.

A science of performance guarantees is needed to analyze dynamic real-time systems in unpredictable and uncontrollable environments, as well as when failures occur. Currently, simulation and testing are frequently used for analysis, but on-line admission control and dynamic guarantees have been utilized in certain cases for some time.

As real-time system technology is applied in areas such as stock market trading and multimedia, there is a growing need for probabilistic guarantees on general quality-of-service (QoS) requirements. Ensuring the delivery of every packet in a multimedia video stream, for instance, is both unnecessary and expensive. It is therefore crucial to develop a science of performance guarantees that enables accurate analysis of meeting various timing and delay specifications stated probabilistically. This expanded science may involve new extensions of queueing theory that specifically focus on meeting deadlines.
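As a small illustration of what a probabilistic timing specification looks like in practice, the sketch below estimates by simulation the probability that a video frame misses a 40 ms end-to-end deadline. The delay model (a fixed processing time plus an exponentially distributed network delay) and all the numbers are assumptions made purely for illustration; real analysis would rest on measured or derived distributions.

```c
/* Sketch: estimating a probabilistic guarantee by simulation.
 * A video frame must arrive within 40 ms; the end-to-end delay is
 * modeled here (purely as an illustrative assumption) as a fixed
 * processing time plus an exponentially distributed network delay.
 * The estimate answers a question of the form "is P(delay <= 40 ms)
 * at least 0.99?" rather than giving a hard, worst-case guarantee. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const double deadline_ms = 40.0;
    const double fixed_ms    = 12.0;   /* assumed decode + display time */
    const double mean_net_ms = 8.0;    /* assumed mean network delay    */
    const int    trials      = 1000000;

    srand(42);
    int misses = 0;
    for (int i = 0; i < trials; i++) {
        double u = (rand() + 1.0) / ((double)RAND_MAX + 2.0);  /* uniform in (0,1)  */
        double delay = fixed_ms - mean_net_ms * log(u);        /* exponential sample */
        if (delay > deadline_ms)
            misses++;
    }
    printf("estimated P(miss) = %.4f\n", (double)misses / trials);
    return 0;
}
```

Under these assumptions the analytic miss probability is exp(-(40 - 12)/8), roughly 0.03, so the stream would satisfy a "97% of frames on time" specification but not a "99%" one; that is exactly the kind of statement a probabilistic science of performance guarantees must be able to verify.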

Timing validation is a crucial focus of research on performance guarantees in real-time and embedded computing. Validation algorithms can be characterized by their complexity, robustness, and degree of success. One way to evaluate these algorithms is by their time complexity, such as running in constant time or in O(n) time, where n is the number of tasks in the system. Algorithms with constant or linear time complexity are well suited to on-line timing validation, whereas more intricate algorithms with pseudo-polynomial time complexity offer advantages in optimizing total schedule length and can be employed for off-line validation.
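A classic example of a validation test that runs in linear time, and is therefore usable on line, is the Liu and Layland utilization bound for rate-monotonic scheduling. The sketch below is a minimal illustration under the usual assumptions of that result (independent periodic tasks, deadlines equal to periods, preemptive fixed-priority scheduling); the task parameters are illustrative.

```c
/* Sketch of an O(n) validation test: the Liu & Layland utilization
 * bound for rate-monotonic scheduling of n independent periodic tasks
 * with deadlines equal to periods.  A task set is accepted if
 * U <= n(2^{1/n} - 1); the test runs in linear time, which makes it
 * usable on line, at the cost of rejecting some schedulable sets. */
#include <math.h>
#include <stdbool.h>
#include <stdio.h>

struct task { double wcet, period; };

bool rm_utilization_test(const struct task *t, int n)
{
    double u = 0.0;
    for (int i = 0; i < n; i++)
        u += t[i].wcet / t[i].period;
    return u <= n * (pow(2.0, 1.0 / n) - 1.0);   /* bound approaches ~0.693 as n grows */
}

int main(void)
{
    struct task set[] = { { 1.0, 4.0 }, { 2.0, 8.0 }, { 3.0, 20.0 } };   /* U = 0.65 */
    printf("utilization-bound test: %s\n",
           rm_utilization_test(set, 3) ? "schedulable" : "not proven schedulable");
    return 0;
}
```

The test is sufficient but not necessary: a task set that fails it may still be schedulable, which is exactly the pessimism discussed below.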

Every schedulability condition and validation algorithm relies on a workload model, and the algorithm's conclusion is correct provided that all of the model's assumptions hold for the system. A validation algorithm is robust if its conclusions remain correct even when some assumptions of its underlying workload model are not entirely accurate. Using a robust validation algorithm significantly reduces both the need to characterize applications and the run-time environment precisely and the effort required to analyze and measure individual applications in order to validate the workload model. For example, existing validation algorithms based on the periodic task model are robust: although the model assumes that the jobs in each task are released periodically and execute for their stated worst-case durations, such an algorithm remains correct in the presence of deviations from strictly periodic behavior, as long as jobs are not released more frequently or run longer than the model assumes.

Efficiency and robustness are easy to achieve if a high degree of success is not required of the validation test. A validation algorithm that is excessively pessimistic, declaring task sets unschedulable even when system resources are significantly underutilized, has a low degree of success. When such an algorithm is used, the scheduler may reject too many new tasks that are in fact schedulable and should have been accepted.
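A test with a higher degree of success, at the cost of pseudo-polynomial rather than linear complexity, is exact response-time (time-demand) analysis for fixed-priority scheduling. The sketch below, under the same assumptions as the previous example and with illustrative parameters, accepts a harmonic task set with utilization 1.0 that the utilization-bound test would pessimistically reject.

```c
/* Sketch of a pseudo-polynomial, exact validation test: response-time
 * (time-demand) analysis for fixed-priority preemptive scheduling of
 * independent periodic tasks with deadlines equal to periods, indexed
 * in decreasing priority (rate monotonic: shorter period first).
 * It accepts every task set the utilization bound accepts, and many
 * that the bound pessimistically rejects, at higher analysis cost. */
#include <math.h>
#include <stdbool.h>
#include <stdio.h>

struct task { double wcet, period; };

bool response_time_analysis(const struct task *t, int n)
{
    for (int i = 0; i < n; i++) {
        double r = t[i].wcet, prev = 0.0;
        while (r != prev) {                  /* iterate to a fixed point */
            prev = r;
            r = t[i].wcet;
            for (int j = 0; j < i; j++)      /* interference from higher-priority tasks */
                r += ceil(prev / t[j].period) * t[j].wcet;
            if (r > t[i].period)             /* deadline (= period) missed */
                return false;
        }
    }
    return true;
}

int main(void)
{
    /* Utilization 1.0: well above the Liu & Layland bound (~0.83 for n = 2),
     * yet schedulable because the periods are harmonic. */
    struct task set[] = { { 2.0, 4.0 }, { 4.0, 8.0 } };
    printf("exact test: %s\n",
           response_time_analysis(set, 2) ? "schedulable" : "unschedulable");
    return 0;
}
```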

There is a need for new research in timing validation technology because the current state of the art has several limitations that make it inadequate for many modern and future real-time systems. (1) The existing schedulability conditions and validation algorithms are not successful enough for applications with sporadic processing requirements, whose tasks have widely varying release times, execution times, and resource requirements. Additionally, the current technology cannot accurately account for the unpredictable behavior of hardware platforms or for the variable amounts of time and resources consumed by system software and application interfaces. As a result, it is overly pessimistic when used with sporadic applications, especially in large, open run-time environments that use commodity computers, networks, and system software.

(2) The validation algorithms currently in use are primarily designed for deterministic timing constraints, relying on deterministic workload and resource models. While these algorithms may be effective for certain cases, they may not be suitable for validating probabilistic timing constraints. Additionally, the lack of real-time benchmark applications makes it extremely difficult to accurately calibrate and validate the probability distributions assumed by these models. As a result, it is crucial for probabilistic validation algorithms to be robust and maintain accuracy even in situations where the models' assumptions are invalid.

(3) The majority of validation algorithms are designed for statically configured systems, meaning systems where applications are divided into partitions and processors and resources are assigned to those partitions in a fixed manner.

Recent advancements in validation aim at developing innovative methods, theories, and algorithms to overcome the aforementioned limitations.

3.6 Reliability and Formal Verification

Enhancing reliability is crucial as computers become more important in complex systems. Several methods have been developed for this purpose, including static analysis based on formal methods and scheduling theory, as well as dynamic analysis based on testing and run-time monitoring and checking. While these techniques are effective for small-scale systems individually, a common framework is needed to apply them uniformly to large-scale systems. With a common framework, a single system model can be specified and used for both static and dynamic analyses, reducing the effort required to develop multiple models for each technique. This approach also eliminates the need to check consistency between the models used by different techniques.

It is possible to develop a common framework by extending an existing real-time formalism, such as logics, automata and state machines, Petri nets, or process algebras. By combining these formalisms with scheduling, testing, and runtime monitoring techniques, a unified framework can be created. There have been some promising efforts to integrate process algebra and logics with scheduling theory, as well as generating tests and monitoring based on specifications. For instance, real-time processes and logics have been expanded with schedulability analysis, allowing for analysis of both scheduling and functional correctness using the same specification. Additionally, preliminary work has been done on testing real-time properties using formal specifications as oracles and automatically generating test suites. However, further research is needed to determine their effectiveness in practical applications.

In general, specifications do not capture the amount of detail found in implementations. It is therefore crucial to maintain traceability of real-time requirements from the application description down to the low-level implementation. To accomplish this, a multilevel specification technology is required that goes beyond abstract representations such as state machines or Petri nets. Rather than merely mapping a high-level abstraction to a detailed implementation, a multilevel specification mechanism should ensure that real-time requirements are preserved through conditions imposed on the low-level implementation. This technology should also enable runtime monitoring and checking to ensure correct system functioning; from multilevel specifications one can derive the conditions that need to be checked at runtime. Achieving reliability in complex real-time systems requires a deeper understanding of how to verify correctness during runtime. A multilevel specification technology can also be beneficial during the design phase, since it can help identify design errors through simulation and other techniques. Recent research has shown that simulation and model checking can be combined, with the advantage that the state space considered by the model checker is limited by the simulation trace being analyzed. Techniques such as multilevel specification and an appropriate design methodology, used together with model checking, are particularly important for ensuring that a low-level implementation is correct and satisfies its real-time requirements.
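As a very small illustration of runtime checking, the sketch below reduces a specification to a single derived condition, namely that every invocation of a processing routine completes within 5 ms, and checks that condition on each execution. The routine, the bound, and the workload are illustrative assumptions; a real monitor would check conditions systematically derived from a multilevel specification.

```c
/* Sketch of run-time monitoring and checking of a timing requirement:
 * each job of process_sample() is timed, and a violation of the bound
 * derived from the specification is reported instead of being caught
 * only by off-line analysis.  Bound and workload are illustrative. */
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <time.h>

static void process_sample(void)
{
    volatile double x = 0.0;                 /* stand-in for real work */
    for (int i = 0; i < 100000; i++)
        x += i * 0.5;
}

static double elapsed_ms(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6;
}

int main(void)
{
    const double bound_ms = 5.0;             /* condition derived from the spec */
    for (int job = 0; job < 10; job++) {
        struct timespec start, end;
        clock_gettime(CLOCK_MONOTONIC, &start);
        process_sample();
        clock_gettime(CLOCK_MONOTONIC, &end);
        double ms = elapsed_ms(start, end);
        if (ms > bound_ms)
            fprintf(stderr, "monitor: job %d violated %.1f ms bound (%.2f ms)\n",
                    job, bound_ms, ms);
    }
    return 0;
}
```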

3.7 General System Issues

Real-time systems encompass various critical research issues in architecture, communications, operating systems, and databases. To effectively tackle the key research challenges mentioned elsewhere in this paper (specifically performance guarantees, system evolution, and reliability), it is imperative to leverage specialized support from these areas. Moreover, each of these domains presents its own unique set of research problems.

A few open questions in each area include:
- changes in architecture to better support calculation of worst-case execution times;
- computation of worst-case execution time, including the impact of various types of caches;
- implementation and analysis of guarantee-based, time-constrained communication;
- efficient implementation of real-time multicasting;
- implementation of predictable real-time thread packages;
- empirical studies supporting the creation of real-time models;
- appropriate resource and workload characterization models for real-time systems;
- hardware-software co-design for real-time systems;
- the support needed for virtual environments and multimedia with significant real-time processing;
- the guarantees provided by real-time transactions; and
- suitable architectures and protocols for real-time databases to maintain the temporal validity of data.

3.8 Real-Time Multimedia

Multimedia computing and communication have the potential to revolutionize human-computer interaction, remote collaboration, entertainment, and knowledge acquisition. The future holds great potential for transmitting and processing continuous media in real time. Progress in this field will lead to the widespread use of distributed virtual environments in areas such as telemedicine, remote surgery, automated factory monitoring and control, and interactive personal advertisements.

Some of the challenging research issues involve specifying the predictability requirements accurately, with many of them being probabilistic in nature. One of the challenges is distinguishing between application-level predictability requirements and internal (sub)system-level predictability requirements, while maintaining end-to-end predictability.

Developing schemes for translating predictability requirements into mechanisms that can effectively fulfill those requirements is another challenge. Where algorithms exist that offer absolute guarantees for individual dynamic requests, assuming their worst-case needs are known, the difficulty lies in understanding the aggregate, system-level predictability characteristics of those algorithms, and in providing probabilistic guarantees when the resource needs of requests are themselves probabilistic. Achieving this in a layered, distributed, and heterogeneous system requires innovative methods for composing and integrating QoS guarantees.

One potential solution for the new analysis requirements involves the utilization of queueing theory, a recognized set of techniques and methods that aim to describe the resource-sharing behavior of systems and applications that involve a significant stochastic element. The challenge with queueing theory lies in its emphasis on equilibrium behavior and overall quality-of-service (QoS) measures like average response time. However, it does not provide a means to determine if individual applications meet their specific QoS specifications.

An important future challenge is to develop an integrated set of analysis tools that combines the focus on satisfying application quality-of-service (QoS) requirements from real-time scheduling theory with the capability of queueing theory to handle varied stochastic application and system behaviors. Moreover, this methodology should ensure predictable resource sharing among tasks with different QoS requirements. The resulting theory could be referred to as real-time queueing theory for guaranteed systems.
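To make the distinction concrete, the sketch below contrasts the classical, equilibrium-style QoS measure of an M/M/1 queue, its average response time 1/(mu - lambda), with the kind of per-request question a real-time queueing theory must answer: the probability that an individual response time exceeds a deadline d, which for a FCFS M/M/1 queue is exp(-(mu - lambda)d). The arrival rate, service rate, and deadline are illustrative.

```c
/* Sketch of the kind of question a "real-time queueing theory" must
 * answer.  Classical queueing theory gives the average response time
 * of an M/M/1 queue, 1/(mu - lambda); per-request QoS also needs the
 * probability that an individual response time exceeds a deadline d,
 * which for FCFS M/M/1 is exp(-(mu - lambda) d).  Numbers are
 * illustrative assumptions. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double lambda = 20.0;   /* assumed arrival rate, requests per second   */
    double mu     = 25.0;   /* assumed service rate, completions per second */
    double d      = 0.5;    /* deadline in seconds                          */

    double avg_response = 1.0 / (mu - lambda);       /* classical QoS metric */
    double p_miss       = exp(-(mu - lambda) * d);   /* per-deadline metric  */

    printf("average response time: %.3f s\n", avg_response);
    printf("P(response time > %.1f s) = %.4f\n", d, p_miss);
    return 0;
}
```

In this example the average response time is 0.2 s, comfortably below the 0.5 s deadline, yet about 8% of individual requests still miss it, which is precisely the information an average alone cannot reveal.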


3.9 Programming Languages

Programming languages, in general, do not adequately support real-time and other dependability requirements. Given the wide range of guarantees and scheduling schemes that would have to be expressed, this weakness is perhaps not surprising; nevertheless, the inability to program such properties is a significant obstacle to their adoption in industrial practice.
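In the absence of language support, timing behavior is typically hand-coded against operating-system services. The sketch below is one such hand-coded pattern, not a proposed language feature: a 10 ms periodic loop built on POSIX clock_nanosleep with absolute wake-up times, together with an explicit per-job deadline check. The period and the placeholder workload are illustrative.

```c
/* Sketch of how a timing requirement ends up being hand-coded when the
 * language offers no direct support: a 10 ms periodic task with an
 * explicit check that each iteration met its deadline. */
#define _POSIX_C_SOURCE 200112L
#include <stdio.h>
#include <time.h>

#define PERIOD_NS 10000000L                  /* 10 ms period = relative deadline */

static void add_ns(struct timespec *t, long ns)
{
    t->tv_nsec += ns;
    while (t->tv_nsec >= 1000000000L) { t->tv_nsec -= 1000000000L; t->tv_sec++; }
}

int main(void)
{
    struct timespec release, now;
    clock_gettime(CLOCK_MONOTONIC, &release);

    for (int job = 0; job < 100; job++) {
        /* ... the task's functional code would run here ... */

        clock_gettime(CLOCK_MONOTONIC, &now);
        add_ns(&release, PERIOD_NS);         /* next release is also this job's deadline */
        if (now.tv_sec > release.tv_sec ||
            (now.tv_sec == release.tv_sec && now.tv_nsec > release.tv_nsec))
            fprintf(stderr, "job %d missed its deadline\n", job);

        /* sleep until the next release instant (absolute time, so no drift) */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &release, NULL);
    }
    return 0;
}
```

Everything about the temporal behavior here, the period, the deadline, and the detection of misses, lives in application code rather than in constructs the compiler or runtime could analyze, which is exactly the gap described above.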

In order to accommodate all requirements, including emerging ones, it is important to find more effective structuring mechanisms for programming languages. The development of the OOP paradigm has brought about a distinction between functional and non-functional behaviors. The functional behavior is expressed within a computational model, but the computational model itself can be changed through programming at the metalevel. This process, known as reflection, allows for the addressing of behavioral aspects and has the potential to greatly improve the effectiveness of programming real-time systems. The challenge lies in being able to handle a wide range of requirements, timing guarantees, and scheduling approaches through metalevel programming.

3.10 Education
Teaching real-time systems involves teaching the science of performance guarantees. Ideally, this science is taught alongside the science of formal verification of logical correctness in an undergraduate computer science and engineering program, so that students learn to handle programs and systems both as mathematical objects and as components within larger systems, especially those that interact with real-world processes.

The teaching of real-time systems encompasses two distinct aspects. The first involves understanding the execution of a program in relation to time. This includes managing time as a fundamental resource, considering explicit time constraints, and developing suitable abstractions for execution time in performance-critical programs. Just as a course on data structures explores fundamental abstractions of physical memory and the decomposition of problems into basic operations on those abstractions, a real-time component within a computer science curriculum would emphasize organizing execution time to meet real-time constraints. One example is teaching program-structuring techniques (including data structures) that allow manipulation of execution time at an abstract level, such as the sieves and refinements from the imprecise-computation literature. This part of the curriculum primarily focuses on algorithmic concepts and builds upon existing analysis of algorithms. What sets it apart is the realization that traditional measures of execution time, such as asymptotic complexity, are insufficient for solving real-time problems; alternative techniques such as interval arithmetic and a calculus of execution-time intervals are necessary.
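The sieve-and-refinement structure from the imprecise-computation literature can be shown in a few lines. In the sketch below, a mandatory step produces a crude but usable answer, and optional refinement steps improve it only while a stated execution-time budget remains; the computation (Newton iterations for a square root) and the budget are illustrative.

```c
/* Sketch of an imprecise computation: a mandatory part produces an
 * acceptable result, and optional refinement steps improve it only
 * while budgeted execution time remains, trading result quality
 * against time.  Budget and computation are illustrative. */
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <time.h>

static double ns_since(struct timespec start)
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    return (now.tv_sec - start.tv_sec) * 1e9 + (now.tv_nsec - start.tv_nsec);
}

double sqrt_imprecise(double x, double budget_ns)
{
    struct timespec start;
    clock_gettime(CLOCK_MONOTONIC, &start);

    double r = x > 1.0 ? x / 2.0 : 1.0;      /* mandatory part: crude estimate */

    while (ns_since(start) < budget_ns)      /* optional part: refine while time remains */
        r = 0.5 * (r + x / r);               /* one Newton step */

    return r;
}

int main(void)
{
    printf("sqrt(2) ~= %.9f (within a 200000 ns budget)\n",
           sqrt_imprecise(2.0, 200000.0));
    return 0;
}
```

The result can be read at any instant after the mandatory part completes, so the structure makes execution time something the program can trade against precision rather than a fixed property of the algorithm.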

The second aspect is the management of logical and physical concurrency in time. This topic addresses the high level of concurrency present in real-time systems and has two main subthemes: the traditional study of cooperating sequential processes, and the specialized study of competition for shared resources among processes. The former involves dividing functions among multiple parallel processes and coordinating their synchronization and communication. The latter involves analyzing how sequential execution time is prolonged by competition for resources, and how to manage this dilation effectively so that performance guarantees can still be given.

The goal is to incorporate real-time system principles into current computer science curricula to provide a comprehensive understanding of the fundamental aspects involved in specifying, manipulating, and achieving performance guarantees. One suggestion is for the real-time systems community to create a set of companion monographs for courses, similar to the performance-modeling supplements produced by the Computer Measurement Group (CMG) and the ACM Special Interest Group on Measurement and Evaluation (SIGMETRICS). Additionally, incorporating practical training through industry projects would be advantageous.
