Embedded and Real Time Systems by Bakshi Free Download
Are you looking for a comprehensive guide on embedded and real time systems? Do you want to learn about the hardware and software components, the modeling and design techniques, and the practical issues of these systems? If yes, then you have come to the right place. In this article, we will review the book Embedded and Real Time Systems by Bakshi, which is a free download available online. This book covers all the essential topics of embedded and real time systems, from the basics to the advanced concepts. It also provides numerous examples, case studies, exercises, and references for further reading. Whether you are a student, a researcher, or a practitioner, this book will help you gain a solid understanding of embedded and real time systems.
What are embedded and real time systems?
An embedded system is a computer system that is designed to perform a specific function within a larger system or environment. It usually has limited resources, such as memory, processing power, or battery life. It also has strict requirements for reliability, performance, or security. Some examples of embedded systems are smartphones, smart watches, digital cameras, medical devices, industrial controllers, etc.
A real time system is a computer system that has to respond to events or stimuli within a specified time limit. The correctness of a real time system depends not only on the logical results, but also on the timeliness of the results. A real time system can be classified into two types: hard real time and soft real time. A hard real time system has to meet all its deadlines without any exception, otherwise it may cause catastrophic consequences. A soft real time system can tolerate some occasional deadline misses, but it may degrade the quality of service or user experience. Some examples of real time systems are air traffic control systems, video games, multimedia applications, etc.
Why are embedded and real time systems important?
Embedded and real time systems are important because they are ubiquitous in our daily lives and they enable many critical applications that affect our safety, health, entertainment, productivity, etc. For instance:
Embedded systems power many smart devices that we use every day, such as phones, watches, cameras, etc. They provide us with various functionalities, such as communication, navigation, computation, etc.
Real time systems control many physical processes that require timely responses, such as air traffic control, automotive systems, robotics, etc. They ensure the safety and efficiency of these processes.
Embedded and real time systems also support many emerging technologies that have great potential for innovation and social impact, such as Internet of Things (IoT), artificial intelligence (AI), cyber-physical systems (CPS), etc. They enable the integration of computation, communication, sensing, actuation, learning, etc.
What are the challenges and opportunities of embedded and real time systems?
Embedded and real time systems face many challenges and opportunities in their design, development, and deployment. Some of the main challenges are:
Resource constraints: Embedded and real time systems have to operate with limited resources, such as memory, processing power, battery life, etc. They have to optimize their resource utilization and manage their resource allocation.
Complexity: Embedded and real time systems have to deal with complex functionalities, interactions, environments, etc. They have to cope with uncertainty, variability, heterogeneity, etc.
Quality attributes: Embedded and real time systems have to satisfy various quality attributes, such as reliability, performance, security, safety, etc. They have to meet their functional and non-functional requirements and handle possible faults, errors, or attacks.
Some of the main opportunities are:
Advances in hardware and software technologies: Embedded and real time systems can benefit from the advances in hardware and software technologies, such as multicore processors, memory technologies, wireless communication, cloud computing, etc. They can leverage these technologies to improve their capabilities, scalability, flexibility, etc.
Advances in modeling and design techniques: Embedded and real time systems can benefit from the advances in modeling and design techniques, such as finite state machines, Petri nets, UML, model checking, etc. They can use these techniques to facilitate their analysis, verification, validation, etc.
Advances in application domains: Embedded and real time systems can benefit from the advances in application domains, such as IoT, AI, CPS, etc. They can explore new possibilities and challenges in these domains and create novel solutions and services.
Hardware Components of Embedded and Real Time Systems
In this section, we will discuss the hardware components of embedded and real time systems: processors, memory, I/O devices and architectures, and communication structures and protocols.
Processors
A processor is the core component of an embedded or real time system that executes the instructions of a program. Processors can be divided into general-purpose processors (GPPs) and application-specific processors (ASPs). A GPP can execute a wide variety of programs for different applications and usually has a rich instruction set architecture (ISA) that supports many operations and addressing modes. GPPs are further divided into microprocessors (MPUs), which rely on separate memory for storing data and instructions, and microcontrollers (MCUs), which integrate memory and peripherals with the processor core on a single chip. An ASP is designed for a specific application or function and usually has a leaner ISA that supports only the operations the application needs. Two common examples are digital signal processors (DSPs) and field-programmable gate arrays (FPGAs). A DSP is specialized for processing digital signals such as audio and video, with dedicated hardware units for performing arithmetic on data streams. An FPGA is not a fixed processor but an array of programmable logic blocks that can be configured to implement almost any logic function, offering high flexibility and parallelism for custom logic circuits.
Memory
A memory is a component of an embedded or real time system that stores data and instructions for the processor. Memory can be classified into volatile memory, which loses its contents when the power supply is turned off, and non-volatile memory, which retains them; non-volatile memories such as ROM, EEPROM, and flash typically hold program code and configuration data. Volatile memory usually offers high speed at relatively low cost and is further divided into random access memory (RAM) and cache memory. A RAM allows read and write operations on any location with equal access time and comes in two forms: static RAM (SRAM), which uses flip-flops to store each bit and offers fast access at the cost of higher power consumption and larger area, and dynamic RAM (DRAM), which uses capacitors to store each bit and offers lower power consumption and smaller area at the cost of slower access. A cache memory is a small, fast memory that holds frequently used data or instructions for quicker access by the processor; it is commonly split into an instruction cache (I-cache) for instructions and a data cache (D-cache) for data.
I/O Devices and Architectures
An I/O device is a component of an embedded or real time system that interacts with the external environment or with other systems. I/O devices are classified into input devices and output devices. An input device receives data or signals from the outside and can be analog or digital: an analog input device receives continuous signals such as temperature, pressure, or voltage, and usually requires an analog-to-digital converter (ADC) to convert them into digital form; a digital input device receives discrete signals, for example from a keyboard, mouse, or switch, and usually needs no ADC. An output device sends data or signals to the outside and can likewise be analog or digital: an analog output device produces continuous signals such as sound, light, or current, and usually requires a digital-to-analog converter (DAC) to convert digital values into analog form; a digital output device produces discrete signals, for example to a display, printer, or LED, and usually needs no DAC.
An I/O architecture is the part of an embedded or real time system that connects the I/O devices to the processor and the memory. I/O architectures are classified into memory-mapped I/O and port-mapped I/O. In memory-mapped I/O, each I/O device is assigned a unique address in the memory address space, so the processor accesses the devices with the same instructions and buses it uses for memory; this is simple and uniform, but it consumes memory address space and can cause address conflicts. In port-mapped I/O, each device is assigned an address in a separate I/O address space, and the processor accesses the devices with special I/O instructions and buses; this avoids address conflicts and saves memory space, but it requires extra instructions and adds complexity.
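As a concrete illustration of memory-mapped I/O, the sketch below drives an LED by writing to GPIO registers through volatile pointers. The register addresses, register names, and bit position are hypothetical placeholders chosen for illustration; on real hardware they come from the microcontroller's datasheet or memory map.

```c
#include <stdint.h>

/* Hypothetical register addresses for illustration only; a real memory map
 * is defined by the target microcontroller's datasheet. */
#define GPIO_DIR_REG   ((volatile uint32_t *)0x40020010u)  /* direction register */
#define GPIO_DATA_REG  ((volatile uint32_t *)0x40020014u)  /* data register      */
#define LED_PIN        5u                                   /* hypothetical pin   */

static void led_init(void)
{
    *GPIO_DIR_REG |= (1u << LED_PIN);    /* configure the LED pin as an output */
}

static void led_on(void)
{
    *GPIO_DATA_REG |= (1u << LED_PIN);   /* drive the LED pin high */
}

static void led_off(void)
{
    *GPIO_DATA_REG &= ~(1u << LED_PIN);  /* drive the LED pin low */
}
```

The volatile qualifier tells the compiler that every access to these addresses has side effects and must not be optimized away or reordered, which matters because a device register, not ordinary RAM, sits behind each pointer. Under port-mapped I/O the same operation would instead use dedicated I/O instructions, such as the in and out instructions on x86.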
Software Components of Embedded and Real Time Systems
In this section, we will discuss the software components of embedded and real time systems: real time operating systems, task scheduling algorithms, resource sharing and access control policies, and concurrent programming with POSIX.
Real Time Operating Systems
A real time operating system (RTOS) is a software component of an embedded or real time system that manages the hardware resources and provides services to the application programs. An RTOS has to meet the timing constraints of the real time tasks and ensure their correctness and predictability. RTOSs can be classified into hard and soft. A hard RTOS guarantees to meet every deadline of the hard real time tasks without exception; it usually has low overhead and high responsiveness, but it may lack flexibility and functionality. A soft RTOS tries to meet most deadlines of the soft real time tasks and tolerates occasional misses; it usually offers more flexibility and functionality, but at the cost of higher overhead and lower responsiveness.
An RTOS typically consists of three main components: the kernel, the middleware, and the application programming interface (API). The kernel is the core of the RTOS and performs basic functions such as task management, interrupt handling, and timer management. Kernels come in two forms: a monolithic kernel integrates all functions in a single module, which gives high performance and simplicity but lower modularity and reliability; a microkernel separates the functions into modules that communicate through message passing, which gives high modularity and reliability but lower performance and higher complexity. The middleware is an optional layer that provides additional functions such as communication protocols, file systems, and graphical user interfaces (GUIs); it enhances the functionality and portability of the RTOS but increases overhead and resource consumption. The API defines the functions and data structures that application programs use to access the services of the RTOS. A standard API follows a common specification such as POSIX or OSEK, which improves compatibility and interoperability but may limit optimization and customization. A proprietary API is specific to a particular RTOS or vendor, such as VxWorks or QNX, which can exploit the features and advantages of that RTOS but reduces compatibility and interoperability.
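To make the idea of a standard API concrete, the sketch below implements a periodic task using only POSIX timing calls, so it should run unchanged on any POSIX-conforming RTOS or Linux system. The 10 ms period and the do_control_step function are placeholders chosen for illustration; sleeping until an absolute release time with clock_nanosleep and TIMER_ABSTIME avoids the drift that relative delays accumulate.

```c
#define _POSIX_C_SOURCE 200809L   /* request POSIX declarations */
#include <time.h>

#define PERIOD_NS 10000000L   /* 10 ms period, an arbitrary value for illustration */

extern void do_control_step(void);   /* placeholder for the task's periodic work */

void periodic_task(void)
{
    struct timespec next;

    /* Take the first release time from a monotonic clock. */
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (;;) {
        do_control_step();   /* the real time work performed each period */

        /* Advance the release time by exactly one period, carrying nanosecond overflow. */
        next.tv_nsec += PERIOD_NS;
        while (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec  += 1;
        }

        /* Sleep until the absolute release time instead of for a relative delay. */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
}
```

Because only standard POSIX calls appear here, the same source can be compiled for any operating system that implements the POSIX timing interfaces, which is exactly the portability benefit of a standard API described above.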
Task Scheduling Algorithms
A task scheduling algorithm is a software component of an embedded or real time system that determines the order and timing of the execution of the real time tasks. It has to optimize performance metrics such as utilization, throughput, and response time while satisfying timing constraints such as deadlines and periods. Scheduling algorithms can be classified as static or dynamic. A static scheduling algorithm determines the schedule before execution, assuming that the characteristics and behavior of the tasks are known and fixed in advance; it has low overhead and high predictability, but it may lack flexibility and adaptability. A dynamic scheduling algorithm determines the schedule at run time and adapts to changes and uncertainties in the task set; it offers high flexibility and adaptability, but at the cost of higher overhead and lower predictability.
Scheduling algorithms can also be classified as preemptive or non-preemptive. A preemptive algorithm allows a higher priority task to interrupt and suspend a lower priority task that is currently executing, which improves the responsiveness and timeliness of the real time tasks but incurs overheads such as context switching and synchronization. A non-preemptive algorithm lets the running task finish before another task can start, which avoids those overheads but may degrade responsiveness and timeliness.
Some examples of task scheduling algorithms are:
Rate-monotonic (RM) algorithm: A static preemptive scheduling algorithm that assigns priorities to the real time tasks based on their periods. The shorter the period, the higher the priority.
Earliest deadline first (EDF) algorithm: A dynamic preemptive scheduling algorithm that assigns priorities to the real time tasks based on their absolute deadlines. The earlier the deadline, the higher the priority (see the selection sketch after this list).
Least laxity first (LLF) algorithm: A dynamic preemptive scheduling algorithm that assigns priorities to the real time tasks based on their laxity, the difference between the time remaining until the deadline and the remaining execution time. The smaller the laxity, the higher the priority.
Round-robin (RR) algorithm: A scheduling algorithm that assigns equal priorities to the real time tasks and allocates each of them a fixed time slice for execution; when a task's slice expires, it is preempted and the next task in a circular order takes over.
First come first served (FCFS) algorithm: A non-preemptive scheduling algorithm that executes the real time tasks in the order of their arrival. The earlier the arrival time, the higher the priority.
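To make the priority rules above concrete, here is a minimal sketch of the dispatcher step of an EDF scheduler: it scans a ready list and picks the task with the earliest absolute deadline. The rt_task structure and the linear scan are simplifying assumptions for illustration; a real kernel would use an efficient priority queue and also handle ties, blocking, and preemption points.

```c
#include <stddef.h>
#include <stdint.h>

/* Simplified task descriptor for illustration; a real task control block
 * carries much more state (stack pointer, status, period, and so on). */
struct rt_task {
    const char *name;
    uint64_t    abs_deadline;   /* absolute deadline in timer ticks */
    int         ready;          /* nonzero if the task is ready to run */
};

/* Earliest-deadline-first selection: scan the ready tasks and return the
 * one whose absolute deadline is closest. Returns NULL if none is ready. */
struct rt_task *edf_pick_next(struct rt_task *tasks, size_t n)
{
    struct rt_task *best = NULL;

    for (size_t i = 0; i < n; i++) {
        if (!tasks[i].ready)
            continue;
        if (best == NULL || tasks[i].abs_deadline < best->abs_deadline)
            best = &tasks[i];
    }
    return best;
}
```

The same skeleton yields rate-monotonic scheduling if the comparison is made on the fixed task period instead of the absolute deadline, since RM priorities never change at run time.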
Resource Sharing and Access Control Policies
A resource sharing and access control policy is a software component of an embedded or real time system that manages the access and allocation of the shared resources among the real time tasks. A shared resource is a resource that can be used by multiple real time tasks, such as memory, processor, I/O device, etc. A resource sharing and access control policy has to optimize the resource utilization and avoid resource conflicts and deadlocks. A resource sharing and access control policy can be classified into two types: non-blocking policies and blocking policies. A non-blocking policy is a policy that does not allow a real time task to block or wait for a shared resource that is currently used by another real time task. A non-blocking policy can improve the responsiveness and predictability of the real time tasks, but it may cause resource wastage and starvation. A blocking policy is a policy that allows a real time task to block or wait for a shared resource that is currently used by another real time task. A blocking policy can improve the resource utilization and fairness of the real time tasks, but it may cause priority inversion and deadlock.
Some examples of resource sharing and access control policies are:
No preemption policy: A non-blocking policy that does not allow a real time task to preempt another real time task that is currently using a shared resource. The real time task has to wait until the shared resource is released by the other real time task.
Preemptive resume policy: A non-blocking policy that allows a real time task to preempt another real time task that is currently using a shared resource. The preempted real time task has to resume its execution from where it was interrupted when the shared resource is released by the other real time task.
Preemptive restart policy: A non-blocking policy that allows a real time task to preempt another real time task that is currently using a shared resource. The preempted real time task has to restart its execution from the beginning when the shared resource is released by the other real time task.
Priority inheritance protocol (PIP): A blocking policy that allows a real time task to inherit the priority of another real time task that is blocked by it due to a shared resource. The priority inheritance is transitive and temporary.
Priority ceiling protocol (PCP): A blocking policy that assigns each shared resource a priority ceiling equal to the highest priority of the real time tasks that can access it. A real time task may lock a shared resource only if its priority is higher than the priority ceilings of all resources currently locked by other tasks (see the POSIX mutex sketch after this list).
Stack-based resource allocation protocol (SRP): A blocking policy that assigns each shared resource a ceiling equal to the highest preemption level of the real time tasks that can access it, where a task's preemption level is a static value that grows with its priority (for example, the shorter the relative deadline, the higher the preemption level). A real time task is allowed to start executing, or to preempt the running task, only when its preemption level is higher than the ceilings of all currently locked resources.
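The priority inheritance and priority ceiling policies map directly onto the mutex protocol attributes defined by POSIX, as the sketch below illustrates. Both PTHREAD_PRIO_INHERIT and PTHREAD_PRIO_PROTECT are optional POSIX features whose availability depends on the platform, and the ceiling value of 30 is an arbitrary placeholder that must be at least the highest priority of any task that locks the mutex.

```c
#include <pthread.h>

/* Create a mutex that uses the priority inheritance protocol (PIP). */
int make_pip_mutex(pthread_mutex_t *m)
{
    pthread_mutexattr_t attr;
    int rc;

    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    rc = pthread_mutex_init(m, &attr);
    pthread_mutexattr_destroy(&attr);
    return rc;
}

/* Create a mutex that uses the priority ceiling protocol (PCP); the ceiling
 * must be at least the highest priority of any task that may lock the mutex
 * (30 here is just a placeholder). */
int make_pcp_mutex(pthread_mutex_t *m)
{
    pthread_mutexattr_t attr;
    int rc;

    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_PROTECT);
    pthread_mutexattr_setprioceiling(&attr, 30);
    rc = pthread_mutex_init(m, &attr);
    pthread_mutexattr_destroy(&attr);
    return rc;
}
```

With either protocol selected, locking and unlocking the mutex proceed through the usual pthread_mutex_lock and pthread_mutex_unlock calls; the operating system applies the inheritance or ceiling rule transparently to bound priority inversion.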
Concurrent Programming and POSIX
Concurrent programming is a software technique used in an embedded or real time system to structure the application as multiple tasks or threads that execute concurrently and cooperate through synchronization and communication mechanisms. POSIX (Portable Operating System Interface) defines a standard API, including the pthreads interface, that lets such concurrent programs be written portably across operating systems.
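As a minimal illustration of concurrent programming with the POSIX threads API, the sketch below starts two worker threads that increment a shared counter protected by a mutex; the thread count and iteration count are arbitrary values chosen for illustration.

```c
#include <pthread.h>
#include <stdio.h>

#define NUM_ITERATIONS 100000   /* arbitrary workload per thread */

static long            counter = 0;
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

/* Each worker increments the shared counter under the mutex so that
 * concurrent updates do not race with each other. */
static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < NUM_ITERATIONS; i++) {
        pthread_mutex_lock(&counter_lock);
        counter++;
        pthread_mutex_unlock(&counter_lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);

    pthread_join(t1, NULL);   /* wait for both workers to finish */
    pthread_join(t2, NULL);

    printf("counter = %ld\n", counter);   /* prints 200000 */
    return 0;
}
```

Compiled against a POSIX threads library (for example, gcc example.c -lpthread), the program always prints 200000 because the mutex serializes the increments; removing the lock would make the result unpredictable due to the data race.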