Is an RTOS really the best way to design embedded systems?

Originally published on: 2011-06-07

Recently I’ve been involved in a discussion on the LinkedIn Real-Time Embedded Engineering group, which I started with the question “Is an RTOS really the best way to design embedded systems?”

The discussion has swollen to well over 600 comments by now, which is some kind of a LinkedIn record. The discourse sometimes has a low signal-to-noise ratio, but I believe it is still interesting. I consider this discussion a continuation of the topic from my April blog post RTOS Considered Harmful. As before, my main point centers on the fundamental mismatch of applying the sequential programming paradigm (RTOS or superloop) to problems that are event-driven by nature.

I’m really curious what you, gentle reader, might think…

Discussion

4 Responses

  1. Leaving aside both the signal and the noise of the discussion, I find the “single long list of posts” format used on LinkedIn to be a right pain. If only there were some other mechanism that would a) cope better with discussions that branch into multiple parallel threads and b) allow easy quoting of previous contributions.

  2. IMO, the best solution is cooperative multitasking — a.k.a. a stack switcher. This is RTOS-like, but does not include time slicing. Each section of a thread runs to its logical endpoint, as would be done with a state-machine approach, and then the task “yields” control of the processor. The yield-point has the semantics of a function call, yield(), but within that call all other active tasks get a turn to run. Without the time slice, there is no concern about atomic operations. The downside is that each task has its own stack, so memory consumption can be high, but local (automatic) variables can be used without fear of re-entrancy issues; in fact, it’s best if all variables are local, stack-allocated variables, which reduces the need for a memory-management scheme.

    1. I also like the cooperative scheduler, but I mean an even simpler scheduler than you describe. Event-driven systems naturally process events to completion (Run to Completion, RTC), so the yield() function is unnecessary.

      The system is partitioned into “active objects”, each having an event queue, priority, and a state machine. The scheduler is engaged after every RTC step of any state machine to choose the next state machine to execute. The scheduler always chooses the highest-priority event queue that has any events to process. The scheduler then extracts the next event from this queue and dispatches it to the associated state machine. The state machine runs to completion, after which the scheduler runs and the cycle repeats.

      Please note that because the state machines always return to the scheduler after each RTC step, a single stack can be used to process all state machines (memory-friendly architecture).

      The scheduler can also very easily detect when all event queues are empty, at which point it can call the idle callback to let the application put the CPU and peripherals to a low-power sleep mode (power-friendly architecture).

      Given the simplicity, portability, and low resource consumption, this “vanilla” scheduler is very attractive. It allows you to partition the problem into state machines and execute them in an orderly manner. The task-level response of this scheduler is the longest RTC step in the whole system, but because event-driven state machines don’t block, RTC steps tend to be very short (typically just a few microseconds). Also, you can often break a longer RTC step into shorter pieces by posting an event to self and returning (the “Reminder” state pattern). The self-posted event then triggers the continuation of the longer processing.

      However, sometimes it is not practical to break up long RTC steps, and consequently the task-level response of the simple “vanilla” kernel might be too slow. In these cases you need a *preemptive* kernel. The big advantage of a preemptive kernel is that it effectively decouples high-priority tasks from low-priority tasks in the time domain: the timeliness of high-priority task execution is almost independent of the low-priority tasks. But of course there is no such thing as a free lunch. Preemptive kernels open up a whole new class of problems related to race conditions, so you need to be very careful about sharing any resources.

  3. I believe in a more hybrid approach. My research area right now is an event-driven implementation with ISRs for the obvious real-time response to events, AND the possibility of context-switching threads, but ONLY for heavy worker tasks that are passed data, produce a result, and never access any data or I/O themselves. Timer-driven workers that can be preempted are useful for tasks that require a lot of work. That way one can have a real-time event loop that always stays fast. All I/O and logic is done in a single event-driven thread, too.

    I too think RTOSes are grave overkill for a system that is single-core and does not intend to load any third-party code. People use RTOSes only because desktops use the same kind of task switching. But desktops do it for a completely different set of reasons (such as loading many arbitrary programs, running on multicore CPUs, etc.) that do not apply to many smaller embedded projects. And when one does step up the game, one can just as well use Linux instead of some crappy RTOS.

    So my personal take on RTOSes: completely unnecessary.
