Embedded Systems and GUIs
If you’re wondering what embedded systems could possibly have to do with GUIs, consider that just about every embedded system, just like every GUI, is by its very nature event-driven. In both cases, the primary function of the system is reacting to events. In embedded systems the events are different (e.g., time ticks or arrivals of data packets rather than mouse clicks and button presses), but the essential job is the same: reacting to events that arrive in a hard-to-foresee order and with hard-to-foresee timing.
The Hollywood Principle
Even the earliest GUIs, such as the original Mac or early versions of Windows, were structured according to the “Hollywood principle”, which means “Don’t call us, we’ll call you”. The “Hollywood principle” recognizes that the program is not really in control: the events are. So instead of pretending that the program is running the system, the system runs your program by calling your code to process events.
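To make the idea concrete, here is a minimal sketch in C of the “Hollywood principle” at work. All the names (gui_event_t, gui_run(), on_event()) are hypothetical and made up for illustration; the point is only that the framework owns main() and the event loop, while the application merely supplies the handler that gets called back.

```c
#include <stdio.h>

/* hypothetical GUI events delivered by the framework */
typedef enum { EVT_BUTTON_PRESS, EVT_MOUSE_CLICK, EVT_TIMER_TICK } gui_event_t;

/* application code: only reacts to events; it never polls or waits */
static void on_event(gui_event_t evt) {
    switch (evt) {
    case EVT_BUTTON_PRESS: printf("button pressed\n"); break;
    case EVT_MOUSE_CLICK:  printf("mouse clicked\n");  break;
    case EVT_TIMER_TICK:   printf("timer tick\n");     break;
    }
}

/* framework code: owns the event loop ("Don't call us, we'll call you") */
static void gui_run(void (*handler)(gui_event_t)) {
    gui_event_t demo[] = { EVT_MOUSE_CLICK, EVT_TIMER_TICK, EVT_BUTTON_PRESS };
    for (unsigned i = 0u; i < sizeof(demo) / sizeof(demo[0]); ++i) {
        handler(demo[i]);   /* the framework calls your code */
    }
}

int main(void) {            /* main() belongs to the framework, not the app */
    gui_run(&on_event);
    return 0;
}
```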
Inversion of Control
This inversion of control seems natural, I hope, and has served all GUI systems well. However, the concept hasn’t really caught on in the embedded space. The time-honored approaches are still either the “superloop” (main + ISR) or a traditional RTOS, neither of which really embodies the “Hollywood principle”.
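For contrast, here is a rough sketch of the time-honored “superloop” structure, with hypothetical flag and function names (tick_flag, rx_flag, process_tick(), process_packet()). Note that control is not inverted: the application owns the loop and polls flags set by interrupt service routines.

```c
#include <stdbool.h>

static volatile bool tick_flag;  /* set by a timer ISR (not shown)   */
static volatile bool rx_flag;    /* set by a UART/RX ISR (not shown) */

static void process_tick(void)   { /* ... handle a time tick ...       */ }
static void process_packet(void) { /* ... handle a received packet ... */ }

int main(void) {
    for (;;) {                   /* the application owns the main loop */
        if (tick_flag) { tick_flag = false; process_tick();   }
        if (rx_flag)   { rx_flag   = false; process_packet(); }
        /* otherwise spin (or sleep) until an ISR sets one of the flags */
    }
}
```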
RTOS vs. Framework
Embodying the “Hollywood principle” takes more than “just” an API, which is essentially what a traditional RTOS offers. What you typically need is a framework that provides the main body of the application and calls the code that you provide. Such event-driven real-time frameworks are not new. Today, virtually every design-automation tool for embedded systems incorporates a variant of such an event-driven framework. The frameworks buried inside these tools prove that the concept works very well in a very wide range of embedded systems.
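To illustrate, here is a hedged sketch of what such a framework-centric structure can look like in C. Every name here (the rtf_ prefix, rtf_post(), rtf_run(), blinky_dispatch()) is an assumption invented for this example, not any real framework’s API. The framework supplies main(), the event queue, and the dispatch loop; the application supplies only short, run-to-completion event handlers.

```c
#include <stdint.h>
#include <stdio.h>

typedef struct { uint16_t sig; } rtf_event_t;        /* event = signal (+ optional parameters) */
typedef void (*rtf_handler_t)(rtf_event_t const *);  /* type of an application event handler   */

enum { SIG_TIMEOUT = 1, SIG_BUTTON_PRESSED };

/* ---- application code: a run-to-completion handler, no blocking calls ---- */
static void blinky_dispatch(rtf_event_t const *e) {
    switch (e->sig) {
    case SIG_TIMEOUT:        printf("toggle the LED\n");    break;
    case SIG_BUTTON_PRESSED: printf("change blink rate\n"); break;
    default:                                                break;
    }
}

/* ---- framework code: normally shipped as a library, shown inline here ---- */
static rtf_event_t queue[8];   /* event queue, filled by ISRs and timers */
static unsigned    queue_len;

static void rtf_post(uint16_t sig) {           /* would be called from ISRs */
    if (queue_len < 8u) { queue[queue_len++].sig = sig; }
}

static void rtf_run(rtf_handler_t handler) {   /* the framework's event loop */
    for (unsigned i = 0u; i < queue_len; ++i) {
        handler(&queue[i]);                    /* framework calls the application */
    }
}

int main(void) {                               /* main() belongs to the framework */
    rtf_post(SIG_TIMEOUT);                     /* stand-ins for real interrupts   */
    rtf_post(SIG_BUTTON_PRESSED);
    rtf_run(&blinky_dispatch);
    return 0;
}
```

The structural difference from the superloop is that the application code never waits for anything; all blocking and queuing live inside the framework, which is exactly the inversion of control described above.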
My point is that a Real-Time Framework (RTF) should, and I believe eventually will, replace the traditional RTOS. What do you think?