RTOS, TDD and the “O” in the S-O-L-I-D rules


In Chapter 11 of the “Test-Driven Development for Embedded C” book, James Grenning discusses the S-O-L-I-D principles for effective software design. These rules have been compiled by Robert C. Martin and are intended to make a software system easier to develop, maintain, and extend over time.

The SOLID Principles

The acronym SOLID stands for the following five principles:

Single Responsibility Principle
Open-Closed Principle
Liskov Substitution Principle
Interface Segregation Principle
Dependency Inversion Principle

Open-Closed Principle and TDD

Out of all the SOLID design rules, the “O” rule (Open-Closed Principle) seems to me the most important for TDD, as well as the iterative and incremental development in general. If the system we design is “open for extension but closed for modification”, we can keep extending it without much re-work and re-testing of the previously developed and tested code. On the other hand, if the design requires constant re-visiting of what’s already been done and tested, we have to re-do both the code and the tests and essentially the whole iterative, TDD-based approach collapses. Please note that I don’t even mean here extensibility for the future versions of the system. I mean small, incremental extensions that we keep piling up every day to build the system in the first place.

Open-Closed Principle
and Blocking in an RTOS

So, here is my problem: RTOS-based designs are generally incompatible with the Open-Closed Principle. The fundamental reason is that RTOS threads use blocking for everything, from waiting on a semaphore to timed delays.

A blocked thread is unresponsive for the duration of the blocking, and the code that follows the blocking call is structured to handle the one event the thread was waiting for.

[Slide: Perils of Blocking in a Traditional RTOS (shared-state concurrency)]

For example, if a thread blocks and waits for a button press, the code that follows the blocking call handles the button. It is now hard to add a new event to this thread, such as reception of a byte from a UART, both because of the timing (waiting for user input takes unpredictably long) and because of the structure of the intervening code.

In practice, people keep adding new threads that can block and wait on the new events, but this often violates the “S” rule (Single Responsibility Principle). The added threads frequently share the same responsibility as the old threads and are highly coupled to them. This coupling requires sharing resources (a nightmare in TDD) and even more blocking with mutexes, etc.

Open-Closed Principle
and Event-Driven Approach

Compare this with the event-driven approach, in which the system processes events quickly without ever blocking. Extending such a system with new events is trivial and typically does not require re-doing existing event handlers. Therefore, such designs realize the Open-Closed Principle very naturally. The Single Responsibility Principle is also much easier to achieve, because related events can be grouped in one cohesive design unit. This design unit (an active object) also becomes a natural unit for TDD.

So, it seems to me that TDD should naturally favor event-driven approaches, such as active objects (actors), over a traditional blocking RTOS.

I’m really curious about your thoughts about this, as it seems to me quite fundamental to the success of TDD. I’m looking forward to an interesting discussion.


6 Responses

  1. I’ve written event-driven (QP) as well as RTOS-based code, but I have no experience with embedded TDD (I’ve just read Grenning’s enlightening book). Your post encourages me again to give TDD+QP a try, but I’d really like to see a practical example – e.g., your “Fly and Shoot” tutorial coming with tests and a TDD environment.

    1. That’s a fair request and QP is slowly getting there.

      The latest QP release 4.5.00 brings integration between QP and the Qt GUI framework. The QP-Qt integration can be used for rapid prototyping (virtual prototyping), simulation, and testing of deeply embedded software on the desktop, including building realistic user interfaces consisting of buttons, knobs, LEDs, dials, and LCD displays (both segmented and graphical). Moving embedded software development from the embedded target to the desktop eliminates the target-system bottleneck and is a critical step for TDD.

      The next step is updating the QM modeling tool to the QP 4.5.x level and extending it with the ability to launch external tools, such as make. This will allow running tests right from QM on the desktop.

      The final step is integration of a unit test framework into QP. Here, the most important component is the QS (Quantum Spy) software tracing facility, because it is ideal for reporting test results. Critically, QS has been specifically designed to run on deeply embedded targets, so the tests can be executed both on the desktop and on the target.

      So, here is a high-level game plan for QP. There will certainly be some examples and, as usual, an extensive Application Note about doing TDD with QP.

  2. Hi

    “So, it seems to me that TDD should naturally favour event-driven approaches”

    I use TDD extensively in embedded development. Indeed, I use an event-driven approach (probably because I don’t know anything better), but I always design interfaces to be RTOS- or framework-independent. This gives me more flexibility in porting to any possible environment and simplifies unit tests.
    Dropping every dependency on any framework gives you a really portable module design. Obviously we lose some performance and footprint, and we need extra integration code, but that is usually irrelevant, especially in rapid prototyping.
    E.g., please see the interface of an iso15765 implementation from one commercial company: http://www.simmasoftware.com/iso-15765-users-manual.pdf

    I am glad to read about support for unit testing in the QP framework. It should push the project forward.
    One more thing:
    For me, TDD is a more general-purpose tool.
    There is no tool that fits everything; most things are a question of practice, knowledge and, more importantly, of understanding your tool. The TDD approach is nothing more than a methodology; by itself, without tools and a build environment, it is nothing. A methodology does not favor any approach, but ‘practice’, ‘tools’, ‘economy’ and ‘today’s technology’ may favor some approaches, especially event-driven. But remember that event-driven will defend itself (my opinion)! I do not like pushy advocacy.

    “This cohesion requires sharing resources (a nightmare in TDD) and even more blocking with mutexes, etc.”
    so, in my opinion, there is no nightmare in TDD:
    1. where TDD does not fit, do not use it; understand your tool
    2. TDD itself is a methodology; unit testing with TDD needs isolation, and if the technology and experience in building an isolation environment are not at an acceptable level, then TDD is not a solution

    Do you build your environment using mocks? If not, I recommend it, because (in my opinion) TDD without mocks is a real nightmare.
    I probably do not understand the whole context of the “nightmare problem”; just some thoughts and some advocacy of TDD 🙂

    1. Update: As of May 2017, Quantum Leaps has released the QUTest unit testing harness (a.k.a. unit testing framework). Unlike other existing unit testing harnesses for embedded systems (e.g., Unity or CppUTest) QUTest is not based on xUnit that was originally designed to run tests on host computers. Instead, QUTest is geared towards unit testing of deeply embedded systems and event-driven systems in particular.

      One of the main advantages of QUTest over other traditional unit testing frameworks is that it allows you to replace the traditional “mock object” test double with a much simpler “spy object” test double. The process of testing the same CUT (Code Under Test) with mock objects under Unity and spy objects under QUTest is described in detail in the QUTest Tutorial.

      Here is a list of QUTest unique features:

      – QUTest separates the execution of the CUT (Code Under Test) from checking of the “test assertions”. The embedded target is concerned only with running a test fixture that exercises the CUT and produces QP/Spy™ trace, but it does not check the “test assertions”. Checking the “test assertions” against the expectations is performed on the host computer by means of test scripts.

      – The QUTest approach is more intuitive for embedded developers, because it is conceptually like automated “debugging by printf” that most embedded developers use extensively. As it turns out, this approach also simplifies the development of all sorts of test doubles, including mocks, without breaking encapsulation of the CUT.

      – QUTest is a unique test harness on the embedded market that supports scripting. QUTest test scripts run on the Host, which skips compilation and uploading the code to the Target and thus shortens the TDD micro-cycle.

      NOTE: QUTest supports test scripts written in Python (2.7 or 3.x)

      – QUTest supports resetting the Target for each individual test, if needed. This goes far beyond the test setup() and teardown() functions that other test harnesses offer (and which QUTest supports as well). A clean reset of the Target avoids erroneous tests that implicitly rely on side effects from previously executed code. This is particularly important for embedded systems and for state machines, so that each test can start from a known reset condition.

      – QUTest supports testing Design by Contract (assertions in C or C++, not to be confused with “test assertions”) in the CUT. This is a carefully designed, unique feature of QUTest not available in other test harnesses. A successful test of DbC might actually mean breaking an assertion in the Target code.

      – QUTest test fixtures that run on the Target do not require dynamic memory allocation (malloc()/free() in C or new/delete in C++). This means that you don’t need to commit any of your precious embedded RAM to the heap (you can set the heap size to zero) and you don’t need to link the heap management code. Avoiding dynamic memory allocation is one of the best practices of real-time embedded programming, which you don’t need to compromise to run QUTest.

      – QUTest test fixtures that run on the Target do not require non-local jumps (setjmp()/longjmp() in C or throw/catch in C++), which other test harnesses need to discontinue failing tests. QUTest™ test fixtures do not need to discontinue failing tests, because they don’t check “test assertions”, so a test fixture does not “know” whether it is failing or passing. Should a test fixture crash on the Target, it simply waits for the Target reset commanded by a test script.

      – QUTest test fixtures can be based on the actual application code. For example you can reuse the same main() function in a test fixture and in your final application. This means that you can either grow your test fixture into a final application through TDD, or you can more easily add unit tests to an existing application.

      NOTE: Even though QUTest is particularly suitable for running tests on deeply embedded targets, it also fully supports running the same tests on your host computer (Windows, Linux, and MacOS are supported). In fact, running the tests as much as possible on the host and thus avoiding the target-hardware bottleneck is the highly recommended best-practice of embedded TDD. QUTest supports fully-automated unit testing, both on the embedded target and on the host computer.

  3. Hi,

    I have some doubts that equating “RTOS-based designs” with “blocking, waiting, …” is right. Typically, an event-triggered RTOS provides everything needed to implement a real-time application in an event-triggered fashion. This is different in time-triggered environments, of course. However, such systems are inherently different from event-triggered systems, and dealing with non-periodic events is really problematic – in most cases you can only poll such events. But such systems are simply not suited for interactive applications that must react to button presses. So, IMO this is not a matter of “RTOS-based design” but of application design in general and of choosing the right paradigm for the application.

    1. Sure, you can use a traditional RTOS in many different ways, including a truly event-driven approach, where you structure every thread as an event loop and disallow blocking in the event handlers. (This is exactly one of the best practices recommended by concurrency experts, see https://www.state-machine.com/active-object/#BestPractices ).

      But then you have to discard 90% of the RTOS, because you must disallow all the *blocking* mechanisms (except perhaps the message queue used in the event loop) that are fundamental to the RTOS. So what’s the point of using the RTOS in the first place…
