Hello and welcome to the "Modern Embedded Systems Programming" course. I'm Miro Samek, and in this lesson, I'd like to discuss testing and its central role in software development. You will also see some ways of testing embedded software on the host computer and embedded targets and how to obtain the necessary tools. My main goal today is to explain testing in the broader context of software development, which, in its essence, is creating complex systems. So, let's examine how complex systems generally come about because our understanding of this process has dramatically changed. The change I'm talking about is one of the most profound in all science and concerns the most complex systems we know of, living organisms. Before Charles Darwin published "On the Origin of Species" in 1859, the established thinking was that all living things were created in a divinely inspired act in their current, perfect, and final form. The Darwinian theory of evolution by natural selection offered an entirely different explanation. All living organisms came into being through a gradual, incremental, and cumulative process of evolution from simpler to more complex forms. The critical and most radical idea at the time was that of natural selection that constantly and relentlessly weeds out the less-fit adaptations, leading to the constant struggle for existence. Following Darwin's publication, the most significant intellectual challenge was imagining and accepting that such a process could produce complexity on this scale. Nowadays, science has come full circle. Not only is evolution capable of producing complexity, but in fact, it is now recognized as the only process through which anything complex can be created anywhere, not just in biology. 
A little-known scientific fact is that shortly after Darwin's publication, another lesser-known author published "On the Origin of Software by Means of Artificial Selection" with the subtitle "Preservation of favored code in the struggle to survive testing." In this work, which frankly was far ahead of its time, the author, yours truly, Miro Samek, generalized Darwin's ideas to software development. Here are the three most important findings from that forgotten book: First, "A complex system that works is invariably found to have evolved from a simple system that worked." Second, "A complex system designed from scratch never works and cannot be made to work. You have to start over, beginning with a working simple system." And third (actually, a corollary from the first two), "In embedded systems, nothing works until everything works." I'll come back to this fact later. Unfortunately, that important work has been forgotten. By the time software was actually invented, people still believed in a divinely inspired act of creation known as the waterfall process. Such a big up-front design culminating with big-bang final testing cannot work because it lacks the critical evolutionary aspects of incremental development combined with constant selection, which are indispensable for anything complicated to actually work. Well, it seems that every discipline must repeat the same mistakes and separately rediscover the universal rules for creating complexity. In software, the importance of evolution by selection was rediscovered and forgotten several times. But only the most recent movement, called "agile software development," fully acknowledged and embraced testing as the primary selection mechanism that must be applied continuously, not just to weed out the bad adaptations (known in software as "bugs") but to guide the whole development process. This use of testing for guidance is also known as test-driven development (TDD). 
Other agile best practices, such as continuous integration (CI) and continuous delivery (CD), might seem like recent inventions. However, they merely acknowledge that a software system must keep working continuously like any complex system. If it ever stops working, making it functional again is like trying to resurrect a dead organism. With these general insights, the question is no longer whether evolution coupled with strong selection applies to software. We know for a fact that this is the only way. The question then becomes how to evolve the software and what kind of selection mechanism testing should provide. Obviously, in software development, we don't have eons of deep evolutionary time to emulate natural selection, and the customers certainly won't appreciate testing all iterations of the software on them in the "wild." But natural selection is not the only way. We can use artificial selection, which is how humans evolved and transformed all domesticated plants and animals. Artificial selection works on much shorter time scales. However, artificial selection demands much more from us than natural selection does, because now we must take full responsibility for what we want to select and precisely how we do it. For that, we obviously need a suitable starting point for the software evolution: the first working version. But we also need to create an artificial "habitat" for the software to "live in." Of course, this artificial "habitat" depends on the type of testing you wish to perform. Today, I will focus on "unit testing," which involves the smallest software components that can be isolated and individually tested, such as functions and modules in C. Unit testing is (or at least should be) the most extensive type of testing the original developers perform as they work on their code. Generally, the lower the testing level, the more extensive the software "habitat" must be. 
For unit testing, this artificially created "habitat" is called a "testing harness" or "testing framework." Several such unit testing harnesses exist, most based on the heritage of the original xUnit. Harnesses that made their way into embedded software include CppUTest and Unity, both used in the book "Test-Driven Development for Embedded C" by James Grenning. Another popular testing harness is Google Test, or gtest. The links to all those testing harnesses will be provided in the video description. Today, however, I'd like to show a unit testing harness called Embedded-Test, or ET, because it is much simpler than any of the alternatives, yet it can run all tests described in Grenning's TDD book. ET is written in C, but unlike Unity, it requires no "test runners" (I will explain that in a minute). The ET framework runs on host computers and embedded boards with minimal porting. ET is permissively licensed open source, and you can get it from GitHub, either by downloading the zipped code or cloning the repository with git. Assuming that you've downloaded ET as a ZIP file, unzip it into the directory where you keep projects for this course. After doing so, rename the Embedded-Test-main directory to lesson-49. Get into the lesson-49 directory and right-click in Windows Explorer to open a terminal in this directory. So far in this course, you've only used embedded Integrated Development Environments, such as KEIL uVision, IAR Embedded Workbench, or TI Code Composer Studio. But unit testing is often performed directly from the command line; therefore, today, you will also use just a terminal to see how this works. Embedded-Test comes with examples, so let me just explain some of them. The most basic is the "basic" example. Simple as it is, it demonstrates the typical code organization for unit testing, where the code under test (CUT) is located in the "src" subdirectory, while the tests are in the "test" subdirectory. 
I will then first go to this "test" subdirectory to show you what a test run looks like, and then I will explain the details. So, to run the test, you type "make." This invokes the "make" utility, which executes the build process prescribed in the Makefile located in the current directory. After building the software, "make" immediately runs the tests, which is also customary in unit testing. However, I need to back up at this point because it won't work like that on your machine. You probably don't have the "make" utility or the gcc compiler installed, so your attempt to run "make" will look like this. You have several options to get "make" and other Unix-style utilities commonly used in building and testing. First, you can use Linux or macOS instead of Windows, and the provided Makefile should work without any modifications. But even then, you must ensure that "make" and "gcc" are installed. Another option is to activate the Windows Subsystem for Linux (WSL), but this requires installing a whole Linux distribution, on top of which you'll still need to install "make," "gcc," and perhaps other things. Instead of all that, I use the QTools collection for Windows, which has been specifically designed to provide everything in one simple installation. QTools downloads are available from GitHub and contain all Unix-like utilities commonly used in Makefiles as native Windows executables. To get QTools from GitHub, go to QTools releases and download the latest digitally signed Windows installer, which is the easiest way. If you are allergic to installers, you can also download QTools as a ZIP file, but this requires additional steps to set the PATH and some environment variables. You can install the qtools-windows executable (or unzip the qtools-windows ZIP archive) in any location, but I highly recommend avoiding locations with spaces or special characters, such as "Program Files." I installed my qtools in the default location: c:\qp. 
After installation, the content of the qtools folder should look as follows: In the bin directory, you get the Unix-style utilities, such as make, and commands commonly used in makefiles, like cp, rm, mkdir, etc. In the MinGW32 directory, which stands for "Minimalist GNU for Windows," you get the GNU C/C++ compiler for Windows. This is useful for building and running tests on the host computer. And finally, in the gnu_arm-none-eabi directory, you get the GNU cross compiler for ARM CPUs. This is useful for building and running tests on ARM boards, such as your TivaC LaunchPad or STM32 NUCLEO. If you installed qtools using the Windows installer, the following qtools directories will be added to your PATH. Additionally, the environment variable QTOOLS will be defined. However, if you installed qtools from the ZIP file, you must manually modify the PATH and add the QTOOLS variable. You'll need these settings to conveniently access the tools from the command line. Alright, this would be all regarding general tooling, a general "testing habitat," if you will, for unit testing. All this is not specific to the ET testing harness and will also be useful for other testing harnesses. So, now, let's go back to the basic unit testing example, and look at the CUT (Code Under Test) and the tests around it. The CUT is trivial in this basic example and consists of the files sum.h and sum.c. The header file provides the prototype of the function sum(), while the source file contains the implementation. The function simply calculates and returns the integer sum of the integer x and y parameters. The test file in the test subdirectory is more interesting. It starts with including the CUT and the Embedded-Test unit testing harness. Next are two functions setup() and teardown(), which ET executes before and after each test. Next comes a test group, which provides the name of this group of tests and produces the output shown in the terminal. Next come individual tests. 
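For reference, the two CUT files described above can be sketched as follows (shown here as one listing for brevity; the exact contents in the repository may differ slightly):

```c
/* sum.h -- interface of the Code Under Test */
#ifndef SUM_H_
#define SUM_H_

int sum(int x, int y); /* returns the integer sum of x and y */

#endif /* SUM_H_ */

/* sum.c -- implementation of the Code Under Test */
int sum(int x, int y) {
    return x + y;
}
```

Trivial as it is, this CUT is enough to exercise the whole testing "habitat": the include paths, the build rules in the Makefile, and the test harness itself.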
They start with the macro TEST() with the test description, which in ET is an arbitrary string displayed in the terminal. Inside the tests, you see the VERIFY() macros, which evaluate the provided boolean expressions. The test passes only when all VERIFY() expressions in that test are true. Otherwise, the test fails at the first failing VERIFY(). If you watched previous lessons 47 and 48, you might have noticed that the VERIFY() macro resembles the ASSERT() facility. Indeed, other unit testing harnesses provide various "test assertions" to verify test conditions. However, this might create confusion with the true assertions in the Code Under Test, and therefore ET provides the VERIFY() facility. This basic test group also demonstrates a skipped test, which ET does not execute. Temporarily skipping a test is often useful when you work on quickly changing code. Finally, this example contains an intentionally failing test, which terminates the test run with a printout of the line number and the failing expression in the VERIFY() facility. ET does not execute any tests after a failing one because the system might be in an unknown state, and any subsequent tests might give incorrect results. Of course, ET has many more interesting features, such as a special way of testing assertions in the CUT, where you specifically test for and expect an assertion failure, resulting in a passing test. But perhaps the most significant difference from other unit testing harnesses written in C, such as Unity, is that ET does not need any "test runners." Unlike in Unity, tests are not separate C functions in ET. Instead, individual tests are just code blocks with their own scope for local variables, all within the test group, which is a function in ET. Alright, with this quick introduction to ET, you should be able to explore other ET examples on your own as homework from this lesson. 
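To make the "tests as code blocks" idea concrete, here is a self-contained sketch that mimics the ET style with deliberately simplified stand-in macros. These macro definitions are my own illustrations of the mechanism, not the real et.h (which is more elaborate), but they show how an `if`-guarded code block can serve as a test without any separate test-runner functions:

```c
#include <stdio.h>

/* Simplified stand-ins for the ET macros (illustration only, NOT the real et.h) */
static int et_test_count; /* number of tests started so far */

/* a test is just a code block guarded by an `if` that always takes the branch */
#define TEST(title_) \
    if (((void)++et_test_count, printf("[%d] %s: ", et_test_count, (title_)), 1))

/* a skipped test is the same block guarded by an `if` that never takes the branch */
#define SKIP(title_) \
    if ((printf("[ ] %s: SKIPPED\n", (title_)), 0))

/* on the first failing condition, abort the whole test run (as ET does) */
#define VERIFY(cond_) do { \
    if (!(cond_)) { printf("FAILED: %s\n", #cond_); return 1; } \
} while (0)

/* the CUT from the "basic" example */
static int sum(int x, int y) { return x + y; }

/* the test group is an ordinary function (in ET it is invoked by the framework) */
int run_all_tests(void) {
    printf("Group: basic\n");

    TEST("sum of two positive numbers") { /* a block with its own scope... */
        int s = sum(2, 3);                /* ...so locals stay local to the test */
        VERIFY(s == 5);
        printf("PASSED\n");
    }

    TEST("sum with a negative number") {
        VERIFY(sum(2, -3) == -1);
        printf("PASSED\n");
    }

    SKIP("temporarily disabled test") {
        VERIFY(0); /* would fail, but the block is never entered */
    }

    return 0; /* all executed tests passed */
}
```

Because each test is only a scoped block inside the group function, adding a test never requires registering a new function anywhere, which is exactly the "no test runners" property mentioned above.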
For instance, you should take a look at the "lock-free ring buffer" code, which is quite useful in embedded programming. You might also like to see how to test C++ code with ET. Finally, you might enjoy the "leddriver" example, which demonstrates that ET can test most of the code from the already mentioned TDD book, which I highly recommend. Speaking of James Grenning's book, you might have noticed that I have performed the testing on the host computer. Grenning calls this strategy dual-targeting, meaning that from day one, your code is designed to run on two platforms: the final embedded target and your development host computer. Dual-targeting is sometimes confused with emulation of the embedded target on the host with software such as QEMU. But dual-targeting is actually simpler. You really build the embedded code with a native compiler for your host, such as MinGW gcc, and you also run the tests on your host. The dual-targeting strategy has a number of benefits, such as a much quicker evolutionary cycle because you avoid the target hardware bottleneck. Also, it's easier to automate host-based tests. But above all, dual-targeting influences your design because to test embedded code on the host, you must pay close attention to the boundaries between the hardware and software. I have used dual-targeting for many years and cannot imagine embedded software development without it. A team that embraces dual-targeting will easily outperform any team that doesn't. This is one of the most powerful tools in the war chest of professional embedded developers. Having said all this, running the tests exclusively on the host will not cut it, and at least occasionally, you need to run the tests on your embedded target as well. The ET testing harness has been specifically designed to make it easy, which I'd like to now demonstrate. So, let's go back to the basic ET example. 
Besides the Makefile you used to build and test on the host, you can find two .mak files for testing on the EK-TM4C and NUCLEO-C031 embedded boards, respectively. Let's start with the EK-TM4C, a.k.a. the TivaC LaunchPad board. Plug it into your computer and open a serial terminal, such as Termite, which the QTools collection conveniently provides. You can launch Termite from your terminal by typing "termite &". Now, type "make -f ek-tm4c123gxl.mak". This builds the same test code as before, but this time using the GNU-ARM cross-compiler from QTools. As usual, immediately after the build, "make" uploads the code to the target, and normally it would also run it. But the LmFlash utility for the TivaC board has some quirks with resetting the board that I couldn't figure out. Therefore, this makefile asks you to reset the board manually. After you reset the board, the tests execute, and the serial terminal displays the test run, which is identical to the one produced on the host. You can also try the makefile for the NUCLEO board. For that, plug the NUCLEO-C031 board into your computer. If you still have the serial terminal open, close it and launch it again to connect to the new board. Now, type "make -f nucleo-c031c6.mak". This produces a message that the USB drive is not provided. That's because NUCLEO boards show up on your computer as USB drives and can be programmed by simply copying binaries to that drive. This is very convenient because you don't need any additional utilities, like LmFlash for TivaC. On my computer, the NUCLEO board enumerated as the G: drive, so I provide the USB drive as follows: "make -f nucleo-c031c6.mak USB=G:". Of course, you need to use the appropriate USB drive letter for your NUCLEO. The make command builds, uploads, and automatically executes the basic tests on the board. Again, the output sent to the serial terminal is identical to the previous one for TivaC and the original testing on the host. 
So now, let me quickly explain how this testing on the embedded boards, as well as the host, works. As always in embedded systems, the biggest problem is getting the information (like the test results in this case) from the board to the host. You've encountered this before in this course, back in lesson 45, about software tracing with printf(). The solution applied in ET is also similar to lesson 45 in that the communication happens over a UART. This always depends on the particular board, so in the test directory, you have the board support packages (BSPs) for TivaC and the NUCLEO board. When you look inside one of these BSPs, you see three ET callback functions. ET_onInit() initializes the UART. ET_onPrintChar() transmits one character to the UART. And finally, ET_onExit() implements behavior after all tests finish. An embedded target cannot really exit, so this function hangs in an endless loop, blinking the onboard LED. Here, I'd like to note that ET does not actually use printf() because it is a huge function, and customizing it to work with a specific UART is implementation dependent. Indeed, one of the design principles for ET is to carefully avoid any dependencies on the standard library or anything else. But none of these restrictions matter for the host computers, where the three ET callbacks are provided in the file et_host.c in the et directory. That implementation uses the fputc() and exit() functions from the standard library, but again, this is only for the host. This concludes this quick introduction to testing embedded software. The most important takeaway from this lesson is that the only way to create non-trivial software is to evolve it, and the winners of the software development game are those who evolve the software faster by applying better, smarter, and more effective testing techniques to eliminate more defects sooner. 
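The host-side version of the three callbacks can be sketched roughly like this. This is modeled on the description of et_host.c above; the exact prototypes are my assumptions, so consult the actual ET header for the real signatures:

```c
#include <stdio.h>
#include <stdlib.h>

/* Sketch of the three host-side ET callbacks (prototypes assumed, see et.h) */

void ET_onInit(int argc, char *argv[]) {
    (void)argc;           /* nothing to initialize on the host */
    (void)argv;           /* (a target BSP would configure the UART here) */
}

void ET_onPrintChar(char const ch) {
    fputc(ch, stdout);    /* host: write to the console; a target BSP would
                             transmit the character over the UART instead */
}

void ET_onExit(int err) {
    exit(err);            /* host can simply exit with the test result; a target
                             BSP would instead loop forever, blinking an LED */
}
```

This thin callback layer is the entire "port": everything above it is identical on the host and on the embedded boards, which is what makes the dual-targeting workflow so cheap with ET.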
This software evolution is guided by artificial selection that requires building artificial environments for testing the software. This lesson gave you a general idea about one such artificial environment for unit testing (a unit testing harness), but the general idea is similar for most such harnesses. Newcomers to unit testing are often confused as to what exactly is being tested. A first impression might be that testing is only about the CUT (Code Under Test). But you must realize that all tests necessarily exercise both the CUT and the test environment. Therefore, methodologies like Test Driven Development (TDD) recommend starting the process without a CUT. The purpose of this first failing test is really to test the environment. And finally, when you start testing more systematically, you will accumulate many tests. On the one hand, such tests are valuable for checking that your software keeps working and the new features don't break the old ones, which is called regression testing. On the other hand, tests are code, too, and the more code you have, the slower you can progress. The trick is to find the right balance and discard the less relevant tests. Keeping all tests forever is a mistake. If you like this channel, please give this video a like and subscribe to stay tuned. You can also visit state-machine.com/video-course for the class notes and project file downloads. Finally, all the projects are also available on GitHub in the Quantum Leaps repository "modern embedded programming course." Thanks for watching!