Welcome to the Modern Embedded Systems Programming course. My name is Miro Samek and today I'd like to start a new segment of lessons about event-driven programming, which is an important stepping stone in understanding modern software of any kind, not just modern embedded programming. In this lesson, you will learn the main concepts of event-driven programming based on its origins in graphical user interfaces, which went mainstream during the personal computer revolution in the 1980s. So, the plan for today is to explain why the graphical user interface, abbreviated GUI, required a new programming paradigm, and how this new paradigm, known today as event-driven programming, really works. Specifically, in this lesson you will see the most important characteristics of event-driven programming exemplified by the original Win32 API that was designed back in the 1980s and still works today, even on the latest 64-bit Windows 10. With this background, in the following lessons, you will see how these main characteristics of event-driven programming can be applied to real-time embedded systems, such as your TivaC LaunchPad board. But let's start from the beginning. Perhaps the most consequential development, from the programming perspective, was the invention of the mouse in the mid-1960s. "Well, this basically characterizes what we've been pursuing for many years in what we call the Augmented Human Intellect Research Center at Stanford Research Institute. Here is Don Andrews' hand in Menlo Park. And in a second we will see the screen that he is working on and the way the tracking spot moves in conjunction with movements of that mouse. I don't know why we call it a mouse. Sometimes I apologize. It started that way and we never did change it." But it took another full decade to work out the technical details of how to actually implement the ideas. This eventually happened at Xerox Palo Alto Research Center (PARC) in the mid-1970s.
The mouse is a pointing device that moves the cursor around the display screen. Demonstration of the Alto computer, developed at Xerox Palo Alto Research Center around 1973: a selected window displays above other windows, much like placing a piece of paper on top of a stack on a desk. Compared to command-line systems of the day, such as a teletype or a terminal, the implementation of a graphical user interface required a paradigm shift. Here is the main problem: in a command-line system, the only input device is the keyboard and the only output device is the bottom of the screen, which scrolls up like an electronic teletype. Consequently, as I'm trying to illustrate in this piece of pseudocode, the software can still have the traditional sequential structure. Basically, the software waits for a key press, after which it echoes the character to the screen and then processes the key press, which might cause some more output to the screen. There is no question of where to produce the output, because it always goes to the bottom of the screen. But with a graphical user interface the situation is fundamentally different. For starters, you now have multiple sources of input: the keyboard and the mouse. This is a problem, which I'm again trying to illustrate with a piece of pseudocode. If you explicitly wait for the keyboard, for example, you are unresponsive to the mouse input, and vice versa: if you start by waiting for the mouse, you are unresponsive to the keyboard. So, first you need to find a way to somehow wait for multiple inputs simultaneously. But assuming that you've done this, downstream of such a blocking call you now have to check which one of the multiple inputs has actually been received. Moreover, with the keyboard you no longer know where on the screen the output needs to be produced. For this, you would have to know which part of the screen is active, which is called the "keyboard focus" in today's GUI programming.
But with the mouse, there are even more fundamental problems. The mouse provides two-dimensional input, so you have the X and Y coordinates on the screen plus the state of the mouse buttons. But the raw coordinates are not sufficient to take a meaningful action. You need to additionally know which object on the screen is at these coordinates. For this, you would typically call a service of the GUI system. But since this would happen for every mouse input, you could just skip the step and always let the GUI system find the object at the current mouse coordinates. This effectively means that the mouse can produce many more kinds of inputs, such as 'object_id' in this pseudocode, all of them depending on the constantly changing situation on the screen. I hope that you are starting to see that a graphical user interface introduces new levels of complexity, which are not in the same ballpark as the command-line interface. In fact, GUI programming is an entirely different ball game. Therefore, it is no wonder that GUI programming required a different way of thinking. The key insight that enabled a workable solution was the focus on the inputs, which are called *events* or *messages*, such as the key presses, mouse moves, and the secondary inputs from objects on the screen, like buttons, desktop icons, scroll bars, etc. The focus on events means that the events drive the software, not the other way around as in the sequentially coded command-line programs. To see what this event-driven paradigm really means and how it works, I've prepared a simple "HelloWin" GUI application running on Windows. You can download this project as lesson33 from the companion web page to this video course at state-machine.com/quickstart. Once inside the project directory, I've just clicked on the hellowin.sln solution file to open it in Visual Studio 2019 Community Edition. If you wish to follow along, you can download Visual Studio 2019 from the Microsoft website.
The Community Edition is free after registration. But even though the development environment is quite modern, the software is written in C, using the ancient Windows Application Programming Interface (API), which Microsoft developed back in the 1980s. It turns out that, unlike other, more modern Windows APIs in other programming languages, this low-level Win32 API in C demonstrates the main concepts of event-driven programming in their simplest and most direct form. So, let's just walk through this simplistic Windows application, which is my adaptation of the "Hello-Windows" program from the book "Programming Windows" by Charles Petzold. This book, first published in 1988, became the Windows programming bible back in the day. The code begins with the inclusion of the windows.h header file, which defines the Windows API types and constants. Next, you can see a prototype of the WndProc function, which will be needed to initialize a quote-unquote virtual table, and which I will discuss in quite some detail in a minute. But first comes the WinMain() function. This is the main entry point to a Windows GUI application and plays the same role as the main() function in the conventional C environment, except that in a GUI environment the entry point takes more parameters, because a GUI application is more complex. Here, in the simple Hello-Windows application, some of these parameters are not even used. But next comes an interesting part, where you prepare the Window Class to be registered with Windows. I've specifically formatted and commented the initialization so that you recognize that here you engage in a kind of object-oriented programming, as I explained in the last lesson 32 about OOP in C. So, specifically, you first see the setting of the attributes of a window-class instance 'wnd', such as the window style, the mouse cursor for the window, and the window class name.
But next, you can hopefully recognize the assignment of a quote-unquote virtual function, the window procedure WndProc, specific to this window class. Here, the Windows API designers used the simple implementation technique of embedding a pointer to a virtual function directly in the attribute structure, which I mentioned in my last lesson about object-oriented programming in C. Next comes the call to Windows to register the prepared window class. And finally, the call to CreateWindow() plays the role of a constructor, because it creates a window object based on the just-registered window class. Once the application window is created, it is shown on the screen and updated. But then, after all this initialization, the WinMain function enters the event loop, also known as the message loop or message pump, where the program performs the real work. This is the most important part of every event-driven program. The event loop has a very specific structure, consisting of two main steps: First, the call to the GetMessage() Windows API blocks and waits for any input from the keyboard, mouse, or the screen. When any such event occurs, the Windows system records it as a message object and places it in the message queue for this application. GetMessage() then unblocks and copies the message from the queue to the msg object. This is how the event loop solves the problem of waiting simultaneously for multiple events, as I tried to illustrate in the pseudocode before. If the status returned from GetMessage() is zero, it means that the application has been closed, and so the event loop needs to be terminated by executing the break statement in this case. Otherwise, the message is passed to the DispatchMessage() Windows API, which then calls the window procedure registered for the current window. You will see the WndProc for this simple Hello-Win application in a minute, but before leaving this piece of code, let me summarize the key properties of the event loop.
The first key property is that an event loop uses special message objects to record all events potentially interesting to an application. These message objects serve only for communication and can be conveniently stored in the event queue and later retrieved for processing. Because of the message queue, events can be delivered both when the loop is waiting for them and when the loop is busy processing previous events. In any case, the Windows system only records the event as a message and places it in the message queue, but does NOT wait for the actual processing of the event. This type of event delivery is called *asynchronous*, and it means that the event producer (the Windows system in this case) executes independently from the consumer of events (your application in this case). In other words, these two activities are asynchronous, meaning not synchronized. Later in this lesson you will see some experiments that will demonstrate this asynchronous nature of event delivery. The second key property of the event loop is that the DispatchMessage() call must necessarily complete and return to the event loop before the loop gets around to the next event. This means that event processing proceeds in Run-to-Completion (RTC) steps that cannot be interrupted by the processing of any other event. And the third key property of the event loop is that it makes calls to your application code, the WndProc. This is backwards compared to what you've been used to, because in all your previous experience, with a Real-Time Operating System, for instance, which you saw in lessons 20 through 26, your application was calling the services of the RTOS. But now the event-driven Windows system is calling *your* application. This property of the event loop leads to the *inversion of control* compared to traditional sequential programming. This is the key characteristic of all event-driven systems and is the essence of event-driven programming.
The inversion of control is really what it means that, quote, events drive the application, unquote, and not the other way around. So, now let's take a look at the WndProc for this GUI application. First, to understand its signature, you need to see the declaration of the message structure that I skipped until now. The MSG structure happens to be defined in the WinUser.h header file, which is included from windows.h. As you can see, the types of the four parameters of the WndProc are identical to the first four attributes of the MSG structure. I only took the liberty of renaming the first parameter from hWnd, the window handle, to "me", to make it clearer that the WndProc is a member function of the window class. This is the naming convention for member functions introduced in lesson 30 about implementing classes in C. Actually, as you saw in WinMain, the WndProc is not just a member function; it is a *virtual* member function that is specific to the type of the window class registered with Windows for this application. I also took the liberty of renaming the second parameter of the WndProc from "message" to "sig", because this is the part of the message informing you about the kind of event that was recorded in this message. In modern event-driven programming this information is called the *signal* of the event, so I named it "sig". And finally, the last two parameters, wParam and lParam, are the event parameters that provide additional information about the recorded event. The meaning of these parameters depends on the event signal. Inside the WndProc, the main job is to process the received message. This requires first determining what kind of message you are dealing with. For this, Windows programmers use the switch statement with the integer signal of the message as the control expression. Each case statement then represents a different message kind that needs to be processed according to that signal.
The case statements are labeled with the symbolic names of the various message signals, which are listed, again, in the WinUser.h header file. For example, the WM_CREATE signal corresponds to the numerical value 1 and is sent to the WndProc when the window is created. Similarly, WM_DESTROY corresponds to the numerical value 2 and is sent to the WndProc when the window is about to be destroyed. In the latter case, the WndProc calls the PostQuitMessage() API, which inserts the WM_QUIT message into the program's message queue, which, in turn, will then cause the termination of the event loop. This is an interesting example showing you that an application can asynchronously post events to itself. But in all cases, the WndProc also sets the local variable 'status', which records the status of processing. For example, when the WndProc handles a given message, it sets the status to WIN_HANDLED. This status is then returned back to the Windows system, so there is two-way communication: Windows tells the WndProc which message to process, but then the WndProc reports the status of processing back to Windows. Now, perhaps the most important message that the WndProc needs to handle is the WM_PAINT message, which the Windows system generates when it determines that part or all of the window needs to be repainted. Your Hello-Windows application handles this by painting some text in the center of the window rectangle. The details of all this are not that relevant for today, except perhaps to note that the painting displays the current values of the counters for keyboard presses and mouse moves. What *is* more important, however, is that these counters are defined as *static* variables, because they must outlive the many invocations of, and *returns* from, the WndProc. Notice that if you defined the counters as local automatic variables, they would go out of scope at every return. Now you might be curious where these counters are actually incremented...
So, here they are: the wm_keydown counter is incremented for the WM_KEYDOWN message and the wm_mousemove counter is incremented for the WM_MOUSEMOVE message. In both cases, the processing must also include the call to the InvalidateRect() API to tell Windows that the window rectangle needs to be repainted, because otherwise the new value of the counter wouldn't be displayed right away. This is another aspect of the two-way communication between the application code and the Windows system. And finally, the default case handles all the message signals that the WndProc did not choose to handle explicitly in one of the provided case statements. In that default case, the WndProc calls the default window procedure provided by Windows. This is a very interesting design that has huge implications, because this is how the "characteristic look and feel" of the GUI system comes about. To explain what I mean, let's just run this Hello-Windows application and see what it can do. Well, it turns out that the application has the usual window, with the title bar and a title. You can resize the window. You can move it around. You can minimize it. You can restore it and you can maximize it. But *you* have only explicitly coded the counting of mouse moves and keyboard presses. Yet, the application can apparently do all those other things as well and really looks and feels like any other Windows application. This is all thanks to the default window procedure, which provides these other behaviors for you. The default case in your WndProc might not look impressive at all, but to appreciate it, you only need to scan through the hundreds of window messages defined in WinUser.h. Most of these WM_ message signals go through your WndProc, and most of them are handled by the default window procedure provided by Windows. Yet you, as the programmer, can be blissfully unaware of all this complexity, because all you need to know is the handful of messages that you actually handle.
A good way of thinking about this design is that it is layered in a hierarchical order. At the lowest level of the hierarchy is your code. It has the first shot at every event, because every event is sent to your WndProc first. But when your code does not explicitly handle an event, it is not ignored; rather, it is passed on to the Windows system, which is at the higher level of the hierarchy. The other names for this hierarchical design are the "Ultimate Hook" and "Programming by Difference". These two names mean exactly the same thing, but emphasize different aspects of it. "Ultimate Hook" emphasizes the ease of attaching, or "hooking up", your code to every event. "Programming by Difference" emphasizes the fact that you only need to explicitly program the *differences* from the default behavior. But at this point, I hope that you are starting to realize that you already saw something similar in the previous lessons 29 through 32, where you learned about object-oriented programming. Specifically, from the OOP perspective you can view the design as a class hierarchy in which the Windows system is the base class with hundreds of virtual functions, one for each message signal. Windows applications, such as your Hello-Win or Microsoft Word, are then the subclasses that override the selected virtual functions in their WndProcs. Alright, so this was a quick code review of a simple event-driven Hello-Windows GUI application. I hope that it gave you the main ideas of how this programming paradigm works. But no introduction to event-driven programming would give you a correct understanding without contrasting it with traditional, sequential programming, and the role of blocking within the code in particular. And this is exactly what I'd like to briefly explain in the last few minutes of this lesson. As an example of sequential programming, let me pull up some code from lesson 27 about the Real-Time Operating System, the RTOS.
For instance, here is the sequentially coded blinky3 thread, which blinks the red LED on the TivaC LaunchPad board. So, let's try to do something similar in your event-driven WndProc. For instance, let's say that you want to briefly blink an LED after pressing any key on your keyboard. From the code review before, you know that the right place in the WndProc is the WM_KEYDOWN case, where keyboard presses are handled. A traditional, sequential implementation would be to turn an LED on, then wait, meaning block, for, say, 200 milliseconds, to actually see the blink, and then turn the LED off. The Windows API actually provides an equivalent to the delay() RTOS service, which is called Sleep(). The Windows Sleep() API blocks and waits for the specified number of milliseconds. Also, you obviously don't have LEDs on your Windows computer, so instead you could just display the LED state as text in the center of your window. As in all other cases, after changing the text to display, you would need to invalidate the window rectangle to force the repainting of the window. Then you would sleep for 200 milliseconds. After the Sleep() delay, you would change the LED text to "OFF" and invalidate the window rectangle again. Like the other status variables, the LED text pointer needs to be defined as static in your WndProc. And finally, you need to augment the painting of the window by adding the LED text to the displayed string buffer. So now, let's simply build and run this program. When you press a key once, the LED state does not change as expected, even though the keyboard counter increments. So, your code for the LED does NOT work as you imagined. But wait, it gets worse. When you press several keys in quick succession, the program freezes and does not update the keyboard counter immediately. Then, after a considerable while, the keyboard counter jumps all at once by some big increment.
Actually, when you press several keys and also wiggle the mouse, nothing happens in the application and no counter gets incremented, until both the keyboard and mouse counters jump by a big increment. All this is obviously not good, because the application appears frozen and unresponsive. Also, the massive jumps in the displayed event counters are rather strange. It turns out that this behavior is a consequence of the asynchronous event posting and queuing of events inside Windows. The problem is that the sleep delay blocks the WndProc and prevents it from quickly returning to the event loop. When the event loop spins too slowly, the keyboard events accumulate in the event queue. Only when all the WM_KEYDOWN messages with their blocking delays are finally processed does the event loop unclog and quickly process all the other events. Hence the sudden jump in the counter values. This problem is well known, and Windows programmers have a name for it: they call such an application a "pig", and, believe me, you don't want to be a "pig". The old rule of thumb for Windows programs is that if anything requires more than about 100 milliseconds, it should be broken down into shorter pieces by using *events*. So, this explains the lack of responsiveness and the "freezing" of your program. But, remember? The LED update after keypress events didn't actually work either. And the reason for *that* is even more interesting. From the event-driven perspective, every blocking call in your code, such as Sleep(), really means waiting for some event to happen. The unblocking then means that the event has occurred. The event you get after unblocking might not be explicitly named, but still, it is delivered in the middle of the processing of another event, WM_KEYDOWN in this case. But this violates the Run-to-Completion semantics of event processing, which the event-driven Windows system assumes.
Quite specifically, the call to InvalidateRect() right before the blocking Sleep() has no effect, because the WndProc does not return back to Windows at this point. Therefore, Windows has no chance to send the WM_PAINT message to the WndProc to actually update the LED state, and therefore you never see the update. So, as you can see, any use of the sequential programming paradigm, and especially blocking, is a BAD IDEA in event-driven systems for two reasons: First, it clogs the event loop and destroys the responsiveness of the program to all events, not just those that block for a while. And second, it violates the Run-to-Completion semantics universally assumed in all event-driven systems. This is the most important takeaway from this lesson that I want you to remember: sequential programming and event-driven programming are two distinct paradigms that don't mix well, so always keep them separate. This means that in an event-driven program, you need to use a truly "event-driven" solution to implement your LED-blink-after-keypress feature. To this end, instead of sequentially blocking with Sleep(), you can use the Windows facility specifically designed for this purpose, called the "timer", which you can set to generate a special event called WM_TIMER a given number of milliseconds in the future. And this is really all you need to do in the WM_KEYDOWN case, because the rest of the processing then goes into the WM_TIMER case. Of course, you need to finish the processing in the usual way, with the only additional step being the call to KillTimer(), because otherwise the Windows timer would keep expiring periodically with the programmed interval. Interestingly, in this case you use the wParam message parameter, which holds the ID of the timer that generated the WM_TIMER message. So, let's now see how this works. Let's first start with an isolated keypress... As you can see, after every keypress the LED now changes its status momentarily to RED.
Now let's try bursts of keypresses plus wiggling the mouse at the same time... As you can see, the LED status changes correctly and the event counters keep updating, so the application remains responsive. This concludes this quick introduction to event-driven programming using a GUI and the Windows API as an example. You've learned quite a few new concepts, which are summarized here for you. To be called "event-driven", a program must possess most of the characteristics listed in this chart. But perhaps the property that most sets an event-driven program apart from a sequential one is *NO BLOCKING* inside the application-level code. I will come back to this issue in the next lesson, where you will learn how event-driven programming applies to real-time embedded systems like your TivaC LaunchPad board. I hope you will join me for this fun. If you like this channel, please give this video a like and subscribe to stay tuned. You can also visit state-machine.com/quickstart for the class notes and project file downloads.