RIO Developer Essentials Guide for Academia

"Queue" communication


Use a queue to send messages and data between two or more parallel process loops, whether they run in the same VI or in different VIs. Queues also serve as the foundation for the "Queued State Machine" design pattern.
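In LabVIEW the queue is created and wired graphically on the block diagram, so no text listing can reproduce it exactly. As a rough text-based analogy only, the following Python sketch shows two parallel loops exchanging messages and data through a single queue; the loop names, message strings, and timing are illustrative, not part of any LabVIEW API.

```python
# Two parallel loops communicating through one queue (text-based analogy).
import queue
import threading
import time

msg_queue = queue.Queue()              # analogous to Obtain Queue

def acquisition_loop():
    """Producer loop: enqueue a reading periodically (Enqueue Element)."""
    for sample in range(3):
        msg_queue.put(("new data", sample))
        time.sleep(0.5)
    msg_queue.put(("stop", None))      # sentinel tells the other loop to exit

def logging_loop():
    """Consumer loop: dequeue blocks until a message arrives (Dequeue Element)."""
    while True:
        message, value = msg_queue.get()
        if message == "stop":
            break
        print(f"logging value {value}")

threading.Thread(target=acquisition_loop).start()
logging_loop()
```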
State machines perform system control, data processing, and any task that involves executing a sequence of activities in response to inputs from the surrounding physical system, the user interface, and other processes within the system. The queued state machine is a particular implementation style that is flexible, easy to maintain, and computationally efficient.
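A queued state machine dequeues the name of the next state, executes the matching case, and lets each state enqueue the state (or states) that should follow. The sketch below illustrates the idea in Python as a stand-in for the LabVIEW block diagram; the state names and dispatch table are hypothetical.

```python
# Minimal queued state machine sketch: states are dequeued and dispatched,
# and each state may enqueue the state(s) that come next.
import queue

states = queue.Queue()

def initialize():
    print("initialize hardware")
    states.put("acquire")              # a state enqueues its successor

def acquire():
    print("acquire one sample")
    states.put("process")

def process():
    print("process and display")
    states.put("shutdown")

def shutdown():
    print("release resources")

dispatch = {"initialize": initialize, "acquire": acquire,
            "process": process, "shutdown": shutdown}

states.put("initialize")               # seed the machine with its first state
while not states.empty():              # main loop: dequeue a state, run its case
    dispatch[states.get()]()
```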
The queued message handler contains multiple process loops that operate independently and in parallel, communicating with one another by sending messages through queues. Each loop implements a well-defined task as its own "Queued State Machine." Breaking the system into self-contained tasks greatly simplifies the design of complex systems.
Create a responsive user interface from two loops operating in parallel: the "producer" loop's event structure responds immediately to user interactions such as button clicks and mouse movements, and sends commands through a queue to the "consumer" loop, which performs the required tasks. Separating the state machine into two loops keeps the user interface responsive even when a consumer task takes an unusually long time or must wait for a shared resource to become available.
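The sketch below illustrates that split in Python: the producer only enqueues commands, so it can return to watching for user events immediately, while the consumer works through the commands at its own pace. The command names and timings are invented for the example; a real LabVIEW producer loop would use an Event Structure rather than a fixed list of actions.

```python
# Producer/consumer split: the producer stays responsive because the slow
# work happens in the consumer loop, decoupled by a command queue.
import queue
import threading
import time

commands = queue.Queue()

def producer_loop():
    # Reacts to user actions immediately; only enqueues, never does the work.
    for user_action in ["start", "save", "stop"]:
        print(f"button '{user_action}' clicked -> command queued")
        commands.put(user_action)
    commands.put("exit")               # tell the consumer to shut down

def consumer_loop():
    # Performs the (possibly slow) work without blocking the user interface.
    while True:
        command = commands.get()
        if command == "exit":
            break
        time.sleep(1.0)                # a long task does not freeze the producer
        print(f"finished handling '{command}'")

threading.Thread(target=consumer_loop).start()
producer_loop()
```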