
Jun 15, 2025 - 12:30
Designing a Robust Modular Hardware-Oriented Application in C++ [closed]

I'm working on an application for a study in environmental physics that will handle continuous (24/7 for years) acquisition of data from remote sensors/measurement devices. It needs to be robust, extensible, fault-tolerant, and secure.

First, I'll describe the basic requirements. Then I'll talk about my original approach from some 15 years ago (yes, I've already created this app; I just decided to make a better version to evaluate how much I've improved in some areas, explore new technologies that are now available, etc.) - feel free to skip this part if you're not interested. After that I'll delve into my current design ideas for this application. I'll summarize my questions at the end, but I'll greatly appreciate any comments and advice, not limited only to the questions.


App requirements

The app's main job is to query the remote sensor every second for new data, process the data, and then store the processed data in a remote database.

There is one important thing to note: we can't lose data if DB access fails or the application crashes, so some form of backup must be implemented to store data until we have confirmation it was properly saved in the DB.
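Since data must survive both DB outages and application crashes, the fallback storage can take the form of a small write-ahead spool: every sample is journaled to disk before the DB attempt and dropped only after the DB acknowledges it. A minimal sketch under that assumption - the class and method names are mine, not from any existing library:

```cpp
// Sketch of a durable local spool: each sample is appended to a journal
// file before any DB attempt, and removed only after the DB acknowledges
// the write. Names (Spool, append, ack) are illustrative.
#include <cstddef>
#include <fstream>
#include <map>
#include <string>

class Spool {
public:
    explicit Spool(std::string path) : path_(std::move(path)) {}

    // Append a record under a monotonically increasing id and flush it to
    // disk so the sample survives a crash of the application itself.
    long append(const std::string& record) {
        long id = next_id_++;
        pending_[id] = record;
        std::ofstream out(path_, std::ios::app);
        out << id << '\t' << record << '\n';
        out.flush();  // in production: fsync/FlushFileBuffers for real durability
        return id;
    }

    // Called once the DB confirms the record was stored; the record can
    // then be dropped (a real spool would also compact the journal file).
    void ack(long id) { pending_.erase(id); }

    std::size_t unacked() const { return pending_.size(); }

private:
    std::string path_;
    long next_id_ = 0;
    std::map<long, std::string> pending_;
};
```

On startup, the journal would be replayed to re-queue anything that was never acknowledged.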

Additional requirements:

  • Display a live chart locally with incoming sensor data.
  • Possibility to run on very old hardware and OS. The original version ran on legacy systems like Windows 98 or Windows 2000, with as little as ~128 MB of RAM. So resource usage and OS compatibility were (and may still be) important.

Old approach (15 years ago)

Note: as far as I know this app is still in use after those 15 years. It's not perfect, but it's been reliably doing its job without crashing :).

I had some additional constraints when designing the app:

  • Data acquisition: I was given a .dll that polled the device. That locked me to Windows OS but also simplified data acquisition to one function call.
  • Data processing: Basic calculations like mean, variance, etc., had to be performed before storing and displaying the data.
  • The DB interface used SOAP WebServices (the DB was managed externally).

At the time I added a requirement that it should be possible to easily add new devices and new ways of displaying data. To support this extensibility I used Qt4 and its plugin architecture.

Architecture (3 parts):

  1. MainWindow (core app)
    Application main process: manages the event loop, detects and loads plugins, stores app settings (it has some settings of its own and receives some settings from plugins) and displays the main UI window with a widget from the plot plugin.
  2. DataHandler (plugin)
    Handles sensor polling, processing data and DB communication. Includes:
  • Fallback data storage if the DB is unavailable
  • A settings GUI page to be displayed by MainWindow (polling rate, DLL path, DB URL, local file path if one prefers local storage).
  3. Plotter (plugin)
    Displays the data as a chart. Allows zooming, history control and basic chart configuration (like min/max range). Provides a GUI widget and a settings GUI page.

Communication
Done through raw pointers:

  • MainWindow holds pointers to both plugins.
  • DataHandler holds pointers to MainWindow and Plotter, and sends processed data via public function calls.
  • Plotter only has a pointer to MainWindow (used only to retrieve app settings via public function call).

When I sat down to this today I see some pitfalls:

  1. Data acquisition, data processing and sending to DB are tightly coupled in a single plugin. Changing any part requires recompiling the entire plugin.
  2. A crash in any plugin crashes the entire app.
  3. There is no automatic recovery if the app crashes. I know that restarting the application after a crash is more of an OS topic, but it can have some impact on the design.
  4. No remote access - users need physical access or RDP.
  5. Qt plugins make it hard to write a compatible plugin in a different framework or another programming language.
  6. GUI is required. Running headless isn't supported. Every input data stream has to be connected to some display, or the application won't run.
  7. Settings must be changed via GUI or config file/registry (with a restart).

New approach

Let's start with some additional requirements that are either still valid or new based on my analysis of the first approach:

  1. I still want to use C++ (likely Qt for the GUI, and maybe some Qt utility classes elsewhere).
  2. The application has to be extensible - new sensor types, display methods (both new types of charts and new ways to handle the GUI, e.g. a web GUI) and storage methods (e.g. PostgreSQL, JSON, CSV). That includes extending the application in languages other than C++.
  3. It should be robust: a crash in one module (e.g. GUI) must not affect others (e.g. acquisition).
  4. Modules must be dynamically loadable without recompiling or even restarting the core app - even if implemented in other languages.
  5. It should be possible to adjust data processing without any compilation.
  6. It must support command-line management over ssh (no GUI required).
  7. GUI must be optional - but if present, allow multiple GUIs simultaneously (local GUI + web GUI etc.).
  8. Sensor-to-GUI mapping should be fully flexible as illustrated below:

Sensor-GUI Mapping Example

             +-------------------------+
             |       GUI Targets       |
             +-------------------------+
             | G1L: Local Line Chart   |
             | G1H: Local Histogram    |
             | G1P: Local Pie Chart    |
             | G2S: Web Scatter Chart  |
             +-------------------------+

Sensors and Display Links:
(S1) ───> G1L
     └───> G2S

(S2) ───> G1L
     └───> G1P

(S3) ───> G1H

(S4) ───> G1L
     ├───> G1H
     ├───> G1P
     └───> G2S

(S5) ───> [not displayed]

  9. Potential future support for a distributed architecture.
  10. Data acquisition modules should optionally allow sensor configuration.
  11. Ideally, support a mobile app to manage the system remotely.
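One way to keep the sensor-to-GUI mapping above fully flexible is to treat the links as plain data that the core reads from configuration rather than hard-coding them. A hedged C++ sketch, using the sensor and target identifiers from the diagram:

```cpp
// The sensor-to-display links from the diagram, expressed as plain data.
// Keeping the routing table config-driven (rather than in code) is one way
// to satisfy the "fully flexible" mapping requirement. Identifiers
// (S1..S5, G1L..G2S) follow the diagram; everything else is illustrative.
#include <map>
#include <string>
#include <vector>

using Routes = std::map<std::string, std::vector<std::string>>;

Routes make_routes() {
    return {
        {"S1", {"G1L", "G2S"}},
        {"S2", {"G1L", "G1P"}},
        {"S3", {"G1H"}},
        {"S4", {"G1L", "G1H", "G1P", "G2S"}},
        {"S5", {}},  // acquired and stored, but not displayed
    };
}

// Fan a sample out to every GUI target linked to its sensor.
template <typename Publish>
void route(const Routes& r, const std::string& sensor, Publish publish) {
    auto it = r.find(sensor);
    if (it == r.end()) return;
    for (const auto& target : it->second) publish(target);
}
```

In the real system the table would be loaded from a config file (or edited at runtime via the CLI), so adding or removing a link never requires recompilation.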

Now, let's go to the design considerations:

Modular Structure
[app modules diagram]

  • Core Module - Orchestrates everything, handles config, logs, command-line access.
  • Data Acquisition Module - Interacts with hardware and collects data.
  • Data Processing Module - Transforms data and sends it onward.
  • Data Storage Module - Sends data to DBs or files (including over Samba, NFS, etc.).
  • GUI Module - Presents data visualizations (local or web) and configuration UIs.

Key Architecture Notes:

  • The only way I know of to ensure modules are robust and not vulnerable to crashes in other parts of the application is to run the modules in separate processes.
  • Modules must be hot-pluggable. It’s acceptable for the application to require an explicit command to discover and load new modules, but it must not require a restart to activate or launch them.
  • The core module does not forward data — it's for orchestration, not a message router.
  • Multiple instances of the same module should be allowed.
  • Using a multi-process architecture makes it easier to move to a distributed architecture in the future, should I desire to do so.
  • The multi-process approach also makes it easier to create new modules in different languages: they only need to implement the proper IPC protocol and messages, and that's it.
  • Another benefit of a multi-process application is that the core process can monitor all other processes and restart them if they crash or encounter any issues. The only process the OS would have to restart is the core process.
  • Theoretically it would be possible to use Docker, and maybe even Kubernetes. However, Docker is not supported on very old platforms, and it would increase resource utilization.
  • Regardless of whether my decision to use a multi-process app stands, I need to improve inter-module communication. I plan on using message passing - either FIFO queues or a message bus.
  • There has to be some ack mechanism to ensure data is not lost in the void.
  • For IPC I'm hesitating between QLocalSocket (it uses a named pipe on Windows and a Unix domain socket on Linux) and gRPC. The latter has higher overhead but can support both local and remote communication.
  • I'm considering writing my own simple abstraction layer for IPC, so that switching between QLocalSocket and gRPC is as simple as changing a compilation flag - or can even be done live if needed. I'd love to hear your input on this.
  • Messages and protocols (I'm thinking Google's Protocol Buffers) are agnostic to both the "queue vs bus" and "QLocalSocket vs gRPC" dilemmas.
  • I think I will have to introduce some protocol versioning so that we can check module compatibility.
  • A script engine built into the app will handle custom data processing. Currently I'm leaning towards Google's JavaScript engine (V8), but Lua and Python are on the table as well, and I'm open to other ideas too.
  • The script engine will work as an intermediary between the data acquisition module and both the GUI modules and the storage modules. That way it will be easy to create scripts that process data differently for different charts, or add additional info to storage (e.g. to the already calculated variance, add mean and median).
  • To implement a CLI, I need to either implement some kind of simplified shell in the app itself, or have an additional utility, run from the shell, that connects to the main app, executes a command and exits. With the first approach we'd have something like this:
$> app
$app> status
$app> restart -id=gui-1
$app> exit

The problem with this approach is that it's difficult to connect to an already running app from a different computer, or when someone starts the app with $> app &. Of course, it's possible to build an embedded SSH shell and expose a port, but that would increase complexity a lot, as the app would have to take care of security concerns that currently are not an issue. That's why I think I prefer the second approach. Login will be handled by the SSH shell, and it will be easy to control the application:

$> app &
$> appctl status
$> appctl restart -id=gui-1
$> appctl exit

The issue to solve here is how to identify the correct instance if someone launches more than one core application (once we've identified the core, it will identify all other parts correctly), but it's not that difficult: we create a unique id for each core app and display it in the GUI as well as log it to the log file. Additionally, we give appctl the ability to list all active core instances. Maybe you have some better ideas on how to handle this?

  • Settings management is TBD - how should each module expose its settings to the GUI without coupling?
  • Security concerns:
    • The app will probably run on a PC behind a VPN, and the web GUI would also be available only from within the VPN, but I want to approach this as if there were no VPN. It's supposed to be a learning exercise as well.
    • Core Module - I don't think there is much risk here (unless I decide to implement shell and expose a port that is).
    • Script Engine - Of course, one could load malicious code. However, I assume that someone who has access to the local directory and can put a script there has better ways of wreaking havoc than using my application. Maybe some sandbox would be helpful here? I don't need access to the system to perform mathematical calculations.
    • GUI Module - The local GUI shouldn't pose any issue. The web GUI, on the other hand, requires careful security analysis, but that's a topic for later; at the beginning I'll use the local GUI.
    • Data Acquisition Module - Procures data from the outside world, so it can create an opening for a malicious actor. There are two dangers I can see: configuring an incorrect remote target, and a hijacked remote location sending malicious data. Severity depends on the method of getting data: for example, polling is not vulnerable to flooding, while subscription-based access to a sensor is very vulnerable to it. I think I (and each subsequent module author) will have to implement some way of checking that incoming data has the correct format, to defend against a malicious actor sending data that would exploit some vulnerability in the script engine.
    • Data Storage Module - The only thing I can think of is secure login to the DB. I need to securely store credentials and encrypt communication to transmit them safely to the DB - provided, of course, that the target DB supports encryption.
    • When the time comes to create the mobile app and handle its communication with this app, there will be several security topics, I assume, but I will tackle them when that time comes.
    • I lack experience in security, so I hope you can chime in here and help.
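The transport abstraction mentioned above could be as small as one interface hiding the QLocalSocket/gRPC choice, with the protocol-version field and the ack mechanism built into the message type. The sketch below uses an in-memory loopback backend purely to make the idea concrete; every name here is an assumption, not an established API:

```cpp
// Minimal transport abstraction: modules code against Transport, and the
// concrete backend (QLocalSocket, gRPC, ...) is chosen behind it. The
// LoopbackTransport stands in for a real channel in this sketch.
#include <cstdint>
#include <optional>
#include <queue>
#include <string>

struct Message {
    std::uint32_t protocol_version;  // for module-compatibility checks
    std::uint64_t id;                // echoed back in the ack
    std::string payload;             // e.g. a serialized protobuf
};

class Transport {
public:
    virtual ~Transport() = default;
    virtual void send(const Message&) = 0;
    virtual std::optional<Message> receive() = 0;
    virtual void ack(std::uint64_t id) = 0;            // confirm delivery
    virtual std::optional<std::uint64_t> next_ack() = 0;
};

// Trivial in-process stand-in for a real socket/gRPC channel.
class LoopbackTransport : public Transport {
public:
    void send(const Message& m) override { in_.push(m); }
    std::optional<Message> receive() override {
        if (in_.empty()) return std::nullopt;
        Message m = in_.front(); in_.pop(); return m;
    }
    void ack(std::uint64_t id) override { acks_.push(id); }
    std::optional<std::uint64_t> next_ack() override {
        if (acks_.empty()) return std::nullopt;
        auto id = acks_.front(); acks_.pop(); return id;
    }
private:
    std::queue<Message> in_;
    std::queue<std::uint64_t> acks_;
};
```

A sender would keep each message in the durable spool until `next_ack()` returns its id, which ties the ack mechanism to the no-data-loss requirement.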

These are the ideas I've gathered so far. Please let me know if I missed some vital aspect in my deliberations.


Questions

  1. Is a multi-process, IPC-based architecture the best approach for a modular and robust C++ application?
  2. Could the IPC become a bottleneck with high-frequency data (e.g. per-second samples from multiple sensors)?
  3. For identical sensors, should I spin up multiple acquisition modules, or rather design the module so that it can handle an indeterminate number of identical sensors? Any trade-offs in extensibility?
  4. What's the best way to handle module settings?
  • I assume each module will have its own settings file. However, I don't know how to handle changing settings from the application.
  • I can limit configuration to changing settings files on disk, and add either a watcher that periodically checks for updates or a command that forces the app to reload settings.
  • The command line shouldn't be too difficult: prepare a message protocol for settings, and the core module can forward any requested changes to the proper module.
  • GUI configuration is the most challenging. Unfortunately, I think it's important to have this (especially with a web GUI). How should I route all the settings to the GUI module? As a GUI can be written in countless different languages and frameworks (including web frameworks), other modules can't possibly hope to create part of the GUI. Do I need to create something like QML to allow non-GUI modules to describe how settings should be displayed? Or just send (value type, description, logical group) tuples to the GUI and leave it up to the GUI module to create a proper layout?
  • I'm a bit lost here.
  5. Any best practices when embedding a script engine into the application? Security? Isolation? Performance?
  6. Is my appctl approach sound for the CLI? How can I make it better? How can I improve instance discovery?
  7. How can I future-proof this for a mobile app API without over-engineering now?
  8. Should I invest in observability from the start? I assume logs are quite important. Anything else? Metrics? Distributed tracing? Any thoughts on this will be greatly appreciated.
  9. How should I handle module discovery? Should the core scan for available modules? Should it require manual registration? Any other approach?
  10. What should I do about errors and error propagation? Some errors can be handled locally, inside a module; in those cases it should be enough to log that something happened. Otherwise, the error should be propagated to the GUI/CLI so that the user can take action, right? What is the best way to achieve this?
  11. Should I include a self-test interface? It could be used by the core to trigger a self-check of the modules if the core detects any issue.
  12. Any advice on testing? Unit testing is easy. However, I'm not sure how to perform module-level and e2e tests on a multi-process application. Should this impact the design phase, or should I design the app first and then worry about how to test it?
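For the settings question, the "(value type, description, logical group) tuple" idea could look like the sketch below: each module publishes a flat, framework-neutral list of descriptors over IPC, and every GUI (Qt, web, ...) builds its own layout from them. All field names and example values are illustrative assumptions:

```cpp
// Framework-neutral setting descriptors: a module describes its settings
// as plain data, and any GUI renders them however it likes. Every field
// name and example value here is illustrative.
#include <string>
#include <vector>

struct SettingDescriptor {
    std::string key;          // e.g. "polling_rate_ms"
    std::string type;         // "int", "double", "string", "bool", "enum"
    std::string description;  // human-readable label / tooltip text
    std::string group;        // logical group, e.g. "Acquisition"
    std::string value;        // current value, serialized as text
};

// What a data-acquisition module might report when asked for its settings.
std::vector<SettingDescriptor> acquisition_settings() {
    return {
        {"polling_rate_ms", "int", "Sensor polling interval", "Acquisition", "1000"},
        {"db_url", "string", "Database endpoint", "Storage", "https://example/db"},
    };
}
```

Changing a setting then becomes a generic "set key to value" message routed by the core to the owning module, so no module ever has to know what kind of GUI is on the other end.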

I’d really appreciate any insights — especially on IPC choices, modularization strategies, dynamic configuration/UI generation, and scripting integration. If you spot a potential problem I haven't addressed, please call it out.

Thanks in advance!