Release Note
Release date: 2023-06-29
Each module now has Retry settings (Common Settings). When the Max Number Of Retries setting has a value greater than 0, the module will retry sending a message if an attempt fails. When the maximum number of retries is reached, the message will be dropped and the output will have crosser.success set to false, unless persistence is enabled (see below).
With this feature the Memory Buffer module is no longer needed for standard use cases. It may still be useful if more advanced retry logic is needed.
This feature is only useful on modules where a retry makes sense after a failed attempt. For example, modules that communicate with external systems can benefit from it, while it does not make sense on most analytics modules.
Each module now has a setting for persistence. You will find this setting on the Common settings tab.
Note: The persistent REST API is not available for remote session flows, only deployed flows.
Let's say that we have these settings:
This would mean that:
The module will make sure that the message is saved before trying to process it
If the module fails to process the message (an exception occurs or crosser.success is false) it will try again, but a maximum of 3 retries will be made
There will be a delay of 1 second before each retry
If/When the message is successfully processed the persisted message will be deleted from the database.
If a module fails to process the message and all retries are used the message will still not be lost. When we reach this state the failing message will be moved to a DeadLetter storage and stay there until:
A: The TTL for DeadLetters expires (default 24 hours) and the message is deleted.
B: Someone restores the DeadLetter. It will then be added back to the end of the queue and be processed once again.
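The retry, persistence and DeadLetter behaviour described above can be sketched as follows. This is a minimal Python sketch of the described behaviour only; all names are illustrative and not the Node's actual implementation:

```python
import time

PERSISTED = []    # stands in for the per-flow persistence database
DEADLETTERS = []  # stands in for the DeadLetter storage

def process_with_retry(process, message, max_retries=3, retry_delay=1.0):
    """Persist a message, process it with retries, deadletter on final failure."""
    PERSISTED.append(message)                  # saved before processing starts
    for attempt in range(1 + max_retries):     # first try plus the retries
        if attempt > 0:
            time.sleep(retry_delay)            # delay before each retry
        try:
            result = process(message)
        except Exception:                      # an exception counts as a failure
            result = {"crosser.success": False}
        if result.get("crosser.success", True):
            PERSISTED.remove(message)          # success: delete persisted copy
            return result
    PERSISTED.remove(message)                  # all retries used:
    DEADLETTERS.append(message)                # move to DeadLetter storage
    return {"crosser.success": False}
```

With max_retries=3 a failing message is attempted 4 times in total before it is moved to the DeadLetter list, matching the settings walkthrough above.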
The Node offers a new REST API for handling Persistent Messages and DeadLetters
The TTL setting can be configured in appsettings.json
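The exact key name is not shown here; as a purely hypothetical appsettings.json fragment, setting the DeadLetter TTL could look something like:

```
{
  "Persistence": {
    "DeadLetterTTLHours": 24
  }
}
```

Check the Node's documentation for the actual section and key names.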
If the flow stops no more messages will be added to the persistent storage until the flow starts again. However, all messages persisted before the flow stopped will remain in the storage and when the flow starts each module will load all its messages into the queue again.
Each flow has its own database. This database will be available between restarts of the flow and the node. However, when changing flow-version any data in the database will be lost.
Note that using persistence has a significant negative impact on throughput.
The Node will publish messages when events occur that someone might want to know about. The root topic for all these messages is $crosser/. For example, $crosser/flows/{flowDefinitionId}/started will be published when a flow starts.
Clients are not allowed to publish to the $crosser topic; only subscriptions are allowed. If a client tries to publish to a topic that starts with $crosser/ the client will be disconnected.
Note that MQTT Notifications are disabled for remote sessions. Only deployed flows will trigger MQTT Notifications.
Topic: $crosser/flows/{flowDefinitionId}/state [RETAINED]
When a flow changes state (started, stopped) the new state is published on this topic.
Topic: $crosser/flows/{flowDefinitionId}/status [RETAINED]
When a flow changes status to Ok, Warning or Error
Topic: $crosser/modules/{moduleId}/status [RETAINED]
When a module changes status (Ok, Warning, Error)
Topic: $crosser/modules/{moduleId}/messages/dropped
When the module queue is full and the queue strategy is not Wait.
Topic: $crosser/modules/{moduleId}/messages/deadletters
When persistence is enabled and a message has failed the maximum number of times it will be moved to the DeadLetter queue. There is no separate event for each failed attempt, since a message is retried until it either succeeds or becomes a DeadLetter.
By subscribing to $crosser/# you would get all messages
Subscribing to $crosser/flows/# would get you all flow messages
Subscribing to $crosser/modules/# would get you all module messages
Of course you can also use other combinations and use + as a single-level wildcard
Subscribing to $crosser/flows/+/stopped would get you only stopped flow messages
Subscribing to $crosser/modules/+/status would get you module status messages for all modules
When a MQTT client creates a subscription for the topic $crosser/flows/# or $crosser/flows/+/status the broker will send out the current status for all deployed flows.
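The wildcard rules above follow standard MQTT topic matching. A simplified matcher (ignoring some spec corner cases, such as '#' also matching its parent level) illustrates how the filters above select topics:

```python
def topic_matches(topic_filter, topic):
    """Minimal MQTT topic-filter matcher: '+' = one level, '#' = all remaining levels."""
    f_parts = topic_filter.split("/")
    t_parts = topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":
            return True                       # matches all remaining levels
        if i >= len(t_parts):
            return False                      # filter is longer than the topic
        if f != "+" and f != t_parts[i]:
            return False                      # literal level must match exactly
    return len(f_parts) == len(t_parts)       # no unmatched trailing levels

# The combinations from the text:
assert topic_matches("$crosser/#", "$crosser/flows/abc/started")
assert topic_matches("$crosser/flows/+/stopped", "$crosser/flows/abc/stopped")
assert not topic_matches("$crosser/flows/+/stopped", "$crosser/flows/abc/started")
assert topic_matches("$crosser/modules/+/status", "$crosser/modules/m1/status")
```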
Up until now the Node has always started the endpoints for HTTP, MQTT and the REST API. There may be a number of reasons why you do not want to start all (or any) of them.
All of the endpoints will be enabled by default, but you can now disable them in the appsettings.json file.
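The exact key names may differ; as a hypothetical appsettings.json fragment, disabling the HTTP servers could look like:

```
{
  "HttpServers": {
    "Enabled": false
  }
}
```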
You can also use environment variables if you prefer.
Example for HttpServers
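Following the usual .NET configuration convention, ':' in a key is written as '__' in an environment variable. Assuming the hypothetical HttpServers:Enabled key from above:

```
HttpServers__Enabled=false
```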
You can also use command-line arguments if needed.
Example for HttpServers
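Again assuming a hypothetical HttpServers:Enabled key, a command-line argument in the usual .NET style would be passed when starting the Node:

```
--HttpServers:Enabled=false
```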
The priority order for configuration sources, from highest to lowest, is:
Arguments
Environment Variables
Settings in json-files
You will not be able to use HTTP to send data to Flows
You will not be able to use WebSockets to send data to Flows
You will not be able to use external MQTT clients to send data to Flows. You can still use MQTT Client modules to get data from external brokers.
You will not be able to use the Python Bridge module in Flows if the module version is less than 4.0.0
You will not be able to use the MQTT notifications feature
You will not be able to use the local UI (http://localhost:9191)
You will not be able to use the REST API for seeing Flows/Metrics/Logs etc
The Default JsonSerializer now supports the new C# types DateOnly and TimeOnly.
If a flow process is stopped (crashes) with an error code and Halt On Error is set to false the Host will restart the flow.
Previously we logged events with the name of the module in combination with a calculated index. This was confusing when you have complex flows with several branches. As of 2.6.0 we log events using the name of the module which can be changed by the user when building the flow.
In previous versions of the Node you could use MQTT as transport for Remote Sessions. This option has now been removed and instead the options are WebSockets (preferred) or HTTP.
Prior to 2.6.0 no more logs were written when the file size limit (10MB) was reached. As of 2.6.0 the logger will roll the file when the limit is reached.
Sample output
log20220119.log
log20220119_001.log
log20220119_002.log
log20220119_003.log
Log timestamps were previously always in UTC; they are now always in local time with timezone information, in the format yyyy-MM-ddTHH:mm:ss.fffzzz (for example 2023-06-29T14:05:12.345+02:00).
With a very unstable network the MQTT connections metric could show a negative number of connections. This was because the connection counter was incremented only after the MQTT connect message had been read; if the connect message was never received, only the decrement was performed, resulting in negative connection metrics.
The connection counter is now incremented before the MQTT connect message is read.
In previous versions of the Node the flow processes were all started at the same time. This could cause timeouts if the machine does not have sufficient performance to be able to handle the load.
As of 2.6.0 the flows are started in sequence, so that machines with low performance and many deployed flows can still start up safely.
In 2.5.2 the maximum log file size was increased to 10MB, but the Windows Service did not get this change and still had a 1MB limit. The Windows Service now also has a 10MB limit.
The default initialize timeout for modules is 30 seconds. The C# module might require more time to compile the code on machines with poor performance; therefore the initialize timeout for the C# module was increased to 2 minutes as of 2.6.0.
When a Flow message had large byte arrays as properties, cloning the Flow message was unnecessarily slow and allocated memory inefficiently. This is now fixed.
In previous versions of the Node, Remote Sessions could not use WebSockets as transport behind an HTTP proxy. This has now been fixed. Some HTTP proxies still do not allow WebSocket traffic, but in most cases you can now use the standard transport for Remote Sessions.
If an HTTP client started sending data to the Node but did not send all of it, the Node would wait indefinitely for the remaining data. As of 2.6.0 the Node terminates the HTTP connection if no data is received within 30 seconds.
An invalid cast to Int32 prevented metrics from having values above 2147483647. This could cause negative values for network traffic and message counts in Crosser Cloud.
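The effect of such a cast can be illustrated in Python (illustrative only; the Node itself is not written in Python): interpreting a counter that has grown past Int32.MaxValue as a signed 32-bit integer wraps it into the negative range.

```python
import ctypes

# A metrics counter that has grown past Int32.MaxValue (2147483647)
bytes_sent = 3_000_000_000

# An invalid 32-bit cast wraps the value into the negative range
wrapped = ctypes.c_int32(bytes_sent).value
print(wrapped)   # -1294967296

# A 64-bit counter keeps the real value
correct = ctypes.c_int64(bytes_sent).value
print(correct)   # 3000000000
```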