
OPC UA C++ for Linux FAQs

The BadHostUnknown error indicates that the hostname cannot be resolved: either the hostname is incorrect or DNS is not configured properly, so the operating system cannot resolve it. In such cases a "ping <computer name>" will usually fail as well.

As a workaround, configure the session with a URL that contains the IP address, or fix the name resolution (e.g. configure DNS or edit the "hosts" file).

Write operations to some third-party servers are rejected with the status "EnumStatusCode_BadWriteNotSupported".

Not all OPC UA Servers support writing timestamps. This is not a bug in the Softing OPC UA C++ SDK or the third-party Server; returning this status is the official way for a Server to indicate that it does not support writing timestamps.

To avoid this error, do not call DataValue::setServerTimestamp() (and possibly not DataValue::setSourceTimestamp() either) on the DataValue to be written; alternatively, call these methods with an empty DateTime variable to clear any existing timestamps.
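As a hedged sketch (the value payload is a placeholder and the surrounding session/write plumbing is omitted; only the timestamp handling follows the DataValue methods named above):

```cpp
// Sketch: prepare a DataValue for a write to a server that rejects
// timestamps. Session setup and the actual write call are omitted.
DataValue writeValue;
Value value;
value.setDouble(42.0);            // hypothetical payload
writeValue.setValue(value);

// Assign an empty (unset) DateTime to clear any existing timestamps;
// alternatively, simply never call the timestamp setters at all.
DateTime emptyTime;
writeValue.setSourceTimestamp(emptyTime);
writeValue.setServerTimestamp(emptyTime);
```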

When dealing with big data (many variables to be read, written or subscribed), there are three general aspects to consider:

- Message Size
- Timing
- Resources

All OPC UA services are designed to handle requests and responses for multiple nodes (like read, write and subscribe), and many of them can contain optional data.

In general, handling more nodes within one service call (i.e. bigger messages) causes less message overhead and better performance than several smaller service calls.

On the other hand, be aware that the OPC UA stacks have a maximum message size; e.g. the C Stack of the OPC UA C++ SDK has a fixed limit of 16 MB.
Trying to send or receive bigger messages will produce errors. Limit the message size by splitting overly large service calls into several smaller ones or via specific configuration options (like Subscription::setMaxItemsPerPublish() or Application::setMaxMonitoredItemsPerService()).

Splitting operations into several smaller ones might increase the total required time, but it can help to prevent timeout problems, as every single service call can be processed faster.

Regarding the optional data, strip out information that is not required to reduce the message size and increase the speed.
For example, when reading values you can reduce the size of each transported value by 16 bytes by not requesting the Server- and SourceTimestamps of the values.

The more data is transferred, the longer it takes to process. Have a look at the configured service timeouts; they might need to be increased to prevent timeouts.

The client itself shouldn't run into resource problems, as a client usually does not maintain large value caches or monitors, but it influences the server's resources, such as memory and CPU usage.

Theoretically the server should define proper limits to prevent resource problems, but the client should also consider proper usage.
For example, the subscription service requires some memory for buffering and some CPU for observation at the Server, whereas the read service only needs to copy the current values once but cannot deliver only the changed values.
Use subscriptions only for regularly changing nodes or for data that must be received as soon as possible (like alarms and events).
Prefer the read service for values that only need to be retrieved occasionally.


The error EnumStatusCode_BadSequenceNumberInvalid occurs when a publish notification is missing or one publish notification overtook a previous notification. This can happen on networks, for example when packets are sent over different network routes or when a packet is lost completely. For such situations the SDK provides an automatic republishing feature, which can be enabled via Client::Subscription::setRePublishingEnabled(). The republishing service requests the server to resend the missing publish responses, and the SDK then ensures that the publish responses are reported to the API in the correct order. If an application is only interested in the newest notifications, the error can simply be ignored.
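A minimal sketch of enabling this feature (the subscription object and its session are assumed to already exist):

```cpp
// Sketch: let the SDK request missing publish responses via the
// republish service and deliver notifications in the correct order.
// 'subscription' is an existing Client::Subscription instance.
subscription->setRePublishingEnabled(true);
```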

Debug assertions during shutdown indicate that a toolkit class instance is still alive during unloadToolbox(). If toolkit class instances are destroyed after unloadToolbox(), this can cause memory leaks or crashes (access to already freed resources).

Make sure that all toolkit instances are destroyed before unloadToolbox().

If you use toolkit classes as global variables, consider replacing them with pointers that are assigned after loadToolbox() and released before unloadToolbox().

Application::stop() and Application::uninitialize() should also be called before calling unloadToolbox().
Application::stop() closes the endpoints, which also closes the client connections.
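The resulting teardown order can be sketched as follows (error handling omitted; 'application' stands for your Application instance):

```cpp
// Sketch: recommended shutdown order before unloading the toolkit.
application->stop();           // closes endpoints and client connections
application->uninitialize();

// ... release all remaining toolkit object references here, e.g. reset
// any (smart) pointers that replaced global toolkit variables ...

unloadToolbox();               // only after every toolkit instance is gone
```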


Our SDK's basic architecture is platform independent. With the same code basis we support three operating systems as reference implementations:

- Windows
- Linux
- VxWorks

The only difference between the three is a small platform abstraction layer, so the SDK can easily be ported to various other operating systems and hardware platforms.
For our SDK you have the option to buy a source code license, which makes it easily portable to other platforms. The make files are also prepared for use with cross-compilers, and the code itself is written in a platform-independent manner.

Additionally, Softing can offer a small integration project for a specific operating system and hardware platform as well.
The prerequisites for such an integration project are:

- access to the installed build tool chain (cross-compiler), with its usage exemplified by a sample,
- access to a target platform for executing the tests, and
- a technical contact to assist us with details regarding the build tool chain and test platform.

The integration project consists of generating the SDK binaries for the target platform, building the client & server test applications to run on this platform and executing the system tests on the target platform. Potential issues found during tests will be addressed and fixed by Softing. In case you’re interested in an integration project, please contact our sales department [email protected] regarding the commercial aspects.

If you have the source code of the toolkit, you can download the desired OpenSSL ".tar.gz" file to <InstallDir>/Source/Core/OpenSSL, move or remove the prior ".tar.gz" file from that folder and build the toolkit with the different OpenSSL version (see OpenSSL Functionality). The toolkit should be compatible with most OpenSSL versions; otherwise it will report compilation errors.
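On Linux, the archive swap might look like this (the version numbers are placeholder examples, and <InstallDir> stands for your actual installation directory):

```shell
cd <InstallDir>/Source/Core/OpenSSL
# move the previous archive out of the way (version is an example)
mv openssl-1.0.2k.tar.gz ~/openssl-backup/
# drop in the desired version (version is an example)
cp ~/Downloads/openssl-1.0.2u.tar.gz .
```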

If you have the binary version of the Windows toolkit, the only problem is that the two different OpenSSL DLLs typically have the same names. To solve this, you can rename the libraries libeay32.dll and ssleay32.dll to names of equal length, e.g. libeayua.dll and ssleayua.dll. Then open the DLLs TB5STACK.dll (or TB5STACKx64.dll) and the renamed ssleay32.dll in an editor, search for the original DLL names and replace them with the new names.

Note: This only works if the old and new DLL names have the same length; changing the size of the DLL content will cause problems!


By default it is not possible to run several applications in the same process; the limitation is caused by several singletons which are accessible from the entire process.

The closest approach is to use several Client::Sessions or Server::Endpoints and configure them with individual application instance certificates (see Client::Session::setInstanceCertificate() and Server::Endpoint::setInstanceCertificate()); however, all will share the same ApplicationDescription and all servers will share the same address space.

Note: The application instance certificates shall have the same ApplicationUri as the ApplicationDescription, thus all certificates must use the same ApplicationUri.

Another possible approach to run several applications in one process is to separate the different applications into several DLLs (Windows) or shared objects (Linux).

Different DLLs or shared objects have separate code and don't conflict even if they use symbols with the same name, but it is important to link the SDK as a static library into the DLLs or shared objects to get duplicate instances.

Note: For Windows the binaries are only available as DLLs. A source code license is needed to build static libraries for Windows. Please contact support for additional help on how to configure projects for static libraries.

Question: Server Monitored Items may report invalid initial values when the related nodes are updated cyclically.

Reply: A cyclic update usually means that providing the initial value to the node can be slightly delayed, so an initial NULL value may be reported.

You can override the method Server::Subscription::onProvideInitialValues() to configure the related Server Monitored Items to take the initial value from the next data change instead of using the current cache value.
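A sketch of such an override; the method signature and the per-item setter are assumptions for illustration only, so check the SDK headers for the exact interface:

```cpp
// Sketch only: the signature and setter name are assumptions,
// not the verified API.
class MySubscription : public Server::Subscription
{
    void onProvideInitialValues(
        const std::vector<Server::MonitoredItem*>& monitoredItems) // assumed signature
    {
        for (Server::MonitoredItem* item : monitoredItems)
        {
            // hypothetical setter: report the next data change as the
            // initial value instead of the current (possibly NULL) cache value
            item->setInitialValueFromNextChange(true);
        }
    }
};
```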

 
