+ Extensible Connection-oriented Messaging (XCM)
Data structures: xcm_addr_host, xcm_addr_ip
Files:
- xcm.h: This file contains the core Extensible Connection-oriented Messaging (XCM) API.
- xcm_addr.h: This is an API for building and parsing Connection-oriented Messaging (XCM) addresses.
- xcm_attr.h: This file contains the XCM attribute access API. See Socket Attributes for an overview.
- xcm_attr_types.h: This file contains type definitions for the XCM attribute access API.
Macros:
- XCM_VERSION_API_MAJOR 0: The XCM API/ABI major version this library version implements.
- XCM_VERSION_API_MINOR 22: The XCM API/ABI minor version this library version implements.
- XCM_VERSION_API "0.22": The complete XCM API version in string format.
Macros:
- XCM_VERSION_MAJOR 1: The XCM library major version.
- XCM_VERSION_MINOR 7: The XCM library minor version.
- XCM_VERSION_PATCH 0: The XCM library patch version.
- XCM_VERSION "1.7.0": The complete XCM library version in string format.
This is the documentation for the Extensible Connection-oriented Messaging (XCM) programming APIs.
XCM consists of three parts: the core API in xcm.h, an address helper library API in xcm_addr.h, and the attribute access API in xcm_attr.h.
+ +XCM provides a connection-oriented, reliable messaging service with in-order delivery. The design goal is to allow a straight-off mapping to TCP and TLS, but also allow more efficient transport for local communication.
+XCM reuses much of the terminology (and semantics) of the BSD Sockets API.
XCM has a client-server model. A server creates a server socket (with xcm_server()) bound to a specific address (in the case of TCP or TLS, a TCP port on a particular IP interface), after which clients may initiate connections to the server. On a successful attempt, two connection sockets will be created: one on the server side (returned from xcm_accept()), and one on the client side (returned from xcm_connect()). Thus, a server serving multiple clients will have multiple sockets: one server socket and N connection sockets, one for each client. A client will have one connection socket for each server it is connected to.
+Messages are always sent and received on a particular connection socket (and never on a server socket).
+In-order delivery - that messages arrive at the receiver in the same order they were sent by the sender side - is guaranteed, but only for messages sent on the same connection.
XCM transports support flow control. Thus, if the sender message rate or bandwidth is higher than the network or the receiver can handle on a particular connection, xcm_send() in the sender process will eventually block (or return an error with errno set to EAGAIN, if in non-blocking mode). Unless XCM is used for bulk data transfer (as opposed to signaling traffic), xcm_send() blocking because of a slow network or a slow receiver should in practice be rare. The TCP, TLS, and UNIX domain socket transports all have large enough windows and socket buffers to allow a very large amount of outstanding data.
+In XCM, the application is in control of which transport will be used, with the address supplied to xcm_connect() and xcm_server() including both the transport name and the transport address.
However, there is nothing preventing an XCM transport from using a more abstract addressing format, and internally including multiple "physical" IPC transport options. This model is used by the UTLS Transport.
Addresses are represented as strings with the following general syntax: <transport-name>:<transport-address>
For the UX UNIX Domain Socket transport, the addresses have this more specific form:

ux:<UNIX domain socket name>
The addresses of the UXF UNIX Domain Socket transport variant follow this format:

uxf:<file system path>
For the TCP, TLS, UTLS and SCTP transports the syntax is:

<transport-name>:(<DNS domain name>|<IP address>):<port>
'*' is a shorthand for '0.0.0.0' (=bind to all IPv4 interfaces). '[*]' is the IPv6 equivalent.
For example:

tcp:*:4711 (a server socket bound to all IPv4 interfaces, TCP port 4711)
tls:192.168.1.42:4711
ux:my_local_server_socket
For TCP, TLS, UTLS and SCTP server socket addresses, the port can be set to 0, in which case XCM (or rather, the Linux kernel) will allocate a free TCP port.
For transports allowing a DNS domain name as a part of the address, the transport will attempt to resolve the name to an IP address. A DNS domain name may resolve to zero or more IPv4 addresses and/or zero or more IPv6 addresses. XCM relies on the system's configuration to prioritize between IPv4 and IPv6.
XCM accepts IPv4 addresses in the dotted-decimal format. Only complete addresses with three '.' separators are allowed, not the archaic, classful forms, where some bytes were left out and the address thus contained fewer separators.
XCM transports attempt to detect a number of conditions which can lead to lost connectivity, and do so even on idle connections.
If the remote end closes the connection, the local xcm_receive() will return 0. If the process on the remote end crashed, xcm_receive() will return -1 and set errno to ECONNRESET. If network connectivity to the remote end is lost, xcm_receive() will return -1 and errno will be set to ETIMEDOUT.
In general, XCM follows the UNIX system API tradition when it comes to error handling. Where possible, errors are signaled to the application by using unused parts of the value range of the function return type. For functions returning signed integer types, this means the value of -1 (in case -1 is not a valid return value). For functions returning pointers, NULL is used to signal that an error has occurred. For functions where neither -1 nor NULL can be used, or where the function does not return anything (side-effect only functions), an 'int' is used as the return type, purely to signal success (value 0) or an error (-1) to the application.
+The actual error code is stored in the thread-local errno variable. The error codes are those from the fixed set of errno values defined by POSIX, found in errno.h. Standard functions such as perror() and strerror() may be used to turn the code into a human-readable string.
In non-blocking operation, given that the actual transmission might be deferred (and the message buffered in the XCM layer), and that message receive processing might happen before the application has called receive, the error signaled at the point of a certain XCM call might not be a direct result of the requested operation, but rather an error discovered previously.
The documentation for xcm_finish() includes a list of generic error codes, applicable to xcm_connect(), xcm_accept(), xcm_send() and xcm_receive().
+Also, for errors resulting in an unusable connection, repeated calls will produce the same errno.
+In UNIX-style event-driven programming, a single application thread handles multiple clients (and thus multiple XCM connection sockets) and the task of accepting new clients on the XCM server socket concurrently (although not in parallel). To wait for events from multiple sources, an I/O multiplexing facility such as select(2) or poll(2) is used.
+XCM supports this programming model. However, due to the extensive user space state/buffering required for some XCM transports, and the weak correlation between fd read/write state and actual XCM-level message send/receive that follows, XCM is forced to deviate from the BSD Sockets semantics in this regard.
XCM allows the application to use select() and poll() by direct calls, or via any of the many event-loop libraries. For simplicity, select(), being the most well-known of these options, is used in this documentation to denote the whole family of POSIX I/O multiplexing facilities.
+An event-driven application will set the XCM sockets it handles into non-blocking mode (xcm_set_blocking() or the XCM_NONBLOCK flag to xcm_connect()).
For XCM sockets in non-blocking mode, all potentially blocking API calls related to XCM connections (xcm_connect(), xcm_accept(), xcm_send(), and xcm_receive()) finish immediately.
+Many such potentially blocking calls will finish immediately and with success. For xcm_send(), xcm_connect() and xcm_accept(), XCM signaling success means that the XCM layer has accepted the request. It may or may not have completed the request.
In case the XCM_NONBLOCK flag is set in the xcm_connect() call, or in case an XCM server socket is in non-blocking mode at the time of an xcm_accept() call, the newly created XCM connection returned to the application may be in a semi-operational state, with some internal processing and/or signaling with the remote peer still required before actual message transmission and reception may occur.
+The application may attempt to send or receive messages on such semi-operational connections.
+There are ways for an application wishing to know when connection establishment or the task of accepting a new client have finished to do so. See Finishing Outstanding Tasks for more information.
+To receive a message on a XCM connection socket in non-blocking mode, the application may wait for the right conditions to arise, by means of calling xcm_want() with the XCM_SO_RECEIVABLE flag set. When select() signals that these conditions are true, the application should issue xcm_receive() to attempt to retrieve a message.
xcm_receive() may also be called on speculation, prior to any xcm_want() call, to poll the socket for incoming messages.
An XCM connection socket may buffer a number of messages, and thus the application should, for optimal performance, repeat xcm_receive() until it returns an error with errno set to EAGAIN. However, an application may choose to call xcm_want() with XCM_SO_RECEIVABLE set; in that case, if there are buffered messages, the xcm_want() call will return 0, signaling that nothing needs to happen before the application can receive a message.
Similar to receiving a message, an application may use xcm_want() to wait for the right conditions to allow the transmission of a message. Just like with xcm_receive(), it may also choose to issue an xcm_send() call on speculation, falling back to xcm_want() and select() only when XCM is unable to accept a new message. XCM signals this case by having xcm_send() return an error with errno set to EAGAIN.
+For send operations on non-blocking connection sockets, XCM may buffer whole or part of the message before transmission to the lower layer. This may be due to socket output buffer underrun, or the need for some in-band signaling, like security keys exchange, to happen before the transmission of the complete message may finish. The XCM layer will (re-)attempt to hand the message over to the lower layer at a future call to xcm_finish(), xcm_send(), or xcm_receive().
An application should never attempt to draw any conclusions directly based on the state of the fd or fds used by the XCM socket. The fds may be readable, and yet there may be no message to read from XCM; or they may not be readable, and yet there might be one or several messages buffered in the XCM layer. The same lack of correlation holds true for xcm_send() and the writable/non-writable fd state. In addition, XCM may also use file descriptors for other purposes.
An application wishing to know when any outstanding message transmission has finished may use xcm_finish() to do so. Normally, applications aren't expected to require this kind of control. Please also note that the fact that a message has left the XCM layer doesn't necessarily mean it has been successfully delivered to the recipient.
xcm_connect(), xcm_accept() and xcm_send() may all leave the connection in a state where work is initiated, but not completed. In addition, the transport may also be busy with internal tasks, such as filling its internal buffer with incoming messages, being involved in a key exchange operation (TLS handshake), or keep-alive message transmission or reception.
Prior to the select() call, the application must query any XCM connection or server socket it has in non-blocking mode, asking it what events it is waiting for, and on what file descriptor. This is true even if the application wants neither to send nor receive (on a connection socket), nor to accept incoming connections (on a server socket).
+The file descriptor, and the type of event, may change if the application issues any xcm_* calls on that connection. Easiest for the application is likely to query the connection socket immediately prior to each and every select() call.
+After waking up from a select() call, where the conditions required by a non-blocking XCM socket are met, the application must, if no xcm_send(), xcm_receive() or xcm_accept() calls are to be made, call xcm_finish(). This is to allow the socket to finish any outstanding tasks, even in the face of an application having no immediate further use of the socket.
+The query is made with xcm_want(), and it returns an array of file descriptors and, for each fd, the event type(s) the socket is interested in for that fd.
In case the XCM socket has any such needs, the application should wait until the conditions are met (by means of select()). Once the conditions are met, the application may continue to use the socket.
+Prior to changing a socket from non-blocking to blocking mode, any outstanding tasks must be finished.
There might be situations where the fd or fds tied to an XCM connection are marked (by select()) with the appropriate ready status (typically, but not always, write) for an xcm_send() operation to succeed, but a send may still block (or fail with EAGAIN, if in non-blocking mode). One such situation is that there is indeed socket buffer space available, but not enough to fit the complete message.
+The same situation may arise for xcm_receive(). Even though the fd tied to a XCM connection is marked with the appropriate ready status for a message to be received, a xcm_receive() may fail, since the complete message has not yet arrived.
Thus, an application must never assume that an xcm_send() or xcm_receive() in blocking mode won't block, and similarly must never assume that a send or receive operation won't fail with EAGAIN, regardless of fd status.
+See Waiting for Read May Mean Waiting for Write for other reasons that a send or receive may always potentially block.
+XCM is designed to allow transports where all the processing is done in the application's thread of control (i.e. no separate OS threads or processes for a connection to do whatever in-band signaling is required for handling retransmission, dead peer detection, key exchange etc). One transport involving a lot of this kind of processing is the TLS Transport.
+For sockets in blocking mode, this complexity is hidden from the application (except in the form of message reception or transmission latency jitter).
For event-driven applications, with their XCM connections in non-blocking mode, this has a non-obvious effect: in order to receive a message, the XCM transport may ask the application to have its thread wait (with select()) for the connection's fd to be marked writable. This is because in order to receive the message, the transport may need to complete some in-band signaling. For example, it may require new keys for encrypting outgoing messages, since the old ones have expired.
The other way around may also be true: in order to send a message, the transport may need the application to wait for the fd to become readable (since it needs to receive some signaling message from the remote peer in order to proceed).
The same holds true for the accept operation on server sockets: in order to accept an incoming request, the transport may ask the application to wait for the fd to become writable.
In this example, the application connects and immediately tries to send a message. This may fail (for example, in case the TCP and/or TLS-level connection establishment has not yet been completed), in which case the application will fall back and wait with the use of xcm_want() and select().
+In case the application wants to know when the connection establishment has finished, it may use xcm_finish() to do so, like in the below example sequence.
+While connecting to a server socket, the client's connection attempt may be refused immediately.
In many cases, the application is handed a connection socket before the connection establishment is completed. Any errors occurring during this process are handed over to the application at the next XCM call, be it xcm_finish(), xcm_send() or xcm_receive().
In this example, the application runs into a situation where the requested operation may be performed immediately (since XCM already has a buffered message).
+In this example the application flushes any internal XCM buffers before shutting down the connection, to ensure that any buffered messages are delivered to the lower layer.
In this sequence, a server accepts a new connection, and continues to attempt to receive a message on this connection, while still, concurrently, being ready to accept more clients on the server socket.
+Tied to an XCM server or connection socket is a set of read-only key-value pairs known as attributes. Which attributes are available varies across different transports, and different socket types.
The attribute names are strings, and follow a hierarchical naming schema. For example, all generic XCM attributes, expected to be implemented by all transports, have the prefix "xcm.". Transport-specific attributes are prefixed with the transport or protocol name (e.g. "tcp." for TCP-specific attributes applicable to the TLS and TCP transports).
+The attribute value is coded in the native C data types and byte order. Strings are NUL-terminated, and the NUL character is included in the length of the attribute. There are three value types; a boolean type, a 64-bit signed integer type and a string type. See xcm_attr_types.h for details.
+The attribute access API is in xcm_attr.h.
+Retrieving an integer attribute may look like this (minus error handling):
Process-wide and/or read/write attributes may be supported in the future.
+These attributes are expected to be found on XCM sockets regardless of transport type.
+For TCP transport-specific attributes, see TCP Socket Attributes, and for TLS, see TLS Socket Attributes.
+Attribute Name | Socket Type | Value Type | Description |
---|---|---|---|
xcm.type | All | String | The socket type - "server" or "connection". |
xcm.transport | All | String | The transport type. |
xcm.local_addr | All | String | See xcm_local_addr(). |
xcm.remote_addr | Connection | String | See xcm_remote_addr(). |
xcm.max_msg_size | Connection | Integer | The maximum size of any message transported by this connection. |
XCM has a set of generic message counters, which keep track of the number of messages crossing a certain boundary for a particular connection, and the sum of their sizes.
Some of the message and byte counter attributes use the concept of a "lower layer". What this means depends on the transport. For the UX and TCP transports, it is the Linux kernel. For example, for TCP, if xcm.to_lower_msgs is incremented, it means that XCM has successfully sent the complete message to the kernel's networking stack for further processing. It does not mean the message has reached the receiving process. It may have, but it may also be sitting in the local or remote socket buffer, in a NIC queue, or be in transit in the network. For TLS, the lower layer is OpenSSL.
+All the "xcm.*_bytes" counters count the length of the XCM message payload (as in the length field in xcm_send()), and thus does not include any underlying headers.
The message counters only count messages successfully sent and/or received.
+Attribute Name | Socket Type | Value Type | Description |
---|---|---|---|
xcm.from_app_msgs | Connection | Integer | Messages sent from the application and accepted into XCM. |
xcm.from_app_bytes | Connection | Integer | The sum of the size of all messages counted by xcm.from_app_msgs. |
xcm.to_app_msgs | Connection | Integer | Messages delivered from XCM to the application. |
xcm.to_app_bytes | Connection | Integer | The sum of the size of all messages counted by xcm.to_app_msgs. |
xcm.from_lower_msgs | Connection | Integer | Messages received by XCM from the lower layer. |
xcm.from_lower_bytes | Connection | Integer | The sum of the size of all messages counted by xcm.from_lower_msgs. |
xcm.to_lower_msgs | Connection | Integer | Messages successfully sent by XCM into the lower layer. |
xcm.to_lower_bytes | Connection | Integer | The sum of the size of all messages counted by xcm.to_lower_msgs. |
XCM includes a control interface, which allows iteration over the OS instance's XCM server and connection sockets (for processes with the appropriate permissions), and access to their attributes (see Socket Attributes).
+The control interface is optional by means of build-time configuration.
+For each XCM server or connection socket, there is a corresponding UNIX domain socket which is used for control signaling (i.e. state retrieval).
By default, the control interface's UNIX domain sockets are stored in the /run/xcm/ctl directory. This directory needs to be created prior to running any XCM applications (for the control interface to work properly) and should be writable for all XCM users.
A particular process using XCM may be configured to use a non-default directory for storing the control interface's UNIX domain sockets by setting the XCM_CTL environment variable. Please note that using this setting will cause the process' XCM connections not to be visible globally on the OS instance (unless all other XCM-using processes also use this non-default directory).
The XCM socket state the control interface allows access to (i.e. the attributes) is owned by the various processes in the system using the XCM library. Thus, to avoid synchronization issues, the control interface is driven by the application's thread(s), although the application is kept unaware of this fact.
If the control interface is enabled, some of the file descriptors returned to the application (in xcm_want()) are not tied to the data interface (i.e. xcm.h and the messaging I/O), but rather to the control interface.
The control interface uses one file descriptor for a UNIX domain server socket, and zero or more fds for any attached control interface clients.
+Generally, since the application is left unaware (from an API perspective) from the existence of the control interface, errors are not reported up to the application. They are however logged.
Application threads owning XCM sockets, but which are busy with non-XCM processing for a long duration of time, or which otherwise leave their XCM sockets unattended (in violation of the XCM API contract), will not respond on the control interface's UNIX domain sockets (corresponding to their XCM sockets). Only the presence of these sockets may be detected; their state cannot be retrieved.
Internally, the XCM implementation has a control interface client library, but this library's API is not public at this point.
XCM includes a command-line program xcmctl, which uses the Control API to iterate over the system's current XCM sockets, and allows access (primarily for debugging purposes) to the sockets' attributes.
Unlike BSD sockets, an XCM socket may not be shared among different threads without synchronization external to XCM. With proper external serialization, a socket may be shared by different threads in the same process, although this might prove difficult in practice, since a thread in a blocking XCM function will continue to hold the lock, and thus prevent other threads from accessing the socket at all. For non-blocking sockets, the contract of xcm_want() may be broken, insofar as the conditions a thread is waiting for may change if another thread calls into that connection socket.
+It is however safe to "give away" a XCM socket from one thread to another, provided the appropriate memory fences are used.
+These limitations (compared to BSD Sockets) are in place to allow socket state outside the kernel (which is required for TCP framing and TLS).
+Sharing a XCM socket between threads in different processes is not possible.
After a fork() call, either of the two processes (the parent, or the child) must be designated the owner of every XCM socket the parent owned.
+The owner may continue to use the XCM socket normally.
The non-owner may not make any other XCM API call than xcm_cleanup(), which frees local memory tied to this socket in the non-owner's process address space, without impacting the connection state in the owner process.
The core XCM API functions are oblivious to the transports used. However, the support for building and parsing addresses (which some applications are expected to do) is available only for a pre-defined set of transports. There is nothing preventing xcm_addr.h from being extended, and also nothing preventing an alternative XCM implementation from including more transports without touching the address helper API.
+The UX transport uses UNIX Domain (AF_UNIX, also known as AF_LOCAL) Sockets.
UX sockets may only be used within the same OS instance (or, more specifically, between processes in the same Linux kernel network namespace).
UNIX Domain Sockets come in a number of flavors, and XCM uses the SOCK_SEQPACKET variety. SOCK_SEQPACKET sockets are connection-oriented, preserve message boundaries and deliver messages in the same order they were sent; this perfectly matches XCM semantics and provides for a near-trivial mapping.
+UX is the most efficient of the XCM transports.
The standard UNIX Domain Sockets as defined by POSIX use the file system as their namespace, with the sockets also being files. However, for simplicity and to avoid situations where stale socket files (originating from crashed processes) cause problems, the UX transport uses a Linux-specific extension, allowing a private UNIX Domain Socket namespace. This is known as the abstract namespace (see the unix(7) man page for details). With the abstract namespace, a server socket address allocation has the same lifetime as a TCP port (i.e. if the process dies, the address is freed).
The UX transport enables the SO_PASSCRED BSD socket option, to give the remote peer a name (which a UNIX domain connection socket doesn't have by default). This is for debugging and observability purposes. Without a remote peer name, in server processes with multiple incoming connections to the same server socket, it's difficult to say which of the server-side connection sockets goes to which remote peer. The kernel-generated, unique name is an integer in the form "%05x" (printf format). Applications using hardcoded UX addresses should avoid such names by, for example, using a prefix.
+The UTLS Transport also indirectly uses the UX namespace, so care should be taken to avoid any clashes between UX and UTLS sockets in the same network namespace.
+The UXF transport is identical to the UX transport, only it uses the standard POSIX naming mechanism. The name of a server socket is a file system path, and the socket is also a file.
UXF sockets reside in a file system namespace, as opposed to UX sockets, which live in a network namespace.
Upon xcm_close(), the socket will be closed and the file removed. If an application crashes or otherwise fails to run xcm_close(), it will leave a file in the file system pointing toward a non-existing socket. This file will prevent the creation of another server socket with the same name.
+The TCP transport uses the Transmission Control Protocol (TCP), by means of the BSD Sockets API.
+TCP is a byte-stream service, but the XCM TCP transport adds framing on top of the stream. A single-field 32-bit header containing the message length in network byte order is added to every message.
The TCP transport uses TCP keepalive to detect lost network connectivity between the peers.
+The TCP transport supports IPv4 and IPv6.
+Since XCM is designed for signaling traffic, the TCP transport disables the Nagle algorithm of TCP to avoid its excessive latency.
+The TCP attributes are retrieved from the kernel (struct tcp_info in linux/tcp.h). See the tcp(7) manual page, and its section on the TCP_INFO socket option.
+Attribute Name | Socket Type | Value Type | Description |
---|---|---|---|
tcp.rtt | Connection | Integer | The current TCP round-trip estimate (in us). |
tcp.total_retrans | Connection | Integer | The total number of retransmitted TCP segments. |
tcp.segs_in | Connection | Integer | The total number of segments received. |
tcp.segs_out | Connection | Integer | The total number of segments sent. |
tcp.segs_in and tcp.segs_out are only present when running XCM on Linux kernel 4.2 or later.

The TLS transport uses TLS to provide a secure, private, two-way authenticated transport.
+TLS is a byte-stream service, but the XCM TLS transport adds framing in the same manner as does the XCM TCP transport.
+The TLS transport supports IPv4 and IPv6.
+The TLS transport disables the Nagle algorithm of TCP.
The TLS transport expects the certificate, trust chain and private key files to be found in a file system directory - the certificate directory. The default path is configured at build-time, but can be overridden on a per-process basis by means of a UNIX environment variable. Prior to creating any TLS or UTLS sockets (typically before program start), set XCM_TLS_CERT to change the certificate directory.
The TLS transport will, at the time of XCM socket creation (xcm_connect() or xcm_server()), look up the process' current network namespace. In case the namespace is given a name per the iproute2 methods and conventions, XCM will retrieve this name and use it in the certificate and key lookup.
In the certificate directory, the TLS transport expects the certificate to follow the below naming convention (where <ns> is the namespace):

cert_<ns>.pem

The private key is stored in:

key_<ns>.pem

The trust chain is stored in:

tc_<ns>.pem
For the default namespace (or any other network namespace not named according to iproute2 standards), the certificate needs to be stored in a file "cert.pem", the private key in "key.pem" and the trust chain in "tc.pem".
In case the certificate, key or trust chain files are not in place (for a particular namespace), an xcm_server() call will return an error and set errno to EPROTO. The application may choose to retry at a later time.
+TLS has all the TCP-level attributes of the TCP transport; see TCP Socket Attributes.
+The UTLS transport provides a hybrid transport, utilizing both the TLS and UX transports internally for actual connection establishment and message delivery.
+On the client side, at the time of xcm_connect(), the UTLS transport determines if the server socket can be reached by using the UX transport (i.e. if the server socket is located on the same OS instance, in the same network namespace). If not, UTLS will attempt to reach the server by means of the TLS transport.
+For a particular UTLS connection, either TLS or UX is used (never both). XCM connections to a particular UTLS server socket may be a mix of the two different types.
In the UTLS transport, xcm_want() will return at least two file descriptors: one for the TCP BSD socket file descriptor used for TLS, and one for the UNIX domain socket. However, applications should not depend on this (or on the fact that other transports might return fewer).
For a UTLS server socket with the address utls:<ip>:<port>, two underlying addresses will be allocated: tls:<ip>:<port> and ux:<ip>:<port>.

Or, in case DNS is used: tls:<hostname>:<port> and ux:<hostname>:<port>.
+A wildcard should never be used when creating a UTLS server socket.
If a DNS hostname is used in place of the IP address, both the client and server need to employ DNS, and must also agree upon which hostname to use (in case there are several pointing at the same IP address).
+Failure to adhere to the above two rules will prevent a client from finding a local server. Such a client will instead establish a TLS connection to the server.
+The SCTP transport uses the Stream Control Transmission Protocol (SCTP). SCTP provides a reliable, message-oriented service. In-order delivery is optional, and to adhere to XCM semantics (and for other reasons) XCM leaves SCTP in-order delivery enabled.
+The SCTP transport utilizes the native Linux kernel's implementation of SCTP, via the BSD Socket API. The operating mode is such that there is a 1:1-mapping between an association and a socket (fd).
+The SCTP transport supports IPv4 and IPv6.
+To minimize latency, the SCTP transport disables the Nagle algorithm.
Namespaces is a Linux kernel facility for creating multiple, independent namespaces for kernel resources of a certain kind.
+Linux Network Namespaces will affect all transports, including the UX transport.
XCM has no explicit namespace support; rather, the application is expected to use the Linux kernel facilities for this functionality (i.e. switch to the right namespace before xcm_server() or xcm_connect()).
+In case the system follows the iproute2 conventions in regards to network namespace naming, the TLS and UTLS transports support per-network namespace TLS certificates and private keys.
+XCM, in its current form, does not support binding to a local socket before doing connect() - something that is possible with BSD Sockets, but very rarely makes sense.
XCM also doesn't have a sendmmsg() or recvmmsg() equivalent. Those could easily be added, and would provide some major performance improvements for applications that send or receive multiple messages on the same connection at the same time. *mmsg() equivalents have been left out because there are strong doubts that such applications exist.