The UX transport should allow for large(r) messages to be sent and received.
The UX transport uses AF_UNIX sockets of the SEQPACKET type to send and receive messages.
UX has a max message size of 65535 bytes. This is a hard-coded, administrative limit, which the underlying AF_UNIX limits can accommodate (and exceed) on most (if not all) Linux distributions.
net.core.wmem_max governs the actual maximum, and net.core.wmem_default controls the default. So, in case this limit is lowered, the XCM-advertised max of 65535 may not be attainable. However, most Linux distros set both these values to 212992.
In addition to this, on all kernels but the most recent ones, there is a soft maximum limit in the form of a linear kernel-level memory allocation hosting the message data. Under kernel memory fragmentation, such large linear buffer allocations may fail. For allocations in the 64 kB range this does not seem to happen in practice (YMMV); for larger allocations, the risk increases.
The new UX maximum should be derived from net.core.wmem_max, with an upper limit of 512 kB. In practice, on most systems, the actual limit will be somewhat lower. net.core.wmem_max caps the size an application may request, but the actual buffer limit will be 2x the requested value (a Linux quirk). The actual value can (and in this case, should) be read back via getsockopt(). However, the actual max value for a particular socket includes a kernel-internal header as well, which is currently 24 bytes.
XCM UX should advertise the actual max socket buffer size minus the header, and minus some extra header room, in case the header size changes in a future version of the kernel.
The rationale for a 512k XCM-level administrative limit is:
- It reduces the risk for applications performing stack allocations for message buffers on the basis of the value of the "xcm.max_msg" attribute.
- The XCM API is not designed for huge messages, and if such messages are to be supported in an efficient manner, the API needs to be extended. For blobs or files being sent, the user may be better off chopping up the data into smaller messages, or, better yet, using a byte-stream type transport.
- A very large limit (say, 1 GB or even 16 MB) may not be possible to implement in other transports, causing very large variations in size limit between the transports, which is undesirable.
With this change, the maximum UX message size will be dependent on run-time configuration.
The 512k limit will max out the UX message size on default-configured Linux systems (it will end up being ~416 kB). One can also argue for a 256 kB XCM UX administrative limit, which is a nice power of two and sits comfortably under the default net.core.wmem_max-imposed limit. With such a limit, and with the other transports going 64k -> 256k, one can continue to talk about a single XCM-level limit, although it's not 100% accurate in all cases.
No API/ABI changes are required for this feature's implementation.