CF App Cutting Transfers Off #768
-
Hello, I've been running file transfers between the CF app and a ground CFDP entity, and things mostly look good. However, when I tried a larger file transfer, the CF app stopped sending out PDUs after a certain size. Some analysis shows the CF app consistently stops sending after about 944 PDUs, and the count carries across transfers: if one transfer sends ~900 PDUs and I then start another, the second transfer stops after ~44 PDUs. I tried changing configuration parameters in cf_platform_cfg.h (increased CF_NAK_MAX_SEGMENTS and CF_R2_CRC_CHUNK_SIZE), but that didn't help. Does anyone know of other config parameters that may help with this situation, or is there a known bug? Thank you.
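For reference, the changes I tried in cf_platform_cfg.h looked roughly like this (the values shown are illustrative, not the shipped defaults):

```c
/* cf_platform_cfg.h -- excerpt; values below are illustrative only */
#define CF_NAK_MAX_SEGMENTS   116   /* increased from the default */
#define CF_R2_CRC_CHUNK_SIZE  2048  /* increased from the default */
```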
-
We regularly run scenarios where CF generates ~60,000 PDUs with no identified issues (class 1 via the polling directory, all on a single channel). It looks like you might be using class 2, though? Can you switch to class 1 and see if it works? I'm not as familiar with class 2, but could you detail the flow of PDU types in each direction to confirm the ground entity is providing the expected responses?
So, this was a me problem. Inspired by a discussion on the CF app, I made changes to remove the Software Bus from the app and use POSIX pipes instead. Those changes have been quite helpful to transfer robustness.
But I created a bug: a message buffer was still being allocated when producing the output message, but CFE_SB_ReleaseMessageBuffer(CF_AppData.engine.out.msg); was never called. I assume the release normally happens as part of CFE_SB_TransmitBuffer(CF_AppData.engine.out.msg, true);, which my changes no longer invoke.
Releasing the buffer after sending it into the pipe resolved the problem.
Thank you for the responses and help.