Make picoquic_sample work properly under long delay scenarios #1594

Closed
KathyNJU opened this issue Dec 11, 2023 · 61 comments

@KathyNJU

KathyNJU commented Dec 11, 2023

The purpose of my experiment is to achieve normal file transfer under long delay scenarios (for example, an RTT of 2 min). As the default handshake_completion_timer was set to 30 s and the default maximum_idle_timer was set to 120 s, the first trial did not work. According to your guidance, I have made the following modifications.
Modification 1: Set the idle timeout parameter for the context (both on server side and client side)

...
// simply call the API (Modification 1)
// set_rtt is the delay introduced by netem; in our experiment it is 120 s
picoquic_set_default_idle_timeout(quic, (3*set_rtt));

picoquic_set_default_congestion_algorithm(quic, picoquic_bbr_algorithm);
...

Modification 2: set PICOQUIC_MICROSEC_HANDSHAKE_MAX (defined in picoquic_internal.h) to 360000000 (namely 3*set_rtt)

With that, the modifications to maximum_idle_timer and handshake_completion_timer should be complete. However, it still did not work.
I also noticed the remark in the sample: the server or client closes the connection if it remains inactive for more than 10 seconds. So I guess the handshake failed because the maximum allowed inactive time was too short. As a result, I made the following modification:
Modification 3: set PICOQUIC_MICROSEC_STATELESS_RESET_INTERVAL_DEFAULT (defined in picoquic_internal.h) to 3600000ull.

After making the above modifications, the handshake still did not complete successfully, and I am a little lost. Last time, you mentioned some parameters to overcome the flow control limits. Are these parameters relevant to the handshake?

@huitema
Collaborator

huitema commented Dec 11, 2023

@KathyNJU There are several tests of long delay behavior in the test suite. You may want to look at the tests in .\picoquictest\delay_tolerant.c, with tests for connections with long delays, e.g., 2 minutes or 20 minutes.

You do not need to recompile the code and change the value of PICOQUIC_MICROSEC_HANDSHAKE_MAX. This parameter is only used to set a default value for the "idle timeout" before it is negotiated during the handshake. Instead, you need to set the value of the idle timeout explicitly, using the API:

void picoquic_set_default_idle_timeout(picoquic_quic_t* quic, uint64_t idle_timeout);

You need to do that in both the client and the server, because the handshake will negotiate the smaller of the client and server proposals.

For the client, in picoquic_sample_client(), I would insert a line near the call to picoquic_set_default_congestion_algorithm. For the server, do the same in picoquic_sample_server().
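For illustration, a minimal sketch of that insertion (the timeout argument here is in milliseconds; 360000 ms, i.e. 3 times a 120 s RTT, is just an example value, not a recommendation):

/* Sketch: set a long idle timeout next to the existing congestion-control call,
 * in both picoquic_sample_client() and picoquic_sample_server(). */
picoquic_set_default_idle_timeout(quic, 360000); /* example: 3 * 120 s, in milliseconds */
picoquic_set_default_congestion_algorithm(quic, picoquic_bbr_algorithm);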

@KathyNJU
Author

Following your suggestion, I referred to delay_tolerant.c. I set up a basic topology through Mininet and ran the server-side and client-side commands on two nodes, so I haven't found a good way to directly call the tests in delay_tolerant.c. Instead, I mainly set some parameters according to it.

Based on dtn_basic_test(), I changed the parameters as follows:

picoquic_set_default_idle_timeout(quic, (5*2*1000*60));
picoquic_set_enable_time_stamp(quic, 3);
uint64_t initial_flow_control_credit = 20000000;
// update the flow control limits
picoquic_set_initial_max_data(quic, initial_flow_control_credit);
picoquic_set_initial_max_stream_data_bidi_local(quic, initial_flow_control_credit);
picoquic_set_initial_max_stream_data_bidi_remote(quic, initial_flow_control_credit);
picoquic_set_initial_max_stream_data_uni(quic, initial_flow_control_credit);

I repeated the experiment, and things got better. The initial retransmit timer is doubling as expected, and qlogs are being generated on the server side. However, the experiment was not a complete success.
According to qvis, the server could not successfully send ACK and Handshake packets after receiving the Initial packet from the client. There is also an Info message on the server side: "Session ticket could not be decrypted".

I think it could be for the following reasons:

  1. When the server receives the Initial packet, the corresponding connection has been closed (because it is mentioned in the remarks that if the client or server is inactive for 10s, the connection will be closed).
  2. The key used in the connection has been updated and the server does not have the correct key to decrypt (inferred from Info)
  3. There were other parameters that I didn't notice that led to the failure of the experiment.

I'd like to know your opinion on this issue.

@huitema
Collaborator

huitema commented Dec 13, 2023

@KathyNJU can you post the qlog that you collected on the server side? Also, did you apply the changes above to both the client and the server side?

@KathyNJU
Author

I applied the changes to both the client and the server side, and it didn't work. The qlogs are as follows:
Qlog.zip

@huitema
Collaborator

huitema commented Dec 14, 2023

The message "Session ticket could not be decrypted" happens because the client already had received a Session Resume Ticket from this server, probably during a previous attempt using a small RTT. So the client is trying to use 0RTT. The server does not have the decryption key for that ticket, so protests. In theory, the only effect would be to refuse the incoming 0-RTT packets. The handshake would continue, as verified in the unit test zero_rtt_spurious.

The qlog shows that no packet from the server reaches the client. This cannot be an issue with the QUIC code, since the logs also show that the server is attempting to send several packets to the client, and none arrives... The packets seem correct, and the DCID matches what the client expects. I think that these packets are dropped by an intermediate system. Maybe there is something wrong in the addressing and routing. Or maybe there is an issue in the link between the application and the simulator.

@KathyNJU
Author

KathyNJU commented Dec 14, 2023

I tried to change the RTT to 2 seconds just now, and the file transfer was successful. Therefore, I think it was some parameters that were not set properly, which led to the failure of the experiment. Later I will try more RTT values to see if the experiment can be successful.

I checked the latest picoquic-test-idle-timeout version. All the modifications in my test:

  1. both in client and server side:
...
picoquic_set_default_idle_timeout(quic, (5*2*60000));
picoquic_set_default_handshake_timeout(quic, (5*2*60000000));

picoquic_set_cookie_mode(quic, 2);
picoquic_set_default_congestion_algorithm(quic, picoquic_bbr_algorithm);
...
  2. in quicctx.c, I changed the default values:
void picoquic_init_transport_parameters(picoquic_tp_t* tp, int client_mode)
{
    memset(tp, 0, sizeof(picoquic_tp_t));
    // tp->initial_max_stream_data_bidi_local = 0x200000;
    // tp->initial_max_stream_data_bidi_remote = 65635;
    // tp->initial_max_stream_data_uni = 65535;
    // tp->initial_max_data = 0x100000;
    tp->initial_max_stream_data_bidi_local = 20000000;
    tp->initial_max_stream_data_bidi_remote = 20000000;
    tp->initial_max_stream_data_uni = 20000000;
    tp->initial_max_data = 20000000;
    // below are unchanged
    tp->initial_max_stream_id_bidir = 512;
    tp->initial_max_stream_id_unidir = 512;
    tp->max_idle_timeout = PICOQUIC_MICROSEC_HANDSHAKE_MAX/1000;
    tp->max_packet_size = PICOQUIC_PRACTICAL_MAX_MTU;
    tp->max_datagram_frame_size = 0;
    tp->ack_delay_exponent = 3;
    tp->active_connection_id_limit = PICOQUIC_NB_PATH_TARGET;
    tp->max_ack_delay = PICOQUIC_ACK_DELAY_MAX;
    tp->enable_loss_bit = 2;
    tp->min_ack_delay = PICOQUIC_ACK_DELAY_MIN;
    tp->enable_time_stamp = 0;
    tp->enable_bdp_frame = 0;
}

I reviewed the experiment. When the RTT is 2 minutes, the server cannot generate a qlog during the first few runs. (The generated qlog is as follows):
client_qlog.zip

At first I just thought there was some other error, so I shut down the server and immediately tried a few more times. Could that be the reason why "Session tickets could not be decrypted"? If so, what should be done?

@huitema
Collaborator

huitema commented Dec 14, 2023

The error "Session tickets could not be decrypted" happens because the client remembers tickets from a previous server run. These tickets are encrypted with a key. By default, the key is picked at random each time the server runs. To eliminate the error, you should either:

  • delete the "bin" file in which the client stores tickets before running it again,
  • or, make sure that all successive server runs use the same ticket encryption key.

@huitema
Collaborator

huitema commented Dec 15, 2023

The new client qlog confirms that the client is not receiving any packet sent by the server. This was already visible in the previous logs. At this point we know that:

  • the client sends data and waits for the specified timeout before closing the connection. In your example, the delay is set to 600,000 milliseconds, i.e., 10 minutes. The log shows that the client waited 600,003 milliseconds before closing the connection.
  • the server receives the data and sends responses.
  • if the delay is 2 seconds, the client receives the responses from the server and the connection works.
  • if the delay is set to 2 minutes and the timeout set to 10 minutes, the connection does not work.

Since the packets are sent by the server but not received by the client, something between the server and the client is dropping them. I suspect this is some kind of firewall. Many UDP firewalls open a "pin hole" when the client sends a packet, and leave the pinhole open for some time to receive the response. After a delay, they close it. If the RTT is long enough, packets arrive after that delay.

To verify that, you could record a PCAP of what the client sends and receives. You will probably see server packets arrive at the client's device, but you will not see the same packets in the qlog. You can test where the limit is by trying delays of 5, 10, 30, 60 seconds, etc.

If this really is a firewall issue, you should be able to work around it by explicitly specifying the UDP port for the client, and changing the call to picoquic_packet_loop from:

    /* Wait for packets */
    ret = picoquic_packet_loop(quic, 0, server_address.ss_family, 0, 0, 0, sample_client_loop_cb, &client_ctx);

to:

    /* Wait for packets */
    ret = picoquic_packet_loop(quic, client_port, server_address.ss_family, 0, 0, 0, sample_client_loop_cb, &client_ctx);
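(If your version of the sample does not already have a client_port variable, you can hard-code one for a quick test; the name and the value below are placeholders only.)

    /* Placeholder: a fixed local UDP port, so the firewall treats it like a server port. */
    int client_port = 4434;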

I had not heard of this problem before. You have just found something "interesting", worth publishing in, for example, a short blog post, or a communication to the QUIC or "deepspace" mailing lists. Or just ask me to do that, but I want to make sure you get credit for finding it.

@KathyNJU
Author

I changed the call to picoquic_packet_loop, but something still seems to be wrong. I plan to track down the problem by printing some status information (such as the UDP port) when the Initial packet is processed. Where should I add this print code?

I plan to try different RTTs as you said, and analyze the results in combination with Qlog and Wireshark. I have conducted some experiments and may have found some problems. When all is done, I will share the results with you and have a further discussion!

@huitema
Collaborator

huitema commented Dec 17, 2023

For the print code, you could print the "addr_from" and "addr_to" just before the call to "picoquic_incoming_packet_ex" in "sockloop.c", and the "peer_addr" and "local_addr" just after the call to "picoquic_prepare_next_packet_ex".
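For example, a small helper based on the standard getnameinfo() call could do the printing; the sketch below is only illustrative (the exact variable names and types in sockloop.c may differ, so adapt the calls accordingly):

#include <stdio.h>
#include <sys/socket.h>
#include <netdb.h>

/* Sketch of a debug helper: print a socket address as [host]:port.
 * Call it, e.g., just before picoquic_incoming_packet_ex():
 *   print_sockaddr("addr_from", (struct sockaddr*)&addr_from, sizeof(addr_from));
 * and just after picoquic_prepare_next_packet_ex() for peer_addr and local_addr. */
static void print_sockaddr(const char* label, const struct sockaddr* sa, socklen_t sa_len)
{
    char host[256];
    char serv[32];

    if (getnameinfo(sa, sa_len, host, sizeof(host), serv, sizeof(serv),
            NI_NUMERICHOST | NI_NUMERICSERV) == 0) {
        printf("%s = [%s]:%s\n", label, host, serv);
    }
}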

@KathyNJU
Author

OK, I'll add print code for debugging later.

Recently, I conducted simulations with RTTs of 5 s, 10 s, 15 s, 20 s, and 30 s respectively. The specific results are presented in the PDF file in the zip archive. The relevant images, qlogs, and Wireshark packet captures are all saved in their respective folders.
Simulation.zip

If you have the time, could you take a look? I'd like to verify if some of my interpretations might be incorrect.

Thanks very much for your help!

@huitema
Collaborator

huitema commented Dec 18, 2023

I am looking at your report. First, a quick note on the use of CID during handshake, as specified in RFC 9000:

  • First message from Client to Server: Source CID = client CID; destination CID = Initial CID
    • Initial CID is used to compute the encryption key of initial messages.
    • Initial CID is random
  • next messages from Client to Server will bear the same CID, until the first message from server is received.
  • First message from Server to Client: Source CID = server CID; destination CID = client CID
    • Client CID is used to go through the load balancer at the client and link to the context
  • Next messages from Server to Client will bear the same CID
  • Next message from client to server: Source CID = client CID; destination CID = server CID
    • Server CID is used to go through the load balancer at the server side and link to the context.

@huitema
Collaborator

huitema commented Dec 18, 2023

On Initial packets marked "lost": for security reasons, the client ignores any further Initial packets from the server as soon as it has received the first "initial" packet containing the "server hello". After that, the client only processes Handshake packets, and later 1RTT packets.

Because of the long delay, the server does not know that the client has received the messages, so it repeats them a couple of times. This stops when the server receives an ACK from the client, and a Handshake message. After receiving the Handshake message, the server deletes the Initial key, considers all Initial packets as acknowledged, and clears the Initial repeat queue.

@huitema
Collaborator

huitema commented Dec 18, 2023

The PING packet that you noticed is used for path MTU detection. The RTT is computed from the delays between packets and the ACKs that acknowledge them, plus timestamps.

@huitema
Collaborator

huitema commented Dec 18, 2023

The error in the initial UDP port is bizarre. I do not see that anywhere else. I suspect it is some kind of configuration issue. How did you specify the client and server ports? Could something there be cross-wired?

@huitema
Collaborator

huitema commented Dec 18, 2023

For the provision of CID, see RFC 9000 section 5. See section 7.1 for examples of handshake flows.

@huitema
Collaborator

huitema commented Dec 18, 2023

For the ACK frequency, the spec is in the ACK Frequency draft.

@huitema
Collaborator

huitema commented Dec 18, 2023

I suspect that some system is dropping packets on the path from server to client. You see somewhat random results depending on the values of the delay and timeout, I suppose because these change the timing of packets. You may want to plot the interval between the last packet sent by the client and the arrival time (or planned arrival time) of the server packet. I suspect that packets are accepted when this interval is short, and rejected if it exceeds a threshold.

You probably have a firewall or equivalent somewhere in the setup. You want to disable it, or at a minimum program it to let packets through for the IP addresses and ports of client and server.

@KathyNJU
Author

In the 2 s scenario, I did not specifically set the client port, and the server port was set directly in the command, as follows:

// server side
./picoquic_sample server 4433 ./ca-cert.pem ./server-key.pem ./sample/server_files

// client side
./picoquic_sample client fc01::1 4433 /tmp smallfile.txt

I just repeated the experiment, and maybe that strange phenomenon was a fluke. I am sorry that I did not repeat the test in time to rule out such occasional circumstances.

@huitema
Collaborator

huitema commented Dec 18, 2023

Many operating systems treat "server ports" and "client ports" differently. Server ports are generally open, because servers can receive packets at any time. Client ports are often protected, with the firewall opened by an outgoing packet and then closed some time after that packet is sent.

That's why I asked you to set an explicit port for the client, maybe 4434 or some such, just by hard coding it in your prototype. The firewalls should treat that as a server port, and let the traffic go through.

@KathyNJU
Author

OK, I see what you mean. I will later set an explicit port for the client and repeat the experiment to see if there is any difference.

@KathyNJU
Author

Today I discussed the simulation results with my tutor. For the cases where the qlog shows packets being generated on the client side but Wireshark fails to capture them on the client port, we suspect there might be an issue with the UDP socket calls. If I want to print whether picoquic's packets were actually sent through the UDP socket, and whether the socket operation completed properly, how should I add the print code?

In your blog, you mentioned implementing long-delay transmission. I'm curious about your network topology. Did you use physical servers or virtual machines for the server and client? If virtual, could you provide details about the VM configurations? We suspect environmental setup issues and would like various configuration information. We're planning to directly connect two virtual machines instead of using Mininet to set up the topology for file transfer.

@huitema
Collaborator

huitema commented Dec 18, 2023

The UDP sending code is in sockloop.c, lines 349-427. The return code of the UDP sendmsg call is "sock_ret", line 376.

You can also enable debug logging by calling: debug_printf_push_stream(stderr); in your main program (client and/or server).
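For example, a minimal sketch of enabling it (placement is flexible, as long as it runs before the interesting traffic):

/* Sketch: send picoquic's debug trace to stderr; call this early in the sample's
 * main() (or in picoquic_sample_client()/_server()) before the QUIC context is used.
 * stderr comes from <stdio.h>, which the sample already includes. */
debug_printf_push_stream(stderr);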

@huitema
Collaborator

huitema commented Dec 18, 2023

The tests use the test simulator built into picoquic -- see "delay_tolerant_test.c". The simulator abstracts away the socket calls and passes packets over simulated links. It works in virtual time, which means, for example, that a connection with a 20 minute RTT can complete in a few milliseconds (for a connection test) or a few seconds (for a data transmission test). This is used widely throughout the test suite.

The downside is that the actual socket calls are not tested in the automated tests. They are tested manually a lot, because the same "socket loop" is used in applications like "picoquicdemo".

There is a known issue with the Mininet drivers. They are somewhat non-standard, because they do not support UDP GSO. However, there is a workaround in the code. These drivers are used, for example, in the interop tests, which execute correctly.

@KathyNJU
Author

Can I call delay_tolerant_test.c directly from picoquic_sample? Does that mean I probably don't need mininet?

I want to achieve successful transmission under long delay scenarios first.

@huitema
Collaborator

huitema commented Dec 19, 2023

You are blocked by an interface issue on your workstation. You have to find out what it is. I don't know what system you use. Windows? Mac? Linux? Linux systems come with iptables. Do you know what configuration you are using?

@huitema
Collaborator

huitema commented Dec 19, 2023

Yes, you would not need Mininet if you just extended the code in the dtn tests. There are four tests: "dtn_basic" simulates establishment of a connection with a 2 minute RTT; "dtn_data" simulates transfer of 100 MB with a 2 minute RTT; "dtn_twenty" does the same with a 20 minute RTT; "dtn_silence" simulates doing three transactions with a 2 minute RTT.

There are two parameters for each test: the "test spec", which sets latency, data rate, etc.; and the "stream description", which specifies how many streams to send, which comes after which, and how much data flows in each direction.

@huitema
Collaborator

huitema commented Dec 19, 2023

Also, no, you cannot call the test code from picoquic_sample. I mean, you could, but it would not use any of the code in the sample. The test code simulates client and server, and is meant to be called from a test program. You could write your own test program and call the test code. You could probably write something interactive, letting the user program latency, bandwidth, etc.

@KathyNJU
Author

I use Linux. I use VMware to create an Ubuntu virtual machine, Ubuntu 18.04, Linux kernel 5.4.0.

If I want to call delay_tolerant_test.c directly to test, should I use picoquicdemo? Since the scenario we are currently applying does not consider HTTP and may only use picoquic for file transfer, I initially chose picoquic_sample instead of picoquicdemo.

At present, the solution I am still trying is to create two virtual machines and connect them directly through vmware for communication. The delay and bandwidth characteristics will be set at the network port of the virtual machine, so mininet is not needed. Do you think this method is feasible?

@huitema
Collaborator

huitema commented Dec 21, 2023

You have an interesting configuration...

First, I can see that the first packet that arrives at the server is indeed:

[0, "transport", "datagram_received", { "byte_length": 1232, "addr_from" : { "ip_v6": "fd15:4ba5:5a2b:1008:d573:a7dc:e9c5:9014", "port_v6" :4434}, "addr_to" : { "ip_v6": "fd15:4ba5:5a2b:1008:8a80:dccf:8350:bcc3", "port_v6" :20753}}],
[0, "transport", "packet_received", { "packet_type": "initial", "header": { "packet_size": 1232, "packet_number": 99361, "version": "00000001", "payload_length": 1186, "scid": "285128707d81e89e", "dcid": "c19d96de9d706a41" }, "frames": [{ 
    "frame_type": "ping"}, { 
    "frame_type": "crypto", "offset": 0, "length": 287}, { 
    "frame_type": "padding"}]}],

This is not normal. If you look at the client's log, you will see that there is no difference between packet #99361 and the preceding packets. All these packets leave the client with a header set to:

[2287, "transport", "datagram_sent", { "byte_length": 1232, "addr_to" : { "ip_v6": "fd15:4ba5:5a2b:1008:8a80:dccf:8350:bcc3", "port_v6" :4433}, "addr_from" : { "ip_v6": "0:0:0:0:0:0:0:0", "port_v6" :0}}]

The source IP address is unspecified, which means that it should be set by the system during the system call -- which is done for packet #99361, which the server sees coming from the address ending in 9014, port 20753. I wonder what is happening there. Could the previous packets have been sent from a different address or port? Do you see them in the PCAP?

Second, the last packet received by the server from the client is:

[91035274, "transport", "datagram_received", { "byte_length": 55}],
[91035274, "transport", "packet_dropped", {
    "packet_type" : "handshake",
    "packet_size" : 55,
    "trigger": "key_unavailable",
    "raw": "680bee4176a12a48a60203014534000102000000000000000000000000000000"}],

This is the handshake packet sent by the client at:

[155302706, "transport", "packet_sent", { "packet_type": "handshake", "header": { "packet_size": 39, "packet_number": 2, "payload_length": 10, "scid": "285128707d81e89e", "dcid": "0bee4176a12a48a6" }, "frames": [{ 
    "frame_type": "ack", "ack_delay": 0, "acked_ranges": [[0, 3]], "ect0": 4, "ect1": 0, "ce": 0}, { 
    "frame_type": "padding"}]}],

The drop is expected, because the server has sent "handshake done" and is now receiving 1RTT packets.

The next packets from the client are not delivered to the server.

Picoquic seems to behave correctly, but there is still something weird happening at the socket layer.

@KathyNJU
Author

The first phenomenon you mentioned is indeed peculiar. I repeated the experiment, and the same situation occurred. Moreover, Wireshark packet captures indicate that these packets were not captured on the client-side port, meaning these data packets simply weren't successfully transmitted. I believe your hypothesis is accurate. I will attempt to investigate the interaction between Picoquic and the socket layer to identify any potential issues.

@KathyNJU
Author

KathyNJU commented Jan 4, 2024

I've been busy with exams and final projects recently, so I haven't shared much progress. I apologize that I haven't found a perfect solution to the problem I mentioned earlier, but I will continue to make some attempts.

Additionally, I've tried simulations of reconnecting after a long period of silence. In one scenario, the keys from the previous connection were saved, theoretically enabling 0RTT transmission; in the other scenario, the keys were not saved, which should theoretically result in 1RTT transmission. However, from observations using qlog and Wireshark, although the first scenario successfully transmitted the file, it seemed to achieve 1RTT transmission instead of 0RTT. The following diagram shows a transmission that, if correct, would successfully implement 0RTT transmission. But in reality, these packets were not successfully transmitted, and even Wireshark did not capture them. This seems to be the same issue as before. If the transmission shown in the diagram is successful, it means 0RTT transmission is achieved, right?
[screenshot: 0RTT failure]
0RTT_1RTT.zip

I also have a few questions I'd like to discuss with you:

  1. Has the current picoquic_sample successfully implemented multi-stream multiplexing for multiple file transfers?
  2. Does the current picoquic_sample support transmission during address migration?
  3. If I want to build upon the existing picoquic_sample to implement the above features and fully demonstrate the characteristics of the QUIC protocol in satellite scenarios, do I need to consider other features of QUIC? And will this task be quite labor-intensive? (The timeline for this task might be a bit tight).

@huitema
Collaborator

huitema commented Jan 4, 2024

The sample was designed to be as simple as possible, and thus has limited features, but the scenarios that you describe ought to be supported. The sample client is programmed to open a picoquic stream for each of the files requested in the command line. These streams are opened when the program starts, and then served as soon as transmission capacity permits.

The sample uses the default stream scheduling programmed in picoquic, which is first-in, first-out. By default, the only amount of parallelism is for loss recovery. If packets are lost when sending a stream, the next stream may well begin immediately, while recovery of the packet losses happens. The scheduling of streams could be changed by changing stream priorities, as explained in lines 1123-1175 of picoquic.h.

The sample code does not currently support explicit address migration. It will perform NAT rebinding if the client address changes. This will take 1 or 2 RTT, during which packets sent by the server may very well be lost.

There are many features of QUIC that could be used, such as address migration or maybe multipath, but I would not try that if you are short on time.

If the socket API issue is fixed, you will need to experiment with parameters such as the idle timeout, the maximum data allowed on the streams, and the maximum data allowed by default on bidirectional streams. These last two parameters affect flow control; if not set to a sufficient value, the transmission will be slowed.

After flow control, yes, you should also experiment with 0RTT. As far as I can tell, the 0RTT qlog that you sent shows a successful 0RTT connection. The first data packet carried both the Initial request and a 0RTT packet. Per QUIC specification, they are "coalesced" into a single network-level packet. The server accepted the 0RTT message from the client, and started sending "stream 0" immediately. The file was short, so the transmission completed quickly.

@KathyNJU
Author

KathyNJU commented Jan 5, 2024

For the 0RTT experiment, it seems that the client did initially implement the 0RTT feature and the server tried to make the correct response. However, for some reason, the actual transmission shown in the qlog was still 1RTT, right? I noticed in the compressed file that the actual responses made by the server for 1RTT and 0RTT are quite similar. Does this mean my actual experiment failed to achieve 0RTT?

A file is transmitted through one stream, and different files can be transmitted in parallel through multiple streams, as there are no dependencies between multiple files. In picoquic_sample, up to eight streams can be parallelized, meaning up to eight files can be transmitted simultaneously. Based on what you said, if a packet loss occurs in one stream, the next stream will be immediately initiated. Does this mean that it's not possible to transmit eight files in parallel in case of packet loss?

Regarding the address migration issue, if I want to test performance in an actual satellite scenario, my current idea is to use the actual satellite as the server and the ground user as the client, to achieve file transmission of satellite data. But in this scenario, it would be the server's address that changes. Does this also involve NAT rebinding? If the satellite is used as the server side, and the satellite's address keeps changing, can a connection still be established and file transfer be achieved in this case? Are there any potential issues with this experimental assumption?

@huitema
Collaborator

huitema commented Jan 5, 2024

Oh, I see. Yes, the TLS ticket was accepted by the QUIC layer, but the 0-RTT key was not installed by the TLS component. This is weird. It means some check inside picotls failed. The client then notices that TLS did not negotiate 0RTT, and considers that all 0RTT packets pending acknowledgement have been lost. This would happen if the ticket lifetime had expired.

The ticket lifetime is hard coded in picoquic to 100,000 seconds, a bit more than a day. Did more than 100,000 seconds pass between the beginning of the first connection and the resumption attempt?

The constant is set in the function picoquic_master_tlscontext in tls_api.c.

ctx->ticket_lifetime = 100000; /* 100,000 seconds, a bit more than one day */

You could probably try changing that to 1 million seconds -- but not much more, as picotls uses a 32-bit integer to store the value in milliseconds. By the way, this kind of hidden constant is very much what the long transmission delay tests are meant to discover :-).

It is possible to transmit all streams in parallel in a "round robin" manner. You need to set the stream priority to an even value, such as 8, instead of the default odd value, 9. Streams at the same priority level are sent FIFO if the priority level is odd, round-robin if the priority is even. See lines 1123-1175 of picoquic.h.
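A one-line sketch of what that could look like where the sample opens each file stream (I am quoting the function name from memory, and cnx/stream_id stand for the connection and stream at hand, so double-check against picoquic.h):

/* Sketch: give every file stream the same even priority so that streams at this
 * level are served round-robin rather than FIFO. Verify the exact function name
 * and signature in picoquic.h before using. */
picoquic_set_stream_priority(cnx, stream_id, 8);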

QUIC does not support changes in server address, sorry. Of course, this could be changed with an extension, but that will take time. In the short term, you can either change your setup to have the satellite be the QUIC client, or ask the network layer to establish some kind of tunnel to ensure that the satellite can be reached at a fixed address.

@KathyNJU
Author

KathyNJU commented Jan 6, 2024

For the 0RTT experiment, the interval for re-establishing the connection did not exceed 100,000s. I suspect the same issue occurred as before where the initial packets were not truly sent out. I will address this in subsequent attempts.

Due to the requirements of the task, I am currently adjusting parameters for the Earth-Moon scenario. I have modified the PICOQUIC_INITIAL_RTT and PICOQUIC_INITIAL_RETRANSMIT_TIMER values, both set to 4s, in hopes of optimizing for the specific Earth-Moon context. I have also set idle_timeout and handshake_timeout using the API. Currently, I am somewhat confused about the parameters for flow control.

As we know, in actual satellite links, there are uplink and downlink channels, which are typically asymmetric. We denote the bandwidth of the uplink as client_flow_control_credit and the bandwidth of the downlink as server_flow_control_credit. Regarding flow control, I have made the following settings:

// sample_client: set flow control credit
picoquic_set_initial_max_data(quic, server_flow_control_credit);
picoquic_set_initial_max_stream_data_bidi_local(quic, client_flow_control_credit);
picoquic_set_initial_max_stream_data_bidi_remote(quic, server_flow_control_credit);
picoquic_set_initial_max_stream_data_uni(quic, client_flow_control_credit);

// sample_server: set flow control credit
picoquic_set_initial_max_data(quic, client_flow_control_credit);
picoquic_set_initial_max_stream_data_bidi_local(quic, server_flow_control_credit);
picoquic_set_initial_max_stream_data_bidi_remote(quic, client_flow_control_credit);
picoquic_set_initial_max_stream_data_uni(quic, server_flow_control_credit);

Due to the asymmetric bandwidth of the uplink and downlink, I thought of setting the parameters for these four flow controls separately. Do you think my settings are reasonable?

@huitema
Collaborator

huitema commented Jan 7, 2024

The QUIC names for the variables may be a bit confusing. Here is what they mean:

  • max_data: the total amount of data that can be sent by the peer.
  • max_stream_data_bidi_local: the total amount of data that can be sent by the peer in the "peer to local" direction of a bidirectional stream created locally.
  • max_stream_data_bidi_remote: the total amount of data that can be sent by the peer in the "peer to local" direction of a bidirectional stream created by the peer.
  • max_stream_data_uni: the total amount of data that can be sent by the peer in the unidirectional stream created by the peer.

Note that the local flow control parameters do not limit the amount of data sent by the local host to the peer -- this will be limited by the parameters received from the peer. That means your setup should really be:

// sample_client: set flow control credit
picoquic_set_initial_max_data(quic, server_flow_control_credit);
picoquic_set_initial_max_stream_data_bidi_local(quic, server_flow_control_credit);
picoquic_set_initial_max_stream_data_bidi_remote(quic, server_flow_control_credit);
picoquic_set_initial_max_stream_data_uni(quic, server_flow_control_credit);

// sample_server: set flow control credit
picoquic_set_initial_max_data(quic, client_flow_control_credit);
picoquic_set_initial_max_stream_data_bidi_local(quic, client_flow_control_credit);
picoquic_set_initial_max_stream_data_bidi_remote(quic, client_flow_control_credit);
picoquic_set_initial_max_stream_data_uni(quic, client_flow_control_credit);

@KathyNJU
Author

KathyNJU commented Jan 7, 2024

Alright, I understand, thank you very much!

Regarding the issue of the initial packet not being sent out, I have a thought. Since I've set the delay to 30 seconds, it needs to negotiate the MAC address before it can send out packets. I found that IPv6 uses NDP, and there seems to be an issue with the probe timer in this process. Because the delay exceeds the timer's threshold, it leads to the assumption that the neighbor is unreachable. Therefore, initially, NDP is in a failed state, and since the link layer isn't established, the packet isn't sent and isn't captured by Wireshark either. It's not until the Picoquic client starts requesting that it further probes for reachability, and upon confirming communication is possible, it successfully sends the packet. But this idea is currently just a speculation, and I am conducting more experiments to verify it. I would like to hear your thoughts on this.

@huitema
Collaborator

huitema commented Jan 7, 2024

NDP? This is designed for local networks, like Ethernet or Wi-Fi. I would not expect to use that on a long distance link, let alone a satellite link. Can you try to model your test network as a set of links with fixed addresses, plus a set of routes?

@KathyNJU
Author

KathyNJU commented Jan 7, 2024

OK, in our actual satellite scenario, the IP addresses are indeed fixed. I will modify my network model. I have just discovered through experimentation that the issue of the initial packets not being sent out was indeed due to MAC address negotiation. I will modify the network model and then take another look at the specific situation.

Additionally, after resolving the issue with the initial packets, I repeated the 0RTT experiment and encountered some problems. The 0RTT packets do not seem to be correctly received, and what is being implemented is still 1RTT transmission.
0RTT_repeat.zip

@huitema
Collaborator

huitema commented Jan 7, 2024

The 0RTT issue appears similar to your previous traces. The way the code works is:

  • in a previous session, the TLS layer formatted a "session resumption" ticket, and passed it through a callback to picoquic.
  • the callback encrypts the ticket using the "ticket encryption key"
  • TLS asks the picoquic layer to send the ticket in a "crypto" frame.
  • The client receives the crypto frame and passes it to its TLS layer (just like it processes all crypto frames)
  • The client TLS layer parses the TLS message, and passes the encrypted ticket in a callback to picoquic.
  • The encrypted ticket is saved in the "ticket store".

In the next session, the client retrieves the encrypted ticket from the store, and then:

  • the TLS1.3 "client hello" message is sent by the client in a "crypto" frame in an "initial" QUIC packet.
  • when the server receives the "crypto" frames, it calls the TLS code provided by the "picotls" component.
  • the TLS code processes the ClientHello message, finds the ticket, and does a callback to picoquic to decrypt it.
  • if the ticket is correctly decrypted, the TLS code verifies ticket parameters such as time-to-live, etc.
  • if the verification passes, the TLS code provides the 0RTT encryption key to picoquic.

The traces show that the issue happens in the last steps of this process:

  • the ticket is received, and is correctly decrypted using the ticket encryption key -- we see the message "session ticket properly decrypted"
  • the TLS code does not provide the 0RTT key to picoquic -- we see the message "packet dropped". If you click on it, you will see that the packet is dropped because the 0RTT key is not available.

That means that TLS could not verify some of the ticket parameters. The code doing that is static int try_psk_handshake() in picotls.c in the picotls sources. It does a series of tests, and obviously one of them is failing:

  • verify that the session is accepting 0RTT data. This will fail if the client itself has not set the flag in its TLS message. The flag is set only if the "ticket store file name" is set in the QUIC context of the client.
  • verify that the ticket has not expired -- the 100,000 seconds test mentioned above.
  • verify that the "server name" parameter in the ticket matches the "server name" (SNI) parameter asked by the client.
  • verify that the "alpn" parameter in the ticket matches the "alpn" parameter asked by the client.
  • check that other ticket parameters match what the server expects

I don't know which of these tests is failing, but one of them clearly is. Quick checks:

  • Did you wait too long? (Probably not, as you explained above.)
  • Do the server name and alpn match? (Probably, the picoquic client verifies that.)
  • Did you restart the server between the tests? This could cause the "ticket parameters" test to fail, because the server would not remember issuing this ticket.

@KathyNJU
Author

KathyNJU commented Jan 8, 2024

Based on your suggestion, I conducted some tests today:

  1. The QUIC context of the client indeed has the "ticket store file name" set.
  2. The server name in the ticket matches the server name (SNI) parameter requested by the client (verified in the attached screenshots).
  3. The "alpn" parameter in the ticket matches the "alpn" parameter asked by the client (also verified in the screenshots).
  4. I also tried to match the server name with the alpn, but encountered the same issue.
  5. I did not restart the server between the tests.

An unusual phenomenon has emerged in our simulation: (Our server and client are simulated using two virtual machines under the same VMware.) The picoquic server is continuously running, and the picoquic client has previously successfully communicated with the server. The current phenomenon is: when there is a 40-minute interval between the client's current and previous requests, 0RTT transmission is successfully achieved. However, when the interval is 30 minutes or less, the server displays key_unavailable, indicating a failure in receiving the 0RTT key, thus resulting in 1RTT communication. I suspect there may be some issues with timer settings involved?

@huitema
Collaborator

huitema commented Jan 8, 2024

I am looking at the code for possible issues.

@KathyNJU
Author

Apart from the 0RTT experiment, if I want to conduct performance testing with picoquic, I would use the picoquic_sample example program. What parameters can I modify to perform different types of performance tests? Currently, I have set the flow control parameters on both the server and client sides.

@huitema
Collaborator

huitema commented Jan 10, 2024

The obvious parameters are the latency and throughput of the simulated data links. You may also want to retrieve files of various sizes, or series of files.

There may be a need to simulate variable latency, for example if the spacecraft is on its way from Earth to the Moon.

There is a research issue regarding "suspension". Suppose for example that the spacecraft is in orbit around the Moon. At some point in every orbit, it might move behind the Moon and become unreachable.

@huitema
Collaborator

huitema commented Jan 10, 2024

I looked at the 0RTT issue. I think you have found a limitation of TLS 1.3. The session tickets include an "obfuscated age", which is a mechanism to limit 0RTT replay attacks. The client obfuscates a "ticket age", set to the time at which the ticket was used minus the time at which the ticket was received. The server then uses that to check whether:

(now - ticket_issued_time) - ticket_age < max_ticket_reuse_delay

If that check fails, the connection is accepted but 0RTT is disabled. Picoquic uses "picotls" for the TLS1.3 stack. Picotls hardwires the max_ticket_reuse_delay to 10 seconds. But the formula depends on the transmission delay:

  • the ticket_issued_time is measured on the server.
  • now is measured on the server, at the time the ClientHello is received
  • ticket_age is measured on the client at the time the ClientHello is sent, using the ticket received time measured when the ticket is received from the server.

If the transmission delay is long, it is very possible that this obfuscated ticket age check fails.
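As a rough illustration (a toy model only, not the picotls code, with all times in milliseconds):

/* Toy model of the check described above, not the picotls implementation.
 * With a 120 s one-way delay, server_age includes the transit time of the
 * ClientHello while the client-reported ticket_age does not, so the
 * difference is about 120,000 ms, far above the 10,000 ms allowance. */
uint64_t server_age = now - ticket_issued_time;     /* both measured on the server */
uint64_t age_mismatch = server_age - ticket_age;    /* ticket_age reported by the client */
int zero_rtt_ok = (age_mismatch < max_ticket_reuse_delay);  /* hardwired to 10,000 ms in picotls */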

I filed a bug about that in the picotls repo.

I also sent a message describing the problem to the TLS working group.

@huitema
Collaborator

huitema commented Jan 11, 2024

The weird behavior that you found between the 40 minute and 20 minute intervals may be due to an integer overflow bug in picotls.

@KathyNJU
Author

Great! Thank you for your response! We have now essentially succeeded in deployment and should be able to carry out actual tests soon. If the issue persists, it might be as you suggested; otherwise, it could be some configuration issues in my simulation. I will share the experimental results with you later :)

@KathyNJU
Author

KathyNJU commented Jan 11, 2024

In picoquic_internal.h, I have noticed that currently there are the following parameters:
#define PICOQUIC_MAX_PACKET_SIZE 1536

#define PICOQUIC_MIN_SEGMENT_SIZE 256

#define PICOQUIC_ENFORCED_INITIAL_MTU 1200

#define PICOQUIC_PRACTICAL_MAX_MTU 1440

#define PICOQUIC_RETRY_SECRET_SIZE 64

#define PICOQUIC_RETRY_TOKEN_PAD_SIZE 26
I would like to ask about their specific meanings. The default Ethernet MTU is 1500 bytes, and I'm curious about how the values 1536 and 1440 came about. I looked into RFC 9000, which mentions that in QUIC, the minimum size for link datagrams is supposed to be 1200 bytes (which corresponds to PICOQUIC_ENFORCED_INITIAL_MTU).

I am currently facing a practical deployment issue. We want to conduct actual tests, but our current support for USLP payload length is only 861 bytes, requiring the IP packet size to be less than or equal to 855 bytes. However, this contradicts the stipulations in RFC 9000. Can we still implement QUIC transmission under these conditions? If I want to implement QUIC transmission on the current USLP basis, which parameters do I need to modify?

Additionally, as our current bandwidth can only achieve about 6000bit/s, I think the following two parameters also need to be modified:
#define PICOQUIC_CWIN_INITIAL (10 * PICOQUIC_MAX_PACKET_SIZE)

#define PICOQUIC_CWIN_MINIMUM (2 * PICOQUIC_MAX_PACKET_SIZE)

Given the very small current bandwidth, I think they could be modified to:

#define PICOQUIC_CWIN_INITIAL (1 * PICOQUIC_MAX_PACKET_SIZE)

#define PICOQUIC_CWIN_MINIMUM (1 * PICOQUIC_MAX_PACKET_SIZE)

or

#define PICOQUIC_CWIN_MINIMUM (0 * PICOQUIC_MAX_PACKET_SIZE)

Or can the existing implementation achieve automatic fragmentation?

@huitema
Collaborator

huitema commented Jan 11, 2024

Yes, I should add comments and document these variables:

#define PICOQUIC_MAX_PACKET_SIZE 1536
This is the memory allocated in the packet buffer. Sending more than that is impossible.

#define PICOQUIC_MIN_SEGMENT_SIZE 256
To avoid trying to send coalesced packets if the remaining space in sending buffer is too small

#define PICOQUIC_ENFORCED_INITIAL_MTU 1200
Per RFC 9000

#define PICOQUIC_PRACTICAL_MAX_MTU 1440
Based on current Internet observation, MTU remaining after commonly used tunnels, etc.

#define PICOQUIC_RETRY_SECRET_SIZE 64
Max length that can be configured for the secret protecting the retry packets -- worst case 512 bits

#define PICOQUIC_RETRY_TOKEN_PAD_SIZE 26
Tokens are padded to that length to avoid leaking information through token length

To answer your other question: no, you cannot use an RFC 9000 compliant version of QUIC if the network cannot carry packets with the minimal IPv6 MTU. You could of course change the variables and recompile the code, but you would not interoperate with other implementations of QUIC.

@KathyNJU
Author

So, this means that if I want to implement QUIC communication on the existing USLP basis, I could modify the parameters like this:

#ifndef PICOQUIC_MAX_PACKET_SIZE
#define PICOQUIC_MAX_PACKET_SIZE 861
#endif
#define PICOQUIC_MIN_SEGMENT_SIZE 256
#define PICOQUIC_ENFORCED_INITIAL_MTU 700
#define PICOQUIC_PRACTICAL_MAX_MTU 855
#define PICOQUIC_RETRY_SECRET_SIZE 64
#define PICOQUIC_RETRY_TOKEN_PAD_SIZE 26
#define PICOQUIC_DEFAULT_0RTT_WINDOW (10*PICOQUIC_ENFORCED_INITIAL_MTU)

Are there any other parameters that need to be modified?

Do you mean that after these modifications, Picoquic will still be able to transmit successfully, albeit not in compliance with RFC9000, and it won't interact with other QUIC implementations (like MsQuic), but it can interact with my modified Picoquic implementation? Is my understanding correct?

@huitema
Collaborator

huitema commented Jan 12, 2024

Yes, that should work. Of course, the best approach is to program the parameters of your network in Mininet or something similar and try. 6000 bps is very low, lower than most existing tests, so I am not completely sure that everything will work.

You should be careful with the computation of packet length. Picoquic is concerned with the size of QUIC packets, i.e., the size of the UDP payload. The complete packet also carries an 8 byte UDP header and, for example, a 20 byte IPv4 header. If you send a QUIC packet of 861 bytes, you may end up sending 861 + 8 + 20 = 889 bytes on the wire. Is that what you want?

If you do know the packet size, you could also set the parameters to the exact values that you expect, and completely bypass PMTU discovery.

@KathyNJU
Author

Before modifying the MTU, I tested at 6000 bps, and the file transfer succeeded. After the modification, I will conduct actual tests.

In the practical tests, I used IPv6, and the packet size of 861 bytes I mentioned specifically refers to:
861 = 5 (encapsulation header) + 40 (IPv6 header) + 8 (UDP header) + user data.

In that case, should my PICOQUIC_MAX_PACKET_SIZE be set to 807?

I'm also concerned that Picoquic's own packet headers will be encrypted. I would like to know how parameters such as PICOQUIC_ENFORCED_INITIAL_MTU and PICOQUIC_PRACTICAL_MAX_MTU are calculated. I'm worried that incorrect settings might prevent normal encryption and other processes, causing Picoquic to malfunction.

@huitema
Collaborator

huitema commented Jan 12, 2024

You could set PICOQUIC_MAX_PACKET_SIZE, PICOQUIC_ENFORCED_INITIAL_MTU and PICOQUIC_PRACTICAL_MAX_MTU to 807. That would disable PMTU discovery, one less thing to worry about.

@KathyNJU
Author

#ifndef PICOQUIC_MAX_PACKET_SIZE
#define PICOQUIC_MAX_PACKET_SIZE 807
#endif
#define PICOQUIC_MIN_SEGMENT_SIZE 256
#define PICOQUIC_ENFORCED_INITIAL_MTU 807
#define PICOQUIC_ENFORCED_INITIAL_CID_LENGTH 8
#define PICOQUIC_PRACTICAL_MAX_MTU 807
#define PICOQUIC_RETRY_SECRET_SIZE 64
#define PICOQUIC_RETRY_TOKEN_PAD_SIZE 26
#define PICOQUIC_DEFAULT_0RTT_WINDOW (7*PICOQUIC_ENFORCED_INITIAL_MTU)
#define PICOQUIC_NB_PATH_TARGET 8
#define PICOQUIC_NB_PATH_DEFAULT 2
#define PICOQUIC_MAX_PACKETS_IN_POOL 0x2000
#define PICOQUIC_STORED_IP_MAX 16

Do I need to make corresponding modifications to parameters like PICOQUIC_MIN_SEGMENT_SIZE, PICOQUIC_RETRY_SECRET_SIZE, PICOQUIC_RETRY_TOKEN_PAD_SIZE, and PICOQUIC_MAX_PACKETS_IN_POOL?

Considering the link bandwidth is 6000bps, should PICOQUIC_DEFAULT_0RTT_WINDOW be modified as mentioned above?

@KathyNJU
Author

KathyNJU commented Jan 12, 2024

However, the initial value of PICOQUIC_MAX_PACKET_SIZE is 1536, while the MTU for Ethernet is 1500. Does this include other things as well?

The initial PICOQUIC_PRACTICAL_MAX_MTU is 1440. Is this because it's calculated as 1500 (Ethernet MTU) - 8 (UDP header) - 40 (IPv6 header)?

@huitema
Collaborator

huitema commented Jan 12, 2024

This just shows how old I am. Ethernet V1, as deployed in the 1980s, had an MTU of 1536 bytes; V2 is 1500 bytes. It does not matter very much, but the first value tried for PMTU discovery does matter -- too high and the first trial is lost. Look at https://en.wikipedia.org/wiki/Maximum_transmission_unit for a series of plausible values. 1500 bytes will not work for residential customers if the path between the home router and the ISP uses PPPoE (1492), or if part of the path is some kind of tunnel or VPN. 1440 is just a value that happens to work with a majority of these tunneling techniques.

@KathyNJU
Author

OK, thank you very much for your explanation!

huitema closed this as completed Mar 26, 2024.