stream stringify #709
Conversation
I don't think this would yield any significant performance improvement.
@mcollina this will be complicated to test
I think this might give you a perf boost only if you have a really large array in the response. So I would try to make a chunk at least per object, and not per object key/value. Making the chunks small will give you a big overhead.
Simple math: if the time required to send one TCP packet from server to client is longer than the time to serialize the whole response, then it doesn't make a lot of sense. If I understand the idea correctly.
This will be slower than before, because it generates all the data synchronously before beginning to write it. Note that you still generate all the content synchronously and queue it in the stream. Let's say this queuing costs 1 ns: if you enqueue 1 chunk, you pay 1 ns; if you enqueue 1000 chunks, you pay 1000 times as much. Given that you already have 100% of the data to be sent, it's not worth the complexity.
Streams are great when we can start sending some of the data while waiting for some other I/O to happen; this reduces loading time etc. But not in this case.
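A rough sketch of what is being described, assuming a plain PassThrough as the destination (the loop and chunk contents are made up, purely to illustrate the timing argument):

```ts
import { PassThrough } from 'node:stream';

const s = new PassThrough();
// s would be piped into the server response, e.g. s.pipe(res)

s.write('[');
for (let i = 0; i < 1000; i++) {
  // 1000 tiny writes mean 1000 enqueue operations, all paid up front in this
  // synchronous loop; simplifying, nothing is flushed downstream until the
  // loop finishes and the event loop regains control.
  s.write(i === 0 ? String(i) : `,${i}`);
}
s.write(']');
s.end();
```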
Why not?
import Stream from 'node:stream'; // needed for Stream.PassThrough below

const s = new Stream.PassThrough();
s.pipe(req.res as any); // pipe the stream into the server response
stringify(data, s); // write the serialized chunks into the stream while stringifying
It doesn't. You are generating all the chunks synchronously and enqueuing them. After all of that is completed, the event loop will pick up and do all the work to send them through (I'm simplifying).
I have tried it, and it's true! I think I need to study how streams work :(
I don't know if this idea has ever been discussed or if it can actually be a performance improvement.
The idea is to implement a stringify version that writes to a Readable that is consumed by the server response while the stringify process runs, e.g.:
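(A sketch of the shape being proposed; the stringifyToStream helper below is hypothetical and only illustrates the idea of emitting JSON chunks into a stream instead of building one big string.)

```ts
import { PassThrough } from 'node:stream';

// Hypothetical helper: emit one chunk per array element into the destination
// stream, so a consumer can start reading before serialization has finished.
function stringifyToStream(data: unknown[], dest: PassThrough): void {
  dest.write('[');
  data.forEach((item, i) => {
    dest.write((i ? ',' : '') + JSON.stringify(item));
  });
  dest.write(']');
  dest.end();
}
```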
Maybe piping the stream into the web-server (Fastify) response during the stringify process would overlap serialization with sending the response, and that might have some advantage.
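A possible consuming side, reusing the hypothetical stringifyToStream helper from the sketch above (Fastify does accept a stream in reply.send; the route and payload are illustrative):

```ts
import Fastify from 'fastify';
import { PassThrough } from 'node:stream';

const app = Fastify();

app.get('/items', (req, reply) => {
  const s = new PassThrough();
  reply.type('application/json').send(s); // start the response with the stream
  stringifyToStream([{ a: 1 }, { a: 2 }], s); // hypothetical helper from the sketch above
  // As noted in the discussion, this only overlaps serialization with I/O if
  // the serializer itself yields; a fully synchronous stringify still queues
  // every chunk before the event loop can send anything.
});

app.listen({ port: 3000 });
```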
The benchmark only shows that no slowdown has been introduced in the existing code path. There is no benchmark for the stream itself!