fixed delay between websocket connection and .onmessage subscription … #19
Conversation
```scala
extends LagomServiceApiBridge {

  // Defines the internal buffer size for the websocket when using ServiceCalls
  // containing a Source, as it's not possible to use backpressure in the JS
  // websocket implementation
  val maxBufferSize = Option(config.getInt("lagom.client.websocket.scalajs.maxBufferSize")).filter(_ != 0).getOrElse(1024)
```
I couldn't figure out how to use the reference.conf to define the default value :( the config stayed empty. Defining it directly in the ClientApplication by overriding com.lightbend.lagom.scaladsl.api.LagomConfigComponent#config works though.
@an-tex, this is great! I should be able to do a full review this weekend. I want to build a test case from your example for the feature/integration-test branch so we can protect against regressions. I'll also look into the …
Brilliant! Those integration tests certainly make a lot of sense considering the custom JS implementation. Thanks for your work on this :)
Can you target the PR to the develop branch (rather than master)?
I have the feature/integration-test branch updated with tests for these stream delay cases. I thought I had tests to cover this, but obviously they didn't work. They used service calls like … I was looking at implementing tracking of demand for the … Still need to look into …
True, much better! Guess there's no need to push my changes into the develop branch any more?
…which can cause lost elements
I didn't know it before, but I can add commits directly to the PR. I've changed the target to develop and added my changes to the PR so we can maintain your authorship.
I don't mind the authorship stuff, but cool, thanks mate! :) If you need any further help or review, let me know.
Removed the mixed notation in the config due to akka-js/shocon#28. The mixed notation can be added back when that's resolved.
Sorry for the delay, but testing this turned out to be pretty tricky. First, I couldn't override the bufferSize in my app. Do you have a working approach? I used this in my ClientApplication … but the bufferSize stayed at 16, so I've just hardcoded it for testing ;)

My local testing (which is obviously not realistic, but still shows what can happen) showed that a fast sender can pretty much keep the JS thread busy at the socket.onMessage function, in a way that even the extra buffer, as in

```scala
apiClient.websocketServiceCall.invoke().map { source =>
  // never got here in time...
  source.buffer(10240).runWith(...)
}
```

didn't get a chance to be connected until the sender actually finished, so it was all down to the internal websocket buffer. But even just by adding a …

I'll try with an actual remote system tomorrow. But I guess it'll come down to being really careful with fast senders and buffers. From that point of view, your impl and docs are looking great.
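The failure mode described above can be sketched in plain JavaScript. This is a hypothetical model, not the PR's actual code: because the JS WebSocket API has no backpressure, messages arriving via `onmessage` before a consumer attaches can only be buffered, and a fast enough burst overflows any bounded buffer (`BoundedBuffer` and its field names are illustrative assumptions):

```javascript
// Hypothetical sketch of a bounded buffer sitting behind WebSocket.onmessage.
// There is no way to tell the socket to slow down, so elements arriving
// before a consumer attaches must be queued -- or the buffer fails.
class BoundedBuffer {
  constructor(maxSize) {
    this.maxSize = maxSize;
    this.queue = [];
    this.consumer = null;
    this.failed = false;
  }
  push(msg) {                  // would be called from socket.onmessage
    if (this.consumer) return this.consumer(msg);
    if (this.queue.length >= this.maxSize) {
      this.failed = true;      // mirrors the "buffer full" failure seen here
      return;
    }
    this.queue.push(msg);
  }
  attach(consumer) {           // would be called when the Source materializes
    this.consumer = consumer;
    this.queue.forEach(consumer);
    this.queue = [];
  }
}

const buf = new BoundedBuffer(32);
// A fast sender delivers a burst before the app attaches its sink:
for (let i = 0; i < 40; i++) buf.push(i);
console.log(buf.failed); // → true: the 33rd element overflows the buffer
```

Raising `maxSize` only moves the threshold; it can't remove the race between the burst and the consumer attaching.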
Experimented with the config and am running into similar issues. I'll keep looking into it.

Another good catch! I've tested with this fast sender and not lost elements:

```scala
override def fast = ServerServiceCall { _ =>
  val source = Source(Seq.range(1, 5000))
  Future.successful(source)
}
```

Can you post or link the code that's causing the issue so I can try to replicate it?
I believe the config issue is due to akka-js/shocon#55.
Have a look here: https://github.com/an-tex/lagom-scalajs-example. As soon as there's a burst of only 32 elements, even after a Sink.ignore is connected, you'll get a full-buffer exception. I've tried locally and from another machine over wifi. Raising the bufferSize or adding .throttle helps. It's really odd. It seems like the demand from the Sink.ignore isn't getting to the socketSource fast enough. Could it be the silly single-threaded JS engine only calling .onmessage but not getting much further?
Sorry there hasn't been much progress on this lately. I'm seeing all the same issues you are (my previously working example was a fluke). I think you're right, the WebSocket … I'll see if there's anything else I can do, but if not I'll just have to update the docs to discuss the issue.
Thanks for the update. Sounds good. Seems there's not much we can do to fix it properly, only work around it. Bummer!
It's been a while, but I think I have a mitigating solution. Using Akka …

To mitigate this I implemented a custom WebSocket stream buffer that schedules buffer operations and downstream consumption on the JavaScript job queue, so they run as soon as possible and before other elements of the event-loop queue. Generally, this helps the buffer keep up better with a fast upstream, though it can still fail. I also made the buffer allow an unbounded size as a last-ditch option. I updated the PR readme with a discussion of the trade-offs.

Changing the config is still wonky because of akka-js/shocon#55. I'll work on a way to make that easier in a separate PR (#21).
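The job-queue idea can be illustrated with a small sketch (an assumption about the mechanism, not the PR's implementation). Each incoming WebSocket message is delivered as its own task on the event loop; a microtask ("job") scheduled during a task runs before the *next* task, while a `setTimeout` callback queues behind every already-pending task. So a microtask-scheduled drain gets to run between messages of a burst, where a timer-scheduled drain would only run after the whole burst:

```javascript
// Sketch: microtask-scheduled drains interleave with a burst of message
// tasks; timer-scheduled drains only run once the burst is over.
const order = [];

function onMessage(n) {
  order.push(`msg${n}`);
  // Job-queue drain: runs right after this handler, before the next message.
  queueMicrotask(() => order.push(`micro-drain${n}`));
  // Timer drain: queues behind all pending message tasks.
  setTimeout(() => order.push(`timer-drain${n}`), 0);
}

// Simulate a burst: three messages already sitting in the task queue.
setTimeout(() => onMessage(1), 0);
setTimeout(() => onMessage(2), 0);
setTimeout(() => onMessage(3), 0);

setTimeout(() => {
  // micro-drain1 appears before msg2; every timer-drain appears after msg3.
  console.log(order.join(","));
}, 20);
```

This is why scheduling consumption on the job queue helps the buffer keep up, but only probabilistically: if the handler itself is saturated, even microtasks can fall behind.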
Great! Seems the way to go is to not rely entirely on Akka in JS, at least for such internals.
…which can cause lost elements.

In my case, making a call to a

```scala
ServiceCall[NotUsed, Source[String, NotUsed]]
```

like

```scala
myService.myServiceCall.invoke().flatMap(_.runWith(Sink.seq))
```

caused a delay of ~30ms between the opened websocket and the connected subscriber. All elements in between were lost. Also, don't push more elements to the subscriber than requested.

Annoying that JS WebSockets don't support backpressure :(
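The "don't push more elements to the subscriber than requested" part of the fix amounts to tracking downstream demand, Reactive Streams style. A minimal sketch, assuming a hypothetical `DemandTracker` (names are illustrative, not the PR's code): `onmessage` delivers an element only while there is outstanding demand, and buffers it otherwise, so the subscriber never receives more than it asked for:

```javascript
// Hypothetical demand tracker between WebSocket.onmessage and a subscriber.
class DemandTracker {
  constructor() {
    this.demand = 0;      // elements the subscriber has requested but not received
    this.buffer = [];     // elements held back until demand arrives
    this.delivered = [];  // stands in for "push to subscriber"
  }
  request(n) {            // downstream signals demand (cf. Subscription.request)
    this.demand += n;
    while (this.demand > 0 && this.buffer.length > 0) {
      this.delivered.push(this.buffer.shift());
      this.demand--;
    }
  }
  onMessage(msg) {        // upstream (the WebSocket) pushes unconditionally
    if (this.demand > 0) {
      this.delivered.push(msg);
      this.demand--;
    } else {
      this.buffer.push(msg); // no demand: buffer instead of pushing
    }
  }
}

const t = new DemandTracker();
t.onMessage("a");   // buffered: no demand yet
t.request(2);       // delivers "a", one unit of demand left
t.onMessage("b");   // delivered immediately
t.onMessage("c");   // buffered again
console.log(t.delivered.join(","), "|", t.buffer.join(","));
// → a,b | c
```

The buffering branch is exactly where the bounded-vs-unbounded buffer trade-off discussed above comes in: demand tracking protects the subscriber, but the socket side still has no way to stop the sender.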