Hang when cluster is unavailable #267
Comments
I think there are some less visible facets of this bug that I would call in scope. Whenever a kubeconfig is selected for a cluster whose remote (control plane) is not active, the interface either does not react at all or becomes unresponsive. It's not only about the story as recorded; it's that whole class of issues. It would be better to get some kind of timeout behavior plus an icon indicating the cluster couldn't be reached. If the timeout is a blocking activity, it should not be single-threaded if possible. The "GitOps-Enabled" cluster icon is red, if I remember correctly. I'd stick to associating red with not-working; the GitOps icon should be green, or better yet purple. I always associate red with down or offline. This one is high-priority as well (adding it to the pile).
It turns out there actually is a timeout if you wait long enough, but some of the operations that time out are called in serial rather than in parallel, and the later calls are not aborted when the earlier calls have already timed out. We should call them in parallel: then the timeout returns control to the UI in a reasonable amount of time, and the user hopefully gets some feedback from whatever timed out.
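For reference, a minimal sketch in TypeScript of the parallel-with-timeout approach described above. The names (ClusterQuery, withTimeout, queryClusters) and the 15-second default are illustrative assumptions, not the extension's actual API:

```typescript
// Hypothetical shape of a per-cluster query; the extension's real calls differ.
type ClusterQuery<T> = () => Promise<T>;

// Reject if the wrapped promise has not settled within `ms` milliseconds.
function withTimeout<T>(work: Promise<T>, ms: number): Promise<T> {
  return Promise.race([
    work,
    new Promise<T>((_, reject) =>
      setTimeout(() => reject(new Error(`timed out after ${ms}ms`)), ms),
    ),
  ]);
}

// Run all cluster queries concurrently; a timeout on one unreachable cluster
// surfaces as a rejected result instead of delaying the calls that follow it.
async function queryClusters<T>(
  queries: ClusterQuery<T>[],
  timeoutMs = 15_000,
): Promise<PromiseSettledResult<T>[]> {
  return Promise.allSettled(queries.map((q) => withTimeout(q(), timeoutMs)));
}
```

Using Promise.allSettled rather than Promise.all means one timed-out cluster doesn't discard results from the clusters that did respond, and the UI can mark only the unreachable one.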
There is still a bit of a jarring hang when one of your clusters is unavailable, but it's definitely improved by #270.
We still see this surfacing often enough to keep this issue open. It's still not exactly clear today how to reproduce the issue.
We should try to reproduce this given the updates we've made, but in exploratory testing that was not focused on this particular issue, we haven't seen it reproduce organically in a while. This can be closed if it cannot be reproduced anymore.
I think there's a solid argument that now that users can set their own kubeconfig (#334), they can be held responsible for their own kubeconfig hygiene. It would be nice to provide controls to delete contexts, since I currently don't have another great UI for managing my kubeconfig contexts, and merging kubeconfigs would be handy as well. But as of now users can simply select a different kubeconfig; we don't require anyone to merge them, and it doesn't matter whether they keep them separate or merged. I think this new feature obviates the bug. I personally haven't seen it hang in a while, or it at least times out within a fairly reasonable 30-60s, so I'm going to close it for now.
Expected behaviour
The editor should eventually time out and give up if a cluster is unreachable. If I have picked a different cluster, as a user who knows the old cluster is not coming back, I should not be blocked waiting for the cluster to change.
Actual behaviour
In the attached screenshot you can see that the Kubernetes extension at the bottom has already changed to select "oidc@moo", but the GitOps extension is still blocked, looking for kind-kind, which has already been deleted. This apparently never times out. The spinners just keep spinning and there's nothing to do but quit the editor and restart.
Steps to reproduce
Related to #239, but not exactly the same issue. (I have a feeling that we're going to make one change that fixes all of this at once.)
The steps I followed:
kind create cluster
Versions
Extension version: v0.19.2
VSCode version: 1.66.2 (Universal)
Operating System (OS) and its version: macOS Monterey 12.3.1