Refresh Clusters - triggers unexpected behavior / does not do what I expected #239

Closed
kingdonb opened this issue Apr 2, 2022 · 5 comments


kingdonb commented Apr 2, 2022

Expected behaviour

[Screenshot: Screen Shot 2022-04-01 at 11 34 35 PM]

When I clicked this button, I expected the list of clusters to refresh. I should not have to quit the editor and restart it to see a new cluster after either kind create cluster or k3d cluster create.

It's OK for clicking this button to also trigger a refresh of the active cluster, if it won't hang anything. If the cluster I have selected is inactive or unreachable, then refreshing the list of clusters might hang the UI. (This is bad!)

I do want a way to refresh both sources and workloads in one click, though maybe not with this button. I'm hesitant to add another button; this is the one I naturally reached for to refresh the whole cluster. But I also wanted it to refresh the list of clusters from my kubeconfig, in case it has been modified.
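
For concreteness, here is a minimal sketch (my own names, not the extension's actual code) of what I expected "Refresh Clusters" to do: re-read the kubeconfig and rebuild just the cluster list, without touching the selected cluster's sources and workloads. It assumes js-yaml and a single kubeconfig path; real KUBECONFIG handling (colon-separated lists, merging) is omitted.

```ts
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';
import * as yaml from 'js-yaml';

interface Kubeconfig {
  contexts?: { name: string }[];
  'current-context'?: string;
}

// Simplified: KUBECONFIG can be a colon-separated list; we take one path.
function kubeconfigPath(): string {
  return process.env.KUBECONFIG ?? path.join(os.homedir(), '.kube', 'config');
}

function listContexts(): string[] {
  const config = yaml.load(fs.readFileSync(kubeconfigPath(), 'utf8')) as Kubeconfig;
  return (config.contexts ?? []).map((c) => c.name);
}

// Hypothetical command handler: refresh only the cluster list.
function refreshClusters(): void {
  const contexts = listContexts();
  console.log('clusters in kubeconfig:', contexts);
  // ...rebuild the Clusters tree view from `contexts` here...
}
```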

Actual behaviour

Instead of refreshing the list of clusters, it refreshed my cluster's inventory – both workloads and sources.

Steps to reproduce

Boot a new cluster, or add one to your kubeconfig, after the editor has started. There is seemingly no way to re-read the kubeconfig and update the list of clusters without restarting the editor.
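
As a sketch of what would fix this (again hypothetical, with the same single-path assumption as above): watch the kubeconfig file and re-run the cluster refresh whenever kind or k3d rewrites it, so no restart is needed.

```ts
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';

const kubeconfig =
  process.env.KUBECONFIG ?? path.join(os.homedir(), '.kube', 'config');

// Debounce, since tools may write the file more than once per change.
// Caveat: some tools replace the file rather than writing in place, in
// which case watching the containing directory is more robust.
let timer: NodeJS.Timeout | undefined;
fs.watch(kubeconfig, () => {
  if (timer) clearTimeout(timer);
  timer = setTimeout(() => {
    console.log('kubeconfig changed, refreshing cluster list');
    // refreshClusters(); // the hypothetical handler sketched above
  }, 250);
});
```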

Versions

kubectl version: v1.23.5
Flux version: v0.28.5
Git version: 2.35.1
Azure version: N/A
Extension version: v0.19.0
VSCode version: 1.66.0
Operating System (OS) and its version: macOS 12.2.1

@josefaworks josefaworks added the bug Something isn't working label Apr 7, 2022

kingdonb commented Apr 8, 2022

This seems to be a duplicate of #215, which appears to have already been fixed in main.

I'm adding this to the 0.19.1 milestone (let's target Monday for the next patch release!)

@kingdonb kingdonb self-assigned this Apr 8, 2022
@kingdonb kingdonb added this to the 0.19.1 milestone Apr 8, 2022
kingdonb commented

I'm not sure this is totally resolved by #215.

I still had instances where a cluster would not remove itself from the list, even after k3d cluster delete. Reading the commits tagged in #215, it seems there may be a gap in the logic around refreshes that explains why.

But this is definitely a much narrower edge case now than it was in 0.19.0.

I'm going to keep this issue open and move it to the next milestone, unless we can resolve that edge case too.

It should be noted, though, that the main issue I was reporting here was already addressed in #215: you can now refresh the clusters list after a new cluster has been added to your kubeconfig, without restarting the editor.

@kingdonb kingdonb modified the milestones: 0.19.1, 0.19.2 Apr 11, 2022
kingdonb commented

Let me elaborate on the issue that remains:

  • The cluster that I'm working on has been deleted (however that happens).
  • The kubeconfig has been updated to point at a different context (this happens when k3d cluster delete ... finishes cleaning up the context for the cluster that was just deleted, for example, if you had other clusters in your kubeconfig).
  • I still see the deleted cluster in the list, even after refreshes. The "cluster refresh" button at the top triggers all sub-tree frames to reload their content, and they now point at the different kubeconfig context.

The failure mode is that I have selected a different cluster and then returned to the deleted cluster, while the different cluster remains selected in my global kubeconfig context.
[Screenshot: Screen Shot 2022-04-10 at 10 07 01 AM]

In the screenshot here, k3d-testclu is the cluster that I just deleted. aks-kingdon-az1 is the cluster that is still running in the background; the Kubernetes extension has now selected it as the context.

I can switch away from k3d-testclu to any of the other clusters, and refreshing the cluster list does not remove the deleted cluster. When I select it and right-click, there seems to be some awareness that the context has been deleted, but some unexpected behavior remains: why does it stay in the list? How is it possible to have one context selected in the Kubernetes extension but a different one selected in the GitOps extension? (Which one is used? Is this intentional?)

I don't think it's intentional, but from a quick read of the code it's not yet clear to me how to avoid it.
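
To illustrate where I suspect the gap is, here is a sketch (same assumptions as the earlier snippets: js-yaml, single kubeconfig path) that reads the live state straight from the kubeconfig. After k3d cluster delete, a fresh parse no longer contains k3d-testclu, and current-context already points at the surviving cluster; if the GitOps extension caches its own selection instead of re-reading this, the two extensions can diverge.

```ts
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';
import * as yaml from 'js-yaml';

interface Kubeconfig {
  contexts?: { name: string }[];
  'current-context'?: string;
}

const kubeconfig = yaml.load(
  fs.readFileSync(
    process.env.KUBECONFIG ?? path.join(os.homedir(), '.kube', 'config'),
    'utf8',
  ),
) as Kubeconfig;

// A fresh parse reflects the deletion immediately:
console.log('contexts:', (kubeconfig.contexts ?? []).map((c) => c.name));
console.log('current-context:', kubeconfig['current-context']);
```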

@kingdonb kingdonb modified the milestones: 0.19.2, 0.19.3 Apr 28, 2022
@kingdonb kingdonb modified the milestones: 0.19.3, 0.19.6 Jun 8, 2022
@kingdonb kingdonb modified the milestones: 0.19.x, 0.20 Jul 14, 2022
kingdonb commented

The GitHub Pages target described in #300 will also have to include a Troubleshooting tab, and perhaps a Cheatsheet tab, with further information above and beyond the basics of getting started, to help users get the most out of the extension.

At this point I'm convinced this behavior is not wrong; it just needs to be documented, because it's a bit surprising for a new user out of the gate. (Surprise and delight...)

kingdonb commented

I think this is pretty well solved by #373.
