Ingresses are not created anymore in isolation mode #302
We have understood that B2K does not work with the Nginx ingress manifest `networking.k8s.io/v1`, but only with `networking.k8s.io/v1beta1`. Can you confirm this?
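For context, the two Ingress API versions differ in their backend schema, which is likely why tooling built against one does not recognize manifests written in the other. A minimal side-by-side illustration (hostname and service name are placeholders, not taken from this thread):

```yaml
# Old schema, removed in Kubernetes 1.22:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: webapp   # flat backend fields
          servicePort: 80
---
# New schema, required on Kubernetes >= 1.22:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix        # mandatory in v1
        backend:
          service:              # backend is now a nested service object
            name: webapp
            port:
              number: 80
```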
Is anyone working on this issue @here?
Apologies @letmagnau for the slow replies/investigations. We are dealing with extra work and a reduced dev team. That's another argument for open sourcing the codebase, so we aren't the blocking point! I don't believe we ever investigated Nginx to understand how it would work with our isolation mechanism. Given the number of features Nginx provides, it's also possible that we work with some of its features but not others. Was the Nginx manifest upgraded when you upgraded your cluster? If yes, is going back to the previous version a possibility?
Hi @daniv-msft, thanks for replying. Our goal is to contribute to improving the B2K system by tracking all the issues we are experiencing. With this configuration it works: the isolation ingresses were created correctly. If I upgrade the cluster to >= 1.22, the v1beta1 ingress manifests are deprecated and we are forced to move to v1. As you can understand, we have gone back to the old manifest, but also to the old cluster version, and this could be a critical scenario in the future. I hope it is clearer now. Regards
Hi @daniv-msft, any news? Regards
Thanks @letmagnau for providing context, it makes sense. We will need to investigate Nginx to understand what's happening with the new manifest, and overall what support we can provide there. I created a work item on our side, but for now I don't have a great workaround apart from sticking to the version of Kubernetes you're already using.
Thanks @daniv-msft, we hope the investigation leads to a fix. We will be listening for updates. Regards
Hi @daniv-msft, any news? We are experiencing more trouble because we cannot upgrade to the v1 standard.
Is anyone working on this project? :(
Hi @letmagnau,
Hi @daniv-msft, nobody can understand the staffing problem better than me. But this is not an Nginx problem; Nginx is already on the standard. The manifests are part of Kubernetes, and it is Kubernetes that deprecated v1beta1, not Nginx.
We are having the same issue and it's critical for us. We use B2K to work with our services daily, and now the ingresses are not being created, so we can't interconnect some of our systems to continue working on our services locally. I hope some day this project becomes open source; maybe we could help solve issues, or at least trust that issues will be solved. For now we are considering returning to Telepresence or DevSpace as a workaround :/ @daniv-msft Is there any help or information we could provide to solve this issue?
@daniv-msft, I was investigating the routingmanager Docker container, and I think the problem could be solved just by updating the .NET dependency KubernetesClient (you are currently using 5.x.x) to 7.0.0+, where they added support for Kubernetes 1.22. I believe that's all the help I can give, because the code is closed and I can't read it or try anything. I tried manually updating the KubernetesClient dependency inside the Docker container, but it's hard because .NET is compiled and there are many references to that DLL that I would need to change, and I couldn't.
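If the codebase were open, the dependency bump the commenter suggests would presumably be a small change in the project file. A hypothetical sketch, assuming a standard `.csproj`; the exact version number here is illustrative, not taken from the thread:

```xml
<ItemGroup>
  <!-- Hypothetical: bump KubernetesClient from the 5.x series to 7.0.0+,
       the series said to add support for the Kubernetes 1.22 APIs
       (networking.k8s.io/v1 Ingress). -->
  <PackageReference Include="KubernetesClient" Version="7.0.0" />
</ItemGroup>
```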
Thanks @agallardol for the feedback and investigation. Adding @GatoonJin, who is looking into this issue on our side.
Cool @GatoonJin, let me know if there is anything else I can do to help you!
Hello, is there any news about this issue? We have a k8s cluster ready with ingress-nginx instances to test with if necessary. Maybe you could build a test Docker image for routingmanager and we can try it. I know it could be hard to set up a test environment for this issue 💪🏽
Hello everyone, I have followed up; the big upgrade needs more time :)
Hello @GatoonJin, is there any update on this? Could we help in any way?
@GatoonJin Is there any update on this? I am running into this exact same issue.
Hello, we're also encountering an issue with ingress duplication that is preventing us from using B2K Isolation Mode past Azure Kubernetes v1.21.9. Once we upgrade past that version (to 1.23.5 specifically), we no longer see the creation of envoy pods or cloned routing services.
@GatoonJin Hi, is there any update on this issue? :(
Hey, how are you? I noticed you released a new version of the B2K VS Code extension and the Routing Manager. After some tests, our team confirmed that Ingresses are working again in isolated mode. We really appreciate your work, because our whole team uses your tool daily and we have been waiting all these months to see this fixed. Thank you so much, and if you think we can help with feedback or anything else about this project, we will be happy to give you more information.
Hi everyone! A lot of work has gone into this project over the past month or so. I am happy to see this issue resolved. You should see more active responses from our team going forward, and some cool things coming :) @agallardol, yes! We would love feedback on how we can make the product better.
Hello,
we have upgraded the k8s cluster to 1.22.6 and we have kubectl 1.23.4.
Until the upgrade everything worked well; we upgraded because the VS Code extension alerted us to possible malfunctioning.
After that, it still connects, but when someone starts in isolation mode the ingresses are no longer created, and it's impossible to use for debugging because the alias does not exist and Nginx fails.
No special steps, just the usual way.
VS Code Ext v1.0.120220125
Our simple task:
```json
{
  "label": "bridge-to-kubernetes.resource",
  "type": "bridge-to-kubernetes.resource",
  "resource": "XXXX-webapp",
  "resourceType": "service",
  "ports": [
    80
  ],
  "targetCluster": "XXXXX",
  "targetNamespace": "XXXX",
  "useKubernetesServiceEnvironmentVariables": false,
  "isolateAs": "letmagnau"
}
```
Before, when we started in isolation mode, it would create an Ingress in k8s combining the isolateAs alias and the domain.
Now the connection succeeds without problems, but there is no way to debug starting from the alias.
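As a purely illustrative sketch of what the reporter describes, the per-user ingress that B2K created in isolation mode might have looked like this (all names and hosts are hypothetical placeholders; the v1beta1 schema matches the pre-1.22 clusters where this still worked):

```yaml
# Hypothetical cloned ingress for the isolated session.
# None of these names come from a real cluster.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: webapp-cloned-letmagnau
spec:
  rules:
  - host: letmagnau.webapp.example.com   # isolateAs alias combined with the domain
    http:
      paths:
      - path: /
        backend:
          serviceName: webapp-cloned-letmagnau
          servicePort: 80
```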
Operating System: Manjaro 5.16.11-2