This repository has been archived by the owner on Sep 16, 2020. It is now read-only.

k8s clusters are unable to create Load Balancers #78

Open
voor opened this issue Oct 31, 2018 · 2 comments

Comments

@voor
Contributor

voor commented Oct 31, 2018

Problem

Since we are using existing subnets and the cluster is deployed into a private subnet, services of type LoadBalancer are unable to create their load balancers.

How to recreate

  1. Create a file named load-balancer.yml (or similar) with the following contents:
    kind: Service
    apiVersion: v1
    metadata:
      name: my-service
    spec:
      selector:
        app: MyApp
      ports:
        - protocol: TCP
          port: 443
          targetPort: 443
      type: LoadBalancer
    
  2. Create this load balancer on a PKS-created cluster:
    kubectl apply -f load-balancer.yml
    
  3. The load balancer is never created, and this Warning event is recorded (one way to retrieve it is shown after this list):
    {
        "apiVersion": "v1",
        "count": 7,
        "eventTime": null,
        "firstTimestamp": "2018-10-31T21:08:32Z",
        "involvedObject": {
            "apiVersion": "v1",
            "kind": "Service",
            "name": "my-service",
            "namespace": "default",
            "resourceVersion": "33581",
            "uid": "1f345c73-dd51-11e8-868d-12039f4a481a"
        },
        "kind": "Event",
        "lastTimestamp": "2018-10-31T21:13:48Z",
        "message": "Error creating load balancer (will retry): failed to ensure load balancer for service default/my-service: could not find any suitable subnets for creating the ELB",
        "metadata": {
            "creationTimestamp": "2018-10-31T21:08:32Z",
            "name": "my-service.1562cda09d9fa18b",
            "namespace": "default",
            "resourceVersion": "34040",
            "selfLink": "/api/v1/namespaces/default/events/my-service.1562cda09d9fa18b",
            "uid": "1f5261d3-dd51-11e8-868d-12039f4a481a"
        },
        "reason": "CreatingLoadBalancerFailed",
        "reportingComponent": "",
        "reportingInstance": "",
        "source": {
            "component": "service-controller"
        },
        "type": "Warning"
    }
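For reference, the event above can be pulled back out of the cluster with a field selector on the service name; this sketch assumes the default namespace used in the example, and the exact output shape can vary by kubectl version:

    # Show Warning events for the example service.
    kubectl get events \
        --namespace default \
        --field-selector involvedObject.name=my-service,type=Warning \
        -o json

kubectl describe service my-service shows the same events in a more readable form.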

How to solve (temporary workaround)

You need to get the Kubernetes cluster name from BOSH, since it's actually the BOSH deployment name.
Once you have that, you can add these tags in (work in progress, apologies for how dirty this is):
voor@811d718
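A quick sketch of finding that name with the BOSH CLI (the service-instance_<GUID> naming below matches this environment; yours will have a different GUID):

    # List deployments; PKS-created clusters appear as service-instance_<GUID>.
    bosh deployments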

Two of the three tags are generic and can always be there; it's the kubernetes.io/cluster/service-instance_4a7a5305-88dc-4d90-9785-fc86b08c3d08 tag that is specific to your Kubernetes cluster, and I'm unsure how to apply that each time a new cluster is created.
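For anyone applying the tags by hand instead of through the commit above, a minimal sketch with the AWS CLI. The subnet ID is hypothetical, the cluster tag value must be replaced with your own BOSH deployment name, and it assumes the two generic tags are the standard kubernetes.io/role/elb and kubernetes.io/role/internal-elb role tags the AWS cloud provider looks for when picking ELB subnets:

    # Hypothetical subnet ID; repeat for every subnet that should be eligible for ELBs.
    aws ec2 create-tags \
        --resources subnet-0123456789abcdef0 \
        --tags \
            Key=kubernetes.io/cluster/service-instance_4a7a5305-88dc-4d90-9785-fc86b08c3d08,Value=shared \
            Key=kubernetes.io/role/elb,Value=1 \
            Key=kubernetes.io/role/internal-elb,Value=1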

@cf-gitbot

We have created an issue in Pivotal Tracker to manage this. Unfortunately, the Pivotal Tracker project is private, so you may be unable to view the contents of the story.

The labels on this github issue will be updated when the story is started.

@voor
Contributor Author

voor commented Dec 13, 2018

Also referencing this commit, as it allows people to apply additional networking cluster tags without Terraform removing them:
voor@72daf73
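For context, the usual Terraform mechanism for tolerating tags applied out of band is a lifecycle block that ignores tag changes; a minimal sketch on a hypothetical subnet resource, assuming that approach rather than quoting the commit itself:

    resource "aws_subnet" "private" {
      # ...existing subnet arguments...

      lifecycle {
        # Keep Terraform from reverting tags added outside of Terraform,
        # such as the kubernetes.io/cluster/* tags from the workaround above.
        ignore_changes = ["tags"]
      }
    }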
