
ingress-dns no longer works on MacOS in minikube 1.23 #12424

Closed
iamnoah opened this issue Sep 7, 2021 · 11 comments · Fixed by #12476

iamnoah commented Sep 7, 2021

macOS Big Sur 11.5.2

Following the ingress-dns setup.

Steps to reproduce the issue:

  1. minikube start (tried with and without --vm=true):

     minikube start --vm=true
     😄  minikube v1.23.0 on Darwin 11.5.2
     ✨  Automatically selected the hyperkit driver
     👍  Starting control plane node minikube in cluster minikube
     🔥  Creating hyperkit VM (CPUs=4, Memory=8192MB, Disk=81920MB) ...
     🐳  Preparing Kubernetes v1.22.1 on Docker 20.10.8 ...
         ▪ Generating certificates and keys ...
         ▪ Booting up control plane ...
         ▪ Configuring RBAC rules ...
     🔎  Verifying Kubernetes components...
         ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
     🌟  Enabled addons: storage-provisioner, default-storageclass
     🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
    
  2. Enable the ingress-dns and ingress addons:

      > minikube addons enable ingress-dns
          ▪ Using image cryptexlabs/minikube-ingress-dns:0.3.0
      > minikube addons enable ingress
          ▪ Using image k8s.gcr.io/ingress-nginx/controller:v1.0.0-beta.3
          ▪ Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.0
          ▪ Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.0
      🔎  Verifying ingress addon...
  3. Create the macOS resolver file:

        > echo -e "domain test\nnameserver $(minikube ip)\nsearch_order 1\ntimeout 5\n" | sudo tee /etc/resolver/minikube
        
        Password:
        domain test
        nameserver 192.168.64.36
        search_order 1
        timeout 5
  4. Deploy the example app (a quick check of the created resources follows the list):

    > kubectl apply -f https://raw.githubusercontent.com/kubernetes/minikube/master/deploy/addons/ingress-dns/example/example.yaml
    deployment.apps/hello-world-app created
    ingress.networking.k8s.io/example-ingress created
    service/hello-world-app created
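
As a quick check (not in the original report), the created resources and the Ingress address can be confirmed before testing DNS; a sketch using only the names from example.yaml:

    # HOSTS should include hello-john.test and hello-jane.test, and ADDRESS
    # should be populated before lookups against $(minikube ip) can resolve.
    kubectl get ingress -A
    kubectl get svc -A | grep hello-world-app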

Then verification:

> nslookup hello-john.test $(minikube ip)
;; connection timed out; no servers could be reached

lsof shows nothing is listening on port 53:

logs.txt
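
Two quick checks that narrow this down (a sketch; the pod name and namespace below are the addon's defaults and may differ):

    # 1. Confirm macOS registered the resolver for the .test domain.
    scutil --dns | grep -B 1 -A 3 test

    # 2. Check whether the ingress-dns pod is crash-looping and pull the logs
    #    of its previous (crashed) container.
    kubectl -n kube-system get pods | grep ingress-dns
    kubectl -n kube-system logs kube-ingress-dns-minikube --previous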

iamnoah (Author) commented Sep 7, 2021

Also, downgrading to 1.22 resolves the issue for us.

sharifelgamal added the addon/ingress, kind/bug, os/macos, and priority/important-soon labels on Sep 7, 2021
sharifelgamal self-assigned this on Sep 7, 2021
sharifelgamal (Collaborator) commented

Seems to be a service account issue:

E0908 00:05:23.074631       8 leaderelection.go:361] Failed to update lock: configmaps "ingress-controller-leader" is forbidden: User "system:serviceaccount:ingress-nginx:ingress-nginx" cannot update resource "configmaps" in API group "" in the namespace "ingress-nginx"
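
Until the fixed images ship, the denied permission can be granted manually; a sketch that assumes only the namespace, ConfigMap, and service account names printed in the error above:

    # Let the controller's service account read and update the
    # leader-election ConfigMap it is being denied.
    kubectl -n ingress-nginx create role ingress-controller-leader \
      --verb=get,update \
      --resource=configmaps \
      --resource-name=ingress-controller-leader
    kubectl -n ingress-nginx create rolebinding ingress-controller-leader \
      --role=ingress-controller-leader \
      --serviceaccount=ingress-nginx:ingress-nginx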

jcdickinson commented Sep 8, 2021

This is also happening under Linux (Fedora) with the Docker engine. Edit: with regular ingress.

dR3b commented Sep 10, 2021

minikube version: v1.23.0

$ minikube start --vm-driver=virtualbox --memory=8192 --cpus=2 --addons=ingress

Arch Linux, not working

sharifelgamal (Collaborator) commented
The ingress-dns issue has to do with the source code of the Docker image the addon uses: it calls an old Ingress API endpoint that was removed in Kubernetes 1.22, which causes the pod to crash whenever an nslookup is performed.

I'm building and pushing a new image with the proper endpoint soon, and the fix will be live in the 1.23.1 release happening this week.
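
The endpoint change is easy to see against a 1.22+ cluster (a sketch, not the addon's actual code):

    # Ingress under extensions/v1beta1 (and networking.k8s.io/v1beta1) was
    # removed in Kubernetes 1.22, so the old path now returns a 404.
    kubectl get --raw /apis/extensions/v1beta1/ingresses

    # The current endpoint the updated image needs to call instead.
    kubectl get --raw /apis/networking.k8s.io/v1/ingresses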

exocode commented Nov 6, 2021

Hi @sharifelgamal, I tried the initial steps to reproduce this issue.

I also upgraded minikube:

❯ minikube version
minikube version: v1.24.0
commit: 76b94fb3c4e8ac5062daf70d60cf03ddcc0a741b

❯ docker -v
Docker version 20.10.8, build 3967b7d

I also tried running minikube tunnel in a separate window.

I also tried another Kubernetes version (which I normally use and need):
minikube start --memory=4096 --driver=docker --kubernetes-version=v1.21.6 --cpus=4

And that's the content of my /private/etc/resolver/minikube file:

domain test
nameserver 192.168.49.2
search_order 1
timeout 5

And the example Ingress exists:

NAMESPACE     NAME              CLASS   HOSTS                             ADDRESS     PORTS   AGE
kube-system   example-ingress   nginx   hello-john.test,hello-jane.test   localhost   80      6m1s

But lookups against that domain are still failing. Why, and how can I work around it?

❯ nslookup hello-john.test $(minikube ip)
;; connection timed out; no servers could be reached
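
One thing worth ruling out first (an assumption, since the thread doesn't confirm it): with the docker driver on macOS the cluster network is generally not routable from the host, so 192.168.49.2 may be unreachable no matter what the addon does:

    # If this fails, the minikube IP itself is unreachable from macOS; the
    # docker driver runs the cluster in a VM whose bridge network is not
    # exposed to the host, unlike VM drivers such as hyperkit.
    ping -c 1 "$(minikube ip)"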

Maybe out of scope: does the ingress addon also have to be enabled, or is ingress-dns enough? I'm a bit lost there.

Any suggestion could save my weekend :-)

Here are my minikube logs:


==> Audit <==
|--------------|--------------------------------|----------|------|---------|-------------------------------|-------------------------------|
|   Command    |              Args              | Profile  | User | Version |          Start Time           |           End Time            |
|--------------|--------------------------------|----------|------|---------|-------------------------------|-------------------------------|
| docker-env   |                                | minikube | jan  | v1.23.2 | Fri, 05 Nov 2021 18:28:44 CET | Fri, 05 Nov 2021 18:28:45 CET |
| docker-env   |                                | minikube | jan  | v1.23.2 | Fri, 05 Nov 2021 18:28:54 CET | Fri, 05 Nov 2021 18:28:55 CET |
| -p           | minikube docker-env            | minikube | jan  | v1.23.2 | Fri, 05 Nov 2021 18:29:02 CET | Fri, 05 Nov 2021 18:29:03 CET |
| docker-env   |                                | minikube | jan  | v1.23.2 | Fri, 05 Nov 2021 23:36:01 CET | Fri, 05 Nov 2021 23:36:03 CET |
| addons       | list                           | minikube | jan  | v1.23.2 | Sat, 06 Nov 2021 11:56:12 CET | Sat, 06 Nov 2021 11:56:13 CET |
| addons       | ingress help                   | minikube | jan  | v1.23.2 | Sat, 06 Nov 2021 12:11:16 CET | Sat, 06 Nov 2021 12:11:16 CET |
| addons       | ingress-- help                 | minikube | jan  | v1.23.2 | Sat, 06 Nov 2021 12:16:59 CET | Sat, 06 Nov 2021 12:16:59 CET |
| addons       | ingress --help                 | minikube | jan  | v1.23.2 | Sat, 06 Nov 2021 12:17:01 CET | Sat, 06 Nov 2021 12:17:01 CET |
| addons       | ingress --help                 | minikube | jan  | v1.23.2 | Sat, 06 Nov 2021 12:17:04 CET | Sat, 06 Nov 2021 12:17:04 CET |
| addons       | enable ingress-dns             | minikube | jan  | v1.23.2 | Sat, 06 Nov 2021 12:19:09 CET | Sat, 06 Nov 2021 12:19:11 CET |
| ip           |                                | minikube | jan  | v1.23.2 | Sat, 06 Nov 2021 12:21:15 CET | Sat, 06 Nov 2021 12:21:16 CET |
| tunnel       | --help                         | minikube | jan  | v1.23.2 | Sat, 06 Nov 2021 12:21:33 CET | Sat, 06 Nov 2021 12:21:33 CET |
| tunnel       |                                | minikube | jan  | v1.23.2 | Sat, 06 Nov 2021 12:21:49 CET | Sat, 06 Nov 2021 12:22:39 CET |
| ip           |                                | minikube | jan  | v1.23.2 | Sat, 06 Nov 2021 12:23:34 CET | Sat, 06 Nov 2021 12:23:35 CET |
| ip           |                                | minikube | jan  | v1.23.2 | Sat, 06 Nov 2021 12:24:25 CET | Sat, 06 Nov 2021 12:24:25 CET |
| ip           |                                | minikube | jan  | v1.23.2 | Sat, 06 Nov 2021 15:45:58 CET | Sat, 06 Nov 2021 15:45:59 CET |
| ip           |                                | minikube | jan  | v1.23.2 | Sat, 06 Nov 2021 15:48:55 CET | Sat, 06 Nov 2021 15:48:56 CET |
| tunnel       |                                | minikube | jan  | v1.23.2 | Sat, 06 Nov 2021 15:50:28 CET | Sat, 06 Nov 2021 15:52:00 CET |
| delete       |                                | minikube | jan  | v1.23.2 | Sat, 06 Nov 2021 15:53:54 CET | Sat, 06 Nov 2021 15:54:24 CET |
| start        | --memory=4096 --driver=docker  | minikube | jan  | v1.23.2 | Sat, 06 Nov 2021 15:54:39 CET | Sat, 06 Nov 2021 15:55:20 CET |
|              | --kubernetes-version=v1.21.6   |          |      |         |                               |                               |
|              | --cpus=4                       |          |      |         |                               |                               |
| docker-env   |                                | minikube | jan  | v1.23.2 | Sat, 06 Nov 2021 15:57:43 CET | Sat, 06 Nov 2021 15:57:44 CET |
| tunnel       | -h                             | minikube | jan  | v1.23.2 | Sat, 06 Nov 2021 15:57:52 CET | Sat, 06 Nov 2021 15:57:52 CET |
| docker-env   |                                | minikube | jan  | v1.23.2 | Sat, 06 Nov 2021 16:27:11 CET | Sat, 06 Nov 2021 16:27:13 CET |
| service      | --url nginx                    | minikube | jan  | v1.23.2 | Sat, 06 Nov 2021 16:27:30 CET | Sat, 06 Nov 2021 16:28:25 CET |
| service      | list                           | minikube | jan  | v1.23.2 | Sat, 06 Nov 2021 16:29:10 CET | Sat, 06 Nov 2021 16:29:11 CET |
| profile      | list                           | minikube | jan  | v1.23.2 | Sat, 06 Nov 2021 16:29:51 CET | Sat, 06 Nov 2021 16:29:52 CET |
| ip           |                                | minikube | jan  | v1.23.2 | Sat, 06 Nov 2021 16:30:34 CET | Sat, 06 Nov 2021 16:30:35 CET |
| tunnel       |                                | minikube | jan  | v1.23.2 | Sat, 06 Nov 2021 16:19:47 CET | Sat, 06 Nov 2021 16:52:19 CET |
| tunnel       |                                | minikube | jan  | v1.23.2 | Sat, 06 Nov 2021 16:52:20 CET | Sat, 06 Nov 2021 16:54:31 CET |
| addons       | enable ingress-dns             | minikube | jan  | v1.23.2 | Sat, 06 Nov 2021 16:56:33 CET | Sat, 06 Nov 2021 16:56:34 CET |
| tunnel       | --cleanup                      | minikube | jan  | v1.23.2 | Sat, 06 Nov 2021 16:54:31 CET | Sat, 06 Nov 2021 16:56:39 CET |
| profile      | list                           | minikube | jan  | v1.23.2 | Sat, 06 Nov 2021 16:57:20 CET | Sat, 06 Nov 2021 16:57:21 CET |
| ip           |                                | minikube | jan  | v1.23.2 | Sat, 06 Nov 2021 17:03:59 CET | Sat, 06 Nov 2021 17:04:00 CET |
| tunnel       | --cleanup                      | minikube | jan  | v1.23.2 | Sat, 06 Nov 2021 16:56:41 CET | Sat, 06 Nov 2021 17:39:23 CET |
| tunnel       | --cleanup                      | minikube | jan  | v1.23.2 | Sat, 06 Nov 2021 17:39:25 CET | Sat, 06 Nov 2021 17:40:36 CET |
| ip           |                                | minikube | jan  | v1.23.2 | Sat, 06 Nov 2021 17:43:32 CET | Sat, 06 Nov 2021 17:43:33 CET |
| ip           |                                | minikube | jan  | v1.23.2 | Sat, 06 Nov 2021 17:56:33 CET | Sat, 06 Nov 2021 17:56:33 CET |
| ip           |                                | minikube | jan  | v1.23.2 | Sat, 06 Nov 2021 17:56:35 CET | Sat, 06 Nov 2021 17:56:35 CET |
| docker-env   |                                | minikube | jan  | v1.23.2 | Sat, 06 Nov 2021 17:56:45 CET | Sat, 06 Nov 2021 17:56:46 CET |
| addons       | list                           | minikube | jan  | v1.23.2 | Sat, 06 Nov 2021 17:57:23 CET | Sat, 06 Nov 2021 17:57:23 CET |
| delete       |                                | minikube | jan  | v1.23.2 | Sat, 06 Nov 2021 18:13:11 CET | Sat, 06 Nov 2021 18:13:19 CET |
| start        | --memory=4096 --driver=docker  | minikube | jan  | v1.24.0 | Sat, 06 Nov 2021 18:18:28 CET | Sat, 06 Nov 2021 18:20:10 CET |
|              | --kubernetes-version=v1.21.6   |          |      |         |                               |                               |
|              | --cpus=4                       |          |      |         |                               |                               |
| addons       | enable ingress-dns             | minikube | jan  | v1.24.0 | Sat, 06 Nov 2021 18:20:24 CET | Sat, 06 Nov 2021 18:20:25 CET |
| ip           |                                | minikube | jan  | v1.24.0 | Sat, 06 Nov 2021 18:21:27 CET | Sat, 06 Nov 2021 18:21:28 CET |
| docker-env   |                                | minikube | jan  | v1.24.0 | Sat, 06 Nov 2021 18:22:04 CET | Sat, 06 Nov 2021 18:22:05 CET |
| ip           |                                | minikube | jan  | v1.24.0 | Sat, 06 Nov 2021 18:22:08 CET | Sat, 06 Nov 2021 18:22:09 CET |
| ip           |                                | minikube | jan  | v1.24.0 | Sat, 06 Nov 2021 18:23:06 CET | Sat, 06 Nov 2021 18:23:06 CET |
| ip           |                                | minikube | jan  | v1.24.0 | Sat, 06 Nov 2021 18:23:22 CET | Sat, 06 Nov 2021 18:23:22 CET |
| ip           |                                | minikube | jan  | v1.24.0 | Sat, 06 Nov 2021 18:23:36 CET | Sat, 06 Nov 2021 18:23:37 CET |
| ip           |                                | minikube | jan  | v1.24.0 | Sat, 06 Nov 2021 18:24:00 CET | Sat, 06 Nov 2021 18:24:01 CET |
| addons       | enable ingress                 | minikube | jan  | v1.24.0 | Sat, 06 Nov 2021 18:24:41 CET | Sat, 06 Nov 2021 18:25:00 CET |
| ip           |                                | minikube | jan  | v1.24.0 | Sat, 06 Nov 2021 18:26:00 CET | Sat, 06 Nov 2021 18:26:00 CET |
| logs         |                                | minikube | jan  | v1.24.0 | Sat, 06 Nov 2021 18:29:42 CET | Sat, 06 Nov 2021 18:29:44 CET |
| update-check |                                | minikube | jan  | v1.24.0 | Sat, 06 Nov 2021 18:31:00 CET | Sat, 06 Nov 2021 18:31:03 CET |
| delete       |                                | minikube | jan  | v1.24.0 | Sat, 06 Nov 2021 18:32:37 CET | Sat, 06 Nov 2021 18:32:42 CET |
| start        | --memory=4096 --driver=docker  | minikube | jan  | v1.24.0 | Sat, 06 Nov 2021 18:32:59 CET | Sat, 06 Nov 2021 18:34:00 CET |
|              | --cpus=4                       |          |      |         |                               |                               |
| addons       | enable ingress                 | minikube | jan  | v1.24.0 | Sat, 06 Nov 2021 18:41:27 CET | Sat, 06 Nov 2021 18:41:46 CET |
| addons       | enable ingress-dns             | minikube | jan  | v1.24.0 | Sat, 06 Nov 2021 18:41:50 CET | Sat, 06 Nov 2021 18:41:51 CET |
| ip           |                                | minikube | jan  | v1.24.0 | Sat, 06 Nov 2021 18:43:07 CET | Sat, 06 Nov 2021 18:43:08 CET |
| ip           |                                | minikube | jan  | v1.24.0 | Sat, 06 Nov 2021 18:48:11 CET | Sat, 06 Nov 2021 18:48:12 CET |
|--------------|--------------------------------|----------|------|---------|-------------------------------|-------------------------------|


==> Last Start <==
Log file created at: 2021/11/06 18:32:59
Running on machine: Jans-MacBook-Pro
Binary: Built with gc go1.17.2 for darwin/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1106 18:32:59.318253   10621 out.go:297] Setting OutFile to fd 1 ...
I1106 18:32:59.318439   10621 out.go:349] isatty.IsTerminal(1) = true
I1106 18:32:59.318442   10621 out.go:310] Setting ErrFile to fd 2...
I1106 18:32:59.318446   10621 out.go:349] isatty.IsTerminal(2) = true
I1106 18:32:59.318537   10621 root.go:313] Updating PATH: /Users/jan/.minikube/bin
I1106 18:32:59.318561   10621 oci.go:561] shell is pointing to dockerd inside minikube. will unset to use host
I1106 18:32:59.318949   10621 out.go:304] Setting JSON to false
I1106 18:32:59.381543   10621 start.go:112] hostinfo: {"hostname":"Jans-MacBook-Pro.local","uptime":859185,"bootTime":1635360794,"procs":635,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.6","kernelVersion":"20.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"3a460e0e-a60a-52a0-8d25-6ededad90182"}
W1106 18:32:59.381673   10621 start.go:120] gopshost.Virtualization returned error: not implemented yet
I1106 18:32:59.402329   10621 out.go:176] 😄  minikube v1.24.0 on Darwin 11.6
I1106 18:32:59.402494   10621 notify.go:174] Checking for updates...
I1106 18:32:59.433051   10621 out.go:176]     ▪ MINIKUBE_ACTIVE_DOCKERD=minikube
I1106 18:32:59.433430   10621 driver.go:343] Setting default libvirt URI to qemu:///system
I1106 18:33:00.015086   10621 docker.go:132] docker version: linux-20.10.8
I1106 18:33:00.015291   10621 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I1106 18:33:00.629938   10621 info.go:263] docker info: {ID:HL6Y:NP2P:5U4Q:WRHC:7YB6:LTED:FWML:DFVZ:6CUT:5C2D:MI6F:5KCZ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:45 SystemTime:2021-11-06 17:33:00.1721638 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.47-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:5175267328 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.3] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.0.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
I1106 18:33:00.672703   10621 out.go:176] ✨  Using the docker driver based on user configuration
I1106 18:33:00.672790   10621 start.go:280] selected driver: docker
I1106 18:33:00.672797   10621 start.go:762] validating driver "docker" against <nil>
I1106 18:33:00.672815   10621 start.go:773] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I1106 18:33:00.673177   10621 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I1106 18:33:00.910251   10621 info.go:263] docker info: {ID:HL6Y:NP2P:5U4Q:WRHC:7YB6:LTED:FWML:DFVZ:6CUT:5C2D:MI6F:5KCZ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:45 SystemTime:2021-11-06 17:33:00.8215333 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.47-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:5175267328 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.3] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.0.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
I1106 18:33:00.910343   10621 start_flags.go:268] no existing cluster config was found, will generate one from the flags
I1106 18:33:00.910488   10621 start_flags.go:736] Wait components to verify : map[apiserver:true system_pods:true]
I1106 18:33:00.910502   10621 cni.go:93] Creating CNI manager for ""
I1106 18:33:00.910508   10621 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I1106 18:33:00.910513   10621 start_flags.go:282] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4096 CPUs:4 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
I1106 18:33:00.950362   10621 out.go:176] 👍  Starting control plane node minikube in cluster minikube
I1106 18:33:00.950449   10621 cache.go:118] Beginning downloading kic base image for docker with docker
I1106 18:33:00.969274   10621 out.go:176] 🚜  Pulling base image ...
I1106 18:33:00.969350   10621 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
I1106 18:33:00.969440   10621 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
I1106 18:33:01.109048   10621 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v13-v1.22.3-docker-overlay2-amd64.tar.lz4
I1106 18:33:01.109072   10621 cache.go:57] Caching tarball of preloaded images
I1106 18:33:01.109337   10621 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
I1106 18:33:01.128774   10621 out.go:176] 💾  Downloading Kubernetes v1.22.3 preload ...
I1106 18:33:01.128812   10621 preload.go:238] getting checksum for preloaded-images-k8s-v13-v1.22.3-docker-overlay2-amd64.tar.lz4 ...
I1106 18:33:01.177839   10621 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
I1106 18:33:01.177861   10621 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
I1106 18:33:01.354584   10621 download.go:100] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v13-v1.22.3-docker-overlay2-amd64.tar.lz4?checksum=md5:40b3c09dd22c40c7510f649d667cddd5 -> /Users/jan/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v13-v1.22.3-docker-overlay2-amd64.tar.lz4
I1106 18:33:30.323623   10621 preload.go:248] saving checksum for preloaded-images-k8s-v13-v1.22.3-docker-overlay2-amd64.tar.lz4 ...
I1106 18:33:30.324761   10621 preload.go:255] verifying checksumm of /Users/jan/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v13-v1.22.3-docker-overlay2-amd64.tar.lz4 ...
I1106 18:33:31.374663   10621 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
I1106 18:33:31.374865   10621 profile.go:147] Saving config to /Users/jan/.minikube/profiles/minikube/config.json ...
I1106 18:33:31.374890   10621 lock.go:35] WriteFile acquiring /Users/jan/.minikube/profiles/minikube/config.json: {Name:mke871c32276400b4c4877d748f3cbbd96ba49ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1106 18:33:31.375101   10621 cache.go:206] Successfully downloaded all kic artifacts
I1106 18:33:31.375121   10621 start.go:313] acquiring machines lock for minikube: {Name:mkb79e848cf021ed81b18acdcc9279e6af0c3f25 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1106 18:33:31.375157   10621 start.go:317] acquired machines lock for "minikube" in 30.44µs
I1106 18:33:31.375171   10621 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4096 CPUs:4 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
I1106 18:33:31.375200   10621 start.go:126] createHost starting for "" (driver="docker")
I1106 18:33:31.414517   10621 out.go:203] 🔥  Creating docker container (CPUs=4, Memory=4096MB) ...
I1106 18:33:31.414876   10621 start.go:160] libmachine.API.Create for "minikube" (driver="docker")
I1106 18:33:31.414918   10621 client.go:168] LocalClient.Create starting
I1106 18:33:31.415123   10621 main.go:130] libmachine: Reading certificate data from /Users/jan/.minikube/certs/ca.pem
I1106 18:33:31.415614   10621 main.go:130] libmachine: Decoding PEM data...
I1106 18:33:31.415644   10621 main.go:130] libmachine: Parsing certificate...
I1106 18:33:31.415753   10621 main.go:130] libmachine: Reading certificate data from /Users/jan/.minikube/certs/cert.pem
I1106 18:33:31.416180   10621 main.go:130] libmachine: Decoding PEM data...
I1106 18:33:31.416198   10621 main.go:130] libmachine: Parsing certificate...
I1106 18:33:31.417401   10621 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1106 18:33:31.860082   10621 cli_runner.go:162] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1106 18:33:31.860497   10621 network_create.go:254] running [docker network inspect minikube] to gather additional debugging logs...
I1106 18:33:31.860542   10621 cli_runner.go:115] Run: docker network inspect minikube
W1106 18:33:32.027637   10621 cli_runner.go:162] docker network inspect minikube returned with exit code 1
I1106 18:33:32.027660   10621 network_create.go:257] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1
stdout:
[]

stderr:
Error: No such network: minikube
I1106 18:33:32.027671   10621 network_create.go:259] output of [docker network inspect minikube]: -- stdout --
[]

-- /stdout --
** stderr **
Error: No such network: minikube

** /stderr **
I1106 18:33:32.027795   10621 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1106 18:33:32.226480   10621 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0007144a0] misses:0}
I1106 18:33:32.226514   10621 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I1106 18:33:32.226530   10621 network_create.go:106] attempt to create docker network minikube 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1106 18:33:32.226638   10621 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true minikube
I1106 18:33:32.427706   10621 network_create.go:90] docker network minikube 192.168.49.0/24 created
I1106 18:33:32.427730   10621 kic.go:106] calculated static IP "192.168.49.2" for the "minikube" container
I1106 18:33:32.427879   10621 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
I1106 18:33:32.591075   10621 cli_runner.go:115] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I1106 18:33:32.761204   10621 oci.go:102] Successfully created a docker volume minikube
I1106 18:33:32.761428   10621 cli_runner.go:115] Run: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
I1106 18:33:33.569189   10621 oci.go:106] Successfully prepared a docker volume minikube
I1106 18:33:33.569284   10621 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
I1106 18:33:33.569300   10621 kic.go:179] Starting extracting preloaded images to volume ...
I1106 18:33:33.569346   10621 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
I1106 18:33:33.569425   10621 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jan/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v13-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
I1106 18:33:34.649156   10621 cli_runner.go:168] Completed: docker info --format "'{{json .SecurityOptions}}'": (1.079709351s)
I1106 18:33:34.649660   10621 cli_runner.go:115] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=4 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c
I1106 18:33:35.858571   10621 cli_runner.go:168] Completed: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=4 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c: (1.208777413s)
I1106 18:33:35.858719   10621 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Running}}
I1106 18:33:36.048074   10621 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I1106 18:33:36.235127   10621 cli_runner.go:115] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables
I1106 18:33:36.544281   10621 oci.go:281] the created container "minikube" has a running status.
I1106 18:33:36.544304   10621 kic.go:210] Creating ssh key for kic: /Users/jan/.minikube/machines/minikube/id_rsa...
I1106 18:33:36.705148   10621 kic_runner.go:187] docker (temp): /Users/jan/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1106 18:33:37.095539   10621 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I1106 18:33:37.352988   10621 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1106 18:33:37.353002   10621 kic_runner.go:114] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I1106 18:33:40.980902   10621 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jan/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v13-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (7.411382862s)
I1106 18:33:40.980920   10621 kic.go:188] duration metric: took 7.411601 seconds to extract preloaded images to volume
I1106 18:33:40.981080   10621 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I1106 18:33:41.250470   10621 machine.go:88] provisioning docker machine ...
I1106 18:33:41.250606   10621 ubuntu.go:169] provisioning hostname "minikube"
I1106 18:33:41.250778   10621 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1106 18:33:41.418470   10621 main.go:130] libmachine: Using SSH client type: native
I1106 18:33:41.419482   10621 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x43a0ca0] 0x43a3d80 <nil>  [] 0s} 127.0.0.1 58106 <nil> <nil>}
I1106 18:33:41.419491   10621 main.go:130] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I1106 18:33:41.550527   10621 main.go:130] libmachine: SSH cmd err, output: <nil>: minikube

I1106 18:33:41.550635   10621 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1106 18:33:41.719390   10621 main.go:130] libmachine: Using SSH client type: native
I1106 18:33:41.719612   10621 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x43a0ca0] 0x43a3d80 <nil>  [] 0s} 127.0.0.1 58106 <nil> <nil>}
I1106 18:33:41.719621   10621 main.go:130] libmachine: About to run SSH command:

		if ! grep -xq '.*\sminikube' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
			else
				echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts;
			fi
		fi
I1106 18:33:41.835611   10621 main.go:130] libmachine: SSH cmd err, output: <nil>:
I1106 18:33:41.835624   10621 ubuntu.go:175] set auth options {CertDir:/Users/jan/.minikube CaCertPath:/Users/jan/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jan/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jan/.minikube/machines/server.pem ServerKeyPath:/Users/jan/.minikube/machines/server-key.pem ClientKeyPath:/Users/jan/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jan/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jan/.minikube}
I1106 18:33:41.835640   10621 ubuntu.go:177] setting up certificates
I1106 18:33:41.835665   10621 provision.go:83] configureAuth start
I1106 18:33:41.835770   10621 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I1106 18:33:42.010820   10621 provision.go:138] copyHostCerts
I1106 18:33:42.010947   10621 exec_runner.go:144] found /Users/jan/.minikube/cert.pem, removing ...
I1106 18:33:42.010954   10621 exec_runner.go:207] rm: /Users/jan/.minikube/cert.pem
I1106 18:33:42.011079   10621 exec_runner.go:151] cp: /Users/jan/.minikube/certs/cert.pem --> /Users/jan/.minikube/cert.pem (1115 bytes)
I1106 18:33:42.011689   10621 exec_runner.go:144] found /Users/jan/.minikube/key.pem, removing ...
I1106 18:33:42.011693   10621 exec_runner.go:207] rm: /Users/jan/.minikube/key.pem
I1106 18:33:42.011786   10621 exec_runner.go:151] cp: /Users/jan/.minikube/certs/key.pem --> /Users/jan/.minikube/key.pem (1679 bytes)
I1106 18:33:42.012383   10621 exec_runner.go:144] found /Users/jan/.minikube/ca.pem, removing ...
I1106 18:33:42.012388   10621 exec_runner.go:207] rm: /Users/jan/.minikube/ca.pem
I1106 18:33:42.012531   10621 exec_runner.go:151] cp: /Users/jan/.minikube/certs/ca.pem --> /Users/jan/.minikube/ca.pem (1070 bytes)
I1106 18:33:42.012852   10621 provision.go:112] generating server cert: /Users/jan/.minikube/machines/server.pem ca-key=/Users/jan/.minikube/certs/ca.pem private-key=/Users/jan/.minikube/certs/ca-key.pem org=jan.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I1106 18:33:42.504438   10621 provision.go:172] copyRemoteCerts
I1106 18:33:42.504567   10621 ssh_runner.go:152] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1106 18:33:42.504629   10621 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1106 18:33:42.669875   10621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58106 SSHKeyPath:/Users/jan/.minikube/machines/minikube/id_rsa Username:docker}
I1106 18:33:42.755612   10621 ssh_runner.go:319] scp /Users/jan/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1070 bytes)
I1106 18:33:42.777642   10621 ssh_runner.go:319] scp /Users/jan/.minikube/machines/server.pem --> /etc/docker/server.pem (1192 bytes)
I1106 18:33:42.799780   10621 ssh_runner.go:319] scp /Users/jan/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1106 18:33:42.821464   10621 provision.go:86] duration metric: configureAuth took 985.780734ms
I1106 18:33:42.821474   10621 ubuntu.go:193] setting minikube options for container-runtime
I1106 18:33:42.821917   10621 config.go:176] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
I1106 18:33:42.822005   10621 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1106 18:33:42.990993   10621 main.go:130] libmachine: Using SSH client type: native
I1106 18:33:42.991247   10621 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x43a0ca0] 0x43a3d80 <nil>  [] 0s} 127.0.0.1 58106 <nil> <nil>}
I1106 18:33:42.991262   10621 main.go:130] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1106 18:33:43.110176   10621 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay

I1106 18:33:43.110189   10621 ubuntu.go:71] root file system type: overlay
I1106 18:33:43.110423   10621 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I1106 18:33:43.110558   10621 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1106 18:33:43.277314   10621 main.go:130] libmachine: Using SSH client type: native
I1106 18:33:43.277513   10621 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x43a0ca0] 0x43a3d80 <nil>  [] 0s} 127.0.0.1 58106 <nil> <nil>}
I1106 18:33:43.277578   10621 main.go:130] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1106 18:33:43.404912   10621 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I1106 18:33:43.405085   10621 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1106 18:33:43.570025   10621 main.go:130] libmachine: Using SSH client type: native
I1106 18:33:43.570234   10621 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x43a0ca0] 0x43a3d80 <nil>  [] 0s} 127.0.0.1 58106 <nil> <nil>}
I1106 18:33:43.570245   10621 main.go:130] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1106 18:33:44.204282   10621 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-07-30 19:52:33.000000000 +0000
+++ /lib/systemd/system/docker.service.new	2021-11-06 17:33:43.419493000 +0000
@@ -1,30 +1,32 @@
 [Unit]
 Description=Docker Application Container Engine
 Documentation=https://docs.docker.com
+BindsTo=containerd.service
 After=network-online.target firewalld.service containerd.service
 Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60

 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure

-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID

 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
 LimitNPROC=infinity
 LimitCORE=infinity

-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0

 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes

 # kill only the docker process, not all processes in the cgroup
 KillMode=process
-OOMScoreAdjust=-500

 [Install]
 WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker

I1106 18:33:44.204334   10621 machine.go:91] provisioned docker machine in 2.953746464s
I1106 18:33:44.204347   10621 client.go:171] LocalClient.Create took 12.789390857s
I1106 18:33:44.204488   10621 start.go:168] duration metric: libmachine.API.Create for "minikube" took 12.789574489s
I1106 18:33:44.204497   10621 start.go:267] post-start starting for "minikube" (driver="docker")
I1106 18:33:44.204500   10621 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1106 18:33:44.204721   10621 ssh_runner.go:152] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1106 18:33:44.204796   10621 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1106 18:33:44.373284   10621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58106 SSHKeyPath:/Users/jan/.minikube/machines/minikube/id_rsa Username:docker}
I1106 18:33:44.461670   10621 ssh_runner.go:152] Run: cat /etc/os-release
I1106 18:33:44.466124   10621 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1106 18:33:44.466139   10621 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1106 18:33:44.466144   10621 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1106 18:33:44.466150   10621 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I1106 18:33:44.466156   10621 filesync.go:126] Scanning /Users/jan/.minikube/addons for local assets ...
I1106 18:33:44.466289   10621 filesync.go:126] Scanning /Users/jan/.minikube/files for local assets ...
I1106 18:33:44.466342   10621 start.go:270] post-start completed in 261.841311ms
I1106 18:33:44.467083   10621 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I1106 18:33:44.639144   10621 profile.go:147] Saving config to /Users/jan/.minikube/profiles/minikube/config.json ...
I1106 18:33:44.639834   10621 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1106 18:33:44.639892   10621 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1106 18:33:44.810274   10621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58106 SSHKeyPath:/Users/jan/.minikube/machines/minikube/id_rsa Username:docker}
I1106 18:33:44.899420   10621 start.go:129] duration metric: createHost completed in 13.524172416s
I1106 18:33:44.899431   10621 start.go:80] releasing machines lock for "minikube", held for 13.524233313s
I1106 18:33:44.900397   10621 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I1106 18:33:45.072666   10621 ssh_runner.go:152] Run: curl -sS -m 2 https://k8s.gcr.io/
I1106 18:33:45.072797   10621 ssh_runner.go:152] Run: systemctl --version
I1106 18:33:45.072888   10621 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1106 18:33:45.074082   10621 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1106 18:33:45.242610   10621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58106 SSHKeyPath:/Users/jan/.minikube/machines/minikube/id_rsa Username:docker}
I1106 18:33:45.255775   10621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58106 SSHKeyPath:/Users/jan/.minikube/machines/minikube/id_rsa Username:docker}
I1106 18:33:45.325717   10621 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service containerd
I1106 18:33:45.603191   10621 ssh_runner.go:152] Run: sudo systemctl cat docker.service
I1106 18:33:45.615481   10621 cruntime.go:255] skipping containerd shutdown because we are bound to it
I1106 18:33:45.615673   10621 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service crio
I1106 18:33:45.626984   10621 ssh_runner.go:152] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I1106 18:33:45.644731   10621 ssh_runner.go:152] Run: sudo systemctl unmask docker.service
I1106 18:33:45.707277   10621 ssh_runner.go:152] Run: sudo systemctl enable docker.socket
I1106 18:33:45.772139   10621 ssh_runner.go:152] Run: sudo systemctl cat docker.service
I1106 18:33:45.784894   10621 ssh_runner.go:152] Run: sudo systemctl daemon-reload
I1106 18:33:45.843509   10621 ssh_runner.go:152] Run: sudo systemctl start docker
I1106 18:33:45.857339   10621 ssh_runner.go:152] Run: docker version --format {{.Server.Version}}
I1106 18:33:45.903177   10621 ssh_runner.go:152] Run: docker version --format {{.Server.Version}}
I1106 18:33:45.975436   10621 out.go:203] 🐳  Preparing Kubernetes v1.22.3 on Docker 20.10.8 ...
I1106 18:33:45.975846   10621 cli_runner.go:115] Run: docker exec -t minikube dig +short host.docker.internal
I1106 18:33:46.263367   10621 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
I1106 18:33:46.263582   10621 ssh_runner.go:152] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
I1106 18:33:46.269229   10621 ssh_runner.go:152] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1106 18:33:46.281722   10621 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I1106 18:33:46.450629   10621 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
I1106 18:33:46.450734   10621 ssh_runner.go:152] Run: docker images --format {{.Repository}}:{{.Tag}}
I1106 18:33:46.491664   10621 docker.go:558] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.22.3
k8s.gcr.io/kube-scheduler:v1.22.3
k8s.gcr.io/kube-controller-manager:v1.22.3
k8s.gcr.io/kube-proxy:v1.22.3
kubernetesui/dashboard:v2.3.1
k8s.gcr.io/etcd:3.5.0-0
kubernetesui/metrics-scraper:v1.0.7
k8s.gcr.io/coredns/coredns:v1.8.4
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.5

-- /stdout --
I1106 18:33:46.491689   10621 docker.go:489] Images already preloaded, skipping extraction
I1106 18:33:46.491800   10621 ssh_runner.go:152] Run: docker images --format {{.Repository}}:{{.Tag}}
I1106 18:33:46.532831   10621 docker.go:558] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.22.3
k8s.gcr.io/kube-scheduler:v1.22.3
k8s.gcr.io/kube-controller-manager:v1.22.3
k8s.gcr.io/kube-proxy:v1.22.3
kubernetesui/dashboard:v2.3.1
k8s.gcr.io/etcd:3.5.0-0
kubernetesui/metrics-scraper:v1.0.7
k8s.gcr.io/coredns/coredns:v1.8.4
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.5

-- /stdout --
I1106 18:33:46.532852   10621 cache_images.go:79] Images are preloaded, skipping loading
I1106 18:33:46.532961   10621 ssh_runner.go:152] Run: docker info --format {{.CgroupDriver}}
I1106 18:33:46.628396   10621 cni.go:93] Creating CNI manager for ""
I1106 18:33:46.628406   10621 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I1106 18:33:46.628419   10621 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I1106 18:33:46.628432   10621 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.22.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I1106 18:33:46.628634   10621 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.22.3
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s

I1106 18:33:46.628860   10621 kubeadm.go:909] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.22.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

[Install]
 config:
{KubernetesVersion:v1.22.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I1106 18:33:46.629066   10621 ssh_runner.go:152] Run: sudo ls /var/lib/minikube/binaries/v1.22.3
I1106 18:33:46.638069   10621 binaries.go:44] Found k8s binaries, skipping transfer
I1106 18:33:46.638222   10621 ssh_runner.go:152] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1106 18:33:46.646405   10621 ssh_runner.go:319] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes)
I1106 18:33:46.661488   10621 ssh_runner.go:319] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1106 18:33:46.676788   10621 ssh_runner.go:319] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2051 bytes)
I1106 18:33:46.690926   10621 ssh_runner.go:152] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
I1106 18:33:46.695337   10621 ssh_runner.go:152] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1106 18:33:46.705778   10621 certs.go:54] Setting up /Users/jan/.minikube/profiles/minikube for IP: 192.168.49.2
I1106 18:33:46.706278   10621 certs.go:182] skipping minikubeCA CA generation: /Users/jan/.minikube/ca.key
I1106 18:33:46.706572   10621 certs.go:182] skipping proxyClientCA CA generation: /Users/jan/.minikube/proxy-client-ca.key
I1106 18:33:46.706634   10621 certs.go:302] generating minikube-user signed cert: /Users/jan/.minikube/profiles/minikube/client.key
I1106 18:33:46.706651   10621 crypto.go:68] Generating cert /Users/jan/.minikube/profiles/minikube/client.crt with IP's: []
I1106 18:33:46.823383   10621 crypto.go:156] Writing cert to /Users/jan/.minikube/profiles/minikube/client.crt ...
I1106 18:33:46.823393   10621 lock.go:35] WriteFile acquiring /Users/jan/.minikube/profiles/minikube/client.crt: {Name:mkac9e4511e921fedbe3738e2185501d944768c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1106 18:33:46.823668   10621 crypto.go:164] Writing key to /Users/jan/.minikube/profiles/minikube/client.key ...
I1106 18:33:46.823673   10621 lock.go:35] WriteFile acquiring /Users/jan/.minikube/profiles/minikube/client.key: {Name:mk5e8fc75ffb844da6da8950955a142b971d7f7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1106 18:33:46.823853   10621 certs.go:302] generating minikube signed cert: /Users/jan/.minikube/profiles/minikube/apiserver.key.dd3b5fb2
I1106 18:33:46.823869   10621 crypto.go:68] Generating cert /Users/jan/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I1106 18:33:46.920222   10621 crypto.go:156] Writing cert to /Users/jan/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 ...
I1106 18:33:46.920244   10621 lock.go:35] WriteFile acquiring /Users/jan/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2: {Name:mk80c50be02972de564e4e6aa05051ca48613e19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1106 18:33:46.920473   10621 crypto.go:164] Writing key to /Users/jan/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 ...
I1106 18:33:46.920477   10621 lock.go:35] WriteFile acquiring /Users/jan/.minikube/profiles/minikube/apiserver.key.dd3b5fb2: {Name:mkf8c9553326f6ad828532d89ef521dd3442d6bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1106 18:33:46.920610   10621 certs.go:320] copying /Users/jan/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 -> /Users/jan/.minikube/profiles/minikube/apiserver.crt
I1106 18:33:46.920994   10621 certs.go:324] copying /Users/jan/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 -> /Users/jan/.minikube/profiles/minikube/apiserver.key
I1106 18:33:46.921176   10621 certs.go:302] generating aggregator signed cert: /Users/jan/.minikube/profiles/minikube/proxy-client.key
I1106 18:33:46.921192   10621 crypto.go:68] Generating cert /Users/jan/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I1106 18:33:47.039809   10621 crypto.go:156] Writing cert to /Users/jan/.minikube/profiles/minikube/proxy-client.crt ...
I1106 18:33:47.039826   10621 lock.go:35] WriteFile acquiring /Users/jan/.minikube/profiles/minikube/proxy-client.crt: {Name:mk89b4bee027f5fad00b124d94657d27918ab816 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1106 18:33:47.040098   10621 crypto.go:164] Writing key to /Users/jan/.minikube/profiles/minikube/proxy-client.key ...
I1106 18:33:47.040103   10621 lock.go:35] WriteFile acquiring /Users/jan/.minikube/profiles/minikube/proxy-client.key: {Name:mk54f6ac7a8a0185042edb81328a85ce5b3ce067 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1106 18:33:47.040569   10621 certs.go:388] found cert: /Users/jan/.minikube/certs/Users/jan/.minikube/certs/ca-key.pem (1675 bytes)
I1106 18:33:47.040618   10621 certs.go:388] found cert: /Users/jan/.minikube/certs/Users/jan/.minikube/certs/ca.pem (1070 bytes)
I1106 18:33:47.040660   10621 certs.go:388] found cert: /Users/jan/.minikube/certs/Users/jan/.minikube/certs/cert.pem (1115 bytes)
I1106 18:33:47.040695   10621 certs.go:388] found cert: /Users/jan/.minikube/certs/Users/jan/.minikube/certs/key.pem (1679 bytes)
I1106 18:33:47.042275   10621 ssh_runner.go:319] scp /Users/jan/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I1106 18:33:47.063478   10621 ssh_runner.go:319] scp /Users/jan/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1106 18:33:47.085719   10621 ssh_runner.go:319] scp /Users/jan/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1106 18:33:47.107683   10621 ssh_runner.go:319] scp /Users/jan/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1106 18:33:47.129466   10621 ssh_runner.go:319] scp /Users/jan/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1106 18:33:47.149931   10621 ssh_runner.go:319] scp /Users/jan/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1106 18:33:47.170074   10621 ssh_runner.go:319] scp /Users/jan/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1106 18:33:47.191925   10621 ssh_runner.go:319] scp /Users/jan/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1106 18:33:47.213205   10621 ssh_runner.go:319] scp /Users/jan/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1106 18:33:47.235082   10621 ssh_runner.go:319] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1106 18:33:47.250105   10621 ssh_runner.go:152] Run: openssl version
I1106 18:33:47.257723   10621 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1106 18:33:47.267276   10621 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1106 18:33:47.271796   10621 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Oct 31 17:49 /usr/share/ca-certificates/minikubeCA.pem
I1106 18:33:47.271933   10621 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1106 18:33:47.279772   10621 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1106 18:33:47.290659   10621 kubeadm.go:390] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4096 CPUs:4 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
I1106 18:33:47.290791   10621 ssh_runner.go:152] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1106 18:33:47.330663   10621 ssh_runner.go:152] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1106 18:33:47.342546   10621 ssh_runner.go:152] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1106 18:33:47.351713   10621 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
I1106 18:33:47.351929   10621 ssh_runner.go:152] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1106 18:33:47.360997   10621 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1106 18:33:47.361028   10621 ssh_runner.go:243] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1106 18:33:47.991010   10621 out.go:203]     β–ͺ Generating certificates and keys ...
I1106 18:33:50.409459   10621 out.go:203]     β–ͺ Booting up control plane ...
I1106 18:33:58.001325   10621 out.go:203]     β–ͺ Configuring RBAC rules ...
I1106 18:33:58.442974   10621 cni.go:93] Creating CNI manager for ""
I1106 18:33:58.442981   10621 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I1106 18:33:58.443015   10621 ssh_runner.go:152] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1106 18:33:58.443195   10621 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3/kubectl label nodes minikube.k8s.io/version=v1.24.0 minikube.k8s.io/commit=76b94fb3c4e8ac5062daf70d60cf03ddcc0a741b minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2021_11_06T18_33_58_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I1106 18:33:58.443199   10621 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1106 18:33:58.739336   10621 kubeadm.go:985] duration metric: took 296.313115ms to wait for elevateKubeSystemPrivileges.
I1106 18:33:58.739373   10621 ops.go:34] apiserver oom_adj: -16
I1106 18:33:58.739381   10621 kubeadm.go:392] StartCluster complete in 11.448696529s
I1106 18:33:58.739396   10621 settings.go:142] acquiring lock: {Name:mkb8c540e9cf43adbf9f5b2f7bb71764b3ae0189 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1106 18:33:58.739489   10621 settings.go:150] Updating kubeconfig:  /Users/jan/.kube/config
I1106 18:33:58.742483   10621 lock.go:35] WriteFile acquiring /Users/jan/.kube/config: {Name:mk344b9d321e0dc760e76b2bfbe764f5d8a69d58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1106 18:33:59.272342   10621 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "minikube" rescaled to 1
I1106 18:33:59.289899   10621 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1106 18:33:59.289908   10621 addons.go:415] enableAddons start: toEnable=map[], additional=[]
I1106 18:33:59.289906   10621 start.go:229] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
I1106 18:33:59.289986   10621 addons.go:65] Setting default-storageclass=true in profile "minikube"
I1106 18:33:59.308753   10621 out.go:176] πŸ”Ž  Verifying Kubernetes components...
I1106 18:33:59.290000   10621 addons.go:65] Setting storage-provisioner=true in profile "minikube"
I1106 18:33:59.290095   10621 config.go:176] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
I1106 18:33:59.308779   10621 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I1106 18:33:59.308784   10621 addons.go:153] Setting addon storage-provisioner=true in "minikube"
W1106 18:33:59.308794   10621 addons.go:165] addon storage-provisioner should already be in state true
I1106 18:33:59.308821   10621 host.go:66] Checking if "minikube" exists ...
I1106 18:33:59.308929   10621 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
I1106 18:33:59.309255   10621 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I1106 18:33:59.309356   10621 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I1106 18:33:59.361201   10621 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I1106 18:33:59.361224   10621 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I1106 18:33:59.537314   10621 out.go:176]     β–ͺ Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1106 18:33:59.538120   10621 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1106 18:33:59.538127   10621 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1106 18:33:59.538298   10621 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1106 18:33:59.560221   10621 addons.go:153] Setting addon default-storageclass=true in "minikube"
W1106 18:33:59.560471   10621 addons.go:165] addon default-storageclass should already be in state true
I1106 18:33:59.560498   10621 host.go:66] Checking if "minikube" exists ...
I1106 18:33:59.561175   10621 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I1106 18:33:59.582917   10621 api_server.go:51] waiting for apiserver process to appear ...
I1106 18:33:59.583402   10621 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1106 18:33:59.596152   10621 start.go:739] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
I1106 18:33:59.608180   10621 api_server.go:71] duration metric: took 318.238207ms to wait for apiserver process to appear ...
I1106 18:33:59.608194   10621 api_server.go:87] waiting for apiserver healthz status ...
I1106 18:33:59.608205   10621 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:58105/healthz ...
I1106 18:33:59.616474   10621 api_server.go:266] https://127.0.0.1:58105/healthz returned 200:
ok
I1106 18:33:59.618305   10621 api_server.go:140] control plane version: v1.22.3
I1106 18:33:59.618314   10621 api_server.go:130] duration metric: took 10.116997ms to wait for apiserver health ...
I1106 18:33:59.618322   10621 system_pods.go:43] waiting for kube-system pods to appear ...
I1106 18:33:59.628260   10621 system_pods.go:59] 4 kube-system pods found
I1106 18:33:59.628273   10621 system_pods.go:61] "etcd-minikube" [583129af-74d7-4746-bab3-2d980e3bae9e] Pending
I1106 18:33:59.628277   10621 system_pods.go:61] "kube-apiserver-minikube" [ecf62d93-01b4-443c-bd1d-cab1f2bab11a] Pending
I1106 18:33:59.628280   10621 system_pods.go:61] "kube-controller-manager-minikube" [7a99026e-1f5c-45d6-9214-938f79650fba] Pending
I1106 18:33:59.628282   10621 system_pods.go:61] "kube-scheduler-minikube" [26c28bb2-0213-4e0d-b6ad-ce9555365833] Pending
I1106 18:33:59.628285   10621 system_pods.go:74] duration metric: took 9.959994ms to wait for pod list to return data ...
I1106 18:33:59.628291   10621 kubeadm.go:547] duration metric: took 338.355787ms to wait for : map[apiserver:true system_pods:true] ...
I1106 18:33:59.628300   10621 node_conditions.go:102] verifying NodePressure condition ...
I1106 18:33:59.633171   10621 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
I1106 18:33:59.633185   10621 node_conditions.go:123] node cpu capacity is 8
I1106 18:33:59.633196   10621 node_conditions.go:105] duration metric: took 4.893565ms to run NodePressure ...
I1106 18:33:59.633204   10621 start.go:234] waiting for startup goroutines ...
I1106 18:33:59.728485   10621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58106 SSHKeyPath:/Users/jan/.minikube/machines/minikube/id_rsa Username:docker}
I1106 18:33:59.746801   10621 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
I1106 18:33:59.746809   10621 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1106 18:33:59.746908   10621 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1106 18:33:59.829689   10621 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1106 18:33:59.930060   10621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58106 SSHKeyPath:/Users/jan/.minikube/machines/minikube/id_rsa Username:docker}
I1106 18:34:00.035209   10621 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1106 18:34:00.262742   10621 out.go:176] 🌟  Enabled addons: storage-provisioner, default-storageclass
I1106 18:34:00.262820   10621 addons.go:417] enableAddons completed in 972.932299ms
I1106 18:34:00.312088   10621 start.go:473] kubectl: 1.22.3, cluster: 1.22.3 (minor skew: 0)
I1106 18:34:00.347300   10621 out.go:176] πŸ„  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default


==> Docker <==
-- Logs begin at Sat 2021-11-06 17:33:36 UTC, end at Sat 2021-11-06 17:49:12 UTC. --
Nov 06 17:33:36 minikube systemd[1]: Starting Docker Application Container Engine...
Nov 06 17:33:36 minikube dockerd[212]: time="2021-11-06T17:33:36.452991600Z" level=info msg="Starting up"
Nov 06 17:33:36 minikube dockerd[212]: time="2021-11-06T17:33:36.459176200Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Nov 06 17:33:36 minikube dockerd[212]: time="2021-11-06T17:33:36.459584600Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Nov 06 17:33:36 minikube dockerd[212]: time="2021-11-06T17:33:36.459735400Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Nov 06 17:33:36 minikube dockerd[212]: time="2021-11-06T17:33:36.459773300Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Nov 06 17:33:36 minikube dockerd[212]: time="2021-11-06T17:33:36.462579300Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Nov 06 17:33:36 minikube dockerd[212]: time="2021-11-06T17:33:36.462697000Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Nov 06 17:33:36 minikube dockerd[212]: time="2021-11-06T17:33:36.462731200Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Nov 06 17:33:36 minikube dockerd[212]: time="2021-11-06T17:33:36.462749200Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Nov 06 17:33:36 minikube dockerd[212]: time="2021-11-06T17:33:36.835506100Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Nov 06 17:33:36 minikube dockerd[212]: time="2021-11-06T17:33:36.868162500Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Nov 06 17:33:36 minikube dockerd[212]: time="2021-11-06T17:33:36.868210000Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Nov 06 17:33:36 minikube dockerd[212]: time="2021-11-06T17:33:36.868394100Z" level=info msg="Loading containers: start."
Nov 06 17:33:36 minikube dockerd[212]: time="2021-11-06T17:33:36.961673400Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Nov 06 17:33:37 minikube dockerd[212]: time="2021-11-06T17:33:37.020758900Z" level=info msg="Loading containers: done."
Nov 06 17:33:37 minikube dockerd[212]: time="2021-11-06T17:33:37.202052500Z" level=info msg="Docker daemon" commit=75249d8 graphdriver(s)=overlay2 version=20.10.8
Nov 06 17:33:37 minikube dockerd[212]: time="2021-11-06T17:33:37.202234900Z" level=info msg="Daemon has completed initialization"
Nov 06 17:33:37 minikube systemd[1]: Started Docker Application Container Engine.
Nov 06 17:33:37 minikube dockerd[212]: time="2021-11-06T17:33:37.255715400Z" level=info msg="API listen on /run/docker.sock"
Nov 06 17:33:43 minikube systemd[1]: docker.service: Current command vanished from the unit file, execution of the command list won't be resumed.
Nov 06 17:33:43 minikube systemd[1]: Stopping Docker Application Container Engine...
Nov 06 17:33:43 minikube dockerd[212]: time="2021-11-06T17:33:43.975121500Z" level=info msg="Processing signal 'terminated'"
Nov 06 17:33:43 minikube dockerd[212]: time="2021-11-06T17:33:43.976680100Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Nov 06 17:33:43 minikube dockerd[212]: time="2021-11-06T17:33:43.977709000Z" level=info msg="Daemon shutdown complete"
Nov 06 17:33:43 minikube systemd[1]: docker.service: Succeeded.
Nov 06 17:33:43 minikube systemd[1]: Stopped Docker Application Container Engine.
Nov 06 17:33:43 minikube systemd[1]: Starting Docker Application Container Engine...
Nov 06 17:33:44 minikube dockerd[470]: time="2021-11-06T17:33:44.030373900Z" level=info msg="Starting up"
Nov 06 17:33:44 minikube dockerd[470]: time="2021-11-06T17:33:44.032759200Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Nov 06 17:33:44 minikube dockerd[470]: time="2021-11-06T17:33:44.032801300Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Nov 06 17:33:44 minikube dockerd[470]: time="2021-11-06T17:33:44.032903400Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Nov 06 17:33:44 minikube dockerd[470]: time="2021-11-06T17:33:44.032930700Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Nov 06 17:33:44 minikube dockerd[470]: time="2021-11-06T17:33:44.034315600Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Nov 06 17:33:44 minikube dockerd[470]: time="2021-11-06T17:33:44.034351800Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Nov 06 17:33:44 minikube dockerd[470]: time="2021-11-06T17:33:44.034375700Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Nov 06 17:33:44 minikube dockerd[470]: time="2021-11-06T17:33:44.034406600Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Nov 06 17:33:44 minikube dockerd[470]: time="2021-11-06T17:33:44.046386100Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Nov 06 17:33:44 minikube dockerd[470]: time="2021-11-06T17:33:44.054147000Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Nov 06 17:33:44 minikube dockerd[470]: time="2021-11-06T17:33:44.054201000Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Nov 06 17:33:44 minikube dockerd[470]: time="2021-11-06T17:33:44.054397900Z" level=info msg="Loading containers: start."
Nov 06 17:33:44 minikube dockerd[470]: time="2021-11-06T17:33:44.144163100Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Nov 06 17:33:44 minikube dockerd[470]: time="2021-11-06T17:33:44.178104700Z" level=info msg="Loading containers: done."
Nov 06 17:33:44 minikube dockerd[470]: time="2021-11-06T17:33:44.196253500Z" level=info msg="Docker daemon" commit=75249d8 graphdriver(s)=overlay2 version=20.10.8
Nov 06 17:33:44 minikube dockerd[470]: time="2021-11-06T17:33:44.196321400Z" level=info msg="Daemon has completed initialization"
Nov 06 17:33:44 minikube systemd[1]: Started Docker Application Container Engine.
Nov 06 17:33:44 minikube dockerd[470]: time="2021-11-06T17:33:44.226365800Z" level=info msg="API listen on [::]:2376"
Nov 06 17:33:44 minikube dockerd[470]: time="2021-11-06T17:33:44.230806500Z" level=info msg="API listen on /var/run/docker.sock"
Nov 06 17:34:41 minikube dockerd[470]: time="2021-11-06T17:34:41.588635100Z" level=info msg="ignoring event" container=f625599f4d84eab4e0279ea4b8e0c0a763f2e674154c099c26dda3f3a70cf19f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 06 17:41:29 minikube dockerd[470]: time="2021-11-06T17:41:29.674423000Z" level=warning msg="reference for unknown type: " digest="sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660" remote="k8s.gcr.io/ingress-nginx/kube-webhook-certgen@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660"
Nov 06 17:41:32 minikube dockerd[470]: time="2021-11-06T17:41:32.759609600Z" level=info msg="ignoring event" container=ac4cfe9c309e466cc30e0481e40a0cd9e904acefd533785e37200e4c14207843 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 06 17:41:32 minikube dockerd[470]: time="2021-11-06T17:41:32.848377500Z" level=info msg="ignoring event" container=3248d02b5e001921a82d88ace7801faef0714cf30de9989c78acee7f80fc103e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 06 17:41:32 minikube dockerd[470]: time="2021-11-06T17:41:32.975745500Z" level=info msg="ignoring event" container=c4d481205fc21a66a0168bc88b43072084bcd6e5b09ac42825bed3dc40801df4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 06 17:41:32 minikube dockerd[470]: time="2021-11-06T17:41:32.983105800Z" level=info msg="ignoring event" container=0f151451bc8abee73284d34ca62c244b1ac10e49a1b59242ca22037e6818247e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 06 17:41:36 minikube dockerd[470]: time="2021-11-06T17:41:36.928819900Z" level=warning msg="reference for unknown type: " digest="sha256:545cff00370f28363dad31e3b59a94ba377854d3a11f18988f5f9e56841ef9ef" remote="k8s.gcr.io/ingress-nginx/controller@sha256:545cff00370f28363dad31e3b59a94ba377854d3a11f18988f5f9e56841ef9ef"
Nov 06 17:41:51 minikube dockerd[470]: time="2021-11-06T17:41:51.334334000Z" level=warning msg="Published ports are discarded when using host network mode"
Nov 06 17:41:51 minikube dockerd[470]: time="2021-11-06T17:41:51.365728700Z" level=warning msg="Published ports are discarded when using host network mode"
Nov 06 17:41:51 minikube dockerd[470]: time="2021-11-06T17:41:51.653546100Z" level=warning msg="reference for unknown type: " digest="sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f" remote="gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f"


==> container status <==
CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID
cbb8f87e9ee0b       gcr.io/google-samples/hello-app@sha256:6f04955dfc33e9f9026be4369f5df98d29b891a964a334abbbe8a60b9585a481                 7 minutes ago       Running             hello-world-app           0                   2c966fbabfb18
81f28806c9e99       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f        7 minutes ago       Running             minikube-ingress-dns      0                   12ee3a33a4d0f
311da77d512c3       k8s.gcr.io/ingress-nginx/controller@sha256:545cff00370f28363dad31e3b59a94ba377854d3a11f18988f5f9e56841ef9ef             7 minutes ago       Running             controller                0                   200c623be4c03
3248d02b5e001       k8s.gcr.io/ingress-nginx/kube-webhook-certgen@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660   7 minutes ago       Exited              patch                     0                   c4d481205fc21
ac4cfe9c309e4       k8s.gcr.io/ingress-nginx/kube-webhook-certgen@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660   7 minutes ago       Exited              create                    0                   0f151451bc8ab
aeac14bc69085       6e38f40d628db                                                                                                           14 minutes ago      Running             storage-provisioner       1                   6672322eb5792
35826de53d5be       8d147537fb7d1                                                                                                           15 minutes ago      Running             coredns                   0                   51edc5207117a
4ffe5d4abd4de       6120bd723dced                                                                                                           15 minutes ago      Running             kube-proxy                0                   044e7d465ce7b
f625599f4d84e       6e38f40d628db                                                                                                           15 minutes ago      Exited              storage-provisioner       0                   6672322eb5792
0a77f561a3219       05c905cef780c                                                                                                           15 minutes ago      Running             kube-controller-manager   0                   3c20209292bc6
00ca382718e23       53224b502ea4d                                                                                                           15 minutes ago      Running             kube-apiserver            0                   a902db5c8f257
d1b9d4b6e492d       0aa9c7e31d307                                                                                                           15 minutes ago      Running             kube-scheduler            0                   c61bb1c22870e
072a4d3ae6369       0048118155842                                                                                                           15 minutes ago      Running             etcd                      0                   b2215d633b63f


==> coredns [35826de53d5b] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
CoreDNS-1.8.4
linux/amd64, go1.16.4, 053c4d5


==> describe nodes <==
Name:               minikube
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=76b94fb3c4e8ac5062daf70d60cf03ddcc0a741b
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/updated_at=2021_11_06T18_33_58_0700
                    minikube.k8s.io/version=v1.24.0
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sat, 06 Nov 2021 17:33:55 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  minikube
  AcquireTime:     <unset>
  RenewTime:       Sat, 06 Nov 2021 17:49:07 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Sat, 06 Nov 2021 17:47:31 +0000   Sat, 06 Nov 2021 17:33:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Sat, 06 Nov 2021 17:47:31 +0000   Sat, 06 Nov 2021 17:33:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Sat, 06 Nov 2021 17:47:31 +0000   Sat, 06 Nov 2021 17:33:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Sat, 06 Nov 2021 17:47:31 +0000   Sat, 06 Nov 2021 17:34:08 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.49.2
  Hostname:    minikube
Capacity:
  cpu:                8
  ephemeral-storage:  61255492Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             5053972Ki
  pods:               110
Allocatable:
  cpu:                8
  ephemeral-storage:  61255492Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             5053972Ki
  pods:               110
System Info:
  Machine ID:                 bba0be70c47c400ea3cf7733f1c0b4c1
  System UUID:                6428e513-db52-40b8-944d-67e4f7f96e7a
  Boot ID:                    d0706a5f-54f3-407d-bb53-480474ae39b9
  Kernel Version:             5.10.47-linuxkit
  OS Image:                   Ubuntu 20.04.2 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.8
  Kubelet Version:            v1.22.3
  Kube-Proxy Version:         v1.22.3
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (10 in total)
  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
  default                     hello-world-app-7b9bf45d65-mgxqj             0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m17s
  ingress-nginx               ingress-nginx-controller-5f66978484-fhzgj    100m (1%)     0 (0%)      90Mi (1%)        0 (0%)         7m45s
  kube-system                 coredns-78fcd69978-wcz26                     100m (1%)     0 (0%)      70Mi (1%)        170Mi (3%)     15m
  kube-system                 etcd-minikube                                100m (1%)     0 (0%)      100Mi (2%)       0 (0%)         15m
  kube-system                 kube-apiserver-minikube                      250m (3%)     0 (0%)      0 (0%)           0 (0%)         15m
  kube-system                 kube-controller-manager-minikube             200m (2%)     0 (0%)      0 (0%)           0 (0%)         15m
  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m22s
  kube-system                 kube-proxy-9g5rq                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
  kube-system                 kube-scheduler-minikube                      100m (1%)     0 (0%)      0 (0%)           0 (0%)         15m
  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                850m (10%)  0 (0%)
  memory             260Mi (5%)  170Mi (3%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                From        Message
  ----    ------                   ----               ----        -------
  Normal  Starting                 15m                kube-proxy
  Normal  NodeHasSufficientMemory  15m (x5 over 15m)  kubelet     Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    15m (x4 over 15m)  kubelet     Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     15m (x4 over 15m)  kubelet     Node minikube status is now: NodeHasSufficientPID
  Normal  NodeHasSufficientMemory  15m                kubelet     Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    15m                kubelet     Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     15m                kubelet     Node minikube status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  15m                kubelet     Updated Node Allocatable limit across pods
  Normal  Starting                 15m                kubelet     Starting kubelet.
  Normal  NodeReady                15m                kubelet     Node minikube status is now: NodeReady


==> dmesg <==
[  +0.000001]  do_idle+0xd6/0x1ef
[  +0.000000]  cpu_startup_entry+0x1d/0x1f
[  +0.000001]  secondary_startup_64_no_verify+0xb0/0xbb
[  +0.000036] rcu: rcu_sched kthread starved for 7201604 jiffies! g4034577 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=2
[  +0.000010] clocksource:                       'hpet' wd_now: 8d42f58b wd_last: 8c033a18 mask: ffffffff
[  +0.000626] rcu: 	5-...!: (1 GPs behind) idle=e46/0/0x1 softirq=1149618/1149618 fqs=1
[  +0.000002] 	(detected by 2, t=7201608 jiffies, g=4034577, q=77)
[  +0.001020] NMI backtrace for cpu 5
[  +0.000002] CPU: 5 PID: 0 Comm: swapper/5 Tainted: G           O    T 5.10.47-linuxkit #1
[  +0.000000] Hardware name:  BHYVE, BIOS 1.00 03/14/2014
[  +0.000001] RIP: 0010:vprintk_emit+0x14c/0x177
[  +0.000001] Code: c7 c7 d8 84 b4 b1 e8 b4 cf ff ff 84 db 75 0f e8 44 08 00 00 48 89 ef e8 74 cf ff ff eb 20 8a 05 5a 64 9e 01 84 c0 74 04 f3 90 <eb> f2 e8 27 08 00 00 48 89 ef e8 57 cf ff ff e8 b6 e9 ff ff e8 61
[  +0.000000] RSP: 0018:ffffaf7b0019ce08 EFLAGS: 00000002
[  +0.000002] RAX: ffffffffb1415901 RBX: 0000000000000001 RCX: 000000000000000c
[  +0.000000] RDX: ffff9522c0315040 RSI: 0000000000000002 RDI: ffffffffb1b484d8
[  +0.000001] RBP: 0000000000000246 R08: 00001e96d0e2cc98 R09: 0000000000000246
[  +0.000000] R10: 000000000000c4a0 R11: ffffffffb1b4813d R12: 000000000000005b
[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: ffffffffb122f545
[  +0.000001] FS:  0000000000000000(0000) GS:ffff95233ad40000(0000) knlGS:0000000000000000
[  +0.000000] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  +0.000001] CR2: 000000c0005f7000 CR3: 000000000871e004 CR4: 00000000000706a0
[  +0.000000] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[  +0.000001] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[  +0.000000] Call Trace:
[  +0.000000]  <IRQ>
[  +0.000001]  printk+0x68/0x7f
[  +0.000000]  clocksource_watchdog+0x1a2/0x2bd
[  +0.000000]  ? clocksource_unregister+0x45/0x45
[  +0.000001]  ? clocksource_unregister+0x45/0x45
[  +0.000000]  call_timer_fn+0x65/0xf6
[  +0.000000]  __run_timers+0x155/0x193
[  +0.000001]  run_timer_softirq+0x19/0x2d
[  +0.000000]  __do_softirq+0xfe/0x239
[  +0.000000]  asm_call_irq_on_stack+0xf/0x20
[  +0.000001]  </IRQ>
[  +0.000000]  do_softirq_own_stack+0x31/0x3e
[  +0.000001]  __irq_exit_rcu+0x45/0x80
[  +0.000000]  sysvec_apic_timer_interrupt+0x6c/0x7a
[  +0.000000]  asm_sysvec_apic_timer_interrupt+0x12/0x20
[  +0.000001] RIP: 0010:native_safe_halt+0x7/0x8
[  +0.000001] Code: 60 02 df f0 83 44 24 fc 00 48 8b 00 a8 08 74 0b 65 81 25 46 72 5a 4f ff ff ff 7f c3 e8 39 dc 5d ff f4 c3 e8 32 dc 5d ff fb f4 <c3> 0f 1f 44 00 00 53 31 ff 65 8b 35 17 08 5a 4f e8 8e 53 6d ff e8
[  +0.000000] RSP: 0018:ffffaf7b000a7ef0 EFLAGS: 00000212
[  +0.000001] RAX: ffffffffb0a70800 RBX: ffff9522c0315040 RCX: ffffaf7b02857d00
[  +0.000001] RDX: 00000000050d2e3e RSI: 0000000000000005 RDI: 0000000000000001
[  +0.000000] RBP: 0000000000000000 R08: 0000000000000001 R09: ffff95233ad5eb20
[  +0.000000] R10: ffff9522c0911648 R11: 0000000000000000 R12: 0000000000000000
[  +0.000001] R13: 0000000000000005 R14: 0000000000000000 R15: 0000000000000000
[  +0.000000]  ? __sched_text_end+0x6/0x6
[  +0.000001]  arch_safe_halt+0x5/0x8
[  +0.000000]  default_idle_call+0x2e/0x4c
[  +0.000000]  do_idle+0xd6/0x1ef
[  +0.000001]  cpu_startup_entry+0x1d/0x1f
[  +0.000000]  secondary_startup_64_no_verify+0xb0/0xbb
[  +0.000004] rcu: rcu_sched kthread starved for 7201604 jiffies! g4034577 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=2
[  +0.000001] rcu: 	Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior.
[  +0.000000] rcu: RCU grace-period kthread stack dump:
[  +0.013616] rcu: 	Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior.
[  +0.000001] rcu: RCU grace-period kthread stack dump:
[  +0.005328] clocksource:                       'tsc' cs_now: 698da8aae7bf0 cs_last: 689c63b126fac mask: ffffffffffffffff
[  +0.011871] TSC found unstable after boot, most likely due to broken BIOS. Use 'tsc=unstable'.


==> etcd [072a4d3ae636] <==
{"level":"info","ts":"2021-11-06T17:33:52.209Z","caller":"etcdmain/etcd.go:72","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.49.2:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--initial-advertise-peer-urls=https://192.168.49.2:2380","--initial-cluster=minikube=https://192.168.49.2:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.49.2:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.49.2:2380","--name=minikube","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
{"level":"info","ts":"2021-11-06T17:33:52.209Z","caller":"embed/etcd.go:131","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.49.2:2380"]}
{"level":"info","ts":"2021-11-06T17:33:52.210Z","caller":"embed/etcd.go:478","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2021-11-06T17:33:52.210Z","caller":"embed/etcd.go:139","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"]}
{"level":"info","ts":"2021-11-06T17:33:52.211Z","caller":"embed/etcd.go:307","msg":"starting an etcd server","etcd-version":"3.5.0","git-sha":"946a5a6f2","go-version":"go1.16.3","go-os":"linux","go-arch":"amd64","max-cpu-set":8,"max-cpu-available":8,"member-initialized":false,"name":"minikube","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"minikube=https://192.168.49.2:2380","initial-cluster-state":"new","initial-cluster-token":"etcd-cluster","quota-size-bytes":2147483648,"pre-vote":true,"initial-corrupt-check":false,"corrupt-check-time-interval":"0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
{"level":"info","ts":"2021-11-06T17:33:52.217Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"5.4358ms"}
{"level":"info","ts":"2021-11-06T17:33:52.225Z","caller":"etcdserver/raft.go:448","msg":"starting local member","local-member-id":"aec36adc501070cc","cluster-id":"fa54960ea34d58be"}
{"level":"info","ts":"2021-11-06T17:33:52.225Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=()"}
{"level":"info","ts":"2021-11-06T17:33:52.225Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became follower at term 0"}
{"level":"info","ts":"2021-11-06T17:33:52.225Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"}
{"level":"info","ts":"2021-11-06T17:33:52.225Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became follower at term 1"}
{"level":"info","ts":"2021-11-06T17:33:52.225Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
{"level":"warn","ts":"2021-11-06T17:33:52.228Z","caller":"auth/store.go:1220","msg":"simple token is not cryptographically signed"}
{"level":"info","ts":"2021-11-06T17:33:52.230Z","caller":"mvcc/kvstore.go:415","msg":"kvstore restored","current-rev":1}
{"level":"info","ts":"2021-11-06T17:33:52.231Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
{"level":"info","ts":"2021-11-06T17:33:52.232Z","caller":"etcdserver/server.go:843","msg":"starting etcd server","local-member-id":"aec36adc501070cc","local-server-version":"3.5.0","cluster-version":"to_be_decided"}
{"level":"info","ts":"2021-11-06T17:33:52.233Z","caller":"etcdserver/server.go:728","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
{"level":"info","ts":"2021-11-06T17:33:52.234Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
{"level":"info","ts":"2021-11-06T17:33:52.234Z","caller":"membership/cluster.go:393","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
{"level":"info","ts":"2021-11-06T17:33:52.235Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2021-11-06T17:33:52.236Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2021-11-06T17:33:52.236Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
{"level":"info","ts":"2021-11-06T17:33:52.236Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2021-11-06T17:33:52.236Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2021-11-06T17:33:52.626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
{"level":"info","ts":"2021-11-06T17:33:52.626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
{"level":"info","ts":"2021-11-06T17:33:52.626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
{"level":"info","ts":"2021-11-06T17:33:52.626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
{"level":"info","ts":"2021-11-06T17:33:52.626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
{"level":"info","ts":"2021-11-06T17:33:52.626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
{"level":"info","ts":"2021-11-06T17:33:52.626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
{"level":"info","ts":"2021-11-06T17:33:52.627Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2021-11-06T17:33:52.628Z","caller":"membership/cluster.go:531","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
{"level":"info","ts":"2021-11-06T17:33:52.628Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2021-11-06T17:33:52.628Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:minikube ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2021-11-06T17:33:52.628Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2021-11-06T17:33:52.628Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2021-11-06T17:33:52.628Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2021-11-06T17:33:52.629Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
{"level":"info","ts":"2021-11-06T17:33:52.629Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
{"level":"info","ts":"2021-11-06T17:33:52.630Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2021-11-06T17:33:52.630Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
{"level":"warn","ts":"2021-11-06T17:41:44.764Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"100.1611ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2021-11-06T17:41:44.764Z","caller":"traceutil/trace.go:171","msg":"trace[930044847] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:856; }","duration":"100.287ms","start":"2021-11-06T17:41:44.664Z","end":"2021-11-06T17:41:44.764Z","steps":["trace[930044847] 'range keys from in-memory index tree'  (duration: 99.9743ms)"],"step_count":1}
{"level":"info","ts":"2021-11-06T17:41:57.667Z","caller":"traceutil/trace.go:171","msg":"trace[1816239597] linearizableReadLoop","detail":"{readStateIndex:1010; appliedIndex:1010; }","duration":"164.4368ms","start":"2021-11-06T17:41:57.537Z","end":"2021-11-06T17:41:57.667Z","steps":["trace[1816239597] 'read index received'  (duration: 164.4085ms)","trace[1816239597] 'applied index is now lower than readState.Index'  (duration: 14.4Β΅s)"],"step_count":2}
{"level":"warn","ts":"2021-11-06T17:41:57.703Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"199.8219ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2021-11-06T17:41:57.703Z","caller":"traceutil/trace.go:171","msg":"trace[1856502271] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:902; }","duration":"199.9213ms","start":"2021-11-06T17:41:57.537Z","end":"2021-11-06T17:41:57.703Z","steps":["trace[1856502271] 'agreement among raft nodes before linearized reading'  (duration: 164.592ms)","trace[1856502271] 'range keys from in-memory index tree'  (duration: 35.1849ms)"],"step_count":2}
{"level":"info","ts":"2021-11-06T17:43:52.089Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":649}
{"level":"info","ts":"2021-11-06T17:43:52.089Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":649,"took":"478.8Β΅s"}
{"level":"info","ts":"2021-11-06T17:48:51.747Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1028}
{"level":"info","ts":"2021-11-06T17:48:51.748Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1028,"took":"544.2Β΅s"}


==> kernel <==
 17:49:13 up 15:57,  0 users,  load average: 0.29, 0.28, 0.31
Linux minikube 5.10.47-linuxkit #1 SMP Sat Jul 3 21:51:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.2 LTS"


==> kube-apiserver [00ca382718e2] <==
I1106 17:33:55.221485       1 dynamic_cafile_content.go:155] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I1106 17:33:55.221509       1 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I1106 17:33:55.222012       1 dynamic_serving_content.go:129] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
I1106 17:33:55.222339       1 secure_serving.go:266] Serving securely on [::]:8443
I1106 17:33:55.222378       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I1106 17:33:55.222614       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
I1106 17:33:55.222653       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I1106 17:33:55.222705       1 customresource_discovery_controller.go:209] Starting DiscoveryController
I1106 17:33:55.222743       1 apf_controller.go:312] Starting API Priority and Fairness config controller
I1106 17:33:55.223317       1 controller.go:85] Starting OpenAPI controller
I1106 17:33:55.223398       1 naming_controller.go:291] Starting NamingConditionController
I1106 17:33:55.223427       1 establishing_controller.go:76] Starting EstablishingController
I1106 17:33:55.223493       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I1106 17:33:55.223533       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I1106 17:33:55.223595       1 crd_finalizer.go:266] Starting CRDFinalizer
I1106 17:33:55.223764       1 controller.go:83] Starting OpenAPI AggregationController
I1106 17:33:55.226241       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I1106 17:33:55.226278       1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
I1106 17:33:55.226355       1 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I1106 17:33:55.226437       1 dynamic_serving_content.go:129] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
I1106 17:33:55.226850       1 available_controller.go:491] Starting AvailableConditionController
I1106 17:33:55.226886       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I1106 17:33:55.226939       1 autoregister_controller.go:141] Starting autoregister controller
I1106 17:33:55.226974       1 cache.go:32] Waiting for caches to sync for autoregister controller
E1106 17:33:55.228344       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg:
I1106 17:33:55.226442       1 dynamic_cafile_content.go:155] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I1106 17:33:55.241775       1 crdregistration_controller.go:111] Starting crd-autoregister controller
I1106 17:33:55.241807       1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
I1106 17:33:55.269343       1 controller.go:611] quota admission added evaluator for: namespaces
I1106 17:33:55.323623       1 apf_controller.go:317] Running API Priority and Fairness config worker
I1106 17:33:55.324351       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1106 17:33:55.327562       1 cache.go:39] Caches are synced for autoregister controller
I1106 17:33:55.327916       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
I1106 17:33:55.328049       1 cache.go:39] Caches are synced for AvailableConditionController controller
I1106 17:33:55.343965       1 shared_informer.go:247] Caches are synced for crd-autoregister
I1106 17:33:55.353709       1 shared_informer.go:247] Caches are synced for node_authorizer
I1106 17:33:56.221952       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I1106 17:33:56.222067       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1106 17:33:56.231310       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
I1106 17:33:56.237061       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
I1106 17:33:56.237099       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I1106 17:33:56.778095       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1106 17:33:56.826779       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W1106 17:33:56.996541       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
I1106 17:33:56.998487       1 controller.go:611] quota admission added evaluator for: endpoints
I1106 17:33:57.004529       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1106 17:33:57.285294       1 controller.go:611] quota admission added evaluator for: serviceaccounts
I1106 17:33:58.224530       1 controller.go:611] quota admission added evaluator for: deployments.apps
I1106 17:33:58.261232       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
I1106 17:33:58.541237       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
I1106 17:34:10.960601       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
I1106 17:34:11.008462       1 controller.go:611] quota admission added evaluator for: replicasets.apps
I1106 17:34:11.675283       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
I1106 17:41:28.758192       1 controller.go:611] quota admission added evaluator for: jobs.batch
I1106 17:42:06.886137       1 trace.go:205] Trace[1124925925]: "Call validating webhook" configuration:ingress-nginx-admission,webhook:validate.nginx.ingress.kubernetes.io,resource:networking.k8s.io/v1, Resource=ingresses,subresource:,operation:CREATE,UID:0bb14f06-8511-4da8-980a-f46ecaedf0e3 (06-Nov-2021 17:41:56.918) (total time: 10002ms):
Trace[1124925925]: [10.0021969s] [10.0021969s] END
W1106 17:42:06.886196       1 dispatcher.go:150] Failed calling webhook, failing closed validate.nginx.ingress.kubernetes.io: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s": context deadline exceeded
I1106 17:42:06.886741       1 trace.go:205] Trace[92924587]: "Create" url:/apis/networking.k8s.io/v1/namespaces/kube-system/ingresses,user-agent:kubectl/v1.22.3 (darwin/amd64) kubernetes/c920368,audit-id:4da69da3-1bce-4766-bdc4-80d111dfc6b7,client:192.168.49.1,accept:application/json,protocol:HTTP/2.0 (06-Nov-2021 17:41:56.914) (total time: 10006ms):
Trace[92924587]: [10.0069237s] [10.0069237s] END
I1106 17:42:46.000236       1 controller.go:611] quota admission added evaluator for: ingresses.networking.k8s.io


==> kube-controller-manager [0a77f561a321] <==
I1106 17:34:10.297403       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone:
W1106 17:34:10.297458       1 node_lifecycle_controller.go:1013] Missing timestamp for Node minikube. Assuming now as a timestamp.
I1106 17:34:10.297524       1 node_lifecycle_controller.go:1214] Controller detected that zone  is now in state Normal.
I1106 17:34:10.299237       1 shared_informer.go:247] Caches are synced for persistent volume
I1106 17:34:10.299543       1 shared_informer.go:247] Caches are synced for PV protection
I1106 17:34:10.299648       1 shared_informer.go:247] Caches are synced for service account
I1106 17:34:10.297736       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
I1106 17:34:10.298111       1 event.go:291] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller"
I1106 17:34:10.298527       1 shared_informer.go:247] Caches are synced for stateful set
I1106 17:34:10.298583       1 shared_informer.go:247] Caches are synced for endpoint_slice
I1106 17:34:10.312350       1 shared_informer.go:247] Caches are synced for HPA
I1106 17:34:10.315706       1 shared_informer.go:247] Caches are synced for endpoint
I1106 17:34:10.323377       1 shared_informer.go:247] Caches are synced for cronjob
I1106 17:34:10.323724       1 shared_informer.go:247] Caches are synced for ReplicationController
I1106 17:34:10.338717       1 shared_informer.go:247] Caches are synced for GC
I1106 17:34:10.338934       1 shared_informer.go:247] Caches are synced for attach detach
I1106 17:34:10.345027       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I1106 17:34:10.347946       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
I1106 17:34:10.348548       1 shared_informer.go:247] Caches are synced for TTL
I1106 17:34:10.350083       1 shared_informer.go:247] Caches are synced for PVC protection
I1106 17:34:10.351140       1 shared_informer.go:247] Caches are synced for ephemeral
I1106 17:34:10.353849       1 shared_informer.go:247] Caches are synced for crt configmap
I1106 17:34:10.362572       1 shared_informer.go:247] Caches are synced for daemon sets
I1106 17:34:10.365939       1 shared_informer.go:247] Caches are synced for namespace
I1106 17:34:10.370107       1 shared_informer.go:247] Caches are synced for ReplicaSet
I1106 17:34:10.398125       1 shared_informer.go:247] Caches are synced for job
I1106 17:34:10.403197       1 shared_informer.go:247] Caches are synced for TTL after finished
I1106 17:34:10.524552       1 shared_informer.go:247] Caches are synced for resource quota
I1106 17:34:10.548723       1 shared_informer.go:247] Caches are synced for disruption
I1106 17:34:10.548771       1 disruption.go:371] Sending events to api server.
I1106 17:34:10.548841       1 shared_informer.go:247] Caches are synced for deployment
I1106 17:34:10.559732       1 shared_informer.go:247] Caches are synced for resource quota
I1106 17:34:10.968147       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-9g5rq"
I1106 17:34:11.011113       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-78fcd69978 to 1"
I1106 17:34:11.046147       1 shared_informer.go:247] Caches are synced for garbage collector
I1106 17:34:11.053088       1 shared_informer.go:247] Caches are synced for garbage collector
I1106 17:34:11.053126       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1106 17:34:11.111541       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-wcz26"
I1106 17:41:28.658126       1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-controller" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set ingress-nginx-controller-5f66978484 to 1"
I1106 17:41:28.670947       1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-controller-5f66978484" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: ingress-nginx-controller-5f66978484-fhzgj"
I1106 17:41:28.761551       1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-create
I1106 17:41:28.771436       1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-patch
I1106 17:41:28.775529       1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-create
I1106 17:41:28.776255       1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-admission-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: ingress-nginx-admission-create--1-cd2z9"
I1106 17:41:28.780530       1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-patch
I1106 17:41:28.782092       1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-create
I1106 17:41:28.782886       1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-admission-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: ingress-nginx-admission-patch--1-xkt6m"
I1106 17:41:28.810172       1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-patch
I1106 17:41:28.810292       1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-create
I1106 17:41:28.815271       1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-create
I1106 17:41:28.817385       1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-patch
I1106 17:41:28.825135       1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-patch
I1106 17:41:32.926320       1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-patch
I1106 17:41:32.926865       1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-admission-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
I1106 17:41:32.935045       1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-patch
I1106 17:41:32.941528       1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-create
I1106 17:41:32.942136       1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-admission-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
I1106 17:41:32.951233       1 job_controller.go:406] enqueueing job ingress-nginx/ingress-nginx-admission-create
I1106 17:41:56.919863       1 event.go:291] "Event occurred" object="default/hello-world-app" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-7b9bf45d65 to 1"
I1106 17:41:56.930726       1 event.go:291] "Event occurred" object="default/hello-world-app-7b9bf45d65" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-7b9bf45d65-mgxqj"


==> kube-proxy [4ffe5d4abd4d] <==
I1106 17:34:11.640913       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
I1106 17:34:11.640989       1 server_others.go:140] Detected node IP 192.168.49.2
W1106 17:34:11.641027       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
I1106 17:34:11.669079       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
I1106 17:34:11.669130       1 server_others.go:212] Using iptables Proxier.
I1106 17:34:11.669150       1 server_others.go:219] creating dualStackProxier for iptables.
W1106 17:34:11.669216       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
I1106 17:34:11.669624       1 server.go:649] Version: v1.22.3
I1106 17:34:11.670168       1 config.go:315] Starting service config controller
I1106 17:34:11.670232       1 shared_informer.go:240] Waiting for caches to sync for service config
I1106 17:34:11.670281       1 config.go:224] Starting endpoint slice config controller
I1106 17:34:11.670295       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I1106 17:34:11.771375       1 shared_informer.go:247] Caches are synced for endpoint slice config
I1106 17:34:11.771488       1 shared_informer.go:247] Caches are synced for service config


==> kube-scheduler [d1b9d4b6e492] <==
I1106 17:33:52.718146       1 serving.go:347] Generated self-signed cert in-memory
W1106 17:33:55.254202       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1106 17:33:55.254247       1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1106 17:33:55.254262       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
W1106 17:33:55.254277       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1106 17:33:55.275486       1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
I1106 17:33:55.275350       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1106 17:33:55.275678       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1106 17:33:55.275779       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
E1106 17:33:55.278828       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1106 17:33:55.280694       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1106 17:33:55.281085       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1106 17:33:55.281710       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1106 17:33:55.282044       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1106 17:33:55.282442       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E1106 17:33:55.282723       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E1106 17:33:55.283583       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1106 17:33:55.284308       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E1106 17:33:55.284640       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1106 17:33:55.284753       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1106 17:33:55.285190       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1106 17:33:55.285245       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1106 17:33:55.285562       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E1106 17:33:55.285770       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1106 17:33:56.283155       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1106 17:33:56.317959       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1106 17:33:56.323625       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1106 17:33:56.332231       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1106 17:33:56.400502       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E1106 17:33:56.425342       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1106 17:33:56.465254       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1106 17:33:56.476988       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1106 17:33:56.491949       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1106 17:33:56.514639       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1106 17:33:56.558050       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E1106 17:33:56.598556       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1106 17:33:58.924408       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
E1106 17:33:58.924439       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
E1106 17:33:58.924482       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
E1106 17:33:59.088475       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
I1106 17:33:59.542277       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file


==> kubelet <==
-- Logs begin at Sat 2021-11-06 17:33:36 UTC, end at Sat 2021-11-06 17:49:14 UTC. --
Nov 06 17:34:11 minikube kubelet[2329]: I1106 17:34:11.044971    2329 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf6456f5-e26f-4897-8857-b85de1137c6e-xtables-lock\") pod \"kube-proxy-9g5rq\" (UID: \"bf6456f5-e26f-4897-8857-b85de1137c6e\") "
Nov 06 17:34:11 minikube kubelet[2329]: I1106 17:34:11.045046    2329 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5k8v\" (UniqueName: \"kubernetes.io/projected/bf6456f5-e26f-4897-8857-b85de1137c6e-kube-api-access-v5k8v\") pod \"kube-proxy-9g5rq\" (UID: \"bf6456f5-e26f-4897-8857-b85de1137c6e\") "
Nov 06 17:34:11 minikube kubelet[2329]: I1106 17:34:11.045113    2329 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bf6456f5-e26f-4897-8857-b85de1137c6e-kube-proxy\") pod \"kube-proxy-9g5rq\" (UID: \"bf6456f5-e26f-4897-8857-b85de1137c6e\") "
Nov 06 17:34:11 minikube kubelet[2329]: I1106 17:34:11.045196    2329 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf6456f5-e26f-4897-8857-b85de1137c6e-lib-modules\") pod \"kube-proxy-9g5rq\" (UID: \"bf6456f5-e26f-4897-8857-b85de1137c6e\") "
Nov 06 17:34:11 minikube kubelet[2329]: I1106 17:34:11.116528    2329 topology_manager.go:200] "Topology Admit Handler"
Nov 06 17:34:11 minikube kubelet[2329]: I1106 17:34:11.247152    2329 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8cef44f4-8158-4d81-b3fd-eac93a208276-config-volume\") pod \"coredns-78fcd69978-wcz26\" (UID: \"8cef44f4-8158-4d81-b3fd-eac93a208276\") "
Nov 06 17:34:11 minikube kubelet[2329]: I1106 17:34:11.247487    2329 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s756\" (UniqueName: \"kubernetes.io/projected/8cef44f4-8158-4d81-b3fd-eac93a208276-kube-api-access-7s756\") pod \"coredns-78fcd69978-wcz26\" (UID: \"8cef44f4-8158-4d81-b3fd-eac93a208276\") "
Nov 06 17:34:11 minikube kubelet[2329]: I1106 17:34:11.776407    2329 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-78fcd69978-wcz26 through plugin: invalid network status for"
Nov 06 17:34:11 minikube kubelet[2329]: I1106 17:34:11.856560    2329 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-78fcd69978-wcz26 through plugin: invalid network status for"
Nov 06 17:34:12 minikube kubelet[2329]: I1106 17:34:12.924417    2329 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-78fcd69978-wcz26 through plugin: invalid network status for"
Nov 06 17:34:42 minikube kubelet[2329]: I1106 17:34:42.092144    2329 scope.go:110] "RemoveContainer" containerID="f625599f4d84eab4e0279ea4b8e0c0a763f2e674154c099c26dda3f3a70cf19f"
Nov 06 17:38:58 minikube kubelet[2329]: W1106 17:38:58.448353    2329 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Nov 06 17:41:28 minikube kubelet[2329]: I1106 17:41:28.683629    2329 topology_manager.go:200] "Topology Admit Handler"
Nov 06 17:41:28 minikube kubelet[2329]: I1106 17:41:28.774034    2329 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8ddf081a-df27-4e15-a071-7b4e459af236-webhook-cert\") pod \"ingress-nginx-controller-5f66978484-fhzgj\" (UID: \"8ddf081a-df27-4e15-a071-7b4e459af236\") "
Nov 06 17:41:28 minikube kubelet[2329]: I1106 17:41:28.781745    2329 topology_manager.go:200] "Topology Admit Handler"
Nov 06 17:41:28 minikube kubelet[2329]: I1106 17:41:28.810167    2329 topology_manager.go:200] "Topology Admit Handler"
Nov 06 17:41:28 minikube kubelet[2329]: I1106 17:41:28.874587    2329 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnts8\" (UniqueName: \"kubernetes.io/projected/8ddf081a-df27-4e15-a071-7b4e459af236-kube-api-access-wnts8\") pod \"ingress-nginx-controller-5f66978484-fhzgj\" (UID: \"8ddf081a-df27-4e15-a071-7b4e459af236\") "
Nov 06 17:41:28 minikube kubelet[2329]: E1106 17:41:28.874784    2329 secret.go:195] Couldn't get secret ingress-nginx/ingress-nginx-admission: secret "ingress-nginx-admission" not found
Nov 06 17:41:28 minikube kubelet[2329]: E1106 17:41:28.875052    2329 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/8ddf081a-df27-4e15-a071-7b4e459af236-webhook-cert podName:8ddf081a-df27-4e15-a071-7b4e459af236 nodeName:}" failed. No retries permitted until 2021-11-06 17:41:29.3749246 +0000 UTC m=+451.711015001 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/8ddf081a-df27-4e15-a071-7b4e459af236-webhook-cert") pod "ingress-nginx-controller-5f66978484-fhzgj" (UID: "8ddf081a-df27-4e15-a071-7b4e459af236") : secret "ingress-nginx-admission" not found
Nov 06 17:41:28 minikube kubelet[2329]: I1106 17:41:28.975844    2329 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-669dw\" (UniqueName: \"kubernetes.io/projected/cc7cee7d-ec33-4287-a86d-2b5b1663cab5-kube-api-access-669dw\") pod \"ingress-nginx-admission-patch--1-xkt6m\" (UID: \"cc7cee7d-ec33-4287-a86d-2b5b1663cab5\") "
Nov 06 17:41:28 minikube kubelet[2329]: I1106 17:41:28.976102    2329 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8v76\" (UniqueName: \"kubernetes.io/projected/88de0334-e9ce-4a9e-a4f6-3a3341d6804f-kube-api-access-f8v76\") pod \"ingress-nginx-admission-create--1-cd2z9\" (UID: \"88de0334-e9ce-4a9e-a4f6-3a3341d6804f\") "
Nov 06 17:41:29 minikube kubelet[2329]: E1106 17:41:29.379649    2329 secret.go:195] Couldn't get secret ingress-nginx/ingress-nginx-admission: secret "ingress-nginx-admission" not found
Nov 06 17:41:29 minikube kubelet[2329]: E1106 17:41:29.379776    2329 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/8ddf081a-df27-4e15-a071-7b4e459af236-webhook-cert podName:8ddf081a-df27-4e15-a071-7b4e459af236 nodeName:}" failed. No retries permitted until 2021-11-06 17:41:30.3797531 +0000 UTC m=+452.715755901 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/8ddf081a-df27-4e15-a071-7b4e459af236-webhook-cert") pod "ingress-nginx-controller-5f66978484-fhzgj" (UID: "8ddf081a-df27-4e15-a071-7b4e459af236") : secret "ingress-nginx-admission" not found
Nov 06 17:41:29 minikube kubelet[2329]: I1106 17:41:29.418992    2329 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for ingress-nginx/ingress-nginx-admission-create--1-cd2z9 through plugin: invalid network status for"
Nov 06 17:41:29 minikube kubelet[2329]: I1106 17:41:29.456517    2329 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for ingress-nginx/ingress-nginx-admission-patch--1-xkt6m through plugin: invalid network status for"
Nov 06 17:41:29 minikube kubelet[2329]: I1106 17:41:29.888691    2329 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for ingress-nginx/ingress-nginx-admission-patch--1-xkt6m through plugin: invalid network status for"
Nov 06 17:41:29 minikube kubelet[2329]: I1106 17:41:29.892818    2329 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for ingress-nginx/ingress-nginx-admission-create--1-cd2z9 through plugin: invalid network status for"
Nov 06 17:41:30 minikube kubelet[2329]: E1106 17:41:30.388057    2329 secret.go:195] Couldn't get secret ingress-nginx/ingress-nginx-admission: secret "ingress-nginx-admission" not found
Nov 06 17:41:30 minikube kubelet[2329]: E1106 17:41:30.388251    2329 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/8ddf081a-df27-4e15-a071-7b4e459af236-webhook-cert podName:8ddf081a-df27-4e15-a071-7b4e459af236 nodeName:}" failed. No retries permitted until 2021-11-06 17:41:32.3882061 +0000 UTC m=+454.724213801 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/8ddf081a-df27-4e15-a071-7b4e459af236-webhook-cert") pod "ingress-nginx-controller-5f66978484-fhzgj" (UID: "8ddf081a-df27-4e15-a071-7b4e459af236") : secret "ingress-nginx-admission" not found
Nov 06 17:41:32 minikube kubelet[2329]: E1106 17:41:32.446094    2329 secret.go:195] Couldn't get secret ingress-nginx/ingress-nginx-admission: secret "ingress-nginx-admission" not found
Nov 06 17:41:32 minikube kubelet[2329]: E1106 17:41:32.446291    2329 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/8ddf081a-df27-4e15-a071-7b4e459af236-webhook-cert podName:8ddf081a-df27-4e15-a071-7b4e459af236 nodeName:}" failed. No retries permitted until 2021-11-06 17:41:36.4462301 +0000 UTC m=+458.782232701 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/8ddf081a-df27-4e15-a071-7b4e459af236-webhook-cert") pod "ingress-nginx-controller-5f66978484-fhzgj" (UID: "8ddf081a-df27-4e15-a071-7b4e459af236") : secret "ingress-nginx-admission" not found
Nov 06 17:41:32 minikube kubelet[2329]: I1106 17:41:32.913492    2329 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for ingress-nginx/ingress-nginx-admission-patch--1-xkt6m through plugin: invalid network status for"
Nov 06 17:41:32 minikube kubelet[2329]: I1106 17:41:32.917160    2329 scope.go:110] "RemoveContainer" containerID="3248d02b5e001921a82d88ace7801faef0714cf30de9989c78acee7f80fc103e"
Nov 06 17:41:32 minikube kubelet[2329]: I1106 17:41:32.922874    2329 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for ingress-nginx/ingress-nginx-admission-create--1-cd2z9 through plugin: invalid network status for"
Nov 06 17:41:32 minikube kubelet[2329]: I1106 17:41:32.933076    2329 scope.go:110] "RemoveContainer" containerID="ac4cfe9c309e466cc30e0481e40a0cd9e904acefd533785e37200e4c14207843"
Nov 06 17:41:33 minikube kubelet[2329]: I1106 17:41:33.946270    2329 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="0f151451bc8abee73284d34ca62c244b1ac10e49a1b59242ca22037e6818247e"
Nov 06 17:41:33 minikube kubelet[2329]: I1106 17:41:33.951684    2329 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="c4d481205fc21a66a0168bc88b43072084bcd6e5b09ac42825bed3dc40801df4"
Nov 06 17:41:35 minikube kubelet[2329]: I1106 17:41:35.072328    2329 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-669dw\" (UniqueName: \"kubernetes.io/projected/cc7cee7d-ec33-4287-a86d-2b5b1663cab5-kube-api-access-669dw\") pod \"cc7cee7d-ec33-4287-a86d-2b5b1663cab5\" (UID: \"cc7cee7d-ec33-4287-a86d-2b5b1663cab5\") "
Nov 06 17:41:35 minikube kubelet[2329]: I1106 17:41:35.072459    2329 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8v76\" (UniqueName: \"kubernetes.io/projected/88de0334-e9ce-4a9e-a4f6-3a3341d6804f-kube-api-access-f8v76\") pod \"88de0334-e9ce-4a9e-a4f6-3a3341d6804f\" (UID: \"88de0334-e9ce-4a9e-a4f6-3a3341d6804f\") "
Nov 06 17:41:35 minikube kubelet[2329]: I1106 17:41:35.074566    2329 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88de0334-e9ce-4a9e-a4f6-3a3341d6804f-kube-api-access-f8v76" (OuterVolumeSpecName: "kube-api-access-f8v76") pod "88de0334-e9ce-4a9e-a4f6-3a3341d6804f" (UID: "88de0334-e9ce-4a9e-a4f6-3a3341d6804f"). InnerVolumeSpecName "kube-api-access-f8v76". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 06 17:41:35 minikube kubelet[2329]: I1106 17:41:35.075326    2329 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc7cee7d-ec33-4287-a86d-2b5b1663cab5-kube-api-access-669dw" (OuterVolumeSpecName: "kube-api-access-669dw") pod "cc7cee7d-ec33-4287-a86d-2b5b1663cab5" (UID: "cc7cee7d-ec33-4287-a86d-2b5b1663cab5"). InnerVolumeSpecName "kube-api-access-669dw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 06 17:41:35 minikube kubelet[2329]: I1106 17:41:35.173458    2329 reconciler.go:319] "Volume detached for volume \"kube-api-access-669dw\" (UniqueName: \"kubernetes.io/projected/cc7cee7d-ec33-4287-a86d-2b5b1663cab5-kube-api-access-669dw\") on node \"minikube\" DevicePath \"\""
Nov 06 17:41:35 minikube kubelet[2329]: I1106 17:41:35.173517    2329 reconciler.go:319] "Volume detached for volume \"kube-api-access-f8v76\" (UniqueName: \"kubernetes.io/projected/88de0334-e9ce-4a9e-a4f6-3a3341d6804f-kube-api-access-f8v76\") on node \"minikube\" DevicePath \"\""
Nov 06 17:41:36 minikube kubelet[2329]: I1106 17:41:36.782320    2329 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for ingress-nginx/ingress-nginx-controller-5f66978484-fhzgj through plugin: invalid network status for"
Nov 06 17:41:36 minikube kubelet[2329]: I1106 17:41:36.978487    2329 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for ingress-nginx/ingress-nginx-controller-5f66978484-fhzgj through plugin: invalid network status for"
Nov 06 17:41:45 minikube kubelet[2329]: I1106 17:41:45.031138    2329 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for ingress-nginx/ingress-nginx-controller-5f66978484-fhzgj through plugin: invalid network status for"
Nov 06 17:41:46 minikube kubelet[2329]: I1106 17:41:46.062395    2329 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for ingress-nginx/ingress-nginx-controller-5f66978484-fhzgj through plugin: invalid network status for"
Nov 06 17:41:51 minikube kubelet[2329]: I1106 17:41:51.018141    2329 topology_manager.go:200] "Topology Admit Handler"
Nov 06 17:41:51 minikube kubelet[2329]: I1106 17:41:51.096796    2329 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj8cq\" (UniqueName: \"kubernetes.io/projected/18c83205-bf76-440b-9b10-86db5260b568-kube-api-access-zj8cq\") pod \"kube-ingress-dns-minikube\" (UID: \"18c83205-bf76-440b-9b10-86db5260b568\") "
Nov 06 17:41:56 minikube kubelet[2329]: I1106 17:41:56.940679    2329 topology_manager.go:200] "Topology Admit Handler"
Nov 06 17:41:56 minikube kubelet[2329]: W1106 17:41:56.950791    2329 container.go:586] Failed to update stats for container "/kubepods/besteffort/podae8e95c8-a3c6-4614-b54c-dffb8a91c9db": /sys/fs/cgroup/cpuset/kubepods/besteffort/podae8e95c8-a3c6-4614-b54c-dffb8a91c9db/cpuset.mems found to be empty, continuing to push stats
Nov 06 17:41:57 minikube kubelet[2329]: I1106 17:41:57.045125    2329 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtj9f\" (UniqueName: \"kubernetes.io/projected/ae8e95c8-a3c6-4614-b54c-dffb8a91c9db-kube-api-access-vtj9f\") pod \"hello-world-app-7b9bf45d65-mgxqj\" (UID: \"ae8e95c8-a3c6-4614-b54c-dffb8a91c9db\") "
Nov 06 17:41:58 minikube kubelet[2329]: I1106 17:41:58.445120    2329 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="2c966fbabfb185fc7668e0938a4e42263129c9cfe50d0cef8abdb63c88611827"
Nov 06 17:41:58 minikube kubelet[2329]: I1106 17:41:58.445201    2329 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hello-world-app-7b9bf45d65-mgxqj through plugin: invalid network status for"
Nov 06 17:41:59 minikube kubelet[2329]: E1106 17:41:59.135277    2329 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/besteffort/podae8e95c8-a3c6-4614-b54c-dffb8a91c9db\": RecentStats: unable to find data in memory cache]"
Nov 06 17:41:59 minikube kubelet[2329]: I1106 17:41:59.466099    2329 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hello-world-app-7b9bf45d65-mgxqj through plugin: invalid network status for"
Nov 06 17:42:01 minikube kubelet[2329]: I1106 17:42:01.493735    2329 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hello-world-app-7b9bf45d65-mgxqj through plugin: invalid network status for"
Nov 06 17:42:09 minikube kubelet[2329]: E1106 17:42:09.157750    2329 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/besteffort/podae8e95c8-a3c6-4614-b54c-dffb8a91c9db\": RecentStats: unable to find data in memory cache]"
Nov 06 17:43:58 minikube kubelet[2329]: W1106 17:43:58.103127    2329 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Nov 06 17:48:57 minikube kubelet[2329]: W1106 17:48:57.758087    2329 sysinfo.go:203] Nodes topology is not available, providing CPU topology


==> storage-provisioner [aeac14bc6908] <==
I1106 17:34:42.229165       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I1106 17:34:42.241198       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I1106 17:34:42.241251       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I1106 17:34:42.259307       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I1106 17:34:42.259541       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"41060379-ef56-4994-a6a5-2b162fbd7810", APIVersion:"v1", ResourceVersion:"473", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_459605b6-9c4e-47aa-91dd-ab1122831e28 became leader
I1106 17:34:42.260402       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_minikube_459605b6-9c4e-47aa-91dd-ab1122831e28!
I1106 17:34:42.360889       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_minikube_459605b6-9c4e-47aa-91dd-ab1122831e28!


==> storage-provisioner [f625599f4d84] <==
I1106 17:34:11.594638       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F1106 17:34:41.564965       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout

@usersina

usersina commented Feb 6, 2024

Since this seems like a common problem, I'll go ahead and explain what's happening for macOS M1 users.

The output of $(minikube ip) is the IP address of the minikube container inside Docker. The host OS has no access to containers via their IP addresses (unless configured otherwise, which happens to be unsupported on macOS). This is why you often see people solve the issue by changing the driver (note that hyperkit is not supported on M1), so that the minikube IP ends up reachable from the host.

You can validate this as follows:

$ docker ps | grep minikube # get the minikube container
4384aae94e95   kicbase/stable:v0.0.42                     "/usr/local/bin/entr…"   2 weeks ago   Up About an hour             127.0.0.1:50144->22/tcp, 127.0.0.1:50145->2376/tcp, 127.0.0.1:50147->5000/tcp, 127.0.0.1:50143->8443/tcp, 127.0.0.1:50146->32443/tcp

$ minikube ip
192.168.49.2

$ docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' minikube
192.168.49.2 # sounds about right

$ curl -k https://192.168.49.2:8443 # does not work with internal ip
# timeout

$ curl -k https://127.0.0.1:50143/ # works with exposed port (scroll to the right for `docker ps` output)
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}

The solution would be to use a tool like docker-mac-net-connect.
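For reference, a sketch of the install (the same Homebrew commands appear in a later comment below):

$ brew install chipmk/tap/docker-mac-net-connect
$ sudo brew services start chipmk/tap/docker-mac-net-connect # needs root to create the bridge interface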

If you don't want to, you can instead add minikube.test to the 127.0.0.1 line in your /etc/hosts (hosts-file aliases are space-separated, e.g. 127.0.0.1 localhost minikube.test) and then use minikube.test as the ingress host, as sketched below. But you would need to run minikube tunnel every time. That's not a big deal for one host, but as the number of unique hosts increases it becomes a DNS pollution problem. Here's a GitHub issue link if this is still unclear.
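A minimal sketch of that fallback, assuming the example hostname minikube.test (run the tunnel in its own terminal and keep it open):

$ echo "127.0.0.1 minikube.test" | sudo tee -a /etc/hosts
$ minikube tunnel # exposes the ingress controller on localhost while it runs
$ curl http://minikube.test # should now reach the ingress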

For more about DNS, see how Docker implements its DNS resolution.

@kajoong-yoon

kajoong-yoon commented Mar 15, 2024

For me, docker-mac-net-connect worked very well, as @usersina said.
My system spec:

MacBook Pro 16 (2021)
Apple M1 Max
...
minikube version : v1.27.1
Docker Desktop version : v4.28.0

One thing to mention: I had to reinstall Docker to apply the solution.
The reinstall is explained in the project's Troubleshooting section, and running the tool directly (rather than as a service) so I could see its logs really helped me.

@ammirator-administrator

ammirator-administrator commented Apr 3, 2024

+1
Thanks @usersina
Just installing docker-mac-net-connect on my M1 mac fixed the issue

@tdtu98

tdtu98 commented Apr 3, 2024

> +1 Thanks @usersina Just installing docker-mac-net-connect on my M1 mac fixed the issue

Sorry, after installing docker-mac-net-connect, what are we gonna do next? I just installed and followed the official tutorial but still cannot use ingress-dns.

@proprietary

> +1 Thanks @usersina Just installing docker-mac-net-connect on my M1 mac fixed the issue
>
> Sorry, after installing docker-mac-net-connect, what are we gonna do next? I just installed and followed the official tutorial but still cannot use ingress-dns.

This worked for me (using a ".test" domain on my local machine to resolve ingress resources):

$ minikube addons enable ingress
$ minikube addons enable ingress-dns
$ brew install chipmk/tap/docker-mac-net-connect
$ sudo brew services start chipmk/tap/docker-mac-net-connect
$ cat <<EOF | sudo tee /etc/resolver/minikube-test
domain test
nameserver $(minikube ip)
search_order 1
timeout 5
EOF

Check that the resolver shows up in the output of scutil --dns.
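For example, a rough check (the grep pattern is just a guess at the relevant output; adjust as needed):

$ scutil --dns | grep -B1 -A3 test # the resolver for the "test" domain should list nameserver 192.168.49.2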

Then I also set the output of minikube ip as a DNS nameserver in the Wi-Fi settings.
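A command-line equivalent, sketched under the assumption that your network service is named "Wi-Fi" (verify with networksetup -listallnetworkservices):

$ sudo networksetup -setdnsservers Wi-Fi "$(minikube ip)"
$ # to undo it later: sudo networksetup -setdnsservers Wi-Fi empty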

$ nslookup hello-world.test $(minikube ip) # hello-world.test is an ingress endpoint in my cluster
<the same as `minikube ip`>
