30

I am working on Azure Kubernetes Service (AKS), where we can store Docker images in Azure. When I try to check my kubectl version, I get:

Unable to connect to the server: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it.

For this I followed MSDN: Building Microservices with AKS and VSTS – Part 2 and MS Docs: Kubernetes on Windows.

Can you please suggest how to resolve this issue?

LosManos
Mani

12 Answers

26

I think you might have missed configuring the cluster credentials. For that, you need to run the command below in your command prompt:

az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

The above CLI command writes a kubeconfig file (by default .kube/config in your home directory) with the cluster and credential details onto your local machine.

After that, run the kubectl get nodes command in your command prompt and you should get the list of nodes in the cluster.

For reference, follow Deploy an Azure Kubernetes Service (AKS) cluster.
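
Putting it together, a minimal end-to-end sequence looks roughly like this (the resource group and cluster names are the placeholders used above; replace them with your own):

az login
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
kubectl config current-context
kubectl get nodes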

Pradeep
12

If your config file looks correct (check $HOME/.kube/config on Linux or %UserProfile%\.kube\config on Windows) but you are still receiving the error message, try running the command line as an administrator.

More information on the config file can be found here: https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/
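
To quickly confirm what kubectl is actually reading, something like this works on either platform:

kubectl config view
kubectl config current-context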

Ivan Agrenich
5

For me it was due to Windows not having a HOME environment variable set. According to the docs, kubectl uses the config file $(HOME)/.kube/config, but since this variable isn't set on Windows, it can't locate the file.

I created a HOME variable with the same value as USERPROFILE and it started working.
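
From a command prompt, that amounts to roughly the following (only newly opened terminals pick up the change):

setx HOME "%USERPROFILE%"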

Mark Wagoner
4

I'm using Hyper-V on local Windows and I hit this error because I hadn't configured minikube.

(I know the question is about Azure, not minikube, but this page comes up at the top of the results for the error message, so I've put the solution here.)

1. Enable Hyper-V

Type systeminfo in your terminal. If you can find the line below,

Hyper-V Requirements:     A hypervisor has been detected. Features required for Hyper-V will not be displayed.

Hyper-V works correctly.

If you can't, enable it from settings.

2. Create Hyper-V Network Switch

Open Hyper-V Manager (searching for it is the fastest way).

Next, click your PC name on the left.

Then you will find the Virtual Switch Manager menu on the right.

Click it and create an External virtual switch named "Minikube Switch".

Click Apply to create it (a PowerShell equivalent for steps 1 and 2 is sketched at the end of this answer).

3. Start minikube

Go back to the terminal and type:

minikube start --vm-driver hyperv --hyperv-virtual-switch "Minikube Switch"

For more information, check the steps in this article.
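
If you would rather script steps 1 and 2, a rough PowerShell equivalent looks like this (run it from an elevated prompt; the adapter name "Ethernet" is an assumption, pick yours from the Get-NetAdapter output):

# enable the Hyper-V feature (requires a reboot)
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All

# list network adapters, then create the external switch on the one you use
Get-NetAdapter
New-VMSwitch -Name "Minikube Switch" -NetAdapterName "Ethernet" -AllowManagementOS $true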

DevExcite
4

Check that Docker is running and that you have started minikube (or whichever local/cloud Kubernetes you are using). My issue was resolved after running minikube start --driver=docker.
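
As a rough check-then-start sequence (assuming the Docker driver):

docker info
minikube start --driver=docker
kubectl get nodes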

Janarthanan Ramu
3

I was facing the same error while running the command kubectl get pods.

The issue was resolved with the following steps:

a) First, find out the current context:

kubectl config get-contexts
CURRENT   NAME      CLUSTER   AUTHINFO   NAMESPACE

b) If no context is set, select one manually with:

kubectl config use-context <your-context>
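
For example, to point kubectl at Docker Desktop's built-in cluster (the context name here is just an illustration; take yours from the NAME column above):

kubectl config use-context docker-desktop
kubectl get pods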

Hope this will help you.

anurag
3

In my case, I was switching between an AKS cluster and the local docker-desktop cluster.

Every time I changed the cluster context I had to restart Docker, otherwise I got the same error as described:

Unable to connect to the server: dial tcp 127.0.0.1:6443: connectex: No connection could be made because the target machine actively refused it.

PS: make sure your local cluster is actually started; Docker Desktop shows a "Stop local cluster" button while Kubernetes is running.
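
After switching, it is worth confirming which endpoint kubectl is actually pointing at, for example:

kubectl config current-context
kubectl cluster-info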

Dupinder Singh
0

I had exactly the same problem even after having a correct config (created by running the Azure CLI command).

It seems that kubectl expects the HOME environment variable to be set, but it did not exist for me. There is, however, a solution:

If you add a KUBECONFIG environment variable that points to the config file, it will start working.

Example:

setx KUBECONFIG %UserProfile%\.kube\config

When the variable is present, kubectl has no trouble reading the file.
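
Keep in mind that setx only affects new sessions, so verify in a fresh command prompt, for example:

echo %KUBECONFIG%
kubectl config view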

P.S. It is an alternative to setting a HOME variable as suggested in another answer.

Ilya Chernomordik
0

I encountered a similar problem:

> kubectl cluster-info
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Unable to connect to the server: dial tcp xxx.x.x.x:8080: connectex: No connection could be made because the target machine actively refused it.

> kubectl cluster-info dump
Unable to connect to the server: dial tcp xxx.0.0.x:8080: connectex: No connection could be made because the target machine actively refused it.

This setup was working fine until Docker for Desktop brought its own copy of kubectl. There are two ways to overcome this situation:

1 - Quit / Stop Docker for Desktop while using the cluster

2 - Set KUBECONFIG file path

I tried both options and they worked.
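
To see whether Docker for Desktop's copy of kubectl is the one being picked up on your PATH, something like this helps on Windows:

where kubectl
kubectl version --client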

I found a good example of a .kube/config file; including it here for quick reference:

apiVersion: v1
clusters:
- cluster:
    certificate-authority: fake-ca-file
    server: https://1.2.3.4
  name: development
- cluster:
    insecure-skip-tls-verify: true
    server: https://5.6.7.8
  name: scratch
contexts:
- context:
    cluster: development
    namespace: frontend
    user: developer
  name: dev-frontend
- context:
    cluster: development
    namespace: storage
    user: developer
  name: dev-storage
- context:
    cluster: scratch
    namespace: default
    user: experimenter
  name: exp-scratch
current-context: ""
kind: Config
preferences: {}
users:
- name: developer
  user:
    client-certificate: fake-cert-file
    client-key: fake-key-file
- name: experimenter
  user:
    password: some-password
    username: exp

Reference: https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/
0

Following @ilya-chernomordik, I added my config path to the system variables by doing:

setx KUBECONFIG "D:\Minikube\Minikube.minikube\config"

I changed the default location from the C: drive to the D: drive as I have less space on C.

Now the problem is fixed.

Edit: after 5 minutes the API server stopped again. I have been trying to solve this issue for more than 5-6 hours and I'm not sure why it keeps happening, even after adding the correct path.
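
If the API server keeps dropping like this, checking the cluster state can help narrow it down (a minimal check, assuming minikube):

minikube status
minikube logs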

0

The Azure self-hosted agent doesn't have permission to access the Kubernetes cluster:

Remove the Azure self-hosted agent: .\config.cmd remove
Configure it again (.\config.cmd) with a user that has permission to access the Kubernetes cluster.
0

Essentially, this problem occurs if your minikube or kind cluster isn't running or configured. Try restarting minikube or kind; if that doesn't solve the problem, try restarting the hypervisor that minikube uses.

minikube start

This command solved my issue.
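
If you are on kind rather than minikube, the rough equivalent of a restart is to recreate the cluster (note that this deletes the existing cluster):

kind delete cluster
kind create cluster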

Dharman