If I create a GKE cluster without external IPs and I have a VPN connection to GCP, how can I ssh with kubectl?
I can (for instance) connect to the cluster compute nodes like this:

gcloud compute ssh gke-test-deploy-default-pool-xxxxx --internal-ip
But if I try to set up my kubectl credentials like this:

gcloud container clusters get-credentials test-deploy --internal-ip
It complains:
ERROR: (gcloud.container.clusters.get-credentials) cluster test-deploy
is not a private cluster.
I am able to run non-SSH commands such as kubectl get pods --all-namespaces, but if I run

kubectl exec -it rabbitmq-podnumber -n backbone-testdeploy bash
I get this error:
Error from server: error dialing backend: No SSH tunnels currently
open. Were the targets able to accept an ssh-key for user
"gke-xxxxxxx"
BTW, the whole point of this is to use Google Cloud NAT on my cluster so that I have a consistent external IP for all pods when connecting to an external service (Atlas) which uses an IP whitelist. I can see the NAT working for the compute instances, but I cannot connect to the pods to check them.
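(For what it's worth, once exec works, the way I'd confirm which IP the pods egress from is a throwaway curl pod; the image and the echo service below are just an illustration, not something from my setup:)

# Run a temporary pod and print the public IP its traffic leaves from;
# with Cloud NAT handling pod egress this should be the NAT address.
kubectl run nat-check --rm -it --restart=Never --image=curlimages/curl -- \
  curl -s https://ifconfig.me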
google-compute-engine google-kubernetes-engine
asked Apr 3 at 17:12 by Dave Welling (new contributor), edited Apr 3 at 17:17
1 Answer
The master node and the worker nodes live in different networks: the master is in a Google-managed network, while the worker nodes are in your VPC. In a standard cluster, the master communicates with the nodes via external IPs. In a private cluster, the master and the worker nodes are connected via VPC network peering and communicate via internal IPs.
This causes problems when connecting to the master over other peered networks or VPN connections, because the peering routes to the master are not propagated across VPNs or transitively across other peerings.
For your use case, disable the external master endpoint. Once that is done, running the get-credentials command will put the internal master endpoint in your kubeconfig instead of the external one. You will then need to run kubectl from inside the VPC network (for example from a bastion host or through a proxy).
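A rough sketch of that path, assuming a private cluster and a bastion VM in the same VPC (the names, zone and CIDR below are placeholders, not values from your setup):

# Create the cluster as a private cluster with a private-only master endpoint.
gcloud container clusters create test-deploy \
  --zone us-central1-a \
  --enable-ip-alias \
  --enable-private-nodes \
  --enable-private-endpoint \
  --master-ipv4-cidr 172.16.0.32/28

# From a bastion host inside the VPC, point your kubeconfig at the
# internal master endpoint and talk to the API server over internal IPs.
gcloud compute ssh my-bastion --internal-ip
gcloud container clusters get-credentials test-deploy --zone us-central1-a --internal-ip
kubectl get nodes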
Instead, I recommend leaving the external endpoint active and running get-credentials without --internal-ip, so that your kubeconfig uses the external endpoint and you can connect from anywhere. To keep the master secure, use Master Authorized Networks to define the external IPs or CIDRs you will be connecting from.
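For example (the zone and CIDR below are placeholders for your cluster location and wherever you run kubectl from):

# Restrict access to the public master endpoint to known ranges.
gcloud container clusters update test-deploy \
  --zone us-central1-a \
  --enable-master-authorized-networks \
  --master-authorized-networks 203.0.113.0/24

# Fetch credentials against the external endpoint (note: no --internal-ip).
gcloud container clusters get-credentials test-deploy --zone us-central1-a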
I am fairly certain the kubectl exec and logs commands are failing because of how you are getting the credentials.
One last thing worth checking: GKE automatically creates firewall rules and routes (named gke-...); these are required for the SSH tunnels from the master to the nodes to work properly.
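You can list them with something like:

# Firewall rules and routes GKE created in the project; the SSH tunnels
# from the master to the nodes depend on these being present.
gcloud compute firewall-rules list --filter="name~^gke-"
gcloud compute routes list --filter="name~^gke-"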
answered 2 days ago by Patrick W
This is a very helpful answer, but I'm concerned that using Master Authorized Networks might not solve my original problem - I need to force the cluster to use a single static IP on egress (as with a NAT). This is so the whitelist on my 3rd party service (Atlas) remains valid as the cluster auto-scales up. It seems that the only way the Cloud NAT will assume responsibility for the cluster members is if I prevent them from having external IPs.
– Dave Welling (yesterday)
The solutions described here (cloud.google.com/kubernetes-engine/docs/how-to/private-clusters) seemed to have potential, but understanding them is slow going for me (I'm a developer, not a network engineer).
– Dave Welling (yesterday)
You can still use Master Authorized Networks with private clusters and Cloud NAT. All your worker nodes will have no external IP, so all egress traffic will use Cloud NAT. The master will still have an external endpoint, so you can run kubectl commands from outside your cluster.
– Patrick W (22 hours ago)
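As a sketch of that setup (the names, region and network below are placeholders): reserve a static address and point a Cloud NAT gateway at it, so every node without an external IP egresses through that single IP.

# Reserve the one static IP that Atlas will see.
gcloud compute addresses create nat-egress-ip --region us-central1

# Cloud NAT is attached to a Cloud Router in the cluster's VPC and region.
gcloud compute routers create nat-router --network default --region us-central1
gcloud compute routers nats create nat-config \
  --router nat-router --region us-central1 \
  --nat-all-subnet-ip-ranges \
  --nat-external-ip-pool nat-egress-ip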