
If I create a GKE cluster without external IPs and I have a VPN connection to GCP, how can I ssh with kubectl?

















I can (for instance) connect to the cluster compute nodes like this:

    gcloud compute ssh gke-test-deploy-default-pool-xxxxx --internal-ip

But if I try to set up my kubectl credentials like this:

    gcloud container clusters get-credentials test-deploy --internal-ip

it complains:




    ERROR: (gcloud.container.clusters.get-credentials) cluster test-deploy
    is not a private cluster.
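As a sanity check (the zone below is just a placeholder for mine), you can inspect whether a cluster actually has a private-cluster configuration:

    # Empty output means the cluster was not created as a private cluster,
    # which is what the error above is complaining about
    gcloud container clusters describe test-deploy --zone=us-central1-a \
        --format="value(privateClusterConfig)"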




I am able to run non-SSH commands like kubectl get pods --all-namespaces, but if I run kubectl exec -it rabbitmq-podnumber -n backbone-testdeploy bash I get this error:




    Error from server: error dialing backend: No SSH tunnels currently
    open. Were the targets able to accept an ssh-key for user
    "gke-xxxxxxx"




By the way, the whole point of this is to use Google Cloud NAT on my cluster so that I have a consistent external IP for all pods when connecting to an external service (Atlas) that uses an IP whitelist. I can see the NAT working for the compute instances, but I cannot connect to the pods to check them.
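For context, here is roughly the Cloud NAT setup I mean; the names, region, and reserved address below are placeholders rather than my exact commands:

    # Reserve a static external IP so the egress address stays stable for the whitelist
    gcloud compute addresses create nat-ip --region=us-central1

    # A Cloud Router is required to hold the NAT configuration
    gcloud compute routers create nat-router --network=default --region=us-central1

    # NAT all subnet ranges in the region through the reserved address
    gcloud compute routers nats create nat-config \
        --router=nat-router --region=us-central1 \
        --nat-all-subnet-ip-ranges \
        --nat-external-ip-pool=nat-ip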










google-compute-engine google-kubernetes-engine

asked Apr 3 at 17:12 by Dave Welling (new contributor), edited Apr 3 at 17:17
1 Answer




















The master node and the worker nodes live in different networks: the master is in a Google-managed network, while the worker nodes are in your VPC. With a standard cluster, the master communicates with the nodes via external IPs. With a private cluster, the master and the worker nodes are connected via VPC network peering and communicate via internal IPs.



This causes problems when connecting to the master over other peered networks or VPN connections, because the peering routes to the master are not propagated across VPNs or additional peerings (VPC network peering is not transitive).



For your use case, disable the external master endpoint. Once this is done, when you run the get-credentials command, your kubeconfig will contain the internal master endpoint instead of the external one. You will then need to run kubectl from inside the VPC network (for example, from a bastion host or a proxy).
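Roughly, that flow looks like the sketch below; the zone and master CIDR are arbitrary examples, and (as far as I know) a cluster has to be created as private, you cannot convert an existing standard cluster:

    # Create a private cluster with the external master endpoint disabled
    gcloud container clusters create test-deploy \
        --zone=us-central1-a \
        --enable-ip-alias \
        --enable-private-nodes \
        --enable-private-endpoint \
        --master-ipv4-cidr=172.16.0.32/28

    # From a host inside the VPC, fetch credentials pointing at the internal endpoint
    gcloud container clusters get-credentials test-deploy \
        --zone=us-central1-a --internal-ip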



Instead, I recommend leaving the external endpoint active and running get-credentials without --internal-ip, so that your kubeconfig uses the external endpoint and you can connect from anywhere. To keep the master secure, use Master Authorized Networks to define the external IPs or CIDR ranges you will be connecting from.
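For example (the CIDR below is a stand-in for wherever you run kubectl from):

    # Allow access to the external master endpoint only from known ranges
    gcloud container clusters update test-deploy \
        --zone=us-central1-a \
        --enable-master-authorized-networks \
        --master-authorized-networks=203.0.113.0/24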



          I am fairly certain the kubectl exec and logs commands are failing because of how you are getting the credentials.



One last thing worth checking: GKE automatically creates firewall rules and routes (their names start with gke-), and these are required for the SSH tunnels from the master to the nodes to work properly.
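You can list both with a quick filter, for example:

    # The automatically created GKE firewall rules and routes
    gcloud compute firewall-rules list --filter="name~^gke-"
    gcloud compute routes list --filter="name~^gke-"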






answered 2 days ago by Patrick W























• This is a very helpful answer, but I'm concerned that using Master Authorized Networks might not solve my original problem: I need to force the cluster to use a single static IP on egress (as with a NAT), so that the whitelist on my third-party service (Atlas) remains valid as the cluster auto-scales up. It seems that the only way Cloud NAT will assume responsibility for the cluster members is if I prevent them from having external IPs.

            – Dave Welling
            yesterday











• The solutions described here (cloud.google.com/kubernetes-engine/docs/how-to/private-clusters) seemed to have potential, but understanding them is slow going for me (I'm a developer, not a network engineer).

            – Dave Welling
            yesterday











• You can still use master authorized networks with private clusters and Cloud NAT. None of your worker nodes will have an external IP, so all egress traffic will use Cloud NAT, while the master will still have an external endpoint so you can run kubectl commands from outside your cluster (a minimal creation sketch follows below).

            – Patrick W
            22 hours ago
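A minimal sketch of that combination, with the same placeholder zone and example CIDRs as above: private nodes force all egress through Cloud NAT, while the master keeps its public endpoint, locked down by master authorized networks.

    # Private nodes (no external IPs, so egress uses Cloud NAT);
    # the master keeps its public endpoint for kubectl access
    gcloud container clusters create test-deploy \
        --zone=us-central1-a \
        --enable-ip-alias \
        --enable-private-nodes \
        --master-ipv4-cidr=172.16.0.32/28 \
        --enable-master-authorized-networks \
        --master-authorized-networks=203.0.113.0/24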










