If I create a GKE cluster without external IPs and I have a VPN connection to GCP, how can I ssh with kubectl?


I can (for instance) connect to the cluster compute nodes like this:

    gcloud compute ssh gke-test-deploy-default-pool-xxxxx --internal-ip

But if I try to set up my kubectl credentials like this:

    gcloud container clusters get-credentials test-deploy --internal-ip

it complains:

    ERROR: (gcloud.container.clusters.get-credentials) cluster test-deploy
    is not a private cluster.

I am able to run non-SSH commands such as kubectl get pods --all-namespaces, but if I run kubectl exec -it rabbitmq-podnumber -n backbone-testdeploy bash I get this error:

    Error from server: error dialing backend: No SSH tunnels currently
    open. Were the targets able to accept an ssh-key for user
    "gke-xxxxxxx"

BTW, the whole point of this is to use Google Cloud NAT on my cluster so that I have a consistent external IP for all pods when connecting to an external service (Atlas) that uses an IP whitelist. I can see the NAT working for the compute instances, but I cannot connect to the pods to check them.
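As an aside, once exec access works again, one quick way to confirm which egress IP the pods actually present is to ask an external echo service from a throwaway pod. A minimal sketch, assuming the curlimages/curl image and ifconfig.me are acceptable (both are placeholders; any "what is my IP" endpoint would do):

    # Hypothetical check: run a scratch pod and print the source IP an
    # external service sees; it should be the Cloud NAT address, not a
    # node's external IP.
    kubectl run nat-check --rm -it --restart=Never \
        --image=curlimages/curl -- curl -s https://ifconfig.me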










google-compute-engine google-kubernetes-engine

asked Apr 3 at 17:12, edited Apr 3 at 17:17 by Dave Welling

1 Answer
































The master and the worker nodes live in different networks: the master runs in a Google-managed network, while the worker nodes sit in your VPC. With a standard cluster, the master communicates with the nodes via their external IPs. With a private cluster, the master and the worker nodes are connected through VPC network peering and communicate via internal IPs.



This causes problems when you connect to the master over other peered networks or VPN connections, because the peering routes to the master are not propagated across VPNs or transitive network peerings.



For your use case, disable the external master endpoint. Once that is done, running the get-credentials command puts the internal master endpoint into your kubeconfig instead of the external one, and you will then need to run kubectl from inside the VPC network (for example via a bastion host or a proxy).
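For reference, a minimal sketch of what that looks like at creation time, assuming a new cluster is acceptable (private networking is normally chosen when the cluster is created; the name and master CIDR below are placeholders):

    # Sketch: a private cluster whose master exposes only the internal
    # endpoint. Cluster name and --master-ipv4-cidr are placeholders.
    gcloud container clusters create test-deploy-private \
        --enable-ip-alias \
        --enable-private-nodes \
        --enable-private-endpoint \
        --master-ipv4-cidr 172.16.0.32/28

Omitting --enable-private-endpoint keeps both endpoints, with nodes still communicating internally.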



Instead, I recommend leaving the external endpoint active and running get-credentials without --internal-ip, so that your kubeconfig uses the external endpoint and you can connect from anywhere. To keep the master secure, use Master Authorized Networks to define the external IPs or CIDRs you will be connecting from.
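A minimal sketch of restricting the external endpoint this way (203.0.113.0/24 is a placeholder for your office or VPN egress range):

    # Sketch: allow kubectl access to the external master endpoint
    # only from a known CIDR. The range shown is a placeholder.
    gcloud container clusters update test-deploy \
        --enable-master-authorized-networks \
        --master-authorized-networks 203.0.113.0/24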



          I am fairly certain the kubectl exec and logs commands are failing because of how you are getting the credentials.



One last thing worth checking: GKE automatically creates firewall rules and routes (their names start with gke-), and these are required for the SSH tunnels from the master to the nodes to work properly.
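One way to eyeball those auto-created resources (the name~^gke- filter just matches the generated prefix):

    # List the firewall rules and routes GKE created automatically.
    gcloud compute firewall-rules list --filter="name~^gke-"
    gcloud compute routes list --filter="name~^gke-"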






answered 2 days ago by Patrick W

• This is a very helpful answer, but I'm concerned that using Master Authorized Networks might not solve my original problem: I need to force the cluster to use a single static egress IP (as with a NAT) so that the whitelist on my third-party service (Atlas) remains valid as the cluster auto-scales. It seems the only way Cloud NAT will take responsibility for the cluster members is if I prevent them from having external IPs.

  – Dave Welling, yesterday











• The solutions described here (cloud.google.com/kubernetes-engine/docs/how-to/private-clusters) seemed to have potential, but understanding them is slow going for me (I'm a developer, not a network engineer).

  – Dave Welling, yesterday











• You can still use Master Authorized Networks with private clusters and Cloud NAT: your worker nodes will have no external IPs, so all egress traffic will use Cloud NAT, while the master keeps an external endpoint so you can run kubectl commands from outside the cluster.

  – Patrick W, 22 hours ago
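Since the whole goal here is a single static egress IP, a minimal Cloud NAT sketch may help; every name, the region, and the network below are placeholder assumptions, not values from this question:

    # Sketch: reserve a static IP and route all subnet egress through
    # a Cloud NAT gateway, so pods on private nodes share one address.
    gcloud compute addresses create nat-egress-ip --region us-central1
    gcloud compute routers create nat-router \
        --network default --region us-central1
    gcloud compute routers nats create nat-config \
        --router nat-router --region us-central1 \
        --nat-external-ip-pool nat-egress-ip \
        --nat-all-subnet-ip-ranges

The reserved nat-egress-ip is then the one address to put on the Atlas whitelist.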










