Is random forest for regression a 'true' regression?


17

Random forests are used for regression. However, from what I understand, they assign an average target value to each leaf. Since each tree has only a limited number of leaves, there are only specific values that the regression model can predict. Is it therefore not just a 'discrete' regression (like a step function), rather than a 'continuous' regression like linear regression?



Am I understanding this correctly? If so, what advantage does random forest offer in regression?

regression random-forest cart







asked May 14 at 12:07, edited May 15 at 4:16 – user110565

  • 1
    Related: Decision Trees and Regression - Can predicted values be outside range of training data?
    – Stephan Kolassa
    May 14 at 12:23
2 Answers
18

This is correct: random forests discretize continuous variables, since they are based on decision trees, which work through recursive binary partitioning. But with sufficient data and sufficient splits, a step function with many small steps can approximate a smooth function, so this need not be a problem. If you really want to capture a smooth response to a single predictor, you can calculate the partial effect of that variable and fit a smooth function to it (this does not change the model itself, which retains its stepwise character).
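As a minimal sketch of this stepwise-but-flexible behaviour (my own illustration using scikit-learn's RandomForestRegressor on synthetic data, not part of the original answer): fit a forest to noisy samples of a sine curve; the prediction is piecewise constant, with finitely many distinct values, yet it tracks the smooth target closely.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 2 * np.pi, size=(500, 1))
    y = np.sin(X).ravel() + rng.normal(scale=0.1, size=500)

    forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

    # Predictions on a fine grid form a step function: only finitely many
    # distinct values, but the steps are small enough to follow sin(x).
    grid = np.linspace(0, 2 * np.pi, 2000).reshape(-1, 1)
    pred = forest.predict(grid)
    print("distinct predicted values:", np.unique(pred).size)
    print("max abs deviation from sin(x):", np.abs(pred - np.sin(grid.ravel())).max())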



Random forests offer quite a few advantages over standard regression techniques for some applications. To mention just three:



  1. They allow the use of arbitrarily many predictors (more predictors than data points is possible; see the sketch after this list)

  2. They can approximate complex nonlinear shapes without a priori specification

  3. They can capture complex interactions between predictors without a priori specification.
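A quick sketch of point 1 (again synthetic data and scikit-learn, my own assumption rather than part of the answer): a random forest fits without complaint when there are ten times more predictors than observations, a regime in which ordinary least squares is ill-posed.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(1)
    n, p = 50, 500                    # far more predictors than data points
    X = rng.normal(size=(n, p))
    y = X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=n)

    # Trains and predicts fine despite p >> n; each split considers only a
    # random subset of the 500 predictors.
    forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
    print("in-sample R^2:", forest.score(X, y))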

As for whether it is a 'true' regression, this is somewhat semantic. After all, piecewise regression is regression too, but is also not smooth.






answered May 14 at 12:23, edited May 14 at 12:28 – mkt

  • 7
    Also, a regression with only categorical features wouldn't be smooth.
    – Tim
    May 14 at 12:41

  • 3
    Could a regression with even one categorical feature be smooth?
    – Dave
    May 14 at 19:59


4

It is discrete, but then any output in the form of a floating-point number with a fixed number of bits will be discrete. If a tree has 100 leaves, it can give 100 different numbers. If you have 100 different trees with 100 leaves each, your random forest can theoretically produce 100^100 different values, corresponding to about 200 decimal digits, or roughly 660 bits, of precision. Of course, there is going to be some overlap, so you're not actually going to see 100^100 different values.

The distribution tends to get more discrete the closer you get to the extremes. Each tree has some minimum leaf (a leaf whose output is less than or equal to that of all its other leaves), and once you hit the minimum leaf of every tree, you can't get any lower. So there is some minimum overall value for the forest, and as you move away from that value, you start out with all but a few trees sitting at their minimum leaf, so small deviations from the minimum increase in discrete jumps. But decreased reliability at the extremes is a property of regression in general, not just of random forests.
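To make this floor effect concrete, here is a small sketch (my own, assuming scikit-learn; not from the original answer). The forest's prediction at any point is the mean of one leaf value per tree, so it can never fall below the mean of the per-tree minimum leaf values:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(2)
    X = rng.uniform(-3, 3, size=(1000, 1))
    y = X.ravel() ** 2 + rng.normal(scale=0.2, size=1000)

    forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

    grid = np.linspace(-3, 3, 5000).reshape(-1, 1)
    # Each tree's smallest prediction on the grid approximates its minimum leaf.
    tree_minima = np.array([tree.predict(grid).min() for tree in forest.estimators_])

    # No forest prediction can fall below the mean of the per-tree minima.
    print("floor (mean of per-tree minima):", tree_minima.mean())
    print("smallest forest prediction:     ", forest.predict(grid).min())

Near that floor, only a handful of trees can still move, so the achievable prediction values become visibly discrete.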






answered May 14 at 16:37 – Acccumulation

  • The leaves can store any value from the training data (so with the right training data, 100 trees of 100 leaves can store up to 10,000 distinct values). But the returned value is the mean of the chosen leaf from each tree, so the number of bits of precision of that value is the same whether you have 2 trees or 100 trees.
    – Darren Cook
    May 17 at 7:01