
Why is it called Latent Vector?


I just learned about GANs, and I'm a little confused about the name "latent vector".

  • First, in my understanding, a latent variable is a random variable that can't be measured directly (we need to compute it from other, observable variables). For example, knowledge is a latent variable. Is that correct?

  • Then, in a GAN, the latent vector $z$ is a random variable that serves as the input of the generator network. I have read in some tutorials that it is generated using only a simple random function:

    import numpy as np

    z = np.random.uniform(-1, 1, size=(batch_size, z_size))

So how are the two things related? Why don't we use the term "a vector with random values between -1 and 1" when referring to $z$ (the generator's input) in a GAN?
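For reference, the snippet above runs as-is with NumPy; the sizes below are made up for illustration. Each call draws a fresh concrete array from the same fixed distribution, which is the distinction discussed in the comments (the distribution is the random variable; the array is a sample):

```python
import numpy as np

batch_size, z_size = 4, 100  # example sizes, not from any particular tutorial

# Z ~ Uniform(-1, 1)^z_size is the random variable; each call draws one sample per row.
z1 = np.random.uniform(-1, 1, size=(batch_size, z_size))
z2 = np.random.uniform(-1, 1, size=(batch_size, z_size))

print(z1.shape)                # (4, 100)
print(np.array_equal(z1, z2))  # almost surely False: different samples, same distribution
```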










  • I will answer your specific question here (because you are asking too many questions in a single post and because I don't want to clutter my answer below). z = np.random.uniform(-1, 1, size=(batch_size, z_size)) is a sampling operation: you're sampling $z$ from a uniform distribution. In the context of VAEs, for example, a latent vector is sampled from some distribution. It is a "latent" distribution because it produces a compact (and hidden) representation of the inputs (e.g. images), and it is trained to learn such a compact representation.
    – nbro, May 24 at 20:58











  • I am actually not very familiar with GANs, but the concrete sampled latent vector $z$ is not a random variable. Mathematically, a hidden variable can be a random variable. The concept of a random variable is more closely tied to the concept of a distribution than to the concept of a vector: each random variable has an associated distribution, and a vector is a sample from a distribution (defined over vectors).
    – nbro, May 24 at 21:05











  • @nbro You said "a hidden variable can be a random variable"; based on your answer, does that mean "a latent variable can be a random variable"?
    – malioboro, May 25 at 15:59










  • @nbro So, do you think I can call $z$ "a vector with random values" instead of "latent vector"?
    – malioboro, May 25 at 16:00

















terminology generative-adversarial-networks






asked May 24 at 7:04









malioboro

2 Answers

It is called a latent variable because you cannot access it (that is, manipulate it) during training, just as, in an ordinary feed-forward NN, you cannot manipulate the values output by the hidden layers.

The term originally came from RBMs (which used the term hidden variables). The interpretation of hidden variables in the context of an RBM was that the hidden nodes helped to model the interaction between two input features: if both activate together, then the hidden unit will also activate. This principle can be traced to Hebb's rule, which states that "neurons that fire together, wire together". Thus RBMs were used to find a representation of the data in a space that is generally lower-dimensional than the original one. The same principle is used in autoencoders. Note that we are not explicitly modelling the interaction between two features; how that process occurs is "hidden" from us.

So, the term latent can be attributed to the following ideas:

  • We map higher-dimensional data to a lower-dimensional space with no prior conviction about how the mapping will be done. The NN trains itself to find the best configuration.

  • We cannot manipulate this lower-dimensional data, so it is "hidden" from us.

  • We do not know what each dimension means, so it is "hidden" from us.
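To make the first bullet concrete, here is a minimal sketch of mapping higher-dimensional data to a lower-dimensional latent space. The linear "encoder" and its weight matrix are invented purely for illustration; in a real autoencoder the weights would be learned rather than random:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten 64-dimensional inputs (think of flattened 8x8 images).
x = rng.normal(size=(10, 64))

# A fixed linear "encoder" projecting into an 8-dimensional latent space.
# In a real autoencoder, W would be trained; here it is random, for illustration only.
W = rng.normal(size=(64, 8))
latent = x @ W

# A compact representation: we cannot say what any of the 8 dimensions "means".
print(latent.shape)  # (10, 8)
```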





answered May 24 at 7:31 by DuttaA












  • Ohh... I see. Your answer and @nbro's are quite clear! It's good to know other examples that share the same meaning of "latent variable". Thank you! But why do you think $z$ in a GAN is not just called "a vector with random values"?
    – malioboro, May 25 at 15:52











  • @malioboro You can call $z$ a vector whose values have been uniformly sampled, or a vector with random values. However, in this specific case, that vector of "random values" likely represents/implements a latent variable.
    – nbro, May 25 at 16:07











  • @malioboro I am not familiar with GANs, but in general, when unsupervised encoder-decoder learning is present, it is usually derived from the idea of the RBM. I would highly suggest you check out chapter 16 (structured probabilistic models) of the Deep Learning book by Goodfellow et al.; it is freely available.
    – DuttaA, May 25 at 16:31











  • So latent... Surprisingly, there are at least two or three ways to interpret why it is latent, but a careful reader will see that all the interpretations are actually the same.
    – DuttaA, May 25 at 16:37










  • Ah... OK, I get it now. Thank you so much for the explanations, @nbro @DuttaA! I encourage future readers to read all the answers and comments, as they complement each other.
    – malioboro, May 25 at 22:13



















Latent is a synonym for hidden.

Why is it called a hidden (or latent) variable? Suppose, for example, that you observe the behaviour of a person or an animal. You can only observe the behaviour; you cannot observe the internal state (e.g. the mood) of that person or animal. The mood is a hidden variable because it cannot be observed directly, but only indirectly through its consequences.

A good example of a statistical model that is heavily based on the notion of latent variables is the hidden Markov model (HMM). If you understand the HMM, you will understand the concept of a hidden variable.
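As a toy illustration of the HMM point (all the probabilities below are invented for the example): the mood is the hidden state, and only the behaviour it emits is observable.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hidden states: 0 = "happy", 1 = "sad" (the mood, which we never observe directly).
transition = np.array([[0.8, 0.2],
                       [0.3, 0.7]])

# Observations: 0 = "smiles", 1 = "frowns" (the behaviour, which we do observe).
emission = np.array([[0.9, 0.1],
                     [0.2, 0.8]])

state = 0
hidden, observed = [], []
for _ in range(5):
    hidden.append(state)
    observed.append(int(rng.choice(2, p=emission[state])))
    state = int(rng.choice(2, p=transition[state]))

# Only `observed` is available to us; `hidden` would have to be inferred
# (e.g. with the forward-backward or Viterbi algorithm).
print(observed)
```

The generator's latent vector plays an analogous role: we observe the generated image, not the $z$ that produced it.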






– nbro
    @malioboro You can call $z$ a vector whose values have be uniformly sampled or a vector with random values. However, in that specific case, it is likely that vector of "random values" represents/implements a latent variable.
    $endgroup$
    – nbro
    May 25 at 16:07





    $begingroup$
    @malioboro You can call $z$ a vector whose values have be uniformly sampled or a vector with random values. However, in that specific case, it is likely that vector of "random values" represents/implements a latent variable.
    $endgroup$
    – nbro
    May 25 at 16:07













    $begingroup$
    @malioboro I am not familiar with GANs but in general when an encoder decoder unsupervised learning is present it is generally derived from idea of RBM..I would highly suggest you to check out chap 16 structured pbblity model in deep learning book by Goodfellow..It is freely available.
    $endgroup$
    – DuttaA
    May 25 at 16:31





    $begingroup$
    @malioboro I am not familiar with GANs but in general when an encoder decoder unsupervised learning is present it is generally derived from idea of RBM..I would highly suggest you to check out chap 16 structured pbblity model in deep learning book by Goodfellow..It is freely available.
    $endgroup$
    – DuttaA
    May 25 at 16:31













    $begingroup$
    So latent... Surprisingly there are at least 2-3 ways to interpret why it is latent, but a wise one will see that all interpretation are actually the same.
    $endgroup$
    – DuttaA
    May 25 at 16:37




    $begingroup$
    So latent... Surprisingly there are at least 2-3 ways to interpret why it is latent, but a wise one will see that all interpretation are actually the same.
    $endgroup$
    – DuttaA
    May 25 at 16:37












    $begingroup$
    Ah... Ok, I get it now, thank you so much for explanations @nbro @DuttaA! I encourage the future reader to read all answers and all comments as they complete each other
    $endgroup$
    – malioboro
    May 25 at 22:13




    $begingroup$
    Ah... Ok, I get it now, thank you so much for explanations @nbro @DuttaA! I encourage the future reader to read all answers and all comments as they complete each other
    $endgroup$
    – malioboro
    May 25 at 22:13













    Latent is a synonym for hidden.



    Why is it called a hidden (or latent) variable? For example, suppose that you observe the behaviour of a person or animal. You can only observe the behaviour. You cannot observe the internal state (e.g. the mood) of this person or animal. The mood is a hidden variable because it cannot be observed directly (but only indirectly through its consequences).



    A good example of a statistical model that is based heavily on the notion of latent variables is the hidden Markov model (HMM). If you understand the HMM, you will understand the concept of a hidden variable.
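The mood example above can be sketched as a tiny generative program (illustrative only; the states, behaviours, and probabilities are made up): the mood is sampled but never returned, so an observer sees only the behaviour it generated.

```python
import random

random.seed(0)

# Latent state and its emission distributions: the mood is hidden;
# only the behaviour it generates is observable.
EMISSIONS = {
    "happy": (["smiles", "sings", "frowns"], [0.6, 0.3, 0.1]),
    "sad":   (["frowns", "sighs", "smiles"], [0.6, 0.3, 0.1]),
}

def observe():
    """Sample a hidden mood, then emit an observable behaviour."""
    mood = random.choice(list(EMISSIONS))          # latent: never returned
    behaviours, weights = EMISSIONS[mood]
    return random.choices(behaviours, weights)[0]  # only this is visible

observations = [observe() for _ in range(5)]
```

Inferring the sequence of hidden moods from `observations` alone is exactly the decoding problem an HMM solves.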






    answered May 24 at 11:29 by nbro, edited May 24 at 20:50











