Why is it called Latent Vector?
I just learned about GANs and I'm a little confused about the name "latent vector".
First, in my understanding, a latent variable is a random variable that can't be measured directly (we need to compute it from other, observable variables). For example, knowledge is a latent variable. Is that correct?
Then, in a GAN, the latent vector $z$ is the input of the generator network. In some tutorials I've read, it's generated using just a simple random function:
z = np.random.uniform(-1, 1, size=(batch_size, z_size))
So how are the two things related? Why don't we use the term "a vector with random values between -1 and 1" when referring to $z$ (the generator's input) in a GAN?
terminology generative-adversarial-networks
I'll answer your specific question here (because you're asking too many questions in a single post and I don't want to clutter my answer below). z = np.random.uniform(-1, 1, size=(batch_size, z_size)) is a sampling operation: you're sampling $z$ from a uniform distribution. In the context of, e.g., VAEs, a latent vector is sampled from some distribution. It is a "latent" distribution because it produces a compact (and hidden) representation of the inputs (e.g. images), and it is trained to learn that compact representation.
– nbro
May 24 at 20:58
I'm actually not very familiar with GANs, but the concrete sampled latent vector $z$ is not a random variable. Mathematically, a hidden variable can be a random variable. The concept of a random variable is more closely related to the concept of a distribution than to that of a vector: each random variable has an associated distribution, and a vector is a sample from a distribution (defined over vectors).
– nbro
May 24 at 21:05
@nbro You say "a hidden variable can be a random variable"; does that mean a latent variable can be a random variable?
– malioboro
May 25 at 15:59
@nbro So, do you think I can call $z$ "a vector with random values" instead of a latent vector?
– malioboro
May 25 at 16:00
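The distinction nbro draws between a random variable (a distribution) and a concrete vector (a sample from it) can be made tangible with a small sketch. This is an illustrative example, not from the original thread; the seed and sizes are arbitrary:

```python
import numpy as np

batch_size, z_size = 2, 4
rng = np.random.default_rng(42)

# The *distribution* (uniform on [-1, 1)) plays the role of the random
# variable; each call to uniform() draws one concrete sample from it.
z_a = rng.uniform(-1, 1, size=(batch_size, z_size))
z_b = rng.uniform(-1, 1, size=(batch_size, z_size))

print(z_a.shape)                  # (2, 4)
print(np.array_equal(z_a, z_b))   # False: two different samples, same distribution
print((z_a >= -1).all() and (z_a < 1).all())  # True: values lie in the support
```

So "the latent vector $z$" is shorthand for "a sample drawn from the latent distribution"; it is the distribution, not any particular draw, that behaves like a random variable.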
asked May 24 at 7:04 by malioboro
2 Answers
It is called a latent variable because you cannot access (i.e. manipulate) it at training time, just as, in an ordinary feed-forward NN, you cannot manipulate the values output by the hidden layers.
The term originally came from RBMs (which used the term "hidden variables"). In an RBM, the hidden nodes were interpreted as modelling the interaction between two input features: if both activate together, the hidden unit also activates. This principle can be traced back to Hebb's rule, which states that "neurons that fire together, wire together." RBMs were thus used to find representations of data in a space that is generally lower-dimensional than the original; the same principle underlies autoencoders. Note that we do not explicitly model the interaction between two features; how it happens is "hidden" from us.
So the term "latent" can be attributed to the following ideas:
- We map higher-dimensional data to lower-dimensional data with no prior conviction about how the mapping will be done; the NN trains itself to the best configuration.
- We cannot manipulate this lower-dimensional data, so it is "hidden" from us.
- We do not know what each dimension means, so it is "hidden" from us.
answered May 24 at 7:31 by DuttaA
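A minimal sketch can tie this answer back to the question's snippet: we sample $z$ exactly as the asker does, push it through a generator, and note that although we chose the sampling range, the *meaning* of each latent dimension is learned, not chosen. The "generator" here is a hypothetical stand-in (a fixed linear map plus tanh), not a real trained GAN:

```python
import numpy as np

rng = np.random.default_rng(0)
batch_size, z_size, img_size = 4, 8, 16

# Toy stand-in for a trained generator: a fixed linear map from latent
# space to "image" space. A real GAN generator is a deep network whose
# weights (and hence the meaning of each z-dimension) are learned.
W = rng.normal(size=(z_size, img_size))
def generator(z):
    return np.tanh(z @ W)

# Sample latent vectors, as in the question.
z = rng.uniform(-1, 1, size=(batch_size, z_size))
images = generator(z)
print(images.shape)  # (4, 16)

# Because z lives in a learned latent space, interpolating between two
# latent vectors moves smoothly between the corresponding outputs.
z0, z1 = z[0], z[1]
alphas = np.linspace(0, 1, 5)
interpolated = generator(np.stack([(1 - a) * z0 + a * z1 for a in alphas]))
print(interpolated.shape)  # (5, 16)
```

The interpolation at the end is the usual demonstration that $z$ is more than "random numbers": nearby latent vectors map to related outputs, which is exactly the sense in which the space is a (hidden) representation.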
Ohh... I see, your answer and @nbro's are quite clear! It's good to know other examples that share the same meaning of "latent variable". Thank you! But why do you think $z$ in a GAN is not just called "a vector with random values"?
– malioboro
May 25 at 15:52
@malioboro You can call $z$ a vector whose values have been uniformly sampled, or a vector with random values. However, in this specific case, that vector of "random values" likely represents/implements a latent variable.
– nbro
May 25 at 16:07
@malioboro I am not familiar with GANs, but, in general, unsupervised encoder-decoder learning derives from the idea of the RBM. I would highly suggest you check out chapter 16, on structured probabilistic models, in the Deep Learning book by Goodfellow; it is freely available.
– DuttaA
May 25 at 16:31
So latent... Surprisingly, there are at least 2-3 ways to interpret why it is latent, but a wise reader will see that all the interpretations are actually the same.
– DuttaA
May 25 at 16:37
Ah... OK, I get it now, thank you so much for the explanations @nbro @DuttaA! I encourage future readers to read all the answers and comments, as they complement each other.
– malioboro
May 25 at 22:13
Latent is a synonym for hidden.
Why is it called a hidden (or latent) variable? Suppose that you observe the behaviour of a person or animal. You can only observe the behaviour; you cannot observe the internal state (e.g. the mood) of that person or animal. The mood is a hidden variable because it cannot be observed directly, only indirectly through its consequences.
A good example of a statistical model that relies heavily on the notion of latent variables is the hidden Markov model (HMM). If you understand the HMM, you will understand the concept of a hidden variable.
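The mood/behaviour analogy maps directly onto an HMM. Below is a tiny illustrative sketch (the states, symbols, and probabilities are made up for this example, not taken from the answer): the hidden "mood" sequence drives the observable "behaviour" sequence, and only the latter is available to an observer.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 2-state HMM: hidden "mood" states, observable "behaviour" symbols.
states = ["happy", "grumpy"]
obs_symbols = ["smile", "frown"]
transition = np.array([[0.8, 0.2],   # P(next mood | current mood)
                       [0.3, 0.7]])
emission = np.array([[0.9, 0.1],     # P(behaviour | mood)
                     [0.2, 0.8]])

def sample(T=10):
    hidden, observed = [], []
    s = 0  # start in the "happy" state
    for _ in range(T):
        hidden.append(states[s])
        observed.append(obs_symbols[rng.choice(2, p=emission[s])])
        s = rng.choice(2, p=transition[s])
    return hidden, observed

hidden, observed = sample()
# An observer only ever sees `observed`; `hidden` must be inferred,
# which is exactly what makes the mood a latent variable.
print(observed)
```

The GAN's $z$ plays an analogous role: the generated image is the observation, and $z$ is the hidden state that produced it.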
2 Answers
2
active
oldest
votes
2 Answers
2
active
oldest
votes
active
oldest
votes
active
oldest
votes
$begingroup$
It is called a Latent variable because you cannot access it during train time (which means manipulate it), In a normal Feed Forward NN you cannot manipulate the values output by hidden layers. Similarly the case here.
The term originally came from RBM's (they used term hidden variables). The interpretation of hidden variables in the context of RBM was that these hidden nodes helped to model the interaction between 2 input features (if both activate together, then the hidden unit will also activate). This principle can be traced to Hebb's rule which states "Neurons that fire together, wire together." Thus RBM's were used to find representation of models, in a space (generally lower dimensional than than the original). This is the principal used in Auto Encoder's also. Thus as you can see we are explicitly, not modelling the interaction between 2 features, how the process is occurring is "hidden" from us.
So, the term latent basically can be attributed to the following ideas:
- We map higher dimensional data to a lower dimensional data with no prior convictions of how the mapping will be done. The NN trains itself for the best configuration.
- We cannot manipulate this lower dimensional data. Thus it is "hidden from us.
- As we do not know what each dimension means, it is "hidden" from us.
$endgroup$
$begingroup$
ohh... I see.. your answer and @nbro's are quite clear! it's good to know other examples that give a clear similarity in their meaning of latent variable.. Thank you! But, what do you think why $z$ in GAN is not just called "a vector with random values"?
$endgroup$
– malioboro
May 25 at 15:52
$begingroup$
@malioboro You can call $z$ a vector whose values have be uniformly sampled or a vector with random values. However, in that specific case, it is likely that vector of "random values" represents/implements a latent variable.
$endgroup$
– nbro
May 25 at 16:07
$begingroup$
@malioboro I am not familiar with GANs but in general when an encoder decoder unsupervised learning is present it is generally derived from idea of RBM..I would highly suggest you to check out chap 16 structured pbblity model in deep learning book by Goodfellow..It is freely available.
$endgroup$
– DuttaA
May 25 at 16:31
$begingroup$
So latent... Surprisingly there are at least 2-3 ways to interpret why it is latent, but a wise one will see that all interpretation are actually the same.
$endgroup$
– DuttaA
May 25 at 16:37
$begingroup$
Ah... Ok, I get it now, thank you so much for explanations @nbro @DuttaA! I encourage the future reader to read all answers and all comments as they complete each other
$endgroup$
– malioboro
May 25 at 22:13
add a comment |
$begingroup$
It is called a Latent variable because you cannot access it during train time (which means manipulate it), In a normal Feed Forward NN you cannot manipulate the values output by hidden layers. Similarly the case here.
The term originally came from RBM's (they used term hidden variables). The interpretation of hidden variables in the context of RBM was that these hidden nodes helped to model the interaction between 2 input features (if both activate together, then the hidden unit will also activate). This principle can be traced to Hebb's rule which states "Neurons that fire together, wire together." Thus RBM's were used to find representation of models, in a space (generally lower dimensional than than the original). This is the principal used in Auto Encoder's also. Thus as you can see we are explicitly, not modelling the interaction between 2 features, how the process is occurring is "hidden" from us.
So, the term latent basically can be attributed to the following ideas:
- We map higher dimensional data to a lower dimensional data with no prior convictions of how the mapping will be done. The NN trains itself for the best configuration.
- We cannot manipulate this lower dimensional data. Thus it is "hidden from us.
- As we do not know what each dimension means, it is "hidden" from us.
$endgroup$
$begingroup$
ohh... I see.. your answer and @nbro's are quite clear! it's good to know other examples that give a clear similarity in their meaning of latent variable.. Thank you! But, what do you think why $z$ in GAN is not just called "a vector with random values"?
$endgroup$
– malioboro
May 25 at 15:52
$begingroup$
@malioboro You can call $z$ a vector whose values have be uniformly sampled or a vector with random values. However, in that specific case, it is likely that vector of "random values" represents/implements a latent variable.
$endgroup$
– nbro
May 25 at 16:07
$begingroup$
@malioboro I am not familiar with GANs but in general when an encoder decoder unsupervised learning is present it is generally derived from idea of RBM..I would highly suggest you to check out chap 16 structured pbblity model in deep learning book by Goodfellow..It is freely available.
$endgroup$
– DuttaA
May 25 at 16:31
$begingroup$
So latent... Surprisingly there are at least 2-3 ways to interpret why it is latent, but a wise one will see that all interpretation are actually the same.
$endgroup$
– DuttaA
May 25 at 16:37
$begingroup$
Ah... Ok, I get it now, thank you so much for explanations @nbro @DuttaA! I encourage the future reader to read all answers and all comments as they complete each other
$endgroup$
– malioboro
May 25 at 22:13
add a comment |
$begingroup$
It is called a Latent variable because you cannot access it during train time (which means manipulate it), In a normal Feed Forward NN you cannot manipulate the values output by hidden layers. Similarly the case here.
The term originally came from RBM's (they used term hidden variables). The interpretation of hidden variables in the context of RBM was that these hidden nodes helped to model the interaction between 2 input features (if both activate together, then the hidden unit will also activate). This principle can be traced to Hebb's rule which states "Neurons that fire together, wire together." Thus RBM's were used to find representation of models, in a space (generally lower dimensional than than the original). This is the principal used in Auto Encoder's also. Thus as you can see we are explicitly, not modelling the interaction between 2 features, how the process is occurring is "hidden" from us.
So, the term latent basically can be attributed to the following ideas:
- We map higher dimensional data to a lower dimensional data with no prior convictions of how the mapping will be done. The NN trains itself for the best configuration.
- We cannot manipulate this lower dimensional data. Thus it is "hidden from us.
- As we do not know what each dimension means, it is "hidden" from us.
$endgroup$
It is called a Latent variable because you cannot access it during train time (which means manipulate it), In a normal Feed Forward NN you cannot manipulate the values output by hidden layers. Similarly the case here.
The term originally came from RBM's (they used term hidden variables). The interpretation of hidden variables in the context of RBM was that these hidden nodes helped to model the interaction between 2 input features (if both activate together, then the hidden unit will also activate). This principle can be traced to Hebb's rule which states "Neurons that fire together, wire together." Thus RBM's were used to find representation of models, in a space (generally lower dimensional than than the original). This is the principal used in Auto Encoder's also. Thus as you can see we are explicitly, not modelling the interaction between 2 features, how the process is occurring is "hidden" from us.
So, the term latent basically can be attributed to the following ideas:
- We map higher dimensional data to a lower dimensional data with no prior convictions of how the mapping will be done. The NN trains itself for the best configuration.
- We cannot manipulate this lower dimensional data. Thus it is "hidden from us.
- As we do not know what each dimension means, it is "hidden" from us.
answered May 24 at 7:31
DuttaADuttaA
2,6632932
2,6632932
$begingroup$
ohh... I see.. your answer and @nbro's are quite clear! it's good to know other examples that give a clear similarity in their meaning of latent variable.. Thank you! But, what do you think why $z$ in GAN is not just called "a vector with random values"?
$endgroup$
– malioboro
May 25 at 15:52
$begingroup$
@malioboro You can call $z$ a vector whose values have be uniformly sampled or a vector with random values. However, in that specific case, it is likely that vector of "random values" represents/implements a latent variable.
$endgroup$
– nbro
May 25 at 16:07
$begingroup$
@malioboro I am not familiar with GANs but in general when an encoder decoder unsupervised learning is present it is generally derived from idea of RBM..I would highly suggest you to check out chap 16 structured pbblity model in deep learning book by Goodfellow..It is freely available.
$endgroup$
– DuttaA
May 25 at 16:31
$begingroup$
So latent... Surprisingly there are at least 2-3 ways to interpret why it is latent, but a wise one will see that all interpretation are actually the same.
$endgroup$
– DuttaA
May 25 at 16:37
$begingroup$
Ah... Ok, I get it now, thank you so much for explanations @nbro @DuttaA! I encourage the future reader to read all answers and all comments as they complete each other
$endgroup$
– malioboro
May 25 at 22:13
add a comment |
$begingroup$
ohh... I see.. your answer and @nbro's are quite clear! it's good to know other examples that give a clear similarity in their meaning of latent variable.. Thank you! But, what do you think why $z$ in GAN is not just called "a vector with random values"?
$endgroup$
– malioboro
May 25 at 15:52
$begingroup$
@malioboro You can call $z$ a vector whose values have be uniformly sampled or a vector with random values. However, in that specific case, it is likely that vector of "random values" represents/implements a latent variable.
$endgroup$
– nbro
May 25 at 16:07
$begingroup$
@malioboro I am not familiar with GANs but in general when an encoder decoder unsupervised learning is present it is generally derived from idea of RBM..I would highly suggest you to check out chap 16 structured pbblity model in deep learning book by Goodfellow..It is freely available.
$endgroup$
– DuttaA
May 25 at 16:31
$begingroup$
So latent... Surprisingly there are at least 2-3 ways to interpret why it is latent, but a wise one will see that all interpretation are actually the same.
$endgroup$
– DuttaA
May 25 at 16:37
$begingroup$
Ah... Ok, I get it now, thank you so much for explanations @nbro @DuttaA! I encourage the future reader to read all answers and all comments as they complete each other
$endgroup$
– malioboro
May 25 at 22:13
$begingroup$
ohh... I see.. your answer and @nbro's are quite clear! it's good to know other examples that give a clear similarity in their meaning of latent variable.. Thank you! But, what do you think why $z$ in GAN is not just called "a vector with random values"?
$endgroup$
– malioboro
May 25 at 15:52
$begingroup$
ohh... I see.. your answer and @nbro's are quite clear! it's good to know other examples that give a clear similarity in their meaning of latent variable.. Thank you! But, what do you think why $z$ in GAN is not just called "a vector with random values"?
$endgroup$
– malioboro
May 25 at 15:52
$begingroup$
@malioboro You can call $z$ a vector whose values have be uniformly sampled or a vector with random values. However, in that specific case, it is likely that vector of "random values" represents/implements a latent variable.
$endgroup$
– nbro
May 25 at 16:07
$begingroup$
@malioboro You can call $z$ a vector whose values have be uniformly sampled or a vector with random values. However, in that specific case, it is likely that vector of "random values" represents/implements a latent variable.
$endgroup$
– nbro
May 25 at 16:07
$begingroup$
@malioboro I am not familiar with GANs but in general when an encoder decoder unsupervised learning is present it is generally derived from idea of RBM..I would highly suggest you to check out chap 16 structured pbblity model in deep learning book by Goodfellow..It is freely available.
$endgroup$
– DuttaA
May 25 at 16:31
$begingroup$
@malioboro I am not familiar with GANs but in general when an encoder decoder unsupervised learning is present it is generally derived from idea of RBM..I would highly suggest you to check out chap 16 structured pbblity model in deep learning book by Goodfellow..It is freely available.
$endgroup$
– DuttaA
May 25 at 16:31
$begingroup$
So latent... Surprisingly there are at least 2-3 ways to interpret why it is latent, but a wise one will see that all interpretation are actually the same.
$endgroup$
– DuttaA
May 25 at 16:37
$begingroup$
So latent... Surprisingly there are at least 2-3 ways to interpret why it is latent, but a wise one will see that all interpretation are actually the same.
$endgroup$
– DuttaA
May 25 at 16:37
$begingroup$
Ah... Ok, I get it now, thank you so much for explanations @nbro @DuttaA! I encourage the future reader to read all answers and all comments as they complete each other
$endgroup$
– malioboro
May 25 at 22:13
$begingroup$
Ah... Ok, I get it now, thank you so much for explanations @nbro @DuttaA! I encourage the future reader to read all answers and all comments as they complete each other
$endgroup$
– malioboro
May 25 at 22:13
add a comment |
$begingroup$
Latent is a synonym for hidden.
Why is it called a hidden (or latent) variable? Suppose, for example, that you observe the behaviour of a person or animal. You can only observe the behaviour; you cannot observe the internal state (e.g. the mood) of this person or animal. The mood is a hidden variable because it cannot be observed directly (but only indirectly, through its consequences).
A good example of a statistical model that is heavily based on the notion of latent variables is the hidden Markov model (HMM). If you understand the HMM, you will understand the concept of a hidden variable.
$endgroup$
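To make the mood example concrete, here is a toy sketch of an HMM in NumPy. All state names, observation names, and probabilities below are made up purely for illustration: the "mood" is the latent variable, and an outside observer only ever sees the emitted behaviour, never the state sequence itself.

```python
import numpy as np

# Toy hidden Markov model: the "mood" (hidden state) is never observed
# directly; we only see the behaviour (emissions) it produces.
# All probabilities are illustrative, not taken from any real data.

rng = np.random.default_rng(0)

states = ["happy", "sad"]          # latent (hidden) variable
observations = ["smile", "frown"]  # what we can actually observe

transition = np.array([[0.8, 0.2],   # P(next state | happy)
                       [0.3, 0.7]])  # P(next state | sad)
emission = np.array([[0.9, 0.1],     # P(observation | happy)
                     [0.2, 0.8]])    # P(observation | sad)

state = 0  # start in "happy"
observed_sequence = []
for _ in range(5):
    obs = rng.choice(2, p=emission[state])       # emit a visible behaviour
    observed_sequence.append(observations[obs])
    state = rng.choice(2, p=transition[state])   # latent state evolves unseen

# Only the emissions are available to an observer; the state sequence
# stays hidden -- which is exactly what "latent" means here.
print(observed_sequence)
```

Inferring the hidden state sequence from the observed one is precisely what HMM algorithms such as Viterbi decoding do.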
edited May 24 at 20:50
answered May 24 at 11:29
nbro
$begingroup$
I will answer your specific question here (because you are asking too many questions in a single post and because I don't want to clutter my answer below).
z = np.random.uniform(-1, 1, size=(batch_size, z_size))
is a sampling operation: you are sampling $z$ from a uniform distribution. In the context of, e.g., VAEs, a latent vector is sampled from some distribution. This is a "latent" distribution because it outputs a compact (and hidden) representation of the inputs (e.g. images). This latent distribution is trained to learn such a compact representation.
$endgroup$
– nbro
May 24 at 20:58
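As a minimal sketch of the sampling step in that comment (the `batch_size` and `z_size` values here are hypothetical, chosen only for illustration), each row of `z` is one latent vector that a generator would decode into an image:

```python
import numpy as np

# Hypothetical sizes, just for illustration.
batch_size, z_size = 4, 100

# Sample a batch of latent vectors z from a uniform distribution on [-1, 1),
# as in the snippet above. Each of the 4 rows is one 100-dimensional
# latent vector.
z = np.random.uniform(-1, 1, size=(batch_size, z_size))

print(z.shape)  # (4, 100)
```

The distribution itself is fixed here; in a VAE, by contrast, the parameters of the latent distribution are learned from data.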
$begingroup$
I am actually not very familiar with GANs, but the concrete sampled latent vector $z$ is not a random variable. Mathematically, a hidden variable can be a random variable. The concept of a random variable is more closely related to the concept of a distribution than to the concept of a vector. Each random variable has an associated distribution. A vector is a sample from a distribution (defined over vectors).
$endgroup$
– nbro
May 24 at 21:05
$begingroup$
@nbro "a hidden variable can be a random variable", based on your answer, is that means "a latent variable can be a random variable"?
$endgroup$
– malioboro
May 25 at 15:59
$begingroup$
@nbro So, do you think I can call $z$ a vector with random values instead of a latent vector?
$endgroup$
– malioboro
May 25 at 16:00