Is it an acceptable way to write a loss function in this form?
I found a loss function of a perceptron in a book, written in this form:

$$ L(w,b) = - \sum\limits_{x_i \in M} y_i(w x_i + b) $$

In the context of machine learning loss functions, is it acceptable to put $x_i$ on the bottom of a summation notation?

machine-learning loss-function
edited Jun 9 at 16:43 by bkshi
asked Jun 9 at 13:31 by Jay
1 Answer
It just means to sum over all $x_i$ in $M$. That is completely acceptable notation.
Adding to what @Dave pointed out, when optimizing a perceptron you essentially compute the loss for each input in your training data set and sum it. To do that you need to traverse the full training data set, which is exactly what $\sum_{x_i \in M}$ means.

– thushv89, Jun 12 at 4:36
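Putting the answer and comment together, here is a minimal NumPy sketch of this loss, assuming (as is standard for the perceptron) that $M$ is the set of misclassified points and that the labels satisfy $y_i \in \{-1, +1\}$; the data, weights, and bias below are made up for illustration:

```python
import numpy as np

# Hypothetical toy data: 2-D inputs x_i with labels y_i in {-1, +1}.
X = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, 1.0]])
y = np.array([1, 1, -1, -1])

w = np.array([0.5, -0.5])   # example weight vector
b = 0.0                     # example bias

def perceptron_loss(w, b, X, y):
    """L(w, b) = - sum_{x_i in M} y_i (w . x_i + b),
    where M is the set of misclassified points, i.e. y_i (w . x_i + b) <= 0."""
    margins = y * (X @ w + b)      # y_i (w . x_i + b) for every training point
    misclassified = margins <= 0   # boolean mask: membership in M
    return -margins[misclassified].sum()

print(perceptron_loss(w, b, X, y))  # prints 1.5
```

For correctly classified points the margin $y_i(w x_i + b)$ is positive, so summing the negated margins over $M$ makes the loss non-negative, and it is zero exactly when no point is misclassified.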
answered Jun 9 at 16:00 by Dave
Thanks for contributing an answer to Data Science Stack Exchange!