

Is it an acceptable way to write a loss function in this form?


I found the loss function of a perceptron written in a book in this form:



$$ L(w,b) = - \sum\limits_{x_i \in M} y_i(wx_i + b) $$



In the context of machine learning loss functions, is it acceptable to put $x_i$ beneath the summation sign?
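For concreteness, the subscript $x_i \in M$ reads as "sum over the points in the set $M$". A minimal NumPy sketch (the toy data, the `perceptron_loss` helper, and the convention that $M = \{x_i : y_i(wx_i + b) \le 0\}$ is the misclassified set are assumptions for illustration):

```python
import numpy as np

# Hypothetical toy data: rows of X are inputs x_i, y holds labels in {-1, +1}.
X = np.array([[2.0, 1.0], [-1.0, -3.0], [0.5, -1.0]])
y = np.array([1, -1, 1])

w = np.array([1.0, 1.0])  # weight vector
b = 0.0                   # bias

def perceptron_loss(w, b, X, y):
    """Sum -y_i*(w.x_i + b) over the misclassified set M only."""
    margins = y * (X @ w + b)      # y_i*(w.x_i + b) for every point
    misclassified = margins <= 0   # membership test for M
    return -np.sum(margins[misclassified])

print(perceptron_loss(w, b, X, y))  # → 0.5 (only the third point is in M)
```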










machine-learning loss-function

edited Jun 9 at 16:43 by bkshi
asked Jun 9 at 13:31 by Jay

1 Answer

          It just means to sum over all $x_i$ in $M$. That is completely acceptable notation.






– Dave, answered Jun 9 at 16:00












• Adding to what @Dave pointed out, when optimizing a perceptron you essentially compute the loss for each misclassified input and sum those values. Finding them requires traversing the full training data set, which is exactly what $\sum_{x_i \in M}$ means.
  – thushv89, Jun 12 at 4:36
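The traversal described in the comment can be sketched as a complete perceptron training loop, where only misclassified points contribute to the loss and drive the update (a hypothetical example; the toy data and learning rate are assumptions):

```python
import numpy as np

# Hypothetical linearly separable toy data; labels in {-1, +1}.
X = np.array([[2.0, 1.0], [-1.0, -1.0], [1.0, 2.0], [-2.0, -1.0]])
y = np.array([1, -1, 1, -1])

w = np.zeros(2)
b = 0.0
eta = 1.0  # learning rate

# Each epoch traverses the full training set; a point enters M when
# y_i*(w.x_i + b) <= 0, and then triggers the update w += eta*y_i*x_i,
# b += eta*y_i (the negative gradient of its term -y_i*(w.x_i + b)).
for epoch in range(100):
    mistakes = 0
    for x_i, y_i in zip(X, y):
        if y_i * (x_i @ w + b) <= 0:
            w += eta * y_i * x_i
            b += eta * y_i
            mistakes += 1
    if mistakes == 0:  # M is empty: loss is zero, training converged
        break

print(all(y_i * (x_i @ w + b) > 0 for x_i, y_i in zip(X, y)))  # → True
```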













