
Is it an acceptable way to write a loss function in this form?




I found the loss function of a perceptron in a book, written in this form:



$$ L(w,b) = - \sum\limits_{x_i \in M} y_i (w x_i + b) $$



In the context of machine-learning loss functions, is it acceptable to put a set-membership condition such as $x_i \in M$ under the summation sign?










      machine-learning loss-function






      edited Jun 9 at 16:43









      bkshi











      asked Jun 9 at 13:31









Jay




















1 Answer

























          It just means to sum over all $x_i$ in $M$. That is completely acceptable notation.


















• Adding to what @Dave pointed out, when optimizing a perceptron you essentially compute the loss for each input in your training dataset and sum the results. To do that you need to traverse the full training set, which is exactly what $\sum_{x_i \in M}$ means. – thushv89, Jun 12 at 4:36
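To make the notation concrete, here is a minimal NumPy sketch of the sum over the misclassified set $M$ (the function name and toy data are illustrative, not from the thread; $M$ is taken to be the points with non-positive margin, as in the perceptron criterion):

```python
import numpy as np

def perceptron_loss(w, b, X, y):
    """Perceptron criterion L(w, b) = -sum_{x_i in M} y_i (w . x_i + b),
    where M is the set of misclassified points, i.e. y_i (w . x_i + b) <= 0."""
    margins = y * (X @ w + b)      # y_i (w . x_i + b) for every training point
    misclassified = margins <= 0   # boolean mask: membership test for M
    return -np.sum(margins[misclassified])

# Toy data with labels in {-1, +1} (illustrative only)
X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -1.0]])
y = np.array([1, 1, -1])
w = np.array([0.5, 0.5])
b = -1.0

print(perceptron_loss(w, b, X, y))  # all points correctly classified here, so M is empty
```

With these weights every margin is positive, so $M$ is empty and the loss is zero; shrinking the weights (e.g. `w = [0.1, 0.1]`) makes two margins negative and the loss becomes positive, since the sum then runs over the two elements of $M$.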














          answered Jun 9 at 16:00









Dave












