

Is it an acceptable way to write a loss function in this form?



I found a loss function for a perceptron in a book, written in this form:

$$ L(w,b) = -\sum_{x_i \in M} y_i (w x_i + b) $$

In the context of machine learning loss functions, is it acceptable to put $x_i$ at the bottom of the summation sign?

Tags: machine-learning, loss-function






asked Jun 9 at 13:31 by Jay
edited Jun 9 at 16:43 by bkshi




















1 Answer


















It just means to sum over all $x_i$ in $M$. That is completely acceptable notation.












• Adding to what @Dave pointed out, when optimizing a perceptron you essentially compute the loss for each input in your training dataset and sum those values. To do that you need to traverse the full training dataset, which is exactly what $\sum_{x_i \in M}$ means. – thushv89, Jun 12 at 4:36
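To make the summation concrete, here is a minimal pure-Python sketch (the names `perceptron_loss`, `w`, `b`, `X`, `y` and the toy data are illustrative, not from the question) that evaluates the perceptron loss by summing $y_i(w x_i + b)$ only over the misclassified points, i.e. the set $M$:

```python
def perceptron_loss(w, b, X, y):
    """L(w, b) = -(sum over misclassified x_i of y_i * (w . x_i + b)).

    A point (x_i, y_i) is misclassified when y_i * (w . x_i + b) < 0,
    so M is exactly the set of points that contribute to the sum.
    """
    total = 0.0
    for x_i, y_i in zip(X, y):
        margin = y_i * (sum(w_j * x_j for w_j, x_j in zip(w, x_i)) + b)
        if margin < 0:       # x_i is in M (wrong side of the hyperplane)
            total -= margin  # each term enters with the leading minus sign
    return total

# Toy example: the second point is misclassified, the first is not.
X = [(1.0, 2.0), (2.0, -1.0)]
y = [1.0, -1.0]
w = (0.5, 0.5)
b = 0.0
print(perceptron_loss(w, b, X, y))  # 0.5
```

Because every term comes from a misclassified point, the loss is always non-negative and is zero exactly when all points are classified correctly.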














answered Jun 9 at 16:00 by Dave











(Question and answer from Data Science Stack Exchange.)



