



Formal Definition of Dot Product
















In most textbooks, the dot product between two vectors is defined as:

$$\langle x_1,x_2,x_3\rangle \cdot \langle y_1,y_2,y_3\rangle = x_1 y_1 + x_2 y_2 + x_3 y_3$$

I understand how this definition works most of the time. However, the definition makes no reference to a coordinate system (i.e. no basis is included for the vector components). So, if I had two vectors in two different coordinate systems:

$$x_1 \vec{e}_1 + x_2 \vec{e}_2 + x_3 \vec{e}_3$$
$$y_1 \vec{e}_1' + y_2 \vec{e}_2' + y_3 \vec{e}_3'$$

How would I compute their dot product? In particular, is there a more formal/abstract/generalized definition of the dot product (one that would allow me to compute $\vec{e}_1 \cdot \vec{e}_1'$ without converting the vectors to the same coordinate system)? Even if I did convert the vectors to the same coordinate system, how do we know that the result will be the same whether I multiply the components in the primed system or in the unprimed system?














  • "However, in this definition, there is no reference to coordinate system (i.e. no basis is included for the vector components)." But I think that it is always strongly implied that the two vector component sets are obtained with respect to the same orthonormal basis.
    – Trunk
    May 13 at 16:03










  • What you'd need is the change-of-basis matrix relating $\hat{e}_i$ and $\hat{e}_j'$; you should be able to go from there. But as it stands, this question could technically be better answered on Mathematics Stack Exchange, as it's purely mathematical in nature.
    – Triatticus
    May 13 at 18:20







  • @Evpok: In hindsight, I'm wondering how I got the cross product and dot product mixed up, especially given the definition in the question itself. Let's blame Mondays.
    – MSalters
    May 14 at 7:06


















vectors coordinate-systems linear-algebra






edited May 13 at 1:56









Gilbert











asked May 13 at 0:08









dts


















8 Answers



















Your top-line question can be answered at many levels. Setting aside issues of forms and covariant/contravariant components, the answer is:

The dot product is the product of the magnitudes of the two vectors, times the cosine of the angle between them.

No matter what basis you compute that in, you have to get the same answer because it's a physical quantity.

The usual "sum of products of orthonormal components" is then a convenient computational approach, but as you've seen it's not the only way to compute it.

The dot product is linear, commutative, distributive, etc. So when you expand the dot product

$$(a_x \hat{x}+a_y \hat{y} + a_z \hat{z}) \cdot (b_x \hat{X}+b_y \hat{Y} + b_z \hat{Z})$$

you get nine terms: $(a_x b_x\, \hat{x}\cdot\hat{X}) + (a_x b_y\, \hat{x}\cdot\hat{Y}) + \dots$ In the usual orthonormal basis, the same-axis factors such as $\hat{x}\cdot\hat{X}$ just become 1, while the different-axis factors such as $\hat{x}\cdot\hat{Y}$ are zero. That reduces to the formula you know.

In a non-orthonormal basis, you have to figure out what those basis products are. To do that, you refer back to the definition: the product of the magnitude of each, times the cosine of the angle between them. Once you have all of those, you're again all set to compute. It just looks a bit more complicated.
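To make the nine-term expansion concrete, here is a small Python sketch. The basis is made up for illustration (two vectors of lengths 1 and 2 with a 60-degree angle between them); each basis product comes straight from the magnitude-times-cosine definition.

```python
import math

# Hypothetical non-orthonormal 2D basis: f1 has length 1, f2 has length 2,
# with a 60-degree angle between them.  Each basis product f_i . f_j is
# |f_i| |f_j| cos(theta_ij), from the definition.
g11 = 1.0 * 1.0 * math.cos(0.0)           # f1 . f1 = 1
g22 = 2.0 * 2.0 * math.cos(0.0)           # f2 . f2 = 4
g12 = 1.0 * 2.0 * math.cos(math.pi / 3)   # f1 . f2 = 1

gram = [[g11, g12],
        [g12, g22]]

def dot(a, b):
    """Expand (a1 f1 + a2 f2) . (b1 f1 + b2 f2) into the four basis
    products and sum them (nine terms in 3D, four in 2D)."""
    return sum(a[i] * gram[i][j] * b[j]
               for i in range(2) for j in range(2))
```

With an orthonormal basis the table of basis products is the identity, and `dot` collapses to the familiar sum-of-component-products formula.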














  • I don't think the dot product is associative.
    – eyeballfrog
    May 13 at 1:18

  • "No matter what basis you compute that in, you have to get the same answer because it's a physical quantity." Only if you have a physical vector. If we're speaking mathematically, vectors can be abstract objects, and the "angle" is not defined. In fact, generally speaking, if "angle" is defined, it's defined in terms of the dot product, making your definition circular.
    – Acccumulation
    May 13 at 17:14

  • @Acccumulation This is Physics Stack Exchange.
    – Bob Jacobsen
    May 13 at 17:18

  • @BobJacobsen Yes, but physics also has abstract Hilbert spaces. Consider, for example, quantum mechanics.
    – scaphys
    May 13 at 18:36

  • "No matter what basis you compute that in, you have to get the same answer because it's a physical quantity." What about two bases which are not related by an orthogonal transformation? For example $\hat{e}_i = 2 \hat{f}_i$.
    – Display Name
    May 13 at 18:54



















Dot products, or inner products, are defined axiomatically, or abstractly. An inner product on a vector space $V$ over $\mathbb{R}$ is a pairing $V\times V\to \mathbb{R}$, denoted by $\langle u,v\rangle$, with the properties $\langle u,v\rangle=\langle v,u\rangle$, $\langle u+cw,v\rangle=\langle u,v\rangle+c\langle w,v\rangle$, and $\langle u,u\rangle>0$ if $u\ne 0$. In general, a vector space can be endowed with an inner product in many ways. Notice that there is no reference here to a basis/coordinate system.

Using what is called the Gram-Schmidt process, one can then construct a basis $e_1,\dots,e_n$ for $V$ in which the inner product takes the computational form which you stated in your question.

In your question, you are actually starting with what is called an orthonormal basis for an inner product. The coordinate-free approach is to state the postulates that an inner product should obey, and then, given an explicit inner product, construct an orthonormal basis in which to do computations.

In general, an orthonormal basis $e_1,e_2,e_3$ for one inner product on $V$ will not be an orthonormal basis for another inner product on $V$.
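As a rough sketch of how the Gram-Schmidt process mentioned above produces such a basis, here is a minimal Python version. It uses the standard dot product on $\mathbb{R}^3$; any inner product satisfying the axioms would work the same way.

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def gram_schmidt(vectors):
    """Turn linearly independent vectors into an orthonormal basis
    with respect to the inner product `dot`."""
    basis = []
    for v in vectors:
        w = list(v)
        # subtract the projection onto each basis vector built so far
        for e in basis:
            c = dot(w, e)
            w = [wi - c * ei for wi, ei in zip(w, e)]
        norm = dot(w, w) ** 0.5   # positive by the axiom <u, u> > 0
        basis.append([wi / norm for wi in w])
    return basis

e1, e2 = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0]])
```

In the returned basis, `dot(e1, e1) == 1` and `dot(e1, e2) == 0` (up to rounding), which is exactly what makes the component formula in the question valid.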







The dot product can be defined in a coordinate-independent way as

$$\vec{a}\cdot\vec{b}=|\vec{a}|\,|\vec{b}|\cos\theta$$

where $\theta$ is the angle between the two vectors. This involves only lengths and angles, not coordinates.

To use your first formula, the coordinates must be in the same basis.

You can convert between bases using a rotation matrix, and the fact that a rotation matrix preserves vector lengths is sufficient to show that it preserves the dot product. This is because

$$\vec{a}\cdot\vec{b}=\frac{1}{2}\left(|\vec{a}+\vec{b}|^2-|\vec{a}|^2-|\vec{b}|^2\right).$$

This formula is another purely geometric, coordinate-free definition of the dot product.
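A quick numeric check of the lengths-only formula above, in Python (the sample vectors are arbitrary):

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

a = [1.0, 2.0, 3.0]
b = [-4.0, 0.5, 2.0]

# Recover a.b purely from squared lengths, via
#   a.b = (|a+b|^2 - |a|^2 - |b|^2) / 2
apb = [ai + bi for ai, bi in zip(a, b)]
from_lengths = 0.5 * (dot(apb, apb) - dot(a, a) - dot(b, b))
# from_lengths equals dot(a, b) == 3.0 for these vectors
```

Since a rotation leaves every length on the right-hand side unchanged, it must leave the left-hand side unchanged too.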


















  • Thank you! That makes sense. But what happens if you are dealing with a non-orthonormal system? Is the dot product's value preserved in making the coordinate transformation?
    – dts
    May 13 at 0:21

  • Yes, the value is preserved, but the coordinate-based formula in a non-orthonormal basis is more complicated than your first formula.
    – G. Smith
    May 13 at 0:31






  • "You can convert between bases using a rotation matrix": I strongly disagree. Only if the basis vectors are normalised, which needn't be the case. However, there exists a matrix $A$ such that $e_i' = A e_i$, where $e_i$ is to be understood as the $i$th basis vector (not the component).
    – infinitezero
    May 13 at 16:33




















The coordinate-free definition of a dot product is:

$$\vec{a} \cdot \vec{b} = \frac{1}{4}\left[(\vec{a} + \vec{b})^2 - (\vec{a} - \vec{b})^2\right]$$

It's up to you to figure out what the norm is:

$$\|\vec{a}\| = \sqrt{(\vec{a})^2}$$

Here is a reference for this viewpoint:
http://www.pmaweb.caltech.edu/Courses/ph136/yr2012/1202.1.K.pdf
Section 2.3














  • This is a circular definition, as the norm is defined via the dot product.
    – Winther
    May 13 at 9:01

  • @Winther You've got to input something: the dot product cannot be derived only from the underlying vector space structure. The norm seems a reasonable choice here, for geometric intuition.
    – Denis Nardin
    May 13 at 11:06

  • This will only define an inner product if the norm satisfies the parallelogram identity $2\|x\|^2+2\|y\|^2=\|x+y\|^2+\|x-y\|^2$.
    – Jannik Pitt
    May 13 at 11:50

  • Yes, you have to input something: either define a norm, or define an inner product and have the norm be induced by it. However, my point was that you seem to define the norm via $\|a\|=\sqrt{a\cdot a}$, which is why I said it was circular. On second reading, it does look like you say the norm needs to be specified externally, so that would be fine. However, doesn't the definition of the norm then require you to specify a coordinate system, so it's not really coordinate-free?
    – Winther
    May 13 at 13:48

  • @Winther Well, it depends on how your vector space is given to you. If your vectors are a bunch of coordinates (like in the usual description of $\mathbb{R}^n$), of course every definition you give will be coordinate dependent (coordinates are all you have!), but if your vector space is composed of something more exotic (e.g. the space of solutions of a certain ODE) then you can hope to write down a definition of the norm using something else. (And yes, indeed a Banach space is Hilbert iff the norm satisfies the parallelogram identity, plus some added condition over $\mathbb{C}$.)
    – Denis Nardin
    May 13 at 18:10



















Computing the following matrix product will give you the dot product: $$\begin{bmatrix} x_1 & x_2 & x_3 \end{bmatrix}\begin{bmatrix} \vec{e}_1\cdot\vec{e}_1' & \vec{e}_1\cdot\vec{e}_2' & \vec{e}_1\cdot\vec{e}_3' \\ \vec{e}_2\cdot\vec{e}_1' & \vec{e}_2\cdot\vec{e}_2' & \vec{e}_2\cdot\vec{e}_3' \\ \vec{e}_3\cdot\vec{e}_1' & \vec{e}_3\cdot\vec{e}_2' & \vec{e}_3\cdot\vec{e}_3' \end{bmatrix}\begin{bmatrix}y_1\\y_2\\y_3\end{bmatrix}$$ If we transform the coordinates of a vector, only the components and basis of the vector change; the vector itself remains unchanged. Thus the dot product remains unchanged even if we compute it between primed and unprimed representations.
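In code, this matrix sandwich is $x^{\mathsf T} G\, y$ with $G_{ij} = \vec{e}_i\cdot\vec{e}_j'$. A minimal Python sketch follows; the entries of $G$ would come from your actual bases, and the identity matrix below just stands in for the case of identical orthonormal bases.

```python
def dot_mixed(x, G, y):
    """x^T G y: dot product of x (unprimed components) with y (primed
    components), given G[i][j] = e_i . e'_j."""
    n = len(x)
    return sum(x[i] * G[i][j] * y[j]
               for i in range(n) for j in range(n))

# With identical orthonormal bases, G is the identity matrix and the
# familiar component formula drops out.
I3 = [[1.0, 0.0, 0.0],
      [0.0, 1.0, 0.0],
      [0.0, 0.0, 1.0]]
```

This is also a useful way to see why a metric tensor shows up later in relativity: it plays exactly the role of $G$.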














  • I like this because it provides prior motivation for representing inner products with a metric tensor in relativity.
    – dmckee
    May 16 at 16:28



















A vector space (or linear space) is a set together with two operations, vector addition and scalar multiplication, satisfying some rules (spelled out in the Definition section of this Wikipedia article). The net result of this definition is that vectors behave like little arrows or ordered tuples under addition and scalar multiplication.

This is good, but often more structure is needed. (See the Vector Spaces with Additional Structure section of the link above.)

For example, a norm can be defined on a vector space. This defines a magnitude or length for each vector. Again there are some rules: no magnitude can be negative; only the $\vec{0}$ vector can have a magnitude of $0$; and the triangle inequality holds: $\lVert a+b\rVert \le \lVert a\rVert + \lVert b\rVert$.

Likewise, an inner product can be defined on a vector space. It adds enough structure to support the ideas of orthogonality and projection. For spaces where it makes sense, this leads to the idea of angle.

The formal definition of an inner product is a function that associates a number with each pair of vectors, subject to some rules. See this for the details.

These are general definitions which work on all vector spaces. The links above give examples of vector spaces that may not be familiar, e.g. the set of all functions of the form $y = ax^2 + bx + c$ is a 3-dimensional vector space.

The most familiar vector spaces are $N$-dimensional Euclidean spaces. These are normed vector spaces, where the norm matches the everyday definition of distance.

The dot product is the inner product on these spaces that matches the everyday definition of orthogonality and angle. See this Wikipedia article.




























"How would I compute their dot product?"

You pretty much have to convert them to the same basis system. You can multiply them out and get nine different terms, and then find the dot product in terms of the nine dot products of the basis vectors, but the math is pretty much the same as converting to the same coordinate system.

"In particular, is there a more formal/abstract/generalized definition of the dot product (that would allow me to compute $\vec{e}_1 \cdot \vec{e}_1'$ without converting the vectors to the same coordinate system)?"

The value of $\vec{e}_1 \cdot \vec{e}_1'$ is an empirical value. You can't calculate it simply from a definition.

"Even if I did convert the vectors to the same coordinate system, why do we know that the result will be the same if I multiply the components in the primed system versus in the unprimed system?"

Given a physical system in which "length" and "angle" are defined, the dot product is invariant under rotations and reflections, i.e. orthonormal transformations. So given two coordinate systems, as long as the axes are orthogonal to each other within each coordinate system, and the two coordinate systems have the same origin and the same scale (one unit is the same length, regardless of direction or coordinate system), dot products will be the same.

In that case, the change of basis can be represented with a matrix $U$ such that $U^* U=I$ (for real numbers, $U^*$ is just the transpose, so I'll use that for the rest, since presumably you're asking about vectors over the real numbers). The dot product of two vectors $x$ and $y$ is $x^T y$. If $x'=Ux$ and $y'=Uy$, then the dot product of $x'$ and $y'$ is $x'^T y'=(Ux)^T Uy=x^T U^T U y=x^T I y=x^T y$.
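The last step can be checked numerically. Here is a small Python sketch with a 2D rotation standing in for the orthonormal matrix $U$ (the angle and sample vectors are arbitrary):

```python
import math

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v)))
            for i in range(len(M))]

# A rotation matrix is orthonormal: U^T U = I.
t = 0.7
U = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]

x, y = [1.0, 2.0], [-3.0, 0.5]
xp, yp = mat_vec(U, x), mat_vec(U, y)   # primed components
# dot(xp, yp) equals dot(x, y), as the algebra above shows
```

Replacing `U` by a reflection, or any other matrix with orthonormal columns, leaves the equality intact; a scaling matrix breaks it.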



























The formula

$$\langle x_1,x_2,x_3\rangle \cdot \langle y_1,y_2,y_3\rangle = x_1 y_1 + x_2 y_2 + x_3 y_3$$

is just a start and, as you go further in physics, will need quite a few generalizations. The assumptions here are that your vectors are (a) real, (b) three-dimensional, (c) tuples, (d) written in a "standard basis". There are points at which each of these is broken: for example, one of the first things you learn in the special theory of relativity(*) is how to work with (b') four-dimensional vectors that (d') don't even allow a standard basis at all, so you get a different formula (of which this is a special case). Similarly, in quantum mechanics you need to grasp (a') complex vector spaces of (b'') infinite-dimensional things that (c') may not be tuples at all (although they often can be written so, again allowing a formula of which this is a special case).

You just figured out yourself that (d) will not always be the case, and that's a splendid job on your part.

Before any of those generalizations take place, the assumptions (a)-(d) are taken for granted. That is, we are working in a basis
$$e_1 \equiv \langle 1,0,0 \rangle \\
e_2 \equiv \langle 0,1,0 \rangle \\
e_3 \equiv \langle 0,0,1 \rangle$$

and
$$e_1 \cdot e_1 = 1,\quad e_1 \cdot e_2 = 0,\quad e_1 \cdot e_3 = 0,\ \text{etc.}$$
If a triple of numbers is written, it is in this basis. While there are other bases, they just represent concrete triples which you have to multiply by the corresponding coefficients and sum up, effectively transforming to $(e_1, e_2, e_3)$, if you insist on applying the scalar product formula above.

The generalization to taking vectors not as triples of numbers but as combinations of some abstract $e_1'$, $e_2'$, $e_3'$ then requires specifying what $e_i' \cdot e_j'$ is for all $i$, $j$, as other answers have already said in plenty of ways. If $(e_i)$ and $(e_i')$ are two different bases, and you know the scalar product in one, the scalar product in the other can be computed from the relations between the basis vectors. And so can a formula for taking scalar products of two vectors, one in each of the two bases.

The basic idea remains, though, and it is a good idea to familiarize oneself with all aspects of the above as deeply as possible: to understand the relation between the scalar product and norm, orthogonality, the expression of geometric properties and relations (length, angle, distance), etc., before things get too abstract. That's why many texts just hold on to the simplest formula as long as they can.




To actually answer your question: let

$$\vec{x} = x_1 \vec{e}_1 + x_2 \vec{e}_2 + x_3 \vec{e}_3$$
$$\vec{y} = y_1 \vec{e}_1' + y_2 \vec{e}_2' + y_3 \vec{e}_3'$$

such that $(\vec{e}_1, \vec{e}_2, \vec{e}_3)$ is the standard basis. Let further

$$\vec{e}_i' = \sum_{j=1}^3 E_{i,j} \vec{e}_j,$$

so using distributivity and linearity it holds that

$$\vec{e}_i' \cdot \vec{e}_k
= \left( \sum_{j=1}^3 E_{i,j} \vec{e}_j \right) \cdot \vec{e}_k
= \sum_{j=1}^3 E_{i,j} \left( \vec{e}_j \cdot \vec{e}_k \right)
= \sum_{j=1}^3 E_{i,j} \delta_{jk} \ (**)
= E_{i,k},$$

(also $\vec{e}_k \cdot \vec{e}_i' = E_{i,k}$), so

$$\vec{x} \cdot \vec{y}
= \left( \sum_{i=1}^3 x_i \vec{e}_i \right) \cdot \left( \sum_{j=1}^3 y_j \vec{e}_j' \right)
= \sum_{i=1}^3 \sum_{j=1}^3 x_i y_j \left( \vec{e}_i \cdot \vec{e}_j' \right)
= \sum_{i=1}^3 \sum_{j=1}^3 x_i y_j E_{j,i}.$$

You can use this formula for taking dot products of two vectors in different bases.
I'm not sure if this counts as not converting to the same basis or not: you will need the conversion matrix $(E_{i,j})$ anyway. You won't need to explicitly write $\vec{y}$ in the $(\vec{e}_i)$ basis beforehand, though.

(*) Mathematically speaking, special relativity does not use an actual 'scalar product', but for my example this suffices without further details.

(**) $\delta_{jk}$ is shorthand for "one when $j=k$ and zero otherwise".
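A small numeric check of the final formula, in Python. The matrix $E$ here is a made-up example (the standard basis rotated about the $z$ axis), so $E_{i,j}$ is the $j$-th standard component of $\vec{e}_i'$.

```python
import math

t = math.pi / 6  # arbitrary angle for the example
# Primed basis = standard basis rotated about the z axis,
# E[i][j] = j-th standard component of e'_i.
E = [[ math.cos(t), math.sin(t), 0.0],
     [-math.sin(t), math.cos(t), 0.0],
     [ 0.0,         0.0,         1.0]]

x = [1.0, 2.0, 3.0]    # components in the standard basis (e_i)
y = [0.5, -1.0, 2.0]   # components in the primed basis (e'_i)

# x . y = sum_i sum_j x_i y_j E_{j,i}, straight from the formula above
mixed = sum(x[i] * y[j] * E[j][i] for i in range(3) for j in range(3))

# Cross-check: rewrite y in the standard basis first, then use the
# ordinary component formula.
y_std = [sum(y[j] * E[j][k] for j in range(3)) for k in range(3)]
direct = sum(xi * yi for xi, yi in zip(x, y_std))
```

Both routes give the same number, which is the point of the derivation: the mixed-basis formula is just the same-basis formula with the conversion folded in.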





























          16












          $begingroup$

          Your top-line question can be answered at many levels. Setting aside issues of forms and covariant/contravariant, the answer is:




          The dot product is the product of the magnitudes of the two vectors, times the cosine of the angle between them.




          No matter what basis you compute that in, you have to get the same answer because it's a physical quantity.



          The usual "sum of products of orthonormal components" is then a convenient computational approach, but as you've seen it's not the only way to compute it.



          The dot product's properties include linearity, commutativity, distributivity, etc. So when you expand the dot product



          $$(a_x \hat{x}+a_y \hat{y} + a_z \hat{z}) \cdot (b_x \hat{X}+b_y \hat{Y} + b_z \hat{Z})$$



          you get nine terms like $(a_x b_x\, \hat{x}\cdot\hat{X}) + (a_x b_y\, \hat{x}\cdot\hat{Y})+$ etc. In the usual orthonormal basis, the same-axis $\hat{x}\cdot\hat{X}$ factors just become 1, while the different-axis $\hat{x}\cdot\hat{Y}$ et al. factors are zero. That reduces to the formula you know.



          In a non-orthonormal basis, you have to figure out what those basis products are. To do that, you refer back to the definition: The product of the size of each, times the cosine of the angle between. Once you have all of those, you're again all set to compute. It just looks a bit more complicated...
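          A minimal numerical sketch of that recipe (Python with NumPy; the particular basis vectors and component values are illustrative, not from the answer): tabulate the pairwise basis dot products in a Gram matrix, then contract it with the components.

```python
import numpy as np

# A non-orthonormal basis, written in ordinary Cartesian coordinates
# (these particular vectors are just an example).
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([1.0, 1.0, 0.0])   # not orthogonal to e1, not unit length
e3 = np.array([0.0, 0.0, 2.0])   # not unit length
E = np.array([e1, e2, e3])

# Gram matrix: g[i, j] = e_i . e_j  (size of each times cosine of the angle)
g = E @ E.T

# Components of two vectors expressed in this basis
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# Dot product via the nine-term expansion: sum_ij a_i b_j (e_i . e_j)
dot_in_basis = a @ g @ b

# Cross-check: rebuild the actual Cartesian vectors and dot them directly
dot_cartesian = (E.T @ a) @ (E.T @ b)

print(dot_in_basis, dot_cartesian)  # the two values agree
```

The only extra input in the non-orthonormal case is the table of basis dot products `g`; once it is known, the computation is again purely mechanical.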














          • 13
            I don't think the dot product is associative.
            – eyeballfrog
            May 13 at 1:18

          • 4
            "No matter what basis you compute that in, you have to get the same answer because it's a physical quantity." Only if you have a physical vector. If we're speaking mathematically, vectors can be abstract objects, and the "angle" is not defined. In fact, generally speaking, if "angle" is defined, it's defined in terms of the dot product, making your definition circular.
            – Acccumulation
            May 13 at 17:14

          • 3
            @Acccumulation This is Physics Stack Exchange.
            – Bob Jacobsen
            May 13 at 17:18

          • 4
            @Bob Jacobsen Yes, but physics also has abstract Hilbert spaces. Consider, for example, Quantum Mechanics.
            – scaphys
            May 13 at 18:36

          • 2
            "No matter what basis you compute that in, you have to get the same answer because it's a physical quantity." What about two bases which are not related by an orthogonal transformation? For example $\hat{e}_i = 2 \hat{f}_i$.
            – Display Name
            May 13 at 18:54









          edited May 13 at 1:36
          answered May 13 at 0:26 by Bob Jacobsen


















          18













          Dot products, or inner products, are defined axiomatically, or abstractly. An inner product on a vector space $V$ over $\mathbb{R}$ is a pairing $V\times V\to \mathbb{R}$, denoted by $\langle u,v\rangle$, with properties $\langle u,v\rangle=\langle v,u\rangle$, $\langle u+cw,v\rangle=\langle u,v\rangle+c\langle w,v\rangle$, and $\langle u,u\rangle>0$ if $u\ne0$. In general, a vector space can be endowed with an inner product in many ways. Notice here there is no reference to a basis/coordinate system.



          Using what is called the Gram-Schmidt process, one can then construct a basis $e_1,\cdots,e_n$ for $V$ in which the inner product takes the computational form which you stated in your question.



          In your question, you are actually starting with what is called an orthonormal basis for an inner product. The coordinate-free approach is to state the postulates that an inner product should obey, then after being given an explicit inner product, construct an orthonormal basis in which to do computations.



          In general, an orthonormal basis $e_1,e_2,e_3$ for one inner product on $V$ will not be an orthonormal basis for another inner product on $V$.
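          A sketch of how Gram-Schmidt produces such a basis (Python/NumPy; the starting vectors are arbitrary, and the ordinary Euclidean inner product stands in for a general one):

```python
import numpy as np

def gram_schmidt(vectors, inner=np.dot):
    """Orthonormalize linearly independent vectors
    with respect to the given inner product."""
    basis = []
    for v in vectors:
        w = v.astype(float).copy()
        # Subtract the projection onto each previously constructed basis vector
        for e in basis:
            w -= inner(w, e) * e
        # Normalize using the norm induced by the inner product
        basis.append(w / np.sqrt(inner(w, w)))
    return basis

# Arbitrary linearly independent starting vectors
vs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]
e = gram_schmidt(vs)

# In the resulting basis, <e_i, e_j> = delta_ij, so the inner product
# reduces to the familiar "sum of products of components" formula.
G = np.array([[np.dot(ei, ej) for ej in e] for ei in e])
print(np.round(G, 10))  # identity matrix
```

Passing a different `inner` function (any pairing satisfying the axioms above) would produce a basis orthonormal with respect to that inner product instead.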























          answered May 13 at 2:32 by user52817





















                  9













                  The dot product can be defined in a coordinate-independent way as



                  $$\vec{a}\cdot\vec{b}=|\vec{a}|\,|\vec{b}|\cos\theta$$



                  where $theta$ is the angle between the two vectors. This involves only lengths and angles, not coordinates.



                  To use your first formula, the coordinates must be in the same basis.



                  You can convert between bases using a rotation matrix, and the fact that a rotation matrix preserves vector lengths is sufficient to show that it preserves the dot product. This is because



                  $$\vec{a}\cdot\vec{b}=\frac{1}{2}\left(|\vec{a}+\vec{b}|^2-|\vec{a}|^2-|\vec{b}|^2\right).$$



                  This formula is another purely-geometric, coordinate-free definition of the dot product.
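                  A quick numerical illustration of that argument (Python/NumPy; the rotation angle and vectors are arbitrary): a rotation matrix preserves lengths, and therefore, by the length-only identity above, it preserves the dot product too.

```python
import numpy as np

theta = 0.7  # arbitrary rotation angle about the z axis
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

a = np.array([1.0, 2.0, 3.0])
b = np.array([-1.0, 0.5, 2.0])

# Lengths are preserved by the rotation ...
assert np.isclose(np.linalg.norm(R @ a), np.linalg.norm(a))

# ... so the dot product, reconstructed purely from lengths via
#   a.b = (|a+b|^2 - |a|^2 - |b|^2) / 2,
# is preserved as well.
def dot_from_lengths(u, v):
    return 0.5 * (np.linalg.norm(u + v)**2
                  - np.linalg.norm(u)**2
                  - np.linalg.norm(v)**2)

print(dot_from_lengths(a, b), dot_from_lengths(R @ a, R @ b))
```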


















                  • Thank you! That makes sense. But what happens if you are dealing with a non-orthonormal system? Is the dot product's value preserved in making the coordinate transformation?
                    – dts
                    May 13 at 0:21

                  • Yes, the value is preserved, but the coordinate-based formula in a non-orthonormal basis is more complicated than your first formula.
                    – G. Smith
                    May 13 at 0:31

                  • 1
                    "You can convert between bases using a rotation matrix": I strongly disagree. Only if the basis vectors are normalised, but that needn't be the case. However, there exists a matrix $A$ such that $e_i^\prime = A e_i$, where $e_i$ is to be understood as the $i$th basis vector (not the component).
                    – infinitezero
                    May 13 at 16:33
















                  edited May 13 at 0:23
                  answered May 13 at 0:14 by G. Smith












                  7













                  The coordinate-free definition of a dot product is:

                  $$ \vec{a} \cdot \vec{b} = \frac{1}{4} \left[(\vec{a} + \vec{b})^2 - (\vec{a} - \vec{b})^2\right] $$

                  It's up to you to figure out what the norm is:

                  $$ \|\vec{a}\| = \sqrt{(\vec{a})^2} $$



                  Here is a reference for this viewpoint:
                  http://www.pmaweb.caltech.edu/Courses/ph136/yr2012/1202.1.K.pdf
                  Section 2.3
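                  A short numerical check of this polarization-style definition (Python/NumPy; the example vectors are arbitrary):

```python
import numpy as np

a = np.array([2.0, -1.0, 0.5])
b = np.array([1.0, 4.0, 2.0])

def norm_sq(v):
    return np.dot(v, v)  # (v)^2 in the notation above

# a . b = ( (a+b)^2 - (a-b)^2 ) / 4
polarized = 0.25 * (norm_sq(a + b) - norm_sq(a - b))
print(polarized, np.dot(a, b))  # both give the same value
```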














                  • 3




                    $begingroup$
                    This is a circular definition as the norm is defined via the dot product.
                    $endgroup$
                    – Winther
                    May 13 at 9:01










                  • $begingroup$
                    @Winther You've got to input something: the dot product cannot be derived only from the underlying vector space structure. The norm seems a reasonable choice here, for geometric intuition.
                    $endgroup$
                    – Denis Nardin
                    May 13 at 11:06






                  • 2
                    This will define an inner product iff the norm satisfies the parallelogram identity $2\lVert x\rVert^2 + 2\lVert y\rVert^2 = \lVert x+y\rVert^2 + \lVert x-y\rVert^2$.
                    – Jannik Pitt
                    May 13 at 11:50










                  • Yes, you have to input something: either define a norm, or define an inner product and let the norm be induced by it. My point was that you seemed to define the norm via $\lVert a\rVert = \sqrt{a \cdot a}$, which is why I said it was circular. On second reading, it does look like you say the norm must be specified externally, so that would be fine. But doesn't the definition of the norm then require you to specify a coordinate system, so it's not really coordinate-free?
                    – Winther
                    May 13 at 13:48










                  • @Winther Well, it depends on how your vector space is given to you. If your vectors are a bunch of coordinates (as in the usual description of $\mathbb{R}^n$), of course every definition you give will be coordinate-dependent (coordinates are all you have!), but if your vector space is composed of something more exotic (e.g. the space of solutions of a certain ODE), then you can hope to write down a definition of the norm using something else. (And yes, indeed a Banach space is Hilbert iff the norm satisfies the parallelogram identity, plus an added condition over $\mathbb{C}$.)
                    – Denis Nardin
                    May 13 at 18:10















                  answered May 13 at 1:30 by JEB · edited May 13 at 3:50












                  3












                  Computing the following matrix product gives you the dot product: $$\begin{bmatrix} x_1 & x_2 & x_3 \end{bmatrix} \begin{bmatrix} \vec e_1\cdot\vec e_1' & \vec e_1\cdot\vec e_2' & \vec e_1\cdot\vec e_3' \\ \vec e_2\cdot\vec e_1' & \vec e_2\cdot\vec e_2' & \vec e_2\cdot\vec e_3' \\ \vec e_3\cdot\vec e_1' & \vec e_3\cdot\vec e_2' & \vec e_3\cdot\vec e_3' \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix}$$ If we transform the coordinates of a vector, only its components and basis change; the vector itself remains unchanged. Thus the dot product remains unchanged even if we compute it between primed and unprimed representations.
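A minimal numerical sketch of this formula (the helper name `gram_dot` is illustrative): the middle matrix holds the pairwise dot products of the basis vectors, and when both bases are the same orthonormal basis it is the identity, so the familiar component formula drops out:

```python
def gram_dot(x, G, y):
    # x^T G y, where G[i][j] = e_i . e'_j (the pairwise basis dot products)
    return sum(x[i] * G[i][j] * y[j] for i in range(3) for j in range(3))

# Same orthonormal basis on both sides: G is the 3x3 identity,
# and gram_dot reduces to the usual sum of component products.
G_identity = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
x, y = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
print(gram_dot(x, G_identity, y))  # 32.0
```

For non-orthonormal or rotated bases, only `G` changes; the scalar result is the same because the underlying vectors are.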














                  • 1
                    I like this because it provides a prior motivation for representing inner products with a metric tensor in relativity.
                    – dmckee
                    May 16 at 16:28



















                  answered May 13 at 1:13 by walber97 · edited May 17 at 2:34















                  1












                  A vector space (or linear space) is a set together with two operations, vector addition and scalar multiplication, satisfying some rules (spelled out in the Definition section of this Wikipedia article). The net result of this definition is that vectors behave like little arrows, or ordered tuples, under addition and scalar multiplication.

                  This is good, but often more structure is needed. (See the Vector Spaces with Additional Structure section of the link above.)

                  For example, a norm can be defined on a vector space. This assigns a magnitude, or length, to each vector. Again there are some rules: no magnitude can be negative; only the $\vec 0$ vector can have magnitude $0$; and the triangle inequality must hold: $\lVert \vec a + \vec b\rVert \le \lVert \vec a\rVert + \lVert \vec b\rVert$.

                  Likewise, an inner product can be defined on a vector space. It adds enough structure to support the ideas of orthogonality and projection. For spaces where it makes sense, this leads to the idea of angle.

                  The formal definition of an inner product is that it is a function associating two vectors with a number, subject to some rules. See this for the details.

                  These are general definitions which work on all vector spaces. The links above give examples of vector spaces that may not be familiar. E.g., the set of all functions of the form $y = ax^2 + bx + c$ is a 3-dimensional vector space.

                  The most familiar vector spaces are the $N$-dimensional Euclidean spaces. These are normed vector spaces, where the norm matches the everyday definition of distance.

                  The dot product is the inner product on these spaces that matches the everyday definition of orthogonality and angle. See this Wikipedia article.
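To make the quadratic example concrete, here is a sketch (the names `add`, `scale`, and `inner` are illustrative) that treats coefficient triples $(a, b, c)$ as vectors and equips the space with one possible inner product, $\langle p, q\rangle = \int_0^1 p(x)\,q(x)\,dx$:

```python
# Quadratics y = a x^2 + b x + c, represented by coefficient triples (a, b, c).

def add(p, q):
    # vector addition: add coefficients termwise
    return tuple(pi + qi for pi, qi in zip(p, q))

def scale(k, p):
    # scalar multiplication
    return tuple(k * pi for pi in p)

def inner(p, q):
    # one possible inner product: integrate p(x) * q(x) over [0, 1]
    a1, b1, c1 = p
    a2, b2, c2 = q
    # coefficients of the degree-4 product, highest degree first
    coeffs = [a1*a2, a1*b2 + b1*a2, a1*c2 + b1*b2 + c1*a2, b1*c2 + c1*b2, c1*c2]
    # integral of x^n over [0, 1] is 1/(n+1); coeffs[i] multiplies x^(4-i)
    return sum(c / (5 - i) for i, c in enumerate(coeffs))

# <x, x> = integral of x^2 over [0, 1] = 1/3
print(inner((0.0, 1.0, 0.0), (0.0, 1.0, 0.0)))
```

Note this inner product is nothing like "multiply components and add": the choice of inner product is extra structure layered on top of the bare vector space.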

































                      answered May 13 at 3:03 by mmesser314




















                          1












                          How would I compute their dot product?

                          You pretty much have to convert them to the same basis. You can multiply them out, getting nine different terms, and then find the dot product in terms of the nine dot products of the basis vectors, but the math is pretty much the same as converting to the same coordinate system.

                          In particular, is there a more formal/abstract/generalized definition of the dot product (one that would allow me to compute $\vec e_1 \cdot \vec e_1'$ without converting the vectors to the same coordinate system)?

                          The value of $\vec e_1 \cdot \vec e_1'$ is an empirical value. You can't calculate it simply from a definition.

                          Even if I did convert the vectors to the same coordinate system, why do we know that the result will be the same if I multiply the components in the primed system versus in the unprimed system?

                          Given a physical system in which "length" and "angle" are defined, the dot product is invariant under rotations and reflections, i.e. orthonormal transformations. So given two coordinate systems, as long as the axes are orthogonal to each other within each coordinate system, and the two coordinate systems have the same origin and the same scale (one unit is the same length, regardless of direction or coordinate system), dot products will be the same.

                          In that case, the change of basis can be represented by a matrix $U$ such that $U^* U = I$. (For real numbers, $U^*$ is just the transpose $U^T$, so I'll use that for the rest, since presumably you're asking about vectors over the real numbers.) The dot product of two vectors $x$ and $y$ is $x^T y$. If $x' = Ux$ and $y' = Uy$, then the dot product of $x'$ and $y'$ is $x'^T y' = (Ux)^T Uy = x^T U^T U y = x^T I y = x^T y$.
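The invariance argument $x'^T y' = x^T U^T U y = x^T y$ can be checked numerically. This sketch (helper names `matvec` and `dot` are illustrative) uses a rotation about the $z$-axis as the orthonormal matrix $U$:

```python
import math

def matvec(M, v):
    # matrix-vector product
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def dot(x, y):
    # component formula for the dot product
    return sum(a * b for a, b in zip(x, y))

# Rotation about the z-axis: U^T U = I, so it is an orthonormal transformation.
t = 0.7  # arbitrary angle
U = [[math.cos(t), -math.sin(t), 0.0],
     [math.sin(t),  math.cos(t), 0.0],
     [0.0,          0.0,         1.0]]

x, y = [1.0, 2.0, 3.0], [-1.0, 0.5, 2.0]
xp, yp = matvec(U, x), matvec(U, y)  # the "primed" components

print(abs(dot(xp, yp) - dot(x, y)) < 1e-12)  # True
```

Any reflection, or any product of rotations and reflections, would work equally well as `U`; a matrix that rescales or shears would not, which is exactly the restriction the answer states.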






                          share|cite|improve this answer









                          $endgroup$

















                            1












                            $begingroup$


                            How, would I compute their dot product?




                            You pretty much have to convert them to the same basis system. You can multiply them out and get nine different terms, and then find the dot product in terms of the nine dot products of the basis vectors, but the math is pretty much the same as converting to the same coordinate system.




                            In particular, is there a more formal/abstract/generalized definition of the dot product (that would allow me to compute e1→⋅e′1→ without converting the vectors to the same coordinate system)?




                            The value of $vece_1 cdot vece_1'$ is an empirical value. You can't calculate it simply from a definition.




                            Even if I did convert the vectors to the same coordinate system, why do we know that the result will be the same if I multiply the components in the primed system versus in the unprimed system?




                            Given a physical system in which "length" and "angle" are defined, the dot product is invariant under rotations and reflections, i.e. orthonormal transformations. So given two coordinate systems, as long the axes are orthogonal to each other within each coordinate system, and the two coordinate systems have the same origin and the same scale (one unit is the same length, regardless of which direction or coordinate system), dot products will be the same.



                            In that case, the change of basis can be represented with a matrix $U$ such that $(U^*)U=I$ (For real numbers, $U^*$ is just the transpose, so I'll be using that for the rest, since presumably you're asking about vectors over the real numbers). The dot product of two vectors $x$ and $y$ is $x^Ty$. If $x'=UX$ and $y'=Uy$, then the dot product of $x'$ and $y'$ is $x'^Ty'=(Ux)^TUy=x^TU^TUy=x^TIy=x^Ty$






























                              answered May 13 at 18:09









                              Acccumulation























                                  The formula



                                  $$\langle x_1,x_2,x_3\rangle \cdot \langle y_1,y_2,y_3\rangle = x_1 y_1 + x_2 y_2 + x_3 y_3$$



                                  is just a start and, as you go further in physics, will need quite a few generalizations. The assumptions here are that your vectors are (a) real, (b) three-dimensional, (c) tuples, (d) written in a "standard basis". There are points at which any of these may be broken: for example, one of the first things you learn in the special theory of relativity(*) is how to work with (b') four-dimensional vectors that (d') don't even allow a standard basis at all, so you get a different formula (of which this is a special case). Similarly, among the first things you need to grasp in quantum mechanics are (a') complex vector spaces of (b'') infinite-dimensional things that (c') may not be tuples at all (although they often can be written so, again allowing a formula of which this is a special case).



                                  You figured out yourself that (d) will not always be the case, and that's a splendid job on your part.



                                  Before any of those generalizations take place, the assumptions (a)-(d) are taken for granted. That is, we are working in a basis
                                  $$e_1 \equiv \langle 1,0,0 \rangle \\
                                  e_2 \equiv \langle 0,1,0 \rangle \\
                                  e_3 \equiv \langle 0,0,1 \rangle$$

                                  and
                                  $$e_1 \cdot e_1 = 1,\quad e_1 \cdot e_2 = 0,\quad e_1 \cdot e_3 = 0,\ \text{etc.}$$
                                  If a triple of numbers is written, it is in this basis. While there are other bases, they just represent concrete triples which you have to multiply by the corresponding coefficients and sum up, effectively transforming to $(e_1, e_2, e_3)$, if you insist on applying the scalar product formula above.



                                  The generalization to taking vectors not as triples of numbers, but as combinations of some abstract $e'_1$, $e'_2$, $e'_3$, then requires specifying what $e'_i \cdot e'_j$ is for all $i$, $j$, as other answers have already said in plenty of ways. If $(e_i)$ and $(e'_i)$ are two different bases, and you know the scalar product in one, the scalar product in the other can be computed from the relations between the basis vectors. And so can a formula for taking scalar products of two vectors, one in each of the two bases.



                                  The basic idea remains, though, and it is a good idea to get oneself familiarized with all the aspects of the above as deeply as possible: to understand the relation between scalar product and norm, orthogonality, expression of geometrical properties and relations (length, angle, distance), etc., before things get too abstract. That's why many texts just hold on to the simplest formula as long as they can.




                                  To actually answer your question: let



                                  $$\vec{x} = x_1 \vec{e}_1 + x_2 \vec{e}_2 + x_3 \vec{e}_3$$
                                  $$\vec{y} = y_1 \vec{e}_1' + y_2 \vec{e}_2' + y_3 \vec{e}_3'$$



                                  such that $(\vec{e}_1, \vec{e}_2, \vec{e}_3)$ is the standard basis. Let further



                                  $$\vec{e}_i' = \sum_{j=1}^3 E_{i,j} \vec{e}_j,$$



                                  so using distributivity and linearity it holds that



                                  $$\vec{e}_i' \cdot \vec{e}_k
                                  = \left( \sum_{j=1}^3 E_{i,j} \vec{e}_j \right) \cdot \vec{e}_k
                                  = \sum_{j=1}^3 E_{i,j} \left( \vec{e}_j \cdot \vec{e}_k \right)
                                  = \sum_{j=1}^3 E_{i,j} \delta_{jk} \;(**)
                                  = E_{i,k},$$



                                  (also $\vec{e}_k \cdot \vec{e}_i' = E_{i,k}$), so



                                  $$\vec{x} \cdot \vec{y}
                                  = \left( \sum_{i=1}^3 x_i \vec{e}_i \right) \cdot \left( \sum_{j=1}^3 y_j \vec{e}_j' \right)
                                  = \sum_{i=1}^3 \sum_{j=1}^3 x_i y_j \left( \vec{e}_i \cdot \vec{e}_j' \right)
                                  = \sum_{i=1}^3 \sum_{j=1}^3 x_i y_j E_{j,i}.$$



                                  You can use this formula for taking dot products of two vectors in different bases.
                                  I'm not sure if this counts as not converting to the same basis or not: you will need the conversion matrix $(E_{i,j})$ anyway. You won't need to explicitly write $\vec{y}$ in the $(\vec{e}_i)$ basis beforehand, though.




                                  (*) Mathematically speaking, special relativity does not use an actual 'scalar product'. But for my example this suffices without further details.



                                  (**) $\delta_{jk}$ is shorthand for "one when $j=k$ and zero otherwise".
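                                  The final formula $\vec{x} \cdot \vec{y} = \sum_{i,j} x_i y_j E_{j,i}$ can be checked numerically. A minimal sketch, where the matrix $E$ and the component vectors are made-up examples (numpy is an assumption, not part of the original answer):

```python
import numpy as np

# Standard basis e_i and a primed basis e'_i = sum_j E[i, j] e_j.
# E here is a made-up example (a rotation about z); the formula below
# works for any invertible E.
phi = np.pi / 4
E = np.array([[np.cos(phi),  np.sin(phi), 0.0],
              [-np.sin(phi), np.cos(phi), 0.0],
              [0.0,          0.0,         1.0]])

x = np.array([1.0, 2.0, 3.0])   # components of x in the standard basis
y = np.array([0.5, -1.0, 4.0])  # components of y in the primed basis

# x . y = sum over i, j of x_i * y_j * E[j, i], i.e. x^T E^T y
dot_mixed = x @ E.T @ y

# Check: first convert y to standard-basis components (y_std = E^T y),
# then use the ordinary component formula.
y_std = E.T @ y
print(np.isclose(dot_mixed, x @ y_std))  # True
```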


































                                      edited May 14 at 7:35

























                                      answered May 14 at 7:28









                                      The Vee



























