# What is the difference between the dot product and the element-by-element multiplication?

What is the difference between the dot product notation $$\cdot$$ and the element-by-element multiplication notation $$\odot$$ in statistics? I referred to Hamilton's Time-Series Analysis, and these seem to have the same definition. For instance, for two $$n\times 1$$ vectors $$\vec{a}$$ and $$\vec{b}$$, is
$$\vec{a}\odot \vec{b}\overset{?}{\equiv} \vec{a} \cdot \vec{b}=\sum\limits_{i=1}^n a_ib_i$$

The difference, operationally, is the aggregation by summation. With the dot product, you multiply the corresponding components and add those products together. With the Hadamard product (element-wise product), you multiply the corresponding components but do not aggregate by summation, leaving a new vector with the same dimension as the original operand vectors. Consequently, the dot product of two vectors gives a scalar, while the Hadamard product of two vectors gives a vector.

$$c = \vec{x}^T \cdot \vec{y} = \begin{bmatrix} x_{1} \\ x_{2} \\ \vdots \\ x_{n} \end{bmatrix}^T \cdot \begin{bmatrix} y_{1} \\ y_{2} \\ \vdots \\ y_{n} \end{bmatrix} = [x_1, x_2, \cdots, x_n] \cdot \begin{bmatrix} y_{1} \\ y_{2} \\ \vdots \\ y_{n} \end{bmatrix} = \sum_{i=1}^n x_iy_i \in \mathbb{R}$$
$$\vec{z} = \vec{x} \odot \vec{y} = \begin{bmatrix} x_{1} \\ x_{2} \\ \vdots \\ x_{n} \end{bmatrix} \odot \begin{bmatrix} y_{1} \\ y_{2} \\ \vdots \\ y_{n} \end{bmatrix} = \begin{bmatrix} x_1y_{1} \\ x_2y_{2} \\ \vdots \\ x_ny_{n} \end{bmatrix} \in \mathbb{R}^n$$

Note that $$\vec{x}, \vec{y}$$ must have the same dimension. Many sources define the Hadamard product in terms of matrices, but for general tensors it suffices that the operands have the same shape. For two vectors, this is tantamount to having the same dimension.
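The same-shape requirement is easy to see in NumPy, where both the element-wise `*` and `np.dot` reject vectors of mismatched length (a minimal sketch; the array values are arbitrary):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0])  # different length from a

try:
    a * b  # Hadamard product requires matching (broadcastable) shapes
except ValueError as e:
    print("Hadamard product failed:", e)

try:
    np.dot(a, b)  # dot product likewise requires matching lengths
except ValueError as e:
    print("dot product failed:", e)
```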

So in general $$\vec{x} \cdot \vec{y} \neq \vec{x} \odot \vec{y}$$.
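A quick numerical illustration of the two operations (a sketch using NumPy with arbitrary example vectors): the dot product collapses to a scalar, the Hadamard product keeps the vector shape, and summing the Hadamard product recovers the dot product.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

dot = np.dot(x, y)   # scalar: 1*4 + 2*5 + 3*6 = 32.0
had = x * y          # vector: [4.0, 10.0, 18.0]

print(dot)               # 32.0
print(had)               # [ 4. 10. 18.]
print(had.sum() == dot)  # True: summing the Hadamard product gives the dot product
```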

Furthermore, $$\|\vec{x} \odot \vec{y}\| \leq \|\vec{x}\|\|\vec{y}\|$$ and $$\|\vec{x} \cdot \vec{y}\| \leq \|\vec{x}\| \|\vec{y}\|$$ (the latter is the Cauchy–Schwarz inequality). But since $$\Pr\left( \|\vec{x} \cdot \vec{y}\| \leq \|\vec{x} \odot \vec{y}\| \right) \approx .67$$ for some choice of probability space (see the link on $$\|\vec{x} \odot \vec{y}\| \leq \|\vec{x}\|\|\vec{y}\|$$), there isn't a direct inequality between their norms.
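Both bounds, and the lack of a fixed ordering between the two norms, can be checked by simulation. The sketch below assumes i.i.d. standard normal entries and $$n = 3$$, which is just one arbitrary choice of probability space; the empirical frequency depends on that choice, so no particular value is asserted.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 3, 10_000
count = 0  # draws where |x . y| <= ||x ⊙ y||

for _ in range(trials):
    x = rng.standard_normal(n)
    y = rng.standard_normal(n)
    bound = np.linalg.norm(x) * np.linalg.norm(y)
    # both norm bounds hold on every draw (small tolerance for float rounding)
    assert np.linalg.norm(x * y) <= bound + 1e-12
    assert abs(x @ y) <= bound + 1e-12
    if abs(x @ y) <= np.linalg.norm(x * y):
        count += 1

# the frequency is strictly between 0 and 1: neither norm dominates the other
print(count / trials)
```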

Footnote: The dot product is also an example of an inner product while the Hadamard product is not. This is important for certain forms of rotational invariance, but that is not really a ‘statistics’ SE point.