# What is the difference between entropy and deviance?

For classification tasks using decision trees, the formulas for these look almost identical. So how are they different, or are they the same? What is the purpose of each as an impurity measure?

$\text{Entropy}~(p_1, p_2) = -\sum_{i=1}^{2} p_i \log (p_i)$

Here the $p_i$ are class fractions. For example, if a node contains 2 Yes and 3 No, then $p_1 = 2/5$ and $p_2 = 3/5$.

$\text{Deviance}~D = -2\sum_k n_k \log(p_k)$, where $k$ indexes the classes in a leaf and $n_k$ is the count of class $k$.

Both are used as impurity measures. But I am not able to understand the difference between these.

They are the same; the difference is just nomenclature among authors. Since $n_k = n\,p_k$, where $n$ is the number of observations in the node, the deviance is simply $2n$ times the entropy, so the two rank candidate splits identically. Gini is different, though: in your notation it would be $1 - \sum p_i^2$.
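A quick numerical check of this equivalence, using the 2 Yes / 3 No node from the question (plain Python with natural logs; variable names are my own):

```python
import math

# Node with 2 "Yes" and 3 "No", as in the question
counts = [2, 3]
n = sum(counts)                      # n = 5
p = [c / n for c in counts]          # p = [0.4, 0.6]

# Entropy: -sum_i p_i * log(p_i)
entropy = -sum(pi * math.log(pi) for pi in p)

# Deviance: -2 * sum_k n_k * log(p_k)
deviance = -2 * sum(c * math.log(pi) for c, pi in zip(counts, p))

# Gini: 1 - sum_i p_i^2
gini = 1 - sum(pi ** 2 for pi in p)

print(entropy, deviance, gini)
# deviance equals 2 * n * entropy, confirming they are the same
# impurity measure up to a constant factor; Gini differs (0.48 here)
```

Because the factor $2n$ is constant within a node, minimizing deviance and minimizing entropy choose the same splits.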