# How to compare observed vs. expected events?

Suppose I have one sample of frequencies of 4 possible events:

E1 - 5
E2 - 1
E3 - 0
E4 - 12


and I have the expected probabilities of my events to occur:

p1 - 0.2
p2 - 0.1
p3 - 0.1
p4 - 0.6


With the sum of the observed frequencies of my four events (18), I can calculate the expected frequencies of the events, right?

expectedE1 - 18 * 0.2 = 3.6
expectedE2 - 18 * 0.1 = 1.8
expectedE3 - 18 * 0.1 = 1.8
expectedE4 - 18 * 0.6 = 10.8
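
The calculation above can be written in R in one line (a sketch using the counts and probabilities from the question):

```r
# Observed counts and hypothesized probabilities from the question
observed <- c(5, 1, 0, 12)
probs    <- c(0.2, 0.1, 0.1, 0.6)

# Expected count for each event is n * p_i, where n is the total observed count
expected <- sum(observed) * probs
expected  # 3.6 1.8 1.8 10.8
```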


How can I compare the observed values against the expected values, to test whether my calculated probabilities are good predictors?

I thought of a chi-squared test, but the result changes with the sample size (n = 18): if I multiply the observed values by 1342 and use the same method, the result is different. Maybe a paired Wilcoxon test would work, but what do you suggest?

If you can suggest something in R, that would be even better.

You mention that you get different results if you multiply all values by $1342$. This is not a problem. You should get very different results. If you flip a coin and it comes up heads, this doesn’t say very much. If you flip a coin $1342$ times and you get heads every time, you have much more information suggesting that the coin is not fair.

Usually you want to use alternatives to a $\chi^2$ test when the expected number of occurrences is low (say, under $5$) in a large fraction of your categories (say, at least $20\%$). One possibility is Fisher’s exact test, which is implemented in R. You can view the $\chi^2$ test as an approximation to Fisher’s exact test, and the approximation is only good when most of the expected counts are large.
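
For the one-sample setup in the question, `chisq.test` accepts the hypothesized probabilities directly via its `p` argument (note that `fisher.test` expects a contingency table rather than a single vector of counts against given probabilities). A sketch: since two of the expected counts here are under $5$, the asymptotic p-value is suspect, and `simulate.p.value = TRUE` is one way to get an exact-style Monte Carlo p-value instead.

```r
observed <- c(5, 1, 0, 12)
probs    <- c(0.2, 0.1, 0.1, 0.6)

# Standard chi-squared goodness-of-fit test against the hypothesized probabilities.
# R will warn that the approximation may be incorrect (expected counts < 5).
res <- chisq.test(observed, p = probs)
res$expected   # 3.6 1.8 1.8 10.8 -- matches the hand calculation in the question
res$p.value

# Monte Carlo (simulated) p-value: does not rely on the chi-squared approximation,
# so it remains usable when expected counts are small
res_mc <- chisq.test(observed, p = probs, simulate.p.value = TRUE, B = 10000)
res_mc$p.value
```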