I am currently taking the PGM course by Daphne Koller on Coursera. In that course, we generally model a Bayesian network as a cause-and-effect directed graph over the variables that make up the observed data. But in the PyMC tutorials and examples, the models don't seem to be built the same way as in PGM, or at least I am confused: in PyMC, the parents of an observed real-world variable are often the parameters of the distribution used to model that variable.

Now my question is really a practical one. Suppose I have three variables for which data is observed (A, B, C); let's assume they are all continuous, just for the sake of it. From some domain knowledge, one can say that A and B cause C. So we have a BN here: A and B are the parents and C is the child.

Now, from the BN factorization P(A, B, C) = P(C | A, B) * P(A) * P(B), I can say A and B are normal distributions with some mu and sigma, but how do I model P(C | A, B)?

The general idea I want to learn is how to learn this BN using PyMC so that I can query it. Or do I have to augment the BN with the parameters of the model in some fashion? Is this problem solvable using PyMC, or have I got some fundamentals wrong?

Any help would be appreciated!

**Answer**

Take a look at this post on the Healthy Algorithms blog:

http://healthyalgorithms.com/2011/11/23/causal-modeling-in-python-bayesian-networks-in-pymc/

and also PyMC's tutorial:

http://pymc-devs.github.io/pymc/tutorial.html

Maybe you could try the following code snippet (assuming you have imported PyMC 2 as mc):

```
# A and B are root nodes of the BN: P(A) and P(B).
# Note: PyMC 2's Normal is parameterized by precision tau = 1/sigma**2.
A = mc.Normal('A', mu_A, tau_A)
B = mc.Normal('B', mu_B, tau_B)

# p_C is a deterministic function of A and B encoding P(C | A, B);
# fill in the dependency spec from your domain knowledge.
p_C = mc.Lambda('p_C', lambda A=A, B=B: <<dependency spec goes here>>, doc='Pr[C|AB]')
C = mc.Bernoulli('C', p_C)
```
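As a concrete illustration of what a dependency spec might look like, here is a plain-numpy sketch that forward-samples such a BN. Two hypothetical choices are shown: a logistic link for a binary C (matching the `mc.Bernoulli` in the snippet above) and a Normal whose mean depends on A and B for the continuous C from the question. The weights `w_a`, `w_b`, `bias`, and `sigma_C` are made-up illustration values, not anything from the original post:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n = 10_000

# Root nodes of the BN: A and B are unconditioned normals, P(A) and P(B).
A = rng.normal(loc=0.0, scale=1.0, size=n)
B = rng.normal(loc=2.0, scale=0.5, size=n)

# Hypothetical dependency spec for a *binary* C, as in the answer's snippet:
# P(C=1 | A, B) = sigmoid(w_a*A + w_b*B + bias)  -- made-up weights.
w_a, w_b, bias = 1.5, -1.0, 0.2
p_C = sigmoid(w_a * A + w_b * B + bias)
C_binary = rng.random(n) < p_C

# Hypothetical dependency spec for a *continuous* C, as in the question:
# C | A, B ~ Normal(mean = w_a*A + w_b*B + bias, sd = sigma_C)
sigma_C = 0.3
C_cont = rng.normal(loc=w_a * A + w_b * B + bias, scale=sigma_C)
```

In PyMC 2 terms, the `sigmoid(...)` expression is the kind of thing that would go inside the `mc.Lambda`, and for the continuous case you would swap `mc.Bernoulli` for an `mc.Normal` whose `mu` is that deterministic function of A and B. Learning the BN then amounts to giving the link's weights priors of their own and letting MCMC infer them from the observed A, B, C data.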

**Attribution**
*Source: Link, Question Author: zubinmehta, Answer Author: George*