I am not sure if this is the right place to ask this but here goes:
Sometimes two or more inputs to a neural network are related to a single “real world” entity, for example:
- the weight of a person, used to predict the probability of disease in a population, or
- the volume of a stock, used to predict the market.
When a single training set contains data about a number of these entities, how can we make a neural network understand that two inputs (or more) are related to the same entity?
Amongst all the people I have asked, the general consensus seems to be:
- Neural Networks do not work this way
- It is not possible
- Such a grouping of data is not required
- Neural Networks are supposed to find the relationship amongst inputs, you are not supposed to feed it the relationships
- The training data set needs to be reworked / reconfigured
- I have never heard of such a thing
So, obviously this is not in the mainstream.
Has anyone heard of any research in this direction?
P.S. If you agree with the above opinion (it can’t / shouldn’t be done) please provide a reason why.
Sometimes the correlation between any two input variables is calculated first, and the inputs are then partitioned into several independent sub-groups before training starts, as implemented in this paper.
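As a rough sketch of that pre-partitioning idea (the 0.5 threshold and the greedy grouping here are my own illustrative choices, not necessarily what the paper does):

```python
import numpy as np

def partition_by_correlation(X, threshold=0.5):
    """Greedily group input columns whose absolute pairwise
    correlation exceeds `threshold`; leftover columns stay alone."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    unassigned = set(range(X.shape[1]))
    groups = []
    while unassigned:
        i = unassigned.pop()
        group = [i]
        for j in list(unassigned):
            if corr[i, j] > threshold:
                group.append(j)
                unassigned.remove(j)
        groups.append(sorted(group))
    return groups

rng = np.random.default_rng(0)
a = rng.normal(size=200)
b = a + 0.1 * rng.normal(size=200)   # strongly correlated with a
c = rng.normal(size=200)             # independent of both
X = np.column_stack([a, b, c])
print(partition_by_correlation(X))   # columns 0 and 1 end up grouped, 2 alone
```

Each resulting sub-group could then be fed to its own sub-network, which is one way of encoding "these inputs belong together" before training.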
But generally, as @alto says, when you provide those inputs, the neurons will treat them as if they correspond to the same entity. Each neuron in the hidden layer responds to different variables to different extents, reflected by its connection strengths to those variables (i.e., the weights), and those responses are combined into a final response at the output layer (a linear combination, possibly passed through an activation function). During training the weights are adjusted so the network better reproduces the outputs it is given. Once training is done, the learned strengths between each neuron and each input variable let the network respond, to different degrees, to any new inputs; that is the prediction part.
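A minimal NumPy sketch of that forward pass (the layer sizes and the tanh activation are illustrative assumptions, not anything specific to the question):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network: 4 input variables -> 3 hidden neurons -> 1 output.
W1 = rng.normal(size=(4, 3))   # connection strengths, input -> hidden
b1 = np.zeros(3)
W2 = rng.normal(size=(3, 1))   # connection strengths, hidden -> output
b2 = np.zeros(1)

def forward(x):
    # Every hidden neuron responds to every input variable,
    # each weighted by its entry in W1.
    h = np.tanh(x @ W1 + b1)
    # The hidden responses are linearly combined at the output layer.
    return h @ W2 + b2

x = rng.normal(size=(1, 4))    # one sample with 4 input variables
print(forward(x).shape)        # one scalar prediction per sample
```

Note there is nothing in this structure that marks inputs 0 and 1 as "the same entity"; any such grouping has to emerge in the weights during training.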
Note that the neurons will reduce their connection strengths to input variables that turn out not to contribute much to the final output.