Since the decision tree algorithm splits on an attribute at every step, the maximum depth of a decision tree is equal to the number of attributes in the data.
Is this correct?
Answer
No, because the data can be split on the same attribute multiple times, at different thresholds along the same branch. This characteristic of decision trees is important because it allows them to capture nonlinearities in individual attributes.
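To see this concretely, a single attribute with a nonlinear effect already forces the tree to reuse that attribute at several thresholds, so the depth can exceed the attribute count. Below is a minimal sketch of that idea, assuming scikit-learn and a synthetic one-feature dataset; it is not the data or code behind the tree shown later in this answer.

```python
# Minimal sketch: one attribute, nonlinear target, tree deeper than the
# number of attributes because it splits on the same attribute repeatedly.
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))          # a single attribute
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 200)  # nonlinear in that attribute

tree = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X, y)

print("number of attributes:", X.shape[1])     # 1
print("tree depth:", tree.get_depth())         # typically 4, i.e. > 1
print(export_text(tree, feature_names=["x"]))  # "x" appears at many splits
```

Running this typically reports a depth of 4 despite there being only one attribute, and the printed rules split on `x` at several different thresholds along the same branch, which is exactly how the tree approximates the nonlinear relationship.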
Edit: In support of the point above, here's the first regression tree I created; note that volatile acidity and alcohol appear as split variables multiple times.
Attribution
Source: Link, Question Author: Qwerto, Answer Author: mkt