# Classification XGBoost vs Logistic Regression

I have a binary classification problem where the classes are slightly imbalanced (a 25%–75% distribution). I have a total of around 35 features after some feature engineering, and they are mostly continuous variables. I tried fitting a logistic regression model, an RF model, and an XGB model. They all seem to give me the same performance. My understanding is that XGB models generally fare a little better than logistic models for these kinds of problems, but in my case I see no improvement with the boosting model over the logistic model, even after tuning it a lot. What could be the reasons for that?

There is no reason to expect that a particular type of model $A$ has to outperform another type of model $B$ in every possible use-case. This extends to what is observed here: while XGBoost models do tend to be successful and generally provide competitive results, they are not guaranteed to beat a logistic regression model in every setting. In particular, if the true decision boundary is close to linear in the features (or in simple transformations of them that your feature engineering already captures), a well-regularised logistic regression can match a boosted tree ensemble, because there are few non-linearities or interactions left for the trees to exploit.