I was just looking up the XGBoost implementation in Python. What’s the difference between the ‘eta’ parameter in params and the learning_rates argument in the train function?


You can set the regularization parameters lambda and gamma for both tree boosting and linear boosting. In both cases lambda is the L2 regularization term, and alpha is the L1 term. You can decide which to use depending on the data. Gamma is used to decide whether a split should be made at a leaf: the larger the value of gamma, the more conservative the tree will be. Personally I would not touch gamma unless I had to.
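The role gamma plays can be sketched in a few lines. Below is a minimal illustration of the split-gain rule from the XGBoost paper, where G and H are sums of first- and second-order gradients over the instances in each child; the numbers here are made up purely for demonstration:

```python
def split_gain(G_L, H_L, G_R, H_R, lam, gamma):
    """Gain of splitting a leaf into left/right children (XGBoost paper).
    lam is the L2 term (lambda); a split is only kept if the gain is positive,
    so a larger gamma vetoes more candidate splits."""
    def score(G, H):
        return G * G / (H + lam)
    return 0.5 * (score(G_L, H_L) + score(G_R, H_R)
                  - score(G_L + G_R, H_L + H_R)) - gamma

# Illustrative gradient sums for one candidate split
G_L, H_L, G_R, H_R = -4.0, 5.0, 3.0, 4.0
lam = 1.0

print(split_gain(G_L, H_L, G_R, H_R, lam, gamma=0.0))   # positive: split is made
print(split_gain(G_L, H_L, G_R, H_R, lam, gamma=10.0))  # negative: split rejected
```

With gamma = 0 any positive-gain split is accepted; raising gamma subtracts a fixed penalty from every candidate split, which is why the tree becomes more conservative.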

Hope this answers your query. You can find more info on XGBoost parameters at:

http://xgboost.readthedocs.org/en/latest/parameter.html

For an in-depth understanding of regularization, check my other blog post:

https://chaoticsenses.wordpress.com/2016/01/20/taming-the-beast-with-regularization-3/


I have a question – how do we define the regularization parameters ‘lambda’ and ‘gamma’ for classification?

I am using the Python version of xgboost and I see a parameter ‘gamma’, which is the minimum loss reduction required for a split. Is it related to regularization?

Also, the parameters ‘lambda’ and ‘alpha’ are only for the linear booster.
