Training on GPU is not successful using XGBClassifier when training data is too large #10301
Hi, if you replace the …

@trivialfis, if … Data type of …
That makes sense; thank you for sharing. Could you please share the type of input, such as whether it's a pandas dataframe or a cudf dataframe?
@trivialfis, thank you for your quick reply. Here is what you requested: …
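The reply itself was not preserved in this copy of the thread. For illustration only, the input container types asked about above could be reported with something like this (the exact checks shown are assumptions about what was shared):

```python
# Illustrative only: reporting the input container types asked about above.
print(type(X_train), type(y_train))
# If X_train is a dataframe (pandas or cuDF), its column dtypes matter too:
print(getattr(X_train, "dtypes", getattr(X_train, "dtype", None)))
```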
Hi, I noticed that using the native interface, you are training a regression model with the default objective (rmse), while it's a classification model when sklearn is used. Could you please fix that?
@trivialfis, thanks for noticing that. I've modified the code (as below), and it runs successfully on GPU.
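The modified code block was not preserved here. A minimal sketch of what the fix plausibly looked like, assuming binary labels, `device="cuda"`, and `tree_method="hist"` (these specifics are assumptions; only the switch to a classification objective is confirmed by the thread):

```python
import xgboost as xgb

# Sketch of the corrected native-interface call: set a classification
# objective explicitly instead of relying on the default regression one.
dtrain = xgb.DMatrix(X_train, label=y_train)
params = {
    "device": "cuda",                # run training on the GPU (xgboost >= 2.0)
    "tree_method": "hist",
    "objective": "binary:logistic",  # assumption: binary classification labels
}
booster = xgb.train(params, dtrain)
```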
I have `X_train` and `y_train` with shapes `(483903, 2897)` and `(483903,)`, respectively. Training XGBoost is successful on GPU using the following code:
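The code block itself is missing from this copy of the issue. A plausible sketch of the working native-interface call, under the assumption that `fit_kwargs` held GPU settings such as `device="cuda"`; note that no objective is set, so the default regression objective applies, which is the mismatch trivialfis points out above:

```python
import xgboost as xgb

# Sketch of the working native-interface training call. No objective is
# set, so the default regression objective is used, which is the
# mismatch pointed out in the thread.
dtrain = xgb.DMatrix(X_train, label=y_train)
fit_kwargs = {"device": "cuda", "tree_method": "hist"}  # assumed GPU settings
booster = xgb.train(fit_kwargs, dtrain)
```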
However, the following code does not run on GPU successfully:
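This code block is also missing. A minimal sketch of the failing scikit-learn interface call, with the same assumed GPU parameters passed to `XGBClassifier`:

```python
from xgboost import XGBClassifier

# Sketch of the failing scikit-learn interface call with the same
# assumed GPU settings as the native call above.
clf = XGBClassifier(device="cuda", tree_method="hist")
clf.fit(X_train, y_train)
```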
It throws an error. It is expected that if `xgb.train(fit_kwargs, dtrain)` runs on GPU successfully, then fitting using `XGBClassifier` should also run on GPU successfully.

xgboost version = 2.0.3