Add PerpetualBooster #641

Open
deadsoul44 opened this issue Sep 21, 2024 · 5 comments

@deadsoul44

Add PerpetualBooster as an additional algorithm.

https://github.com/perpetual-ml/perpetual

It does not need hyperparameter tuning and supports multi-output and multi-class cases.

I can create a pull request if you are willing to review and accept it.
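
Roughly, usage looks like this (a minimal sketch based on the project README; exact parameter names may differ between releases):

```python
from sklearn.datasets import load_breast_cancer
from perpetual import PerpetualBooster

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# No hyperparameter search: the single `budget` argument controls how much
# effort the booster spends; everything else is decided internally.
model = PerpetualBooster(objective="LogLoss")
model.fit(X, y, budget=1.0)

predictions = model.predict(X)
```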

@PGijsbers
Collaborator

I think it's interesting, but I am planning to add a feature soon that allows integration scripts to live in separate, independent repositories. I'll leave another message here when I have something experimental going; perhaps it would be interesting to try out?

@deadsoul44
Author

That would be really helpful for benchmarking our algorithm. I'll wait for it.

@PGijsbers
Collaborator

You can always set up a local integration yourself if you just want to run the benchmark with your framework; there is no need for it to be included in this codebase for that.
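
Roughly, a local integration is a framework folder with an `exec.py` that exposes a `run(dataset, config)` function, plus an entry in the frameworks definition file. Something along these lines (a rough, untested sketch following the pattern of the existing integrations; attribute names are approximate):

```python
# frameworks/PerpetualBooster/exec.py -- rough sketch only; names and
# attributes are approximations of the patterns used by existing integrations.
from frameworks.shared.callee import call_run, result


def run(dataset, config):
    from perpetual import PerpetualBooster

    is_classification = config.type == 'classification'
    X_train, y_train = dataset.train.X, dataset.train.y
    X_test, y_test = dataset.test.X, dataset.test.y

    model = PerpetualBooster(objective="LogLoss" if is_classification else "SquaredLoss")
    model.fit(X_train, y_train)

    predictions = model.predict(X_test)
    # For classification, class probabilities would also be passed to result()
    # so that AUC and log loss can be computed.
    return result(
        output_file=config.output_predictions_file,
        predictions=predictions,
        truth=y_test,
    )


if __name__ == '__main__':
    call_run(run)
```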

@deadsoul44
Author

deadsoul44 commented Oct 16, 2024

I compared PerpetualBooster against AutoGluon (BQ), which is the number one framework in the benchmark, and got some promising results in local tests on small and medium tasks. I have some questions:

  • All tasks in the small, medium, and large YAML files are classification tasks. Where are the regression tasks?
  • I want to run the benchmark with only PerpetualBooster on AWS to compare the results against the rest of the frameworks. What is the default EC2 instance type, and what is the correct command to run on AWS? I don't want to make a mistake, because of the costs.
  • Are you willing to review and merge a pull request to include PerpetualBooster in the repo and website if the results are good enough?
  • The default metrics for classification are AUC and log loss, but I think the F1 score is a better metric, because frameworks can overfit to log loss in particular. Is it possible to include F1 as a default or additional metric?

P.S. I checked the repo and website before asking these questions. Thanks in advance.

@deadsoul44
Author

Answering my own first two questions after reading the paper :)
https://jmlr.org/papers/volume25/22-0493/22-0493.pdf

Correct me if I am wrong.
