Deep learning, i.e. deep neural networks (DNNs), has become a key technology in recent years. However, the design of new, problem-specific network topologies is still a time- and compute-intensive process. So far, the design of deep learning solutions for specific applications mostly follows a purely heuristic trial-and-error process based on human expert knowledge and experience. Every network topology has to be assembled from a large number of layer types and their configurations. Most layers, as well as the employed training methods, have complex parameter spaces (so-called hyperparameters) whose impact on the final DNN performance is as large as that of the network topology itself.
In this project, we aim to facilitate a more efficient topology design process, making DNNs accessible to inexperienced users.
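To make this concrete, the sketch below shows a minimal random search over a small hyperparameter configuration space. It is purely illustrative and not part of the DeToL code base: the parameter names, value ranges, and the stand-in evaluation function are assumptions chosen for readability; a real run would train and validate a DNN for each sampled configuration.

```python
# Minimal, illustrative sketch of automated hyperparameter search
# (random search). All names and ranges are assumptions for this example.
import random

# Each hyperparameter is described by a function that draws one value.
search_space = {
    "learning_rate": lambda: 10 ** random.uniform(-4, -1),   # log-uniform
    "num_layers":    lambda: random.randint(2, 8),
    "layer_width":   lambda: random.choice([64, 128, 256, 512]),
    "dropout":       lambda: random.uniform(0.0, 0.5),
}

def sample_config():
    """Draw one configuration from the space defined above."""
    return {name: draw() for name, draw in search_space.items()}

def evaluate(config):
    """Stand-in for an expensive training run.

    A real evaluation would build and train a DNN with `config` and
    return its validation accuracy; here we return a synthetic score.
    """
    return -abs(config["learning_rate"] - 0.01) - 0.01 * config["num_layers"]

best_config, best_score = None, float("-inf")
for _ in range(50):                  # fixed search budget of 50 trials
    config = sample_config()
    score = evaluate(config)
    if score > best_score:
        best_config, best_score = config, score

print("best configuration found:", best_config)
```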
DeToL is funded by BMBF. Runtime: October 2018 - September 2021.
- Siems, Julien and Zimmer, Lucas and Zela, Arber and Lukasik, Jovita and Keuper, Margret and Hutter, Frank, NAS-Bench-301 and the Case for Surrogate Benchmarks for Neural Architecture Search, In: arXiv:2008.09777 (2020) (also at NeurIPS 2020 workshop on meta-learning)
- Zaidi, Sheheryar and Zela, Arber and Elsken, Thomas and Holmes, Chris and Hutter, Frank, Neural Ensemble Search for Performant and Calibrated Predictions, In: Workshop on Uncertainty and Robustness in Deep Learning (UDL@ICML'20) (2020), Oral Presentation
- Lukasik, Jovita and Friede, David and Zela, Arber and Stuckenschmidt, Heiner and Hutter, Frank and Keuper, Margret, Smooth Variational Graph Embeddings for Efficient Neural Architecture Search, In: arXiv:2010.04683 (2020)
- Zela, Arber and Siems, Julien and Hutter, Frank, NAS-Bench-1Shot1: Benchmarking and Dissecting One-shot Neural Architecture Search, In: International Conference on Learning Representations 2020
- Zela, Arber and Elsken, Thomas and Saikia, Tonmoy and Marrakchi, Yassine and Brox, Thomas and Hutter, Frank, Understanding and Robustifying Differentiable Architecture Search, In: International Conference on Learning Representations 2020, Oral Presentation (Top 7%)
- Elsken, Thomas and Staffler, Benedikt and Metzen, Jan Hendrik and Hutter, Frank, Meta-Learning of Neural Architectures for Few-Shot Learning, In: The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2020), Oral Presentation (Top 6%)
- J. Lukasik, M. Keuper, M. Singh, J. Yarkony, A Benders Decomposition Approach to Correlation Clustering, SC20 Workshop on Machine Learning in High Performance Computing Environments (MLHPC), Oral Presentation
- Jovita Lukasik, David Friede, Heiner Stuckenschmidt, M. Keuper, Neural Architecture Performance Prediction Using Graph Neural Networks, Proc. of the German Conference on Pattern Recognition (GCPR), 2020.
- Ying, C., Klein, A., Real, E., Christiansen, E., Murphy, K., & Hutter, F. (2019). NAS-Bench-101: Towards Reproducible Neural Architecture Search. arXiv preprint arXiv:1902.09635.
- Ram, R., Müller, S., Pfreundt, F. J., Gauger, N. R., & Keuper, J. (2019, November). Scalable Hyperparameter Optimization with Lazy Gaussian Processes. In 2019 IEEE/ACM Workshop on Machine Learning in High Performance Computing Environments (MLHPC) (pp. 56-65). IEEE. - Source Code
- Habelitz, P. M., & Keuper, J. (2020). PHS: A Toolbox for Parallel Hyperparameter Search. arXiv preprint arXiv:2002.11429. - Source Code
- Zela, A., Elsken, T., Saikia, T., Marrakchi, Y., Brox, T., & Hutter, F. (2020). Understanding and Robustifying Differentiable Architecture Search. In International Conference on Learning Representations 2020. - Source Code
- Zela, A., Siems, J., & Hutter, F. (2020). NAS-Bench-1Shot1: Benchmarking and Dissecting One-shot Neural Architecture Search. In International Conference on Learning Representations 2020. - Source Code
- A Variational-Sequential Graph Autoencoder for Neural Architecture Performance Prediction, D Friede, J Lukasik, H Stuckenschmidt, M Keuper, arXiv preprint arXiv:1912.05317
- Massively parallel benders decomposition for correlation clustering, M Keuper, J Lukasik, M Singh, J Yarkony, arXiv preprint arXiv:1902.05659
- Tonmoy Saikia, Yassine Marrakchi, Arber Zela, Frank Hutter, Thomas Brox, AutoDispNet: Improving Disparity Estimation With AutoML, IEEE International Conference on Computer Vision (ICCV), 2019 - Source code
- Ram, R., Müller, S., Pfreundt, F. J., Gauger, N. R., & Keuper, J. "Scalable Hyperparameter Optimization with Lazy Gaussian Processes." 2019 IEEE/ACM Workshop on Machine Learning in High Performance Computing Environments (MLHPC). IEEE, 2019.
- Chatzimichailidis, A., Keuper, J., Pfreundt, F. J., & Gauger, N. R. "GradVis: Visualization and Second Order Analysis of Optimization Surfaces during the Training of Deep Neural Networks." 2019 IEEE/ACM Workshop on Machine Learning in High Performance Computing Environments (MLHPC). IEEE, 2019.
- Y. Yang, Y. Yuan, A. Chatzimichailidis, R. J. G. van Sloun, L. Lei, S. Chatzinotas, "ProxSGD: Training Structured Neural Networks under Regularization and Constraints," in Proc. International Conference on Learning Representations (ICLR), Apr. 2020.
- Lucas Zimmer, Julien Siems, Arber Zela, Frank Hutter: “LCBench: A learning curve benchmark on OpenML data”
- D. Brayford, S. Vallecorsa, A. Atanasov, F. Baruffa and W. Riviera, "Deploying AI Frameworks on Secure HPC Systems with Containers.," 2019 IEEE High Performance Extreme Computing Conference (HPEC), Waltham, MA, USA, 2019, pp. 1-6.
- D. Brayford, S. Vallecorsa, A. Atanasov, F. Baruffa and W. Riviera, "Deploying Scientific AI Networks at Petaflop Scale on Secure Large Scale HPC Production Systems with Containers." 2020 PASC, 2020.
- https://github.com/automl/nas_benchmarks/tree/master/experiment_scripts
- https://github.com/automl/LCBench
- https://github.com/lmb-freiburg/autodispnet
- NASLib (includes implementations of the Predictors paper, NB301, NB101, RobustDARTS): https://github.com/automl/NASLib
- NAS-Bench-301: https://github.com/automl/nasbench301
- RobustDARTS: https://github.com/automl/RobustDARTS
- NAS-Bench-1Shot1: https://github.com/automl/nasbench-1shot1
- Neural Ensemble Search: https://github.com/automl/nes
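As an illustration of how the tabular NAS benchmarks referenced above are typically used, the following sketch queries NAS-Bench-101 for the pre-computed performance of a single cell architecture instead of training it from scratch. It assumes the public `nasbench` package released by the NAS-Bench-101 authors (`api.NASBench`, `api.ModelSpec`, `query`); the dataset path and the concrete example cell are placeholders.

```python
# Illustrative sketch: look up a cell architecture in the NAS-Bench-101
# table instead of training it. Assumes the public `nasbench` package;
# the file path and the example cell below are placeholders.
from nasbench import api

# Load the pre-computed benchmark table (a large .tfrecord download).
nasbench = api.NASBench('/path/to/nasbench_only108.tfrecord')

# A cell is a small DAG: an upper-triangular adjacency matrix plus one
# operation label per node (node 0 is the input, the last node the output).
matrix = [[0, 1, 1, 0, 0, 0, 0],
          [0, 0, 0, 1, 0, 0, 0],
          [0, 0, 0, 0, 1, 0, 0],
          [0, 0, 0, 0, 0, 1, 0],
          [0, 0, 0, 0, 0, 1, 0],
          [0, 0, 0, 0, 0, 0, 1],
          [0, 0, 0, 0, 0, 0, 0]]
ops = ['input', 'conv3x3-bn-relu', 'conv1x1-bn-relu', 'maxpool3x3',
       'conv3x3-bn-relu', 'conv3x3-bn-relu', 'output']

spec = api.ModelSpec(matrix=matrix, ops=ops)

# Returns metrics recorded when this cell was trained for the benchmark,
# e.g. validation accuracy and training time, without any new training.
data = nasbench.query(spec)
print(data['validation_accuracy'], data['training_time'])
```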