[Automated ML][Training] Many models computed during an experiment do not increase performance - possible optimization?

  • Question

  • I see a pattern in Automated ML experiments: one of the first 6 iterations is usually the best of all (out of up to 60 iterations during one hour of compute), and most of the following iterations actually produce worse results (in my case, with R² as the optimization target).

    Would it be possible to use the results of the previous iterations to better select the parameters of subsequent iterations during an experiment, so that we reduce the number of "useless" iterations? A toy sketch of the idea follows.
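
    To illustrate what I mean, here is a minimal sketch in plain Python (nothing Azure-specific; the objective function is a hypothetical stand-in for "train a model, return R²"): instead of choosing every iteration's parameters independently, keep a history of results and bias the next candidate toward the best configuration seen so far.

    ```python
    import random

    def objective(alpha):
        # Hypothetical stand-in for a real training run that returns R^2.
        return 1.0 - (alpha - 0.3) ** 2

    history = []  # list of (parameter, score) results from past iterations
    for i in range(20):
        if history and random.random() < 0.7:
            # Exploit: perturb the best parameter found so far.
            best_alpha, _ = max(history, key=lambda h: h[1])
            alpha = min(1.0, max(0.0, best_alpha + random.gauss(0, 0.05)))
        else:
            # Explore: sample uniformly, as a naive search loop would.
            alpha = random.random()
        history.append((alpha, objective(alpha)))

    print(max(history, key=lambda h: h[1]))  # best (parameter, score) found
    ```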

    Monday, September 16, 2019 1:32 PM

All replies

  • Hi Michel, 

    Thank you for your feedback. You can configure the automated machine learning parameters that determine how many iterations are run over different models, which hyperparameter settings and advanced preprocessing/featurization options are tried, and what metric to look at when determining the best model. You can configure the settings for an automated training experiment in the Azure portal, in the workspace landing page (preview), or with the SDK. Each iteration corresponds to a different feature/algorithm/parameter combination.
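
    As a minimal sketch with the Python SDK (parameter names can vary between SDK versions, and the dataset and label column below are placeholders for your own data), capping the iteration count and setting an exit score are the knobs most relevant to cutting down on low-value iterations:

    ```python
    from azureml.train.automl import AutoMLConfig

    # Minimal sketch of an automated ML configuration (SDK v1 style).
    # `train_data` and `target` are placeholders for your own dataset
    # and label column name.
    automl_config = AutoMLConfig(
        task='regression',
        primary_metric='r2_score',      # metric used to rank models
        iterations=60,                  # max model/parameter combinations to try
        experiment_timeout_minutes=60,  # overall wall-clock budget
        experiment_exit_score=0.95,     # stop once R^2 reaches this value
        enable_early_stopping=True,     # stop if the score stops improving
        training_data=train_data,
        label_column_name=target,
    )
    ```

    You would then submit this configuration through an Experiment object in your workspace, e.g. experiment.submit(automl_config).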

    For now, I don't think we are using the previous results to optimize the selection. But I would highly recommend that you post your feedback to the UserVoice forum for the engineering group to review: https://feedback.azure.com/forums/257792-machine-learning/filters/new

    Thank you very much.

    Regards,

    Yutong

    Tuesday, September 17, 2019 7:55 PM