[Automated ML][Training] Many models computed during an experiment do not increase performance - possible optimization?

  • Question

  • I see a recurring pattern in Automated ML experiments: one of the first 6 iterations is usually the best of all (out of up to 60 iterations in one hour of compute), and most of the following iterations actually produce worse results (in my case, with R² as the optimization target).
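
    For reference, this is roughly how I pull the per-iteration scores out of a finished run (a minimal sketch assuming the azureml-core SDK; the experiment name and run ID below are placeholders for my actual values):

```python
# Sketch: list the R² of every child run of an AutoML parent run,
# assuming the azureml-core SDK. Experiment name and run ID are
# placeholders, not real values.
from azureml.core import Workspace, Experiment
from azureml.core.run import Run

ws = Workspace.from_config()                          # loads the workspace config.json
experiment = Experiment(ws, "my-automl-experiment")   # hypothetical experiment name
parent_run = Run(experiment, run_id="AutoML_<guid>")  # placeholder parent-run ID

scores = []
for child in parent_run.get_children():
    metrics = child.get_metrics()
    if "r2_score" in metrics:
        scores.append((child.id, metrics["r2_score"]))

# Highest R² first: in my runs the top entry is almost always
# one of the first few iterations.
for run_id, r2 in sorted(scores, key=lambda t: t[1], reverse=True):
    print(run_id, r2)
```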

    Would it be possible to use the results of the previous iterations to better select the parameters of the next iterations during an experiment, so that the number of "useless" iterations is reduced?
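
    Until then, the closest workaround I see is to cap the experiment myself with the early-stopping and exit-score settings; a minimal sketch, assuming the azureml-train-automl package (parameter names may differ between SDK versions, and the dataset and column names are placeholders):

```python
# Sketch: AutoMLConfig settings that cut an experiment short once
# results stop improving, assuming the azureml-train-automl package.
# Parameter names may vary across SDK versions; dataset/column names
# are hypothetical.
from azureml.core import Workspace, Dataset
from azureml.train.automl import AutoMLConfig

ws = Workspace.from_config()
train_data = Dataset.get_by_name(ws, "my-training-dataset")  # hypothetical dataset name

automl_config = AutoMLConfig(
    task="regression",
    primary_metric="r2_score",       # the metric being optimized
    training_data=train_data,
    label_column_name="target",      # hypothetical label column
    iterations=60,                   # upper bound on pipelines to try
    enable_early_stopping=True,      # stop once the score stops improving
    experiment_exit_score=0.95,      # hypothetical "good enough" R² threshold
    experiment_timeout_minutes=60,
)
```

    This only truncates the search rather than making later iterations smarter, which is why a built-in optimization would still be welcome.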

    Monday, September 16, 2019 1:32 PM

All replies