Setting RAM processing through local compute on ML Service SDK

  • Question

  • I am running a dataset through the ML service using the Python SDK. I am running an experiment on a 'local' compute target and it is taking quite some time. A dataset like this (400k observations) typically takes a couple of minutes, as I have 16GB of RAM.

    On a previous run, I noticed the memory reported in the output logs was 3.5GB. Does Azure put a limit on local compute processing, and is there an option to set the RAM of your local compute? I tried looking through the documentation, but only found a setting for nodes in AmlCompute. Thanks.
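    A point worth checking here: a 'local' compute target executes on your own machine, so the usable RAM is whatever your machine (or, if the run is containerized, its Docker container) makes available; Docker Desktop's default memory cap is one plausible source of a figure like 3.5GB. A quick stdlib sketch to see how much memory is actually visible to the running process (Linux-specific; the `/proc/meminfo` path and parsing are my assumptions, not part of the Azure ML SDK):

    ```python
    # Sketch: check the total RAM visible to this process. If a local run
    # executes inside a container with a memory limit, the figure seen
    # here (from inside the container) can be far below the host's 16GB.
    def visible_memory_gb(meminfo_path="/proc/meminfo"):
        """Return total RAM (GiB) visible to this process, or None if unknown."""
        try:
            with open(meminfo_path) as f:
                for line in f:
                    if line.startswith("MemTotal:"):
                        kb = int(line.split()[1])  # MemTotal is reported in kB
                        return kb / (1024 ** 2)   # kB -> GiB
        except OSError:
            pass
        return None

    gb = visible_memory_gb()
    print("RAM visible to this process:", f"{gb:.1f} GiB" if gb else "unknown")
    ```

    Running this inside the same environment as the experiment (e.g. at the top of the training script) shows whether the limit is coming from a container rather than from an Azure-side setting.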

    Friday, May 15, 2020 6:26 PM

All replies