
I just got my new MacBook Pro with the M1 Max chip and am setting up Python. I've tried several combinations of settings to test speed, and now I'm quite confused. My questions:

- Why is Python run natively on the M1 Max greatly (~100%) slower than on my old MacBook Pro 2016 with an Intel i5?
- On the M1 Max, why is there no significant speed difference between the native run (via miniforge) and the run via Rosetta (via anaconda), which is supposed to be ~20% slower?
- On the M1 Max with the native run, why is there no significant speed difference between conda-installed NumPy and TensorFlow-installed NumPy, which is supposed to be faster?
- On the M1 Max, why is running in the PyCharm IDE consistently ~20% slower than running from the terminal, which doesn't happen on my old Intel Mac?

Evidence supporting my questions is as follows. These are the settings I've tried:

- Miniforge-arm64: Python runs natively on the M1 Max chip. (Checked from Activity Monitor: the Kind of the python process is Apple.)
- Anaconda: Python runs via Rosetta. (Checked from Activity Monitor: the Kind of the python process is Intel.)
- conda install numpy: NumPy from the original conda-forge channel, or pre-installed with Anaconda.
- Apple-TensorFlow: with Python installed by miniforge, I install tensorflow directly, and NumPy is installed along with it. NumPy installed this way is said to be optimized for Apple M1 and therefore faster.
- dario.py: a benchmark script by Dario Radečić from the post above (an illustrative stand-in is sketched below).
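To make the "native vs. Rosetta" and "conda NumPy vs. Apple-optimized NumPy" comparisons easier to check, here is a minimal verification sketch (not part of the original post) that assumes only the standard library and numpy. It reports the interpreter architecture, which should match the "Kind" column in Activity Monitor, and the BLAS/LAPACK backend the NumPy build links against.

```python
import platform
import numpy as np

# A native Apple Silicon interpreter reports "arm64"; under Rosetta 2 it
# reports "x86_64" (corresponding to "Apple" vs. "Intel" in Activity Monitor).
print("Interpreter architecture:", platform.machine())
print("NumPy version:", np.__version__)

# show_config() prints which BLAS/LAPACK libraries this NumPy build uses,
# e.g. OpenBLAS from conda-forge vs. Apple's Accelerate framework.
np.show_config()
```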
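The actual dario.py by Dario Radečić is not reproduced here; the following is only a hypothetical stand-in that times repeated NumPy matrix multiplications, so the same number can be compared across terminal vs. PyCharm and native vs. Rosetta runs.

```python
import time
import numpy as np

def bench_matmul(n=2000, repeats=5):
    """Return the best wall-clock time (seconds) of `repeats` n x n matmuls."""
    rng = np.random.default_rng(0)
    a = rng.standard_normal((n, n))
    b = rng.standard_normal((n, n))
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        a @ b  # the heavy call; dominated by the underlying BLAS
        times.append(time.perf_counter() - start)
    return min(times)

if __name__ == "__main__":
    print(f"best of 5 matmuls (2000x2000): {bench_matmul():.3f} s")
```

Running the same file from the terminal and from PyCharm, under each interpreter, is one way to quantify the ~20% and ~100% gaps described above.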