M1 Max VS RTX3070 (Tensorflow Performance Tests)

Alex Ziskind

2 years ago

92,168 views


Comments:

Geoffrey Rozinak - 30.10.2023 01:03

❤ You, you make the exact content that I want to watch... I know I'm such a 🤓. Thank you!

Mink Franchise - 28.10.2023 22:52

The AMD computer you're using is trash. Horrible test: too much missing info and too many missing steps. Not to mention you handicapped the PC.

Jolo - 02.07.2023 14:58

The interesting part here is that Apple is using Metal for its GPU instructions while NVIDIA is using CUDA and cuDNN 😅

Jens Schneider - 30.06.2023 23:03

How much cheaper is an off-the-shelf 3070 Laptop 125W (assuming you benchmarked the top config) vs. the M1 Max for a 13% performance increase? I can get a 3070 125W for half the price of the M1 Max and it's only "not that much faster"?

Avdhan Tyagi - 24.03.2023 16:37

It would have been useful if you could install TensorFlow on the M1 T_T

Zbysek Lipka - 05.02.2023 01:04

I really LOVE the Apple M1, because NVIDIA's support for CUDA and cuDNN is really hell on earth. This problem with installing NVIDIA CUDA, cuDNN, and TensorFlow has been here for many years, and now again: I have Ubuntu 22, I install CUDA 12, but NVIDIA hasn't released a compatible cuDNN version! OMG!
Installation on the Apple M1? 10 minutes and tensorflow-gpu is working :) Never again NVIDIA, what a waste of time.

Reinhard Silaen - 29.01.2023 15:21

But, why were you using Docker for the RTX?

Marc-Andre Renaud - 17.01.2023 08:53

From what I can see with the tests, the ASUS is the clear winner. In the first unoptimised test, throughput was 587 vs 832 samples per second. Adding the --fp16 flag to optimise the run increases the throughput to 1075 samples per second. Once you factor in that the ROG laptop costs roughly half as much as the Mac while performing 20% faster, you can save a lot of money for a decent performance boost, or buy two ASUS laptops for a massive boost in workload. Running TensorFlow natively would get you even better results. Yes, the battery life on the Mac is better, but knowing I can't upgrade storage down the road without having to buy a whole new laptop makes the Mac a complete non-starter for serious work.
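The figures quoted in this comment can be sanity-checked with a little arithmetic. A minimal sketch using only the commenter's numbers (which run maps to which machine is the commenter's framing, not confirmed by the video):

```python
# Throughput figures quoted in the comment above (samples per second).
unoptimised = 587   # first unoptimised run
comparison = 832    # the other run in that first test
fp16 = 1075         # run with the --fp16 flag enabled

# Relative gains implied by those numbers.
gain_over_unoptimised = fp16 / unoptimised - 1
gain_over_comparison = fp16 / comparison - 1

print(f"fp16 vs unoptimised run: +{gain_over_unoptimised:.0%}")   # +83%
print(f"fp16 vs 832-samples/s run: +{gain_over_comparison:.0%}")  # +29%
```

By these numbers, the "20% faster" claim is conservative: the fp16 run is about 29% ahead of the 832-samples/s result.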

Barium Lanthanum - 04.01.2023 21:34

Nvidia still holds its ground, beating even a machine that is twice the price...

j2simpso - 10.11.2022 19:16

Yes but does your benchmark take advantage of the TPUs on the M1? 🤔

sadan eduardo - 30.10.2022 01:35

But being practical: for the price of an M1 MacBook you can buy at least two 3070s plus the required hardware and use them with SLI. It would be cool to see a test of "price per training". As always, itoddlers btfo.

Gold Sharon - 28.10.2022 08:00

Does this also work with the RTX 3060?

Andrey - 26.10.2022 21:46

As soon as you said "docker container", I considered your results void. Unless you're running metal vs metal or docker vs docker, I wouldn't trust this comparison. I get that some may argue "you're splitting hairs at that point", but no, you're really not. An unfair comparison, regardless of how unfair, is still… unfair.

G Gi - 22.10.2022 05:35

Please open folders with 1000 or 6000 files etc.; you'll see the performance under heavy duty!

mohamed hassan - 05.10.2022 21:12

Can you please tell me which CUDA version and cuDNN version you're using for the 3070 please 🙏 I'm having an issue 😔

Kirill Berezin - 26.08.2022 05:20

So… the RTX is faster and cheaper? Then what's the point?

Omid SN - 17.08.2022 20:52

If you want to run your model using the Tensor Cores in RTX GPUs, you should use TensorRT, and the model has to meet a few requirements. Furthermore, training a model in an fp16 configuration doesn't actually affect training considerably, since quantization affects the forward pass (and inference time). However, most of the computation is due to backpropagation, which is done in full precision.
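The precision trade-off this comment describes can be seen without a GPU. A minimal NumPy sketch (illustrative only, not from the video) of what half precision actually drops:

```python
import numpy as np

# float16 keeps roughly 3 significant decimal digits (float32 keeps ~7),
# so a small perturbation near 1.0 disappears when cast down.
x = np.float32(1.0001)
print(np.float16(x))     # rounds back to 1.0

# Tiny gradient-sized values can underflow to zero in fp16, which is one
# reason mixed-precision training keeps a full-precision copy of weights.
g = np.float32(1e-8)
print(np.float16(g))     # underflows to 0.0
```

This is why fp16 mostly changes the forward/inference path, while frameworks keep sensitive accumulation in fp32.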

Peter Cheung - 07.08.2022 11:17

Thanks for your video. One little suggestion: show a table with the results of both :-)

Oliver - 01.08.2022 10:27

Running in Docker on the 3070, doesn't that completely ruin performance? I'm not too familiar, but when I run something in Docker it's like 3x-5x slower.
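For what it's worth, GPU containers don't emulate the device: with the NVIDIA Container Toolkit the physical GPU is passed through to the container, so overhead for compute-bound training is usually small. A sketch of the usual invocation (the image tag is an assumption; the video doesn't confirm which image was used):

```shell
# Requires the NVIDIA driver on the host plus nvidia-container-toolkit.
# --gpus all passes the physical GPU through; CUDA kernels then run
# directly on the device, not under emulation.
docker run --gpus all --rm tensorflow/tensorflow:latest-gpu \
  python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```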
