OK, I already know the answer: NVIDIA is probably the clear winner overall. But talking specifics, what would the approximate difference in performance be (ballpark, given all the variables) if you had to guess a percentage?
If you pitted an RTX 2080 against a Radeon VII, how much of a difference would you expect to see training the exact same model? Would it be night and day? Would the Radeon VII's extra RAM (16 GB HBM2 vs. the 2080's 8 GB) help out at all?
Nvidia is an order of magnitude faster; there really is no competition when it comes to speed. The extra RAM would be nice for fitting a larger model, but it won't speed things up.
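If you want to measure this yourself rather than trust forum numbers, here's a minimal sketch that times identical training steps on whatever GPU PyTorch sees. The model, batch size, and step count are arbitrary placeholders, not a standard benchmark; note that ROCm builds of PyTorch expose AMD GPUs under the `cuda` device name, so the same script runs on both cards.

```python
# Minimal sketch: time identical training steps on whatever GPU is available.
# Model, batch size, and step count are placeholders, not a standard benchmark.
import time
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(64 * 32 * 32, 10),
).to(device)

opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
x = torch.randn(64, 3, 32, 32, device=device)   # fake batch
y = torch.randint(0, 10, (64,), device=device)  # fake labels

def sync():
    # Flush queued GPU work so wall-clock timing is honest.
    if device.type == "cuda":
        torch.cuda.synchronize()

# Warm-up steps so one-time kernel compilation isn't included in the timing.
for _ in range(5):
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()

sync()
start = time.perf_counter()
steps = 100
for _ in range(steps):
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
sync()
print(f"{(time.perf_counter() - start) / steps * 1000:.2f} ms/step on {device}")
```

Run it once per card with the same model and batch size, and the ms/step ratio gives you the head-to-head number you're asking about.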
Somewhere I found a graph of machine learning benchmarks across different GPUs (can't find it now; I ran into it while looking for more PlaidML documentation).
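For context, PlaidML was the usual way to run Keras on AMD cards; a minimal sketch of the setup (assuming `pip install plaidml-keras` and a device already chosen via `plaidml-setup`) looks like:

```python
# Minimal sketch: run Keras on an AMD GPU through the PlaidML backend.
# Assumes plaidml-keras is installed and a device was picked with `plaidml-setup`.
import plaidml.keras
plaidml.keras.install_backend()  # must run BEFORE importing keras

from keras.models import Sequential
from keras.layers import Dense

# Toy model just to show the backend in use; architecture is a placeholder.
model = Sequential([
    Dense(128, activation="relu", input_shape=(784,)),
    Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
```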
An AMD RX 580 (cores: 2304, TMUs: 144, ROPs: 32) vs. a GTX 1060 6GB (cores: 1280, TMUs: 80, ROPs: 48) was the shocker to me.
Look at the specs for both those cards.
Now consider that the 1060 well outclassed the 580 on that graph (which I can't find).
Not all cores are equal.
Best I can figure, where the two cards may be roughly equivalent in gaming, in neural network training Nvidia will give you about 15-40% more performance.
This was done with a 1070. I dunno what I'm doing.
2x RTX 3090 : RTX 3080 : RTX 2060 : 2x RTX 2080 Super : Ghetto 1060