
Which NVIDIA GPU provides the most VRAM for machine learning?

5 Posts
6 Users
0 Reactions
62 Views
0
Topic starter

I'm really sorry if this is a super basic question, but I'm totally lost trying to figure out which NVIDIA card to buy for a machine learning project. I keep hearing that VRAM is the most important thing for running models, but the more I look at the specs the more my head hurts. My plan was just to go for the biggest number, but then I saw that some cards have 24GB while others that are way more expensive have 48GB or even 80GB? I was looking at the RTX 4090 because everyone says it's fast, but then someone mentioned something called an A6000 and the price tag literally made me jump lol.

I'm trying to set up a small workstation for my sister's marketing business here in San Francisco, and she wants to run some local image generation by next Friday, so I'm on a really tight deadline. I have about $2,200 to spend total, but if a card with more memory is actually way better I might have to ask her for more cash. Should I just stick with the gaming cards, or is there a specific one that has the most VRAM for the money? I really don't want to buy the wrong thing and waste all that money because I don't know what I'm doing...


5 Answers
12

I would suggest being careful with your spending, tbh. Consider a used NVIDIA GeForce RTX 3090: it fits your budget and still gives you 24GB of VRAM, the same capacity as the much pricier RTX 4090.
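Since OP asked for the most VRAM for the money, here's a quick GB-per-dollar comparison. The VRAM figures are the real card specs; the prices are placeholders I made up for illustration, so plug in actual listings before deciding:

```python
# VRAM-per-dollar ranking. VRAM numbers are the actual card specs;
# every price below is a made-up placeholder for illustration only --
# check current new/used listings before buying anything.
cards = {
    "RTX 3060 12GB":  (12, 280),    # placeholder price
    "RTX 3090 24GB":  (24, 800),    # placeholder used-market price
    "RTX 4090 24GB":  (24, 1700),   # placeholder price
    "RTX A6000 48GB": (48, 4000),   # placeholder price
}

# Sort by GB of VRAM per dollar, best value first.
ranked = sorted(cards.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
for name, (vram, price) in ranked:
    print(f"{name}: {vram / price * 1000:.1f} GB per $1000")
```

With numbers like these, the cheap 12GB card wins on pure GB-per-dollar, but 12GB caps which models you can run at all, which is why a used 24GB 3090 is the usual recommendation around your budget.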


10

I remember being so happy when I finally got my NVIDIA GeForce RTX 3060 12GB.

  • Saved a lot of money
  • No complaints; it runs my local models perfectly and didn't break the bank.
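For anyone wondering whether a given amount of VRAM is enough, here's the back-of-the-envelope math I use. It's a rough sketch under my own assumptions (fp16 weights dominate, with a ~20% fudge factor for activations and CUDA context), not an exact formula:

```python
# Rough VRAM estimate for running a model locally.
# Assumptions (mine, not gospel): fp16 weights at 2 bytes/parameter
# dominate, plus ~20% overhead for activations and runtime context.
def vram_needed_gb(params_billions, bytes_per_param=2, overhead=1.2):
    return params_billions * 1e9 * bytes_per_param * overhead / 1024**3

for name, b in [("~1B image model", 1.0), ("7B LLM", 7.0), ("13B LLM", 13.0)]:
    print(f"{name}: ~{vram_needed_gb(b):.1f} GB")
```

Quantized models (8-bit or 4-bit) shrink the bytes-per-parameter figure accordingly, which is how larger models still squeeze onto a 12GB card.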


3

Works great for me


2

Man, I wish I'd found this thread sooner. Would have saved me so much hassle.


2

I've been building these rigs for a long time, and the VRAM debate never really changes; it just gets more expensive. In my experience, you have to decide between the raw speed of the consumer cards and the larger memory pools of the workstation line.

  • I remember when I first started out, I grabbed an NVIDIA TITAN RTX 24GB thinking it would make me a better engineer overnight.
  • What I actually learned was that my cooling setup was trash, and I ended up melting a power connector.
  • It took me months to realize I was bottlenecked by my data pipeline anyway, not the hardware.

Off topic, but this reminds me of when I used to work near the Embarcadero. There was a little coffee shop with the best espresso and wifi so bad that I brought my own portable router just to check my training logs; I spent more time debugging their connection than my own code. Kind of miss those frantic mornings tho. Anyway.
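The data-pipeline point is easy to check for yourself before blaming the GPU. A minimal timing sketch (the loader and train step below are sleep-based stand-ins, not real library calls):

```python
# Time data loading vs. the compute step to see which is the bottleneck.
# load_batch/train_step are stand-ins that simulate the two costs.
import time

def load_batch():
    time.sleep(0.05)          # pretend disk I/O + preprocessing
    return list(range(1024))

def train_step(batch):
    time.sleep(0.01)          # pretend GPU forward/backward pass

load_t = compute_t = 0.0
for _ in range(10):
    t0 = time.perf_counter()
    batch = load_batch()
    t1 = time.perf_counter()
    train_step(batch)
    t2 = time.perf_counter()
    load_t += t1 - t0
    compute_t += t2 - t1

print(f"loading: {load_t:.2f}s, compute: {compute_t:.2f}s")
if load_t > compute_t:
    print("data pipeline is the bottleneck; a bigger GPU won't help")
```

If loading dominates, more VRAM or a faster card does nothing for your throughput: fix the pipeline (faster storage, more loader workers, pre-resized images) first.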

