So I'm looking to get a GPU into my "beast" (a 24-core, 128 GB tower with too much PCIe). I thought I might buy a used 3090, but then it hit me that most applications can work with multiple GPUs, so instead I decided to take €600 to eBay. Using TechPowerUp I compared cards by memory bandwidth and FP32 performance, which brought me to the following options for my own LLaMA, Stable Diffusion and Blender setup: 5 Tesla K80s, 3 Tesla P40s, or 2 RTX 3060s. But I can't figure out which would be better for performance and future-proofing. The main difference I found is the supported CUDA version, but I can't really figure out why that matters. The other thing I found is that 5 K80s draw way more power than 3 P40s, and that if memory size is really important the P40s are the way to go, but I couldn't pin down real performance numbers, since I can't find benchmarks like this one for Blender.
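To sanity-check the tradeoffs myself, I threw together a rough sketch that totals up VRAM, FP32 throughput and power draw per option. The per-card numbers are ballpark figures of the kind TechPowerUp lists (K80 counted as one dual-GPU card; treat every number here as an assumption to double-check before buying):

```python
# Back-of-the-envelope comparison of the three build options.
# Per-card specs below are approximate and should be verified
# against TechPowerUp before relying on them.
CARDS = {
    "Tesla K80": {"vram_gb": 24, "fp32_tflops": 8.2,  "tdp_w": 300, "cuda_cc": 3.7},
    "Tesla P40": {"vram_gb": 24, "fp32_tflops": 11.8, "tdp_w": 250, "cuda_cc": 6.1},
    "RTX 3060":  {"vram_gb": 12, "fp32_tflops": 12.7, "tdp_w": 170, "cuda_cc": 8.6},
}

OPTIONS = [("Tesla K80", 5), ("Tesla P40", 3), ("RTX 3060", 2)]

def totals(card, count):
    """Aggregate specs for `count` identical cards."""
    s = CARDS[card]
    return {
        "total_vram_gb": s["vram_gb"] * count,
        "total_fp32_tflops": round(s["fp32_tflops"] * count, 1),
        "total_tdp_w": s["tdp_w"] * count,
        "cuda_compute_capability": s["cuda_cc"],
    }

for card, count in OPTIONS:
    print(f"{count}x {card}: {totals(card, count)}")
```

On these rough numbers the K80 option wins on total VRAM and raw TFLOPS but loses badly on power, which matches what I found; what the table can't tell me is how the CUDA compute capability column translates into real-world support and speed.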
So if anyone has a good source for Stable Diffusion and LLaMA benchmarks, I'd appreciate it if you could share it. And if you own one or more of these cards and can tell me which option is better, I'd appreciate your opinion.