Advice Wanted - Self Hosting Industrial GPUs
(kbin.social)
Community to discuss LLaMA, the large language model created by Meta AI.
This is intended to be a replacement for r/LocalLLaMA on Reddit.
So I am looking to get a K80, P40, or 3060. Regarding future CUDA support: is it possible to use an old GPU without the current CUDA version even if a program requires it, or are some programs already unusable on it today? Compiling from scratch isn't a problem, and drivers are something I can probably handle too, but are there other real problems for future-proofing?
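For context on how these three cards differ: what gates CUDA toolkit support is the GPU's compute capability (K80 = 3.7 Kepler, P40 = 6.1 Pascal, 3060 = 8.6 Ampere). The sketch below is a hypothetical helper, not an official NVIDIA API, encoding my understanding of the toolkit release notes (sm_30/sm_32 removed in CUDA 11.0, Kepler sm_35/sm_37 removed in CUDA 12.0) — verify against NVIDIA's own documentation before buying:

```python
# Hypothetical helper: maps a GPU's compute capability to the CUDA toolkit
# major version that dropped it, based on my reading of NVIDIA release notes.
# Compute capabilities (assumed): K80 = 3.7, P40 = 6.1, 3060 = 8.6.
GPUS = {"K80": (3, 7), "P40": (6, 1), "3060": (8, 6)}

def first_dropped_cuda(cc):
    """Return the first CUDA toolkit major version that dropped this compute
    capability, or None if still supported as of CUDA 12.x (assumption)."""
    major, minor = cc
    if major < 3 or (major == 3 and minor < 5):
        return 11  # sm_30/sm_32 were removed in CUDA 11.0
    if major == 3:
        return 12  # remaining Kepler (sm_35/sm_37) removed in CUDA 12.0
    return None    # Pascal and newer still supported as of CUDA 12.x

for name, cc in GPUS.items():
    dropped = first_dropped_cuda(cc)
    status = f"dropped in CUDA {dropped}.x" if dropped else "supported in CUDA 12.x"
    print(f"{name} (sm_{cc[0]}{cc[1]}): {status}")
```

The practical upshot of this sketch: a K80 forces you to pin an older toolkit (11.x) and build programs against it yourself, while the P40 and 3060 still work with current CUDA releases.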