No offense intended, but are you sure it's using your GPU? Twenty minutes is about how long my CPU-locked instance takes to run some 70B parameter models.
On my RTX 3060, I generally get responses in seconds.
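One quick way to confirm whether the model is actually hitting the GPU is to watch `nvidia-smi` while a prompt runs (this assumes the Nvidia driver and its command-line tools are installed and on your PATH):

```shell
# Show GPU, driver version, and current VRAM/compute usage once
nvidia-smi

# Refresh every second while you send a prompt; GPU-Util and memory
# usage should spike if inference is really running on the GPU
watch -n 1 nvidia-smi
```

If GPU-Util sits near 0% and VRAM barely moves while the model is generating, the runtime has almost certainly fallen back to CPU.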
Good luck! I'm definitely willing to spend a few minutes offering advice/double checking some configuration settings if things go awry again. Let me know how things go. :-)
I think there was a special process to get Nvidia working in WSL. Let me check... (I'm running natively on Linux, so my experience doing it with WSL is limited.)
https://docs.nvidia.com/cuda/wsl-user-guide/index.html - I'm sure you've seen this already, but according to that guide, you don't want to install the Nvidia drivers inside WSL; you only want to install the cuda-toolkit metapackage (the Windows-side driver is shared into the VM). I'd follow the instructions from that link closely.
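For reference, the WSL-Ubuntu flow in that guide boils down to adding Nvidia's WSL repo keyring and installing only the toolkit; double-check the exact package/keyring versions against the current instructions, since they change over time:

```shell
# Add Nvidia's CUDA repository keyring for WSL-Ubuntu
wget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb

# Install the toolkit ONLY -- do not install driver packages inside
# WSL; the Windows host driver is what the VM actually uses
sudo apt-get update
sudo apt-get install -y cuda-toolkit
```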
You may also run into performance issues within WSL due to the virtual machine overhead.
AFAICT, mastodon's decisions, which are arguably problematic (on which see https://lemmy.ml/post/14973403), are trickling down to other platforms and infecting how they federate with each other, as each one dances around mastodon's quirks in different ways.
It seems like masto is ruining "the standard" with its gravity.
Advice - Getting started with LLMs
I'm new to the field of large language models (LLMs) and I'm really interested in learning how to train and use my own models for qualitative analysis. However, I'm not sure where to start or what resources would be most helpful for a complete beginner....