xcjs, to Technology in Advice - Getting started with LLMs

We all mess up! I hope that helps - let me know if you see improvements!

xcjs, (edited) to Technology in Advice - Getting started with LLMs

I think there was a special process to get Nvidia working in WSL. Let me check... (I'm running natively on Linux, so my experience doing it with WSL is limited.)

https://docs.nvidia.com/cuda/wsl-user-guide/index.html - I'm sure you've followed this already, but according to that guide, you don't want to install the Nvidia drivers inside WSL (the guest shares the Windows host driver, and a Linux driver install can break that mapping) and only want to install the cuda-toolkit metapackage. I'd follow the instructions from that link closely.
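If it helps, here's a quick way to confirm the toolkit took once you're done. This is just a sketch and assumes you have PyTorch installed in the WSL environment - any CUDA-aware library would tell you the same thing:

```python
# Sanity check that WSL can actually see the GPU.
# Assumes PyTorch is installed (pip install torch).
import torch

if torch.cuda.is_available():
    print("CUDA OK:", torch.cuda.get_device_name(0))
else:
    print("No CUDA device visible - recheck the WSL toolkit setup.")
```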

You may also run into performance issues within WSL due to the virtual machine overhead.

xcjs, to Technology in Advice - Getting started with LLMs

Good luck! I'm definitely willing to spend a few minutes offering advice/double checking some configuration settings if things go awry again. Let me know how things go. :-)

xcjs, to Technology in Advice - Getting started with LLMs

The model should be split between VRAM and regular RAM, at least if it's a GGUF model. Maybe it's not being split, and that's what's wrong?
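For what it's worth, with llama.cpp-based runners you usually control that split yourself. Here's a minimal sketch using the llama-cpp-python bindings - the model filename is a placeholder, and n_gpu_layers is the knob that decides how many layers go to VRAM while the rest stay in regular RAM:

```python
# Sketch: splitting a GGUF model between VRAM and regular RAM.
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-3-70b.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=40,  # layers offloaded to VRAM; the rest stay in RAM
    n_ctx=4096,       # context window
)

out = llm("Q: What is a GGUF file? A:", max_tokens=64)
print(out["choices"][0]["text"])
```

If it won't load, try lowering n_gpu_layers until the offloaded layers fit in VRAM; setting it to 0 forces CPU-only, which makes a useful comparison point.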

xcjs, to Technology in Advice - Getting started with LLMs

Ok, so using my "older" 2070 Super, I was able to get a response from a 70B parameter model in 9-12 minutes. (Llama 3 in this case.)

I'm fairly certain that you're using your CPU or having another issue. Would you like to try and debug your configuration together?
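A simple first step is timing a single request against your local server. This sketch assumes you're serving through something like Ollama on its default port - swap the URL and model name for whatever you're actually running:

```python
# Rough benchmark: time one non-streaming generation request.
import time
import requests

start = time.time()
r = requests.post(
    "http://localhost:11434/api/generate",  # assumed Ollama default
    json={"model": "llama3", "prompt": "Say hi.", "stream": False},
    timeout=1800,
)
elapsed = time.time() - start
print(f"{elapsed:.1f}s: {r.json()['response'][:80]}")
```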

xcjs, to Technology in Advice - Getting started with LLMs

Unfortunately, I don't expect it to remain free forever.

xcjs, to Technology in Advice - Getting started with LLMs

No offense intended, but are you sure it's using your GPU? Twenty minutes is about how long my CPU-locked instance takes to run some 70B parameter models.

On my RTX 3060, I generally get responses in seconds.
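An easy way to check is to watch GPU utilization while a prompt is generating. Running nvidia-smi in another terminal works, or here's a rough sketch using the NVML Python bindings (pip install nvidia-ml-py) - if utilization sits near 0% mid-generation, inference is almost certainly running on the CPU:

```python
# Sample GPU utilization and VRAM use once per second for ~30 seconds.
import time
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU
for _ in range(30):
    util = pynvml.nvmlDeviceGetUtilizationRates(gpu).gpu
    mem = pynvml.nvmlDeviceGetMemoryInfo(gpu).used // 2**20
    print(f"GPU {util:3d}%  VRAM {mem} MiB")
    time.sleep(1)
pynvml.nvmlShutdown()
```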
