evo,

Because people don't understand the difference between using ChatGPT running in a datacenter somewhere via API calls and having a tiny model actually running on device.

Even quantized down to 4-bit, the best 7B models barely run locally on the best mobile hardware that exists today. GPT-4 is reportedly ~250x larger than that 7B model lol
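To put rough numbers on that: here's a back-of-envelope sketch of the weight memory involved. The 1.8T parameter figure for GPT-4 is a rumor, not anything OpenAI has confirmed, so treat the second number as illustrative only.

```python
def weight_bytes(params: float, bits: int) -> float:
    """Bytes needed just to store the weights at a given quantization level.
    Ignores KV cache, activations, and runtime overhead, which add more."""
    return params * bits / 8

GB = 1024 ** 3

# 7B model quantized to 4-bit -- already tight next to a phone's RAM budget
print(f"7B @ 4-bit:   {weight_bytes(7e9, 4) / GB:.1f} GB")    # ~3.3 GB

# GPT-4 at a *rumored* ~1.8T params, hypothetically quantized the same way
print(f"1.8T @ 4-bit: {weight_bytes(1.8e12, 4) / GB:.0f} GB")  # ~838 GB

# parameter-count ratio behind the "~250x" figure
print(f"ratio: {1.8e12 / 7e9:.0f}x")  # ~257x
```

Even before counting the KV cache or any runtime overhead, the rumored larger model wouldn't come close to fitting on any consumer device, which is the whole point of the API-vs-on-device distinction.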
