
Revision 99bb627fbe00eaa792ecf5e3169b71d5f1291371

GPT4All-2.py
from gpt4all import GPT4All

model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")  # downloads / loads a 4.66GB LLM
with model.chat_session():  # keep conversation context across generate() calls
    print(model.generate("How can I run LLMs efficiently on my laptop?", max_tokens=1024))
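For long replies it can be nicer to print tokens as they are produced instead of waiting for the full answer. A minimal sketch of that variant, assuming the streaming=True option of generate() available in recent gpt4all Python bindings:

from gpt4all import GPT4All

model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")  # same 4.66GB model as above
with model.chat_session():
    # streaming=True is assumed to make generate() yield text chunks incrementally
    for chunk in model.generate("How can I run LLMs efficiently on my laptop?",
                                max_tokens=1024, streaming=True):
        print(chunk, end="", flush=True)
    print()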