A frontend for large language models like 🐨 Koala or 🦙 Vicuna running on CPU with llama.cpp, using the API server provided by the llama-cpp-python library. NOTE: I have discontinued this project because its maintenance takes more time than I can or want to invest. Feel free to fork :)
Eucalyptus-Chat
A frontend for Koala running on CPU with llama.cpp, using the API server provided by llama-cpp-python.
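For reference, a minimal sketch of how a client can talk to the llama-cpp-python server. This assumes the server was started locally (e.g. `python -m llama_cpp.server --model <path>`) on its default port 8000 and exposes the OpenAI-compatible `/v1/completions` route; the function names below are illustrative, not part of this project:

```python
import json
import urllib.request

# Assumption: llama-cpp-python server running locally on its default port,
# serving the OpenAI-compatible completions endpoint.
API_URL = "http://localhost:8000/v1/completions"


def build_payload(prompt: str, max_tokens: int = 128) -> dict:
    """Build a completion request body in the OpenAI-compatible format."""
    return {
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": 0.7,
    }


def complete(prompt: str) -> str:
    """Send a prompt to the server and return the generated text."""
    request = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        body = json.load(response)
    # The server returns an OpenAI-style response object.
    return body["choices"][0]["text"]


if __name__ == "__main__":
    print(complete("User: Hello!\nAssistant:"))
```

A frontend like this one wraps such calls in a chat loop, accumulating the conversation into the prompt between requests.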