A frontend for large language models like 🐨 Koala or 🦙 Vicuna running on CPU with llama.cpp, using the API server library provided by llama-cpp-python. NOTE: I had to discontinue this project because its maintenance takes more time than I can or want to invest. Feel free to fork :)
This repository has been archived on 2025-09-28. You can view files and clone it, but you cannot make any changes to its state, such as pushing and creating new issues, pull requests or comments.

# Eucalyptus-Chat

A frontend for Koala running on CPU with llama.cpp, using the API server provided by llama-cpp-python.
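Since the frontend talks to the OpenAI-compatible HTTP API that llama-cpp-python's server exposes, a minimal client interaction can be sketched as follows. This is a sketch under assumptions: the server is assumed to run at its default address `http://localhost:8000`, and the endpoint path follows llama-cpp-python's OpenAI-compatible `/v1/completions` route; parameter values (`max_tokens`, `temperature`) are illustrative only.

```python
import json
import urllib.request

def build_completion_request(prompt: str, host: str = "http://localhost:8000"):
    """Build a POST request for the server's /v1/completions endpoint.

    The host and endpoint path are assumptions based on the defaults of
    llama-cpp-python's OpenAI-compatible API server.
    """
    payload = {
        "prompt": prompt,       # text the model should complete
        "max_tokens": 64,       # illustrative generation limit
        "temperature": 0.7,     # illustrative sampling temperature
    }
    return urllib.request.Request(
        f"{host}/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# A frontend would send this request and read the JSON response, e.g.:
#   with urllib.request.urlopen(req) as resp:
#       text = json.load(resp)["choices"][0]["text"]
req = build_completion_request("Hello, Koala!")
```

The frontend repeats this request/response cycle for each chat turn, feeding the accumulated conversation back in as the prompt.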