Merge pull request #7 from ChaoticByte/dependabot/pip/llama-cpp-python-server--0.1.54

Bump llama-cpp-python[server] from 0.1.50 to 0.1.54 (requires re-quantized models using ggml v3)
This commit is contained in:
Julian Müller 2023-05-24 21:06:08 +02:00 committed by GitHub
commit 060d522f6c


@@ -1,3 +1,3 @@
-llama-cpp-python[server]==0.1.50
+llama-cpp-python[server]==0.1.54
 uvicorn==0.22.0
 sanic==23.3.0