Merge pull request #7 from ChaoticByte/dependabot/pip/llama-cpp-python-server--0.1.54
Bump llama-cpp-python[server] from 0.1.50 to 0.1.54 (requires re-quantized models using ggml v3)
Commit: 060d522f6c
1 changed file with 1 addition and 1 deletion
@@ -1,3 +1,3 @@
-llama-cpp-python[server]==0.1.50
+llama-cpp-python[server]==0.1.54
 uvicorn==0.22.0
 sanic==23.3.0