Bump llama-cpp-python[server] from 0.1.50 to 0.1.54 #7
Reference: ChaoticByte/Eucalyptus-Chat#7
Bumps llama-cpp-python[server] from 0.1.50 to 0.1.54.
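For a project consuming this bump, the corresponding pin would look like the fragment below (a sketch of a `requirements.txt` entry matching this PR; the filename is an assumption about the project layout):

```
# requirements.txt — pin updated by this PR
llama-cpp-python[server]==0.1.54
```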
Commits

- e5d596e Bump version
- c41b1eb Update llama.cpp
- aa3d7a6 Merge pull request #263 from abetlen/dependabot/pip/mkdocs-material-9.1.14
- 2240b94 Bump mkdocs-material from 9.1.12 to 9.1.14
- 01c79e7 Merge pull request #258 from Pipboyguy/main
- c3e80b1 Merge pull request #262 from abetlen/dependabot/pip/httpx-0.24.1
- 8e41d72 Bump httpx from 0.24.0 to 0.24.1
- e6639e6 Change docker build dynamic param to image instead of cuda version
- 4f7a6da Merge pull request #248 from localagi/main
- 0adb9ec Use model_name and index in response

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.

Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

llama.cpp changed its file format again; models have to be requantized again. See https://github.com/ggerganov/llama.cpp/pull/1508
-> New models are in the ggml v3 format.
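To check whether a local model file already uses the new format, a minimal sketch is possible by inspecting the file header. This assumes the `ggjt` magic/version header layout llama.cpp used around PR #1508; the constants and the helper name below are illustrative assumptions, so verify them against your llama.cpp checkout:

```python
import struct

# Assumed on-disk header constants (verify against your llama.cpp version):
GGJT_MAGIC = 0x67676A74   # uint32 "ggjt", stored little-endian
NEW_FILE_VERSION = 3      # version assumed to be introduced by llama.cpp#1508

def needs_requantization(path: str) -> bool:
    """Return True if the model file appears to predate the ggml v3 format."""
    with open(path, "rb") as f:
        header = f.read(8)
    if len(header) < 8:
        raise ValueError("file too short to contain a ggml header")
    magic, version = struct.unpack("<II", header)
    # An older magic, or a ggjt version below 3, means the model
    # would need to be reconverted/requantized for llama-cpp-python 0.1.54.
    return not (magic == GGJT_MAGIC and version >= NEW_FILE_VERSION)
```

Running this over a models directory before upgrading would flag which files need reconversion rather than failing at load time.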