Bump llama-cpp-python[server] from 0.1.48 to 0.1.50 #2

Merged
dependabot[bot] merged 1 commit from dependabot/pip/llama-cpp-python-server--0.1.50 into main 2023-05-18 09:16:30 +00:00
dependabot[bot] commented 2023-05-15 15:04:04 +00:00 (Migrated from github.com)

Bumps llama-cpp-python[server] from 0.1.48 to 0.1.50.
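For reference, the bump corresponds to a pin like the following in the project's dependency file (the exact filename is an assumption — it may be `requirements.txt` or similar):

```text
llama-cpp-python[server]==0.1.50
```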

Commits
  • d90c9df Bump version
  • cdf5976 Update llama.cpp
  • 7a536e8 Allow model to tokenize strings longer than context length and set add_bos. C...
  • 8740ddc Only support generating one prompt at a time.
  • 8895b90 Revert "llama_cpp server: prompt is a string". Closes #187
  • 684d7c8 Fix docker command
  • fa1fc4e Merge branch 'main' of github.com:abetlen/llama_cpp_python into main
  • e3d3c31 Bump version
  • 7be584f Add missing tfs_z paramter
  • 28ee2ad Update llama.cpp
  • Additional commits viewable in compare view

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
ChaoticByte commented 2023-05-18 07:53:37 +00:00 (Migrated from github.com)
This will require a re-quantized model. See https://github.com/ggerganov/llama.cpp/pull/1305 & https://github.com/ggerganov/llama.cpp/pull/1405
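A rough sketch of the re-quantization step, using the `convert.py` and `quantize` tools that ship with llama.cpp (the model path and quantization type are placeholders, not values from this PR):

```shell
# Placeholder path; adjust to your setup.
MODEL_DIR=./models/7B

# 1. Re-convert the original weights to the new ggml format
#    (convert.py is part of the llama.cpp repository):
# python convert.py "$MODEL_DIR"

# 2. Re-quantize in the new format (q4_0 shown as an example type):
# ./quantize "$MODEL_DIR/ggml-model-f16.bin" "$MODEL_DIR/ggml-model-q4_0.bin" q4_0
```

Models quantized under the old format will fail to load after this upgrade, which is why the upstream format-change PRs are linked above.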
This repository is archived.