• v4.3 74e59a9fd4

    Version 4.3 Stable

    ChaoticByte released this 2023-05-30 21:12:32 +00:00 | 0 commits to main since this release

    • Redesigned the chat history
    • Renamed the vicuna-v0 and vicuna-v1.1 profiles
    • Updated the README
  • v4.2 f4abe93735

    Version 4.2 Stable

    ChaoticByte released this 2023-05-30 18:54:27 +00:00 | 2 commits to main since this release

    • Bumped llama-cpp-python[server] from 0.1.54 to 0.1.56
    • Changed the profile format (see the sketch below)
    • Added support for Vicuna v1.1
    • Added support for Manticore Chat
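
    The new profile format itself isn't shown in these notes; as a purely hypothetical sketch, a profile for Vicuna v1.1 could bundle the model's prompt strings and stop sequences roughly like this (the field names are illustrative; only the Vicuna v1.1 prompt strings are the model's documented format):

      # Hypothetical profile sketch; the real format shipped with v4.2 may differ.
      vicuna_v1_1_profile = {
          "name": "vicuna-v1.1",
          # Vicuna v1.1's documented system preamble
          "system": (
              "A chat between a curious user and an artificial intelligence "
              "assistant. The assistant gives helpful, detailed, and polite "
              "answers to the user's questions."
          ),
          "user_prefix": "USER: ",
          "assistant_prefix": "ASSISTANT: ",
          "stop": ["USER:"],  # stop when the model begins a new user turn
      }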
  • v4.1 8c29a31598

    Version 4.1 Stable

    ChaoticByte released this 2023-05-25 19:26:00 +00:00 | 10 commits to main since this release

    • Bugfix: Preserve whitespace in messages (#10)
    • Minor code improvements
  • v4.0 060d522f6c

    Version 4.0 Stable

    ChaoticByte released this 2023-05-24 19:06:08 +00:00 | 13 commits to main since this release

    BREAKING! Bumped llama-cpp-python[server] from 0.1.50 to 0.1.54

    This (again) requires re-quantized models. The new format is ggml v3. See https://github.com/ggerganov/llama.cpp/pull/1508
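
    In practice, re-quantizing means running the original weights through the llama.cpp conversion and quantization tools again so the output lands in the new ggml v3 format. A minimal sketch in Python, assuming a llama.cpp checkout and the usual model layout (all paths and file names here are assumptions):

      # Sketch only: re-quantize a model with the llama.cpp tools of that era.
      # The model directory and output file names are assumptions.
      import subprocess

      MODEL_DIR = "models/7B"  # hypothetical location of the original weights

      # 1. Convert the original weights to an f16 ggml file
      #    (writes ggml-model-f16.bin into MODEL_DIR).
      subprocess.run(["python3", "convert.py", MODEL_DIR], check=True)

      # 2. Quantize the f16 file down to q4_0 with the compiled quantize binary.
      subprocess.run(
          ["./quantize", f"{MODEL_DIR}/ggml-model-f16.bin",
           f"{MODEL_DIR}/ggml-model-q4_0.bin", "q4_0"],
          check=True,
      )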

  • v3.1 22ba6239c7

    Version 3.1 Stable

    ChaoticByte released this 2023-05-18 22:19:27 +00:00 | 15 commits to main since this release

    • Added a toggle button for the sidebar
    • Implemented a responsive design (fixes #4)
    • Made more minor improvements to the frontend
  • v3.0 63706a3c64

    Version 3.0 Stable

    ChaoticByte released this 2023-05-18 14:39:20 +00:00 | 16 commits to main since this release

    Changed the frontend to support other LLMs and added support for Vicuna.
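
    Supporting a model like Vicuna mostly comes down to serializing the conversation into the prompt layout the model was trained on. An illustrative helper for the Vicuna v0 style of prompt ("### Human:" / "### Assistant:" turns); the exact strings the frontend uses may differ:

      # Illustrative only: build a Vicuna-v0-style prompt from a chat history.
      def build_vicuna_v0_prompt(system: str, history: list, user_message: str) -> str:
          # history: list of (user_msg, assistant_msg) pairs from earlier turns
          parts = [system]
          for u, a in history:
              parts += [f"### Human: {u}", f"### Assistant: {a}"]
          # Append the new user turn and cue the model to answer.
          parts += [f"### Human: {user_message}", "### Assistant:"]
          return "\n".join(parts)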

  • v2.1 02a142012b

    Version 2.1 Stable

    ChaoticByte released this 2023-05-18 10:01:30 +00:00 | 20 commits to main since this release

    Added more parameters to the sidebar: top_k, repeat_penalty, presence_penalty, frequency_penalty
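
    These map onto sampling options that llama-cpp-python[server] accepts on its OpenAI-compatible completion endpoint. A minimal sketch of passing them in a request (host, port, prompt, and the values themselves are assumptions):

      # Minimal sketch: send the new sampling parameters to the
      # llama-cpp-python[server] completion endpoint. Host and port are assumptions.
      import requests

      response = requests.post(
          "http://localhost:8000/v1/completions",
          json={
              "prompt": "Hello!",
              "max_tokens": 64,
              "top_k": 40,               # sample only from the 40 most likely tokens
              "repeat_penalty": 1.1,     # penalize recently repeated tokens
              "presence_penalty": 0.0,   # penalize tokens that already appeared at all
              "frequency_penalty": 0.0,  # penalize tokens by how often they appeared
          },
          timeout=60,
      )
      print(response.json()["choices"][0]["text"])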

  • v2.0 924721863d

    Version 2.0 Stable

    ChaoticByte released this 2023-05-18 09:16:30 +00:00 | 22 commits to main since this release

    (Breaking) Bumped llama-cpp-python[server] from 0.1.48 to 0.1.50
    You may have to re-quantize your models; see https://github.com/ggerganov/llama.cpp/pull/1405

  • v1.1 19b6162b57

    Version 1.1 Stable

    ChaoticByte released this 2023-05-18 08:32:31 +00:00 | 24 commits to main since this release

  • v1.0 3ef9099407

    Version 1.0 Stable

    ChaoticByte released this 2023-05-11 07:01:35 +00:00 | 25 commits to main since this release
