
Releases: oobabooga/text-generation-webui

snapshot-2023-11-05

05 Nov 20:19
e18a046

What's Changed

New Contributors

Full Changelog: snapshot-2023-10-29...snapshot-2023-11-05

snapshot-2023-10-29

29 Oct 20:19

What's Changed

New Contributors

Full Changelog: snapshot-2023-10-22...snapshot-2023-10-29

snapshot-2023-10-22

22 Oct 20:19
b818314

What's Changed

New Contributors

Full Changelog: snapshot-2023-10-15...snapshot-2023-10-22

snapshot-2023-10-15

15 Oct 20:27
3bb4046

Switching to a rolling release model with weekly snapshots.

What's Changed

New Contributors

Full Changelog: v1.7...snapshot-2023-10-15

v1.7

08 Oct 20:26
2e47107

What's Changed

New Contributors

Full Changelog: 1.6.1...v1.7

1.6.1

26 Sep 03:37

What's Changed

  • Use call for conda deactivate in Windows installer by @jllllll in #4042
  • [extensions/openai] Fix error when preparing cache for embedding models by @wangcx18 in #3995
  • Create alternative requirements.txt with AMD and Metal wheels by @oobabooga in #4052
  • Add a grammar editor to the UI by @oobabooga in #4061
  • Avoid importing torch in one-click-installer by @jllllll in #4064

Full Changelog: v1.6...1.6.1

v1.6

22 Sep 22:17

The one-click-installers have been merged into the repository. Migration instructions can be found here.

The updated one-click install is several GB smaller and has a more reliable update procedure.

What's Changed


v1.5

26 Jul 14:14

What's Changed

  • Add a detailed extension example and update the extension docs. The example can be found here: example/script.py.
  • Introduce a new chat_input_modifier extension function and deprecate the old input_hijack.
  • Change rms_norm_eps to 5e-6 for all llama-2 GGML models -- this value reduces the models' perplexity.
  • Remove FlexGen support. It has been made obsolete by its lack of Llama support and by the emergence of llama.cpp and 4-bit quantization. I can add it back if FlexGen ever gets updated.
  • Use the dark theme by default.
  • Set the correct instruction template for the model when switching from default/notebook modes to chat mode.
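The new chat_input_modifier hook introduced above can be illustrated with a minimal sketch. The function name and three-argument signature follow the extension docs referenced in this release; the body is illustrative only, and the marker string is an assumption:

```python
# Minimal sketch of a chat_input_modifier extension function,
# the hook that replaces the deprecated input_hijack.
# Placed in an extension's script.py, it is called before the
# user's chat input reaches the model.

def chat_input_modifier(text, visible_text, state):
    """
    text: the string that will be sent to the model.
    visible_text: the string shown in the chat log.
    state: a dict of the current UI/generation settings.
    Returns the (possibly modified) pair.
    """
    # Illustrative example: tag the model-facing text only,
    # leaving what the user sees in the chat log unchanged.
    modified = f"[user] {text}"
    return modified, visible_text
```

Because the hook returns both strings, an extension can alter what the model receives without changing what appears in the chat history, which input_hijack made awkward.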

Bug fixes

v1.4

24 Jul 19:42
a07d070

What's Changed

Bug fixes

  • Add checks for ROCm and unsupported architectures to llama_cpp_cuda loading by @jllllll in #3225

Extensions

  • [extensions/openai] Fixes for: embeddings, tokens, better errors. +Docs update, +Images, +logit_bias/logprobs, +more. by @matatonic in #3122

v1.3.1

19 Jul 14:22

Changes

  • Add missing EOS and BOS tokens to Llama-2 template
  • Bump transformers for better Llama-2 support
  • Bump llama-cpp-python for better unicode support (untested)
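The first change above adds the BOS (`<s>`) and EOS (`</s>`) tokens to the Llama-2 instruction template. A sketch of the resulting prompt layout, following Meta's published chat format (the helper name and argument shapes here are hypothetical, not this project's API):

```python
# Hedged sketch of the Llama-2 chat prompt layout with BOS/EOS
# tokens included, as the template fix above requires.

def build_llama2_prompt(system, turns):
    """
    system: optional system message string.
    turns: list of (user, assistant) pairs; the final assistant
    reply may be None to leave the prompt open for generation.
    """
    prompt = ""
    for i, (user, assistant) in enumerate(turns):
        if i == 0 and system:
            # The system message is folded into the first user turn.
            user = f"<<SYS>>\n{system}\n<</SYS>>\n\n{user}"
        # Each turn starts with BOS and wraps the user text in [INST].
        prompt += f"<s>[INST] {user} [/INST]"
        if assistant is not None:
            # Completed assistant replies are terminated with EOS.
            prompt += f" {assistant} </s>"
    return prompt
```

Omitting the BOS/EOS tokens is a common source of degraded multi-turn output with Llama-2 chat models, which is why the template fix matters.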