Recent changes causing short low-token responses with little to no RP Text. #331
EDIT: Noticed that the OP is using Pygmalion. I'm using LLaMA 7b, so it might be a general issue rather than a model-specific issue. Facing the same issue, but I'm using a GPU (1060 6GB). Responses were much more verbose and lengthier earlier, but now it always feels like I'm talking to a rude person who gives short responses lol. I'm testing with the inbuilt Chiharu Yamada "example" bot.
Super frustrating, as before the update this was the best bot I have ever tried for RP. Whatever happened, it has regressed severely. I wish I hadn't updated, or that there was full versioning available so I could roll back until they can fix this.
That's #119 and I'm not happy with it either.
I played with this some more, after testing larger models like LLaMA that generate and generate with the right preset. Yes, the example dialogue appears to do nothing now. Even when I see it in the chat settings, it doesn't have much effect on the style of writing. The greeting message has more impact: on characters where it is long, they are more likely to write long sentences. Pygmalion is giving me one or two sentences and, like you said, <20 tokens. But isn't this the way Tavern/Kobold do it too? I thought that example dialogue was sent with the context every time, and previously here it went into the chat history? Isn't all of that context? Unfortunately I can't see what happens behind the scenes here, unlike with Kobold.
Thing is, I have never used other/larger models. I have always been using Pygmalion 6B, the version that was available on Feb. 11, 2023.
That is not how it used to work, at least not for me, as you can see from this screenshot:
Previously, most responses were in the 30+ token range, with 60-80 token responses being common. Responses were longer, and the model would roleplay properly; it wasn't a struggle to get it to output longer text.
I don't know; I have never used Tavern, and I have only done the smallest amount of experimentation with Kobold, and not locally, as I did not see the option for pure-CPU operation in any of the Kobold setups I tried. I do know that the example dialogue was never visible before this started happening. I also know that a while back, Oobabooga used to be able to send the entire chat history as context, but that was changed, and it has only sent 2048 tokens as context for quite some time (this change happened well before this problem started, though, so I don't think the two are related).
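The "only 2048 tokens of history" behavior described above can be sketched roughly like this. This is a hypothetical illustration, not the webui's actual code; the function name, the stand-in tokenizer, and the reserved amount are all assumptions for the sake of the example:

```python
def truncate_history(messages, tokens_of, context_limit=2048, reserved_for_reply=200):
    """Keep only the newest messages whose combined token count fits in the
    context window after reserving room for the generated reply."""
    budget = context_limit - reserved_for_reply
    kept, used = [], 0
    for msg in reversed(messages):  # walk newest-to-oldest
        cost = tokens_of(msg)
        if used + cost > budget:
            break  # older messages no longer fit
        kept.append(msg)
        used += cost
    return list(reversed(kept))  # restore chronological order

# Crude stand-in tokenizer: roughly one token per whitespace-separated word.
approx_tokens = lambda m: len(m.split())
```

Anything that shrinks the budget (for example reserving more tokens for generation) silently pushes older messages, and potentially the example dialogue, out of the prompt.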
Either way, something fundamental has changed within the last 2-3 days, and text generation no longer functions as it did prior to those changes. Perhaps it is a change made by Oobabooga? Maybe he could shed some light on the situation.
The simplest solution would be to find the commit that caused this and update only up to the prior one. 2048 is the limit for most of these models besides RWKV; I think that has been with us for a long time. Maybe @Xabab knows when the change happened, because it sounds like it was 2 weeks ago. We can also just change the behavior once we know, and see if it makes a difference.
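Finding the commit that caused a regression like this can be done mechanically with `git bisect`. The following is a self-contained demo in a throwaway repo (the repo, commits, and "bug" file are all invented for illustration); in a real checkout you would pass your own last-known-good commit and test the webui by hand at each step:

```shell
# Demo of git-bisect mechanics: commit c3 introduces a "bug" file,
# and bisect finds it automatically.
set -e
tmp=$(mktemp -d) && cd "$tmp" && git init -q
g() { git -c user.email=demo@example.com -c user.name=demo "$@"; }
g commit --allow-empty -qm "c1"            # known-good baseline
good=$(git rev-parse HEAD)
g commit --allow-empty -qm "c2"
touch bug && git add bug && g commit -qm "regression lands here"
g commit --allow-empty -qm "c4"
# Search between the known-good commit and HEAD; the test script
# exits non-zero exactly when the regression is present.
git bisect start HEAD "$good"
git bisect run sh -c 'test ! -e bug' | grep "first bad commit"
git bisect reset
```

Without `git bisect run`, you would instead test manually at each checkout and answer with `git bisect good` or `git bisect bad` until git names the first bad commit.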
I would be okay with going back to an earlier commit, but then I would lose the ability to run the LLaMA 7b model in 4-bit, which is the only way it will run on my 1060 6GB.
Is this possible using the one-click installer, or would that be something that you would have to do on your end?
Fair. I think what happened was a change in how the UI references the max value being sent.
The change that caused this was only a few days ago, not weeks. It was working fine on Saturday/Sunday, March 11th/12th; I was getting great responses then. I updated on Monday, March 13th, but only had a chance to send a quick couple of messages before I had to go to bed, which was when I initially noticed it. I did another update on March 14th, and that was when I saw that Monday's behavior wasn't just a one-off. And I don't change models: I always use Pygmalion 6B, and I haven't updated it since I initially downloaded it on Feb. 11, so that isn't the issue. I update by re-running the install.bat script.
Has anyone managed to figure out how to roll back, or what to roll back to? Again, I use the one-click installer, so I am unclear on how to do this myself. Hoping to hear from the project staff at some point.
I found the commit: e861e68. Like you said, 2 weeks ago, but anyone else that wants to get rid of it can probably do a `git revert`. As to what else is doing that, who knows; so many changes. You can browse the repo if you don't do git: click a commit like this one, a95592f, then hit "Browse files" and you can download a zip of the repo at that commit. That is your "backup", which you can then replace your files with. If you open that batch file here or in a text editor, you can see what it's doing.
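The revert workflow sketched above looks like this in practice. This is a throwaway-repo demo (the file name, commit messages, and contents are invented); in a real checkout, the suspect hash would be e861e68 from this thread:

```shell
# Demo of "git revert" as a rollback: undo one commit's changes
# without losing the history that came after it.
set -e
tmp=$(mktemp -d) && cd "$tmp" && git init -q
g() { git -c user.email=demo@example.com -c user.name=demo "$@"; }
echo "old behavior" > chat.py && git add chat.py && g commit -qm "good"
echo "new behavior" > chat.py && git add chat.py && g commit -qm "suspect change"
suspect=$(git rev-parse HEAD)
g commit --allow-empty -qm "later work"
# Adds a new commit that applies the reverse of the suspect commit:
g revert --no-edit "$suspect"
cat chat.py   # back to "old behavior", later work intact
```

Unlike checking out an old commit wholesale, `git revert` keeps every later change (such as 4-bit LLaMA support) and only backs out the one suspect commit, which may cause conflicts if later commits touched the same lines.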
Strange. I'm trying to find where I said it was 2 weeks ago; Sunday, March 13th is not even 1 week ago. That's when it was last working properly, as per my screenshot here. I just hope someone from the dev team can actually address the issue and fix it, or at least explain why such a serious regression happened. I will try to roll it back tomorrow, but I am hoping that they will properly restore the functionality that we had before Sunday, Mar. 13, 2023.
I meant that you said 2 weeks was too long ago. But other people here really want the chat history in a different place. Have to find out what actually did it. The dev "team" is ooba, and that's it.
Aah fair.
Yeah, hopefully @oobabooga can get this figured out.
Otherwise, I can't think of anything. Chiharu is passing the "Hi" test with the Debug preset, with the same response that she gave 2 months ago.
Why is your max_new_tokens set so high? Try reducing this number to 200.
That's odd, I have been using
That doesn't mean you should be using max_new_tokens=1000 when your average reply size is less than 100 tokens; 900 tokens of history are being wasted.
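The arithmetic behind that advice, sketched with the numbers from this thread (a 2048-token context window; the function name is invented for illustration):

```python
CONTEXT_WINDOW = 2048  # typical limit for these models, per the discussion above

def history_budget(max_new_tokens: int) -> int:
    """Tokens left for the character definition and chat history
    once room for the generated reply has been reserved."""
    return CONTEXT_WINDOW - max_new_tokens

# max_new_tokens=1000 reserves 1000 tokens for the reply, but if replies
# average under 100 tokens, ~900 reserved tokens go unused and the chat
# history is truncated ~900 tokens earlier than it needs to be.
print(history_budget(1000))  # 1048 tokens of history
print(history_budget(200))   # 1848 tokens of history
```

Lowering max_new_tokens to roughly the longest reply you actually want frees the rest of the window for history and example dialogue.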
Fair. I can't remember if I had asked for this before, but would it be possible to add tooltips or an info panel to explain the various settings? Or at least add the info to the readme? I feel like that would be quite useful for more people than just myself. I will back up my models directory and do a fresh install to make sure I am running the latest version, and update this issue tonight after I get home from work.
+1 for the hover tooltips, would be neat |
Upgraded to the newest version of the WebUI. Still not getting the quantity and quality of RP dialogue I had been getting with the previous install, although I have only been using Kawaii so far. I will load "Katie" in tomorrow and see if I can get her to produce any better results. I am currently using the default profile for Pygmalion, but have turned the generation attempts up to 3.
Since you are mainly using Pygmalion, you can try running an old version side by side and compare.
I don't think there is an issue. |
commit 0cbe2dd Author: oobabooga <[email protected]> Date: Sat Mar 18 12:24:54 2023 -0300 Update README.md commit 36ac7be Merge: d2a7fac 705f513 Author: oobabooga <[email protected]> Date: Sat Mar 18 11:57:10 2023 -0300 Merge pull request oobabooga#407 from ThisIsPIRI/gitignore Add loras to .gitignore commit d2a7fac Author: oobabooga <[email protected]> Date: Sat Mar 18 11:56:04 2023 -0300 Use pip instead of conda for pytorch commit 705f513 Author: ThisIsPIRI <[email protected]> Date: Sat Mar 18 23:33:24 2023 +0900 Add loras to .gitignore commit a0b1a30 Author: oobabooga <[email protected]> Date: Sat Mar 18 11:23:56 2023 -0300 Specify torchvision/torchaudio versions commit c753261 Author: oobabooga <[email protected]> Date: Sat Mar 18 10:55:57 2023 -0300 Disable stop_at_newline by default commit 7c945cf Author: oobabooga <[email protected]> Date: Sat Mar 18 10:55:24 2023 -0300 Don't include PeftModel every time commit 86b9900 Author: oobabooga <[email protected]> Date: Sat Mar 18 10:27:52 2023 -0300 Remove rwkv dependency commit a163807 Author: oobabooga <[email protected]> Date: Sat Mar 18 03:07:27 2023 -0300 Update README.md commit a7acfa4 Author: oobabooga <[email protected]> Date: Fri Mar 17 22:57:46 2023 -0300 Update README.md commit bcd8afd Merge: dc35861 e26763a Author: oobabooga <[email protected]> Date: Fri Mar 17 22:57:28 2023 -0300 Merge pull request oobabooga#393 from WojtekKowaluk/mps_support Fix for MPS support on Apple Silicon commit e26763a Author: oobabooga <[email protected]> Date: Fri Mar 17 22:56:46 2023 -0300 Minor changes commit 7994b58 Author: Wojtek Kowaluk <[email protected]> Date: Sat Mar 18 02:27:26 2023 +0100 clean up duplicated code commit dc35861 Author: oobabooga <[email protected]> Date: Fri Mar 17 21:05:17 2023 -0300 Update README.md commit 30939e2 Author: Wojtek Kowaluk <[email protected]> Date: Sat Mar 18 00:56:23 2023 +0100 add mps support on apple silicon commit 7d97da1 Author: Wojtek Kowaluk <[email protected]> Date: Sat Mar 18 
00:17:05 2023 +0100 add venv paths to gitignore commit f2a5ca7 Author: oobabooga <[email protected]> Date: Fri Mar 17 20:50:27 2023 -0300 Update README.md commit 8c8286b Author: oobabooga <[email protected]> Date: Fri Mar 17 20:49:40 2023 -0300 Update README.md commit 0c05e65 Author: oobabooga <[email protected]> Date: Fri Mar 17 20:25:42 2023 -0300 Update README.md commit adc2003 Merge: 20f5b45 66e8d12 Author: oobabooga <[email protected]> Date: Fri Mar 17 20:19:33 2023 -0300 Merge branch 'main' of github.com:oobabooga/text-generation-webui commit 20f5b45 Author: oobabooga <[email protected]> Date: Fri Mar 17 20:19:04 2023 -0300 Add parameters reference oobabooga#386 oobabooga#331 commit 66e8d12 Author: oobabooga <[email protected]> Date: Fri Mar 17 19:59:37 2023 -0300 Update README.md commit 9a87111 Author: oobabooga <[email protected]> Date: Fri Mar 17 19:52:22 2023 -0300 Update README.md commit d4f38b6 Author: oobabooga <[email protected]> Date: Fri Mar 17 18:57:48 2023 -0300 Update README.md commit ad7c829 Author: oobabooga <[email protected]> Date: Fri Mar 17 18:55:01 2023 -0300 Update README.md commit 4426f94 Author: oobabooga <[email protected]> Date: Fri Mar 17 18:51:07 2023 -0300 Update the installation instructions. 
Tldr use WSL commit 9256e93 Author: oobabooga <[email protected]> Date: Fri Mar 17 17:45:28 2023 -0300 Add some LoRA params commit 9ed2c45 Author: oobabooga <[email protected]> Date: Fri Mar 17 16:06:11 2023 -0300 Use markdown in the "HTML" tab commit f0b2645 Author: oobabooga <[email protected]> Date: Fri Mar 17 13:07:17 2023 -0300 Add a comment commit 7da742e Merge: ebef4a5 02e1113 Author: oobabooga <[email protected]> Date: Fri Mar 17 12:37:23 2023 -0300 Merge pull request oobabooga#207 from EliasVincent/stt-extension Extension: Whisper Speech-To-Text Input commit ebef4a5 Author: oobabooga <[email protected]> Date: Fri Mar 17 11:58:45 2023 -0300 Update README commit cdfa787 Author: oobabooga <[email protected]> Date: Fri Mar 17 11:53:28 2023 -0300 Update README commit 3bda907 Merge: 4c13067 614dad0 Author: oobabooga <[email protected]> Date: Fri Mar 17 11:48:48 2023 -0300 Merge pull request oobabooga#366 from oobabooga/lora Add LoRA support commit 614dad0 Author: oobabooga <[email protected]> Date: Fri Mar 17 11:43:11 2023 -0300 Remove unused import commit a717fd7 Author: oobabooga <[email protected]> Date: Fri Mar 17 11:42:25 2023 -0300 Sort the imports commit 7d97287 Author: oobabooga <[email protected]> Date: Fri Mar 17 11:41:12 2023 -0300 Update settings-template.json commit 29fe7b1 Author: oobabooga <[email protected]> Date: Fri Mar 17 11:39:48 2023 -0300 Remove LoRA tab, move it into the Parameters menu commit 214dc68 Author: oobabooga <[email protected]> Date: Fri Mar 17 11:24:52 2023 -0300 Several QoL changes related to LoRA commit 4c13067 Merge: ee164d1 53b6a66 Author: oobabooga <[email protected]> Date: Fri Mar 17 09:47:57 2023 -0300 Merge pull request oobabooga#377 from askmyteapot/Fix-Multi-gpu-GPTQ-Llama-no-tokens Update GPTQ_Loader.py commit 53b6a66 Author: askmyteapot <[email protected]> Date: Fri Mar 17 18:34:13 2023 +1000 Update GPTQ_Loader.py Correcting decoder layer for renamed class. 
commit 0cecfc6 Author: oobabooga <[email protected]> Date: Thu Mar 16 21:35:53 2023 -0300 Add files commit 104293f Author: oobabooga <[email protected]> Date: Thu Mar 16 21:31:39 2023 -0300 Add LoRA support commit ee164d1 Author: oobabooga <[email protected]> Date: Thu Mar 16 18:22:16 2023 -0300 Don't split the layers in 8-bit mode by default commit 0a2aa79 Merge: dd1c596 e085cb4 Author: oobabooga <[email protected]> Date: Thu Mar 16 17:27:03 2023 -0300 Merge pull request oobabooga#358 from mayaeary/8bit-offload Add support for memory maps with --load-in-8bit commit e085cb4 Author: oobabooga <[email protected]> Date: Thu Mar 16 13:34:23 2023 -0300 Small changes commit dd1c596 Author: oobabooga <[email protected]> Date: Thu Mar 16 12:45:27 2023 -0300 Update README commit 38d7017 Author: oobabooga <[email protected]> Date: Thu Mar 16 12:44:03 2023 -0300 Add all command-line flags to "Interface mode" commit 83cb20a Author: awoo <awoo@awoo> Date: Thu Mar 16 18:42:53 2023 +0300 Add support for --gpu-memory witn --load-in-8bit commit 23a5e88 Author: oobabooga <[email protected]> Date: Thu Mar 16 11:16:17 2023 -0300 The LLaMA PR has been merged into transformers huggingface/transformers#21955 The tokenizer class has been changed from "LLaMATokenizer" to "LlamaTokenizer" It is necessary to edit this change in every tokenizer_config.json that you had for LLaMA so far. 
commit d54f3f4 Author: oobabooga <[email protected]> Date: Thu Mar 16 10:19:00 2023 -0300 Add no-stream checkbox to the interface commit 1c37896 Author: oobabooga <[email protected]> Date: Thu Mar 16 10:18:34 2023 -0300 Remove unused imports commit a577fb1 Author: oobabooga <[email protected]> Date: Thu Mar 16 00:46:59 2023 -0300 Keep GALACTICA special tokens (oobabooga#300) commit 25a00ea Author: oobabooga <[email protected]> Date: Wed Mar 15 23:43:35 2023 -0300 Add "Experimental" warning commit 599d313 Author: oobabooga <[email protected]> Date: Wed Mar 15 23:34:08 2023 -0300 Increase the reload timeout a bit commit 4d64a57 Author: oobabooga <[email protected]> Date: Wed Mar 15 23:29:56 2023 -0300 Add Interface mode tab commit b501722 Merge: ffb8986 d3a280e Author: oobabooga <[email protected]> Date: Wed Mar 15 20:46:04 2023 -0300 Merge branch 'main' of github.com:oobabooga/text-generation-webui commit ffb8986 Author: oobabooga <[email protected]> Date: Wed Mar 15 20:44:34 2023 -0300 Mini refactor commit d3a280e Merge: 445ebf0 0552ab2 Author: oobabooga <[email protected]> Date: Wed Mar 15 20:22:08 2023 -0300 Merge pull request oobabooga#348 from mayaeary/feature/koboldai-api-share flask_cloudflared for shared tunnels commit 445ebf0 Author: oobabooga <[email protected]> Date: Wed Mar 15 20:06:46 2023 -0300 Update README.md commit 0552ab2 Author: awoo <awoo@awoo> Date: Thu Mar 16 02:00:16 2023 +0300 flask_cloudflared for shared tunnels commit e9e76bb Author: oobabooga <[email protected]> Date: Wed Mar 15 19:42:29 2023 -0300 Delete WSL.md commit 09045e4 Author: oobabooga <[email protected]> Date: Wed Mar 15 19:42:06 2023 -0300 Add WSL guide commit 9ff5033 Merge: 66256ac 055edc7 Author: oobabooga <[email protected]> Date: Wed Mar 15 19:37:26 2023 -0300 Merge pull request oobabooga#345 from jfryton/main Guide for Windows Subsystem for Linux commit 66256ac Author: oobabooga <[email protected]> Date: Wed Mar 15 19:31:27 2023 -0300 Make the "no GPU has been detected" 
message more descriptive commit 055edc7 Author: jfryton <[email protected]> Date: Wed Mar 15 18:21:14 2023 -0400 Update WSL.md commit 89883a3 Author: jfryton <[email protected]> Date: Wed Mar 15 18:20:21 2023 -0400 Create WSL.md guide for setting up WSL Ubuntu Quick start guide for Windows Subsystem for Linux (Ubuntu), including port forwarding to enable local network webui access. commit 67d6247 Author: oobabooga <[email protected]> Date: Wed Mar 15 18:56:26 2023 -0300 Further reorganize chat UI commit ab12a17 Merge: 6a1787a 3028112 Author: oobabooga <[email protected]> Date: Wed Mar 15 18:31:39 2023 -0300 Merge pull request oobabooga#342 from mayaeary/koboldai-api Extension: KoboldAI api commit 3028112 Author: awoo <awoo@awoo> Date: Wed Mar 15 23:52:46 2023 +0300 KoboldAI api commit 6a1787a Author: oobabooga <[email protected]> Date: Wed Mar 15 16:55:40 2023 -0300 CSS fixes commit 3047ed8 Author: oobabooga <[email protected]> Date: Wed Mar 15 16:41:38 2023 -0300 CSS fix commit 87b84d2 Author: oobabooga <[email protected]> Date: Wed Mar 15 16:39:59 2023 -0300 CSS fix commit c1959c2 Author: oobabooga <[email protected]> Date: Wed Mar 15 16:34:31 2023 -0300 Show/hide the extensions block using javascript commit 348596f Author: oobabooga <[email protected]> Date: Wed Mar 15 15:11:16 2023 -0300 Fix broken extensions commit c5f14fb Author: oobabooga <[email protected]> Date: Wed Mar 15 14:19:28 2023 -0300 Optimize the HTML generation speed commit bf812c4 Author: oobabooga <[email protected]> Date: Wed Mar 15 14:05:35 2023 -0300 Minor fix commit 658849d Author: oobabooga <[email protected]> Date: Wed Mar 15 13:29:00 2023 -0300 Move a checkbutton commit 05ee323 Author: oobabooga <[email protected]> Date: Wed Mar 15 13:26:32 2023 -0300 Rename a file commit 40c9e46 Author: oobabooga <[email protected]> Date: Wed Mar 15 13:25:28 2023 -0300 Add file commit d30a140 Author: oobabooga <[email protected]> Date: Wed Mar 15 13:24:54 2023 -0300 Further reorganize the UI commit 
ffc6cb3 Merge: cf2da86 3b62bd1 Author: oobabooga <[email protected]> Date: Wed Mar 15 12:56:21 2023 -0300 Merge pull request oobabooga#325 from Ph0rk0z/fix-RWKV-Names Fix rwkv names commit cf2da86 Author: oobabooga <[email protected]> Date: Wed Mar 15 12:51:13 2023 -0300 Prevent *Is typing* from disappearing instantly while streaming commit 4146ac4 Merge: 1413931 29b7c5a Author: oobabooga <[email protected]> Date: Wed Mar 15 12:47:41 2023 -0300 Merge pull request oobabooga#266 from HideLord/main Adding markdown support and slight refactoring. commit 29b7c5a Author: oobabooga <[email protected]> Date: Wed Mar 15 12:40:03 2023 -0300 Sort the requirements commit ec972b8 Author: oobabooga <[email protected]> Date: Wed Mar 15 12:33:26 2023 -0300 Move all css/js into separate files commit 693b53d Merge: 63c5a13 1413931 Author: oobabooga <[email protected]> Date: Wed Mar 15 12:08:56 2023 -0300 Merge branch 'main' into HideLord-main commit 1413931 Author: oobabooga <[email protected]> Date: Wed Mar 15 12:01:32 2023 -0300 Add a header bar and redesign the interface (oobabooga#293) commit 9d6a625 Author: oobabooga <[email protected]> Date: Wed Mar 15 11:04:30 2023 -0300 Add 'hallucinations' filter oobabooga#326 This breaks the API since a new parameter has been added. It should be a one-line fix. See api-example.py. commit 3b62bd1 Author: Forkoz <[email protected]> Date: Tue Mar 14 21:23:39 2023 +0000 Remove PTH extension from RWKV When loading the current model was blank unless you typed it out. 
commit f0f325e Author: Forkoz <[email protected]> Date: Tue Mar 14 21:21:47 2023 +0000 Remove Json from loading no more 20b tokenizer commit 128d18e Author: oobabooga <[email protected]> Date: Tue Mar 14 17:57:25 2023 -0300 Update README.md commit 1236c7f Author: oobabooga <[email protected]> Date: Tue Mar 14 17:56:15 2023 -0300 Update README.md commit b419dff Author: oobabooga <[email protected]> Date: Tue Mar 14 17:55:35 2023 -0300 Update README.md commit 72d207c Author: oobabooga <[email protected]> Date: Tue Mar 14 16:31:27 2023 -0300 Remove the chat API It is not implemented, has not been tested, and this is causing confusion. commit afc5339 Author: oobabooga <[email protected]> Date: Tue Mar 14 16:04:17 2023 -0300 Remove "eval" statements from text generation functions commit 5c05223 Merge: b327554 87192e2 Author: oobabooga <[email protected]> Date: Tue Mar 14 08:05:24 2023 -0300 Merge pull request oobabooga#295 from Zerogoki00/opt4-bit Add support for quantized OPT models commit 87192e2 Author: oobabooga <[email protected]> Date: Tue Mar 14 08:02:21 2023 -0300 Update README commit 265ba38 Author: oobabooga <[email protected]> Date: Tue Mar 14 07:56:31 2023 -0300 Rename a file, add deprecation warning for --load-in-4bit commit 3da73e4 Merge: 518e5c4 b327554 Author: oobabooga <[email protected]> Date: Tue Mar 14 07:50:36 2023 -0300 Merge branch 'main' into Zerogoki00-opt4-bit commit b327554 Author: oobabooga <[email protected]> Date: Tue Mar 14 00:18:13 2023 -0300 Update bug_report_template.yml commit 33b9a15 Author: oobabooga <[email protected]> Date: Mon Mar 13 23:03:16 2023 -0300 Delete config.yml commit b5e0d3c Author: oobabooga <[email protected]> Date: Mon Mar 13 23:02:25 2023 -0300 Create config.yml commit 7f301fd Merge: d685332 02d4075 Author: oobabooga <[email protected]> Date: Mon Mar 13 22:41:21 2023 -0300 Merge pull request oobabooga#305 from oobabooga/dependabot/pip/accelerate-0.17.1 Bump accelerate from 0.17.0 to 0.17.1 commit 02d4075 Author: 
dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Date: Tue Mar 14 01:40:42 2023 +0000 Bump accelerate from 0.17.0 to 0.17.1 Bumps [accelerate](https://github.com/huggingface/accelerate) from 0.17.0 to 0.17.1. - [Release notes](https://github.com/huggingface/accelerate/releases) - [Commits](huggingface/accelerate@v0.17.0...v0.17.1) --- updated-dependencies: - dependency-name: accelerate dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <[email protected]> commit d685332 Merge: 481ef3c df83088 Author: oobabooga <[email protected]> Date: Mon Mar 13 22:39:59 2023 -0300 Merge pull request oobabooga#307 from oobabooga/dependabot/pip/bitsandbytes-0.37.1 Bump bitsandbytes from 0.37.0 to 0.37.1 commit 481ef3c Merge: a0ef82c 715c3ec Author: oobabooga <[email protected]> Date: Mon Mar 13 22:39:22 2023 -0300 Merge pull request oobabooga#304 from oobabooga/dependabot/pip/rwkv-0.4.2 Bump rwkv from 0.3.1 to 0.4.2 commit df83088 Author: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Date: Tue Mar 14 01:36:18 2023 +0000 Bump bitsandbytes from 0.37.0 to 0.37.1 Bumps [bitsandbytes](https://github.com/TimDettmers/bitsandbytes) from 0.37.0 to 0.37.1. - [Release notes](https://github.com/TimDettmers/bitsandbytes/releases) - [Changelog](https://github.com/TimDettmers/bitsandbytes/blob/main/CHANGELOG.md) - [Commits](https://github.com/TimDettmers/bitsandbytes/commits) --- updated-dependencies: - dependency-name: bitsandbytes dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <[email protected]> commit 715c3ec Author: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Date: Tue Mar 14 01:36:02 2023 +0000 Bump rwkv from 0.3.1 to 0.4.2 Bumps [rwkv](https://github.com/BlinkDL/ChatRWKV) from 0.3.1 to 0.4.2. 
- [Release notes](https://github.com/BlinkDL/ChatRWKV/releases) - [Commits](https://github.com/BlinkDL/ChatRWKV/commits) --- updated-dependencies: - dependency-name: rwkv dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <[email protected]> commit a0ef82c Author: oobabooga <[email protected]> Date: Mon Mar 13 22:35:28 2023 -0300 Activate dependabot commit 3fb8196 Author: oobabooga <[email protected]> Date: Mon Mar 13 22:28:00 2023 -0300 Implement "*Is recording a voice message...*" for TTS oobabooga#303 commit 0dab2c5 Author: oobabooga <[email protected]> Date: Mon Mar 13 22:18:03 2023 -0300 Update feature_request.md commit 79e519c Author: oobabooga <[email protected]> Date: Mon Mar 13 20:03:08 2023 -0300 Update stale.yml commit 1571458 Author: oobabooga <[email protected]> Date: Mon Mar 13 19:39:21 2023 -0300 Update stale.yml commit bad0b0a Author: oobabooga <[email protected]> Date: Mon Mar 13 19:20:18 2023 -0300 Update stale.yml commit c805843 Author: oobabooga <[email protected]> Date: Mon Mar 13 19:09:06 2023 -0300 Update stale.yml commit 60cc7d3 Author: oobabooga <[email protected]> Date: Mon Mar 13 18:53:11 2023 -0300 Update stale.yml commit 7c17613 Author: oobabooga <[email protected]> Date: Mon Mar 13 18:47:31 2023 -0300 Update and rename .github/workflow/stale.yml to .github/workflows/stale.yml commit 47c941c Author: oobabooga <[email protected]> Date: Mon Mar 13 18:37:35 2023 -0300 Create stale.yml commit 511b136 Author: oobabooga <[email protected]> Date: Mon Mar 13 18:29:38 2023 -0300 Update bug_report_template.yml commit d6763a6 Author: oobabooga <[email protected]> Date: Mon Mar 13 18:27:24 2023 -0300 Update feature_request.md commit c6ecb35 Author: oobabooga <[email protected]> Date: Mon Mar 13 18:26:28 2023 -0300 Update feature_request.md commit 6846427 Author: oobabooga <[email protected]> Date: Mon Mar 13 18:19:07 2023 -0300 Update feature_request.md commit bcfb7d7 
Author: oobabooga <[email protected]> Date: Mon Mar 13 18:16:18 2023 -0300 Update bug_report_template.yml commit ed30bd3 Author: oobabooga <[email protected]> Date: Mon Mar 13 18:14:54 2023 -0300 Update bug_report_template.yml commit aee3b53 Author: oobabooga <[email protected]> Date: Mon Mar 13 18:14:31 2023 -0300 Update bug_report_template.yml commit 7dbc071 Author: oobabooga <[email protected]> Date: Mon Mar 13 18:09:58 2023 -0300 Delete bug_report.md commit 69d4b81 Author: oobabooga <[email protected]> Date: Mon Mar 13 18:09:37 2023 -0300 Create bug_report_template.yml commit 0a75584 Author: oobabooga <[email protected]> Date: Mon Mar 13 18:07:08 2023 -0300 Create issue templates commit 02e1113 Author: EliasVincent <[email protected]> Date: Mon Mar 13 21:41:19 2023 +0100 add auto-transcribe option commit 518e5c4 Author: oobabooga <[email protected]> Date: Mon Mar 13 16:45:08 2023 -0300 Some minor fixes to the GPTQ loader commit 8778b75 Author: Ayanami Rei <[email protected]> Date: Mon Mar 13 22:11:40 2023 +0300 use updated load_quantized commit a6a6522 Author: Ayanami Rei <[email protected]> Date: Mon Mar 13 22:11:32 2023 +0300 determine model type from model name commit b6c5c57 Author: Ayanami Rei <[email protected]> Date: Mon Mar 13 22:11:08 2023 +0300 remove default value from argument commit 63c5a13 Merge: 683556f 7ab45fb Author: Alexander Hristov Hristov <[email protected]> Date: Mon Mar 13 19:50:08 2023 +0200 Merge branch 'main' into main commit e1c952c Author: Ayanami Rei <[email protected]> Date: Mon Mar 13 20:22:38 2023 +0300 make argument non case-sensitive commit b746250 Author: Ayanami Rei <[email protected]> Date: Mon Mar 13 20:18:56 2023 +0300 Update README commit 3c9afd5 Author: Ayanami Rei <[email protected]> Date: Mon Mar 13 20:14:40 2023 +0300 rename method commit 1b99ed6 Author: Ayanami Rei <[email protected]> Date: Mon Mar 13 20:01:34 2023 +0300 add argument --gptq-model-type and remove duplicate arguments commit edbc611 Author: Ayanami Rei 
<[email protected]> Date: Mon Mar 13 20:00:38 2023 +0300 use new quant loader commit 345b6de Author: Ayanami Rei <[email protected]> Date: Mon Mar 13 19:59:57 2023 +0300 refactor quant models loader and add support of OPT commit 48aa528 Author: EliasVincent <[email protected]> Date: Sun Mar 12 21:03:07 2023 +0100 use Gradio microphone input instead commit 683556f Author: HideLord <[email protected]> Date: Sun Mar 12 21:34:09 2023 +0200 Adding markdown support and slight refactoring. commit 3b41459 Merge: 1c0bda3 3375eae Author: Elias Vincent Simon <[email protected]> Date: Sun Mar 12 19:19:43 2023 +0100 Merge branch 'oobabooga:main' into stt-extension commit 1c0bda3 Author: EliasVincent <[email protected]> Date: Fri Mar 10 11:47:16 2023 +0100 added installation instructions commit a24fa78 Author: EliasVincent <[email protected]> Date: Thu Mar 9 21:18:46 2023 +0100 tweaked Whisper parameters commit d5efc06 Merge: 00359ba 3341447 Author: Elias Vincent Simon <[email protected]> Date: Thu Mar 9 21:05:34 2023 +0100 Merge branch 'oobabooga:main' into stt-extension commit 00359ba Author: EliasVincent <[email protected]> Date: Thu Mar 9 21:03:49 2023 +0100 interactive preview window commit 7a03d0b Author: EliasVincent <[email protected]> Date: Thu Mar 9 20:33:00 2023 +0100 cleanup commit 4c72e43 Author: EliasVincent <[email protected]> Date: Thu Mar 9 12:46:50 2023 +0100 first implementation
Describe the bug
On Sunday, March 12, 2023, I was able to have good roleplay with my bot, receiving long responses with a high number of tokens per response. I am running on CPU, so I was receiving a response within 60-350 seconds (average about 150 seconds). The bot would use *roleplay* tags in its responses, and was generating responses containing up to 6 lines of text.
I updated to the latest version of the one-click installer using the install.bat script on March 14, 2023, and updated again on March 15, 2023, continuing with the same chat as before (which had never given me problems up until now, and contained a large number of *roleplay*-rich responses).
After the recent updates, the generation times have dropped massively, but I am only receiving short, one-line responses with little to no *roleplay* whatsoever. The character is now also confusing roles and not responding correctly, and the number of tokens generated per message has dropped: before the update, 30-90 tokens were being generated; now, only shorter (generally 9-20 token) responses are generated. Would it be possible to return to the old method of response generation, or fix the response generation so that it is more roleplay-capable again?
I am using Pygmalion 6B as downloaded on Feb. 11, 2023 using the old download script (by selecting PygmalionAI/Pygmalion-6b).
Is there an existing issue for this?
Reproduction
This simply happens with any version past March 12, 2023, using my old chat log and a character which had been working very well up until this point.
Screenshot
There are more generations than responses shown here as I tried several times to regenerate responses.
I am RPing as a caregiver for a disabled bot, so please excuse the strange subject matter.
My generation settings, which had been giving me amazing, high-quality, long responses with plenty of in-context *roleplay* from the bot up until this point.
The quality of responses I was able to get pre-update. The *roleplay* aspect was much better before the recent changes.
Logs
System Info