
EGBA does not consistently output instructions to code block #3

TohnJravolta opened this issue Dec 14, 2023 · 3 comments
TohnJravolta commented Dec 14, 2023

This seems to vary a bit with GPT's "moods" lol. The goal is to make this process consistent through instruction improvement. I will first experiment with injecting "mood stabilizing" text to "guide" CGW down the most conducive and constructive lines of reasoning, with the aim of evening out how it handles the more detailed embedded instructions (like the code block request instruction, for example). If that doesn't work, I will refactor that part of the instruction set specifically, and then tune those two parameters until CGW outputs consistently in testing.


TohnJravolta commented Dec 14, 2023

Case Sensitivity

After some testing, it seems like case sensitivity might be inherently built into how the GPT reads its instruction set ('IS').

This seems to lead the GPT to struggle to recognize commands that do not match the command exactly as written in the instruction set, so it behaves inconsistently when engaging with the command. (My theory is that in that context it starts to generalize instead of staying inside its current line of thought when checking these against its 'IS'.)

At the time, the commands in the 'IS' file were written in all caps. This was an intentional choice on my part to make them more readable, but also with the intent of having them be case sensitive and then pivoting by adding an instruction telling the GPT to ignore case. That did not work as intended and produced inconsistent results.

When the case matched EXACTLY, the GPT performed much more consistently with user expectations with regard to staying within the logic framework of the 'IS'!


Context Matters

#solution_pending
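As a point of comparison outside the GPT itself, conventional command lookup sidesteps this problem by normalizing case before matching. A minimal sketch of that idea (the command names and descriptions below are hypothetical, not taken from the actual 'IS'):

```python
from typing import Optional

# Hypothetical command table mirroring the all-caps commands in the 'IS'.
COMMANDS = {
    "HELP": "Show the available commands.",
    "MENU": "Return to the main menu.",
    "EXPORT": "Output the result inside a code block.",
}

def resolve_command(user_input: str) -> Optional[str]:
    """Match a command case-insensitively by normalizing to upper case."""
    token = user_input.strip().upper()
    return COMMANDS.get(token)
```

With this kind of normalization, `resolve_command("help")` and `resolve_command("  Export ")` both hit their table entries, while unrecognized input returns `None`. The GPT has no such deterministic lookup step, which may be why "ignore case" instructions alone are unreliable.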

TohnJravolta changed the title from "Instructions for interacting with APIs needs overhauling and to be disabled till there is more knowledge in CGW on this subject" to "EGBA does not consistently output instructions to code block" on Dec 15, 2023
TohnJravolta reopened this on Jan 8, 2024
@TohnJravolta

This issue has become worse since last visiting it, without my having worked on or changed the instruction set while away for the holidays. Before the holidays, around the time of opening this issue, I had made progress on steering the model's behavior toward better following the instructions, and it was working fairly consistently, though sometimes 'goofing' because it wasn't reading its knowledge base. While I was away, the model's behavior changed drastically, and it seemed to stop accessing its knowledge files at all. I'll outline my theory and testing in this comment.
I think there are four parts to this:

• General issues with the 'GPTs' system related to file uploads / accessing knowledge files
• Internal latent model 'mood' variance (see the 'december_drag' hypothesis)
• Back-end hidden instruction changes (unconfirmed, but OpenAI is said to make changes sometimes to deal with the model behaving differently, as demonstrated by rolled-out updates that change how ChatGPT behaves, etc.)
• General service issues (traffic & server load)

From my testing, it seems the GPT started having difficulty accessing its knowledge base; some other GPT builders are reporting this as well. I have encountered issues with uploading files and other difficulties around this before, too. For example, originally I could upload more than one file at a time inside the same browser/system, but now I can randomly only upload one at a time; and back when you could upload more than one at a time, loading more than a few at once caused issues. It now rarely seems to successfully read the PDFs that store the majority of the instruction set. Last night it sometimes kept returning errors when trying to access its knowledge base (the search indicator would spin up and error out), and other times it wouldn't even follow my instructions to check its knowledge when it should have. Today it will finish the task of checking the knowledge base (as indicated by the status indicator in the interface) and then either ignore what's there (an error?) or say that it couldn't access the knowledge file (while not returning an error).

I have changed the instruction document to a plain text file, and that seems to be improving the consistency of the outputs so far.

more testing to come


@TohnJravolta

I managed to somewhat stabilize CGW after the last post to this thread.

I have made some progress researching more solutions; one big thing, I think, will be some info from this Twitter thread:
[screenshot of the Twitter thread]
The author is an experienced developer with one of the most popular (metrics-wise) custom GPTs that wasn't built by OpenAI.

So hopefully implementing some changes from this post might help.
