Version 2.0 #21
Conversation
Oh yeah. And should you for some weird reason want an aarch64 binary of this, here it is. Built on:
> Starting with the fact that I developed it entirely on my Android phone as the final project of a personal challenge to see how complicated of a patch I could actually write this way:
I like that a lot. Must have been very difficult to do.
Regarding the PR overall, some things are not in line with my idea of where this project should go and some things are. I want to sit down soon and figure out a way to implement the new ChatGPT APIs, since they are better according to Greg Brockman and also ten times as cheap.
How about I implement that and cherry-pick some parts of your PR into it? I will, of course, put your email into the co-author field so that GitHub acknowledges your contribution.
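For reference, here is a minimal sketch of what a call to the new chat completions endpoint could look like. It assumes reqwest and serde_json as the HTTP and JSON crates (whether ata would use these is an assumption on my part); the endpoint and the messages shape follow OpenAI's public documentation.

```rust
// Hypothetical sketch of a non-streaming chat completions request.
// The crate choice (reqwest + serde_json) is an assumption, not what ata uses.
use serde_json::json;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = std::env::var("OPENAI_API_KEY")?;
    let body = json!({
        "model": "gpt-3.5-turbo",
        // Chat models take a list of role/content messages instead of the
        // single prompt string used by /v1/completions.
        "messages": [{ "role": "user", "content": "hi there" }]
    });
    let resp: serde_json::Value = reqwest::blocking::Client::new()
        .post("https://api.openai.com/v1/chat/completions")
        .bearer_auth(api_key)
        .json(&body)
        .send()?
        .json()?;
    // The reply lives under choices[0].message.content for chat models.
    println!("{}", resp["choices"][0]["message"]["content"]);
    Ok(())
}
```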
[profile.release.package.ata]
strip = true
Why was this removed? Did it cause errors on Android?
I'm actually not sure. It may be a bug in cargo add; it's also possible that I messed something up when editing these files in vim on Android 😂😂
impl Default for Config {
    fn default() -> Self {
        Self {
            model: "text-davinci-003".into(),
Things move quickly with OpenAI's APIs, and since the release of the ChatGPT models, I think those should be the primary ones. I think that v2 of ata should just drop support for non-chat models for simplicity. What do you think?
I also wonder whether people would actually use settings such as top_p. For people who want to use that, there is https://platform.openai.com/playground; ata aims more at productivity.
Well that's why I added the ability to also specify configuration files by name. That way you can load an alternate one for a certain project.
Actually, I wanted to make it so that every single configuration argument could just as well be specified via the command line or the operating-system environment; in that case you don't even need configuration files. So the order of loading would be: configuration files (if any), then the environment, then arguments. A small sketch of that layering follows.
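As a rough illustration of that precedence, here is a minimal sketch assuming a Config like the one in this diff; the ATA_MODEL variable and the argument handling are made up for the example and are not the actual names in this PR.

```rust
// Hypothetical sketch of layered config resolution: file < environment < CLI.
use std::env;

#[derive(Debug, Clone)]
struct Config {
    model: String,
}

impl Default for Config {
    fn default() -> Self {
        Self { model: "text-davinci-003".into() }
    }
}

fn resolve_config(from_file: Option<Config>, cli_model: Option<String>) -> Config {
    // 1. Start from the configuration file, or built-in defaults if none was given.
    let mut cfg = from_file.unwrap_or_default();
    // 2. Environment variables override the file.
    if let Ok(model) = env::var("ATA_MODEL") {
        cfg.model = model;
    }
    // 3. Command-line arguments override everything else.
    if let Some(model) = cli_model {
        cfg.model = model;
    }
    cfg
}

fn main() {
    let cfg = resolve_config(None, Some("gpt-3.5-turbo".into()));
    println!("{cfg:?}");
}
```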
> I think that v2 of ata should just drop support for non-chat models for simplicity. What do you think?
Sorry, that's basically the opposite of what I had in mind. The reason that this project was done on Android (part of the idea was that I would only work on things that would make using a rooted Android phone better) is that the playground website is extremely broken in Android browsers. The streaming feature barely works, and it constantly loses your text and doesn't get it back through undo, etc.
I also wanted to be able to pipe to ata. I have not quite finished that yet, but I will.
As I see it, ata is most useful as a generic console application which supports all of the GPT APIs. And much like most other console applications, that means that it has an interactive mode and a batch mode.
I think that if you want to use ChatGPT, even by default, that's no problem. But I would not drop support for the existing API, I would just put it behind an option.
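A minimal sketch of how that interactive-versus-batch split could be detected, assuming the decision is based on whether stdin is attached to a terminal; the actual flags and behaviour in this PR may differ.

```rust
// Hypothetical sketch: interactive prompt loop when run from a terminal,
// one-shot batch mode when something is piped in.
use std::io::{self, IsTerminal, Read};

fn main() -> io::Result<()> {
    if io::stdin().is_terminal() {
        // Interactive mode: start the usual readline-style prompt loop here.
        println!("Interactive mode: type a prompt and press Enter.");
    } else {
        // Batch mode: read the whole piped prompt from stdin and answer once.
        let mut prompt = String::new();
        io::stdin().read_to_string(&mut prompt)?;
        println!("Batch mode, prompt was: {}", prompt.trim());
    }
    Ok(())
}
```

Used that way, something like echo 'hi there' | ata would take the batch path, while plain ata would stay interactive.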
Thank you for giving more details. I have been thinking about it a bit more and I see your point. For me, the main goal of ata was also to provide a more productive interface to ChatGPT than is currently available at chat.openai.com.
> As I see it, ata is most useful as a generic console application which supports all of the GPT APIs. And much like most other console applications, that means that it has an interactive mode and a batch mode.
> I think that if you want to use ChatGPT, even by default, that's no problem. But I would not drop support for the existing API, I would just put it behind an option.
Yes, I agree that that would be best. The problem for me is that I do not have time to support all GPT APIs. In the ideal case, I would start adding tests, for example. I currently do not have time for that since my PhD project also has a lot of deadlines at the moment. At the same time, I do want to try to keep up with OpenAI while they are moving at lightning speed. So, the most useful middle ground, in my opinion, is to drop support for older models and only support the much cheaper ChatGPT models. That way, Android and terminal users can have productive interactions with at least one GPT model.
> In the ideal case, I would start adding tests, for example.
More specifically, the complexity of the project is nearing the limit of my own capabilities. Without tests, I cannot increase the complexity much more without introducing more and more bugs. However, tests are complex to write too, since they would require a mock stdout (for example, like kkawakam/rustyline#652 (comment)) and a mock OpenAI server. Both are possible, but they take time that I don't have at the moment.
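A small sketch of the mock-stdout half of that idea: if the printing code takes any writer instead of writing to stdout directly, a test can pass an in-memory buffer and assert on its contents. This is a generic Rust pattern, not code from ata itself, and the function name below is made up for the example.

```rust
// Hypothetical sketch: make output testable by writing to `impl Write`.
use std::io::Write;

fn print_reply(mut out: impl Write, reply: &str) -> std::io::Result<()> {
    writeln!(out, "{reply}")
}

fn main() -> std::io::Result<()> {
    // Production: write to the real stdout.
    print_reply(std::io::stdout(), "Hello!")
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn reply_is_written() {
        // Test: capture the output in an in-memory buffer instead of stdout.
        let mut buf = Vec::new();
        print_reply(&mut buf, "Hello!").unwrap();
        assert_eq!(String::from_utf8(buf).unwrap(), "Hello!\n");
    }
}
```

Mocking the OpenAI server would be the larger piece; the same dependency-injection idea (take the base URL as a parameter and point it at a local test server) applies there.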
That's fine; I think we should just fork. I'm not really going to add tests because I don't really care whether it works for every user in every use case... They can report issues and then I'll fix those. My plan certainly does not include a mock server and a mock stdout.
            suffix: None,
            top_p: 1.0,
            n: 1,
            stream: false,
stream has to be set to true to enable streaming, and that's one of the main features of ata.
Maybe I don't always want streaming and want to pipe to it.
> Maybe I don't always want streaming and want to pipe to it.
How about a Bash script wrapper around curl? Something like:
```bash
#!/usr/bin/env bash
# Join all arguments into a single prompt and send one (non-streaming) completion request.
RESPONSE=$(curl --silent https://api.openai.com/v1/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <API KEY>" \
  -d "{\"model\": \"text-davinci-003\", \"prompt\": \"$*\", \"temperature\": 0, \"max_tokens\": 7}")
# Print only the generated text from the JSON response.
echo "$RESPONSE" | jq '.choices[0].text'
```
For example, this gives:
```
$ bash tmp.sh 'hi there'
"\n\nHello! How are you"
```
I know that's possible but wanted an application to do it.
Quite all right with me. And if the comments I left above are very much at odds with your idea for the project, it's really no problem for us to go in different directions.
If you want, can you review #23? I've implemented parts of this PR in that one. Let me know if the solution proposed there works for you too.
This is going to be a bit of a strange pull request and I do apologize for that.
There are many very good reasons you may not want this massive patch.
Starting with the fact that I developed it entirely on my Android phone as the final project of a personal challenge to see how complicated of a patch I could actually write this way:
I consider the challenge over now because there's really nothing left to prove beyond this; anything more would just take longer, and I feel I've shown you can do everything this way. I also wrote several websites and patched ffmpeg.
Nevertheless, a lot of time did go into it. And I think my version is better because it has many features that yours does not:
Either way have fun 😊