
Usability issues with the inline completer #16

Closed
jtpio opened this issue Nov 6, 2024 · 4 comments · Fixed by #18

Comments

@jtpio
Member

jtpio commented Nov 6, 2024

When trying #15 locally with MistralAI, the inline completer seems to be "off" most of the time, making it difficult to use:

[screenshot: the inline completer status indicator showing "off"]

I believe this was already noticed in #8 and may be the same issue.

It's not clear yet whether it's because the Mistral API is slow to respond, because of its 1 req/s rate limit, or because of how the inline completer is currently set up.

@brichet
Collaborator

brichet commented Nov 6, 2024

I think it's a mix of all of these.

  • the Mistral API seems to take around 2s to respond on my end (and sometimes the first request never ends).
  • currently we trigger the API as soon as a change occurs, and the changes made during the following second are not taken into account.

Maybe we could make it more reliable with a better use of the inline completer and the throttler.
I'm thinking about:

  • not triggering the API on each change. We could wait for a short timeout (e.g. 200ms) before sending the request, to make sure the user has stopped typing.
  • after triggering the API, keeping track of subsequent changes, so we can cancel the pending suggestion (which is probably no longer valid) and trigger a new request as soon as possible.
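The two ideas above (debounce, then cancel stale in-flight requests) could be sketched roughly like this. This is only an illustration, not the extension's actual code: `Fetcher`, `DebouncedCompleter`, and the 200ms default are hypothetical stand-ins for the real provider call and settings.

```typescript
// Hypothetical sketch: debounce completion requests and cancel stale ones.
type Fetcher = (text: string, signal: AbortSignal) => Promise<string>;

class DebouncedCompleter {
  private timer: ReturnType<typeof setTimeout> | undefined;
  private controller: AbortController | undefined;
  private settlePrevious: (() => void) | undefined;

  constructor(private fetcher: Fetcher, private delayMs = 200) {}

  // Called on every document change; only the last call within `delayMs`
  // actually reaches the provider, and an in-flight request is aborted
  // as soon as a newer change arrives.
  request(text: string): Promise<string | null> {
    clearTimeout(this.timer);   // drop a request that was not sent yet
    this.controller?.abort();   // cancel an in-flight request
    this.settlePrevious?.();    // superseded calls resolve with no suggestion
    return new Promise(resolve => {
      this.settlePrevious = () => resolve(null);
      this.timer = setTimeout(() => {
        this.controller = new AbortController();
        this.fetcher(text, this.controller.signal)
          .then(resolve, () => resolve(null)); // aborted/failed: no suggestion
      }, this.delayMs);
    });
  }
}
```

With this shape, typing "abc" quickly produces a single provider call for the final text instead of three, and an answer that arrives for outdated text is simply discarded.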

@jtpio
Member Author

jtpio commented Nov 6, 2024

the Mistral API seems to take around 2s to respond on my end (and sometimes the first request never ends).

2s sounds like a lot. Wondering if we can already see some improvements with other providers like in #17 for example. If that's the case maybe there isn't much we can do about the Mistral provider.

@brichet
Collaborator

brichet commented Nov 6, 2024

the Mistral API seems to take around 2s to respond on my end (and sometimes the first request never ends).

2s sounds like a lot. Wondering if we can already see some improvements with other providers like in #17 for example. If that's the case maybe there isn't much we can do about the Mistral provider.

As far as I know, Groq does not provide a completion model out of the box (not with langchain at least).

@jtpio
Member Author

jtpio commented Nov 7, 2024

OK, we could try with OpenAI then.
