
[Provider] Added Support for MonsterAPI Integration #280

Merged - 22 commits - May 2, 2024

Conversation

DheerajQblocks
Contributor

Description:
This PR adds support for the MonsterAPI provider to our project. It includes the necessary configurations, request/response structures, and transformation logic to integrate MonsterAPI into our project.

Changes:

  • Added MonsterAPI configuration to provider/index.ts.
  • Defined MonsterAPI request and response structures in types.ts.
  • Implemented request transformation logic in provider/transform.ts.
  • Implemented response transformation logic in provider/transform.ts.
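The registration described in the list above might be sketched like so. This is a hypothetical, simplified shape: the real gateway types are richer, and the auth header scheme and config internals here are assumptions, not the actual PR code.

```typescript
// Hypothetical, simplified sketch of a provider registration as described
// in the Changes list. Header scheme and internals are assumptions.
interface ProviderConfigs {
  api: { baseURL: string; headers: (apiKey: string) => Record<string, string> };
  chatComplete: Record<string, unknown>; // request param mappings
  responseTransforms: Record<string, (response: unknown) => unknown>;
}

const MonsterAPIConfig: ProviderConfigs = {
  api: {
    baseURL: 'https://llm.monsterapi.ai/v1',
    headers: (apiKey) => ({ Authorization: `Bearer ${apiKey}` }),
  },
  chatComplete: {}, // param mappings go here
  responseTransforms: {
    chatComplete: (response) => response, // placeholder for the real transform
  },
};
```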

Additional Information:

  • Documentation for MonsterAPI can be found at MonsterAPI Documentation.
  • The MonsterAPI API endpoint is https://llm.monsterapi.ai/v1/generate.
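For orientation, building a request to that endpoint might look roughly like this. The Bearer auth scheme and the payload field name (prompt) are assumptions for illustration, not taken from MonsterAPI's docs; the sketch only constructs the request, it does not send it.

```typescript
// Sketch: building (not sending) a request to the endpoint above.
// Auth scheme and payload field names are assumptions.
const MONSTERAPI_URL = 'https://llm.monsterapi.ai/v1/generate';

function buildGenerateRequest(apiKey: string, prompt: string) {
  return {
    url: MONSTERAPI_URL,
    init: {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${apiKey}`, // assumed auth scheme
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ prompt }), // field name assumed
    },
  };
}
```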

Testing:

  • Tested the integration with MonsterAPI using sample requests.
  • Verified the correctness of response transformation logic.

Screenshots: (image attached in the original PR)

Related Issue: #279 (Add MonsterAPI provider)

@VisargD
Collaborator

VisargD commented Apr 1, 2024

Hey @DheerajQblocks. Thanks for the PR! Can you please update this with the latest main branch changes? It's missing a lot of major changes that were pushed last week. Also, the integration does not seem to follow the OpenAI transform convention that we use for other providers; all the responses are still in MonsterAPI format, which is not correct. Please let me know if I am missing something or if you need any more details.

@DheerajQblocks
Contributor Author

Hi @VisargD ,

Thank you for reviewing the PR. I appreciate your feedback.

I have updated the PR with the latest changes from the main branch and ensured that it aligns with the OpenAI transform convention as per our standards for other providers. The integration now follows the required format for responses, adhering to the guidelines.

Regarding the response format, I've revised the implementation to conform to the expected structure for MonsterAPI, ensuring that it meets the requirements outlined in the project specifications.

Please review the changes and let me know if there are any further adjustments needed. I'm here to address any additional concerns or questions you may have.

Thank you for your attention to this matter.

Best regards,
@DheerajQblocks

Collaborator

Can you please remove these changes to package-lock.json? Seems to be unrelated to the PR and are also causing npm install issues.

Contributor Author

Done

@DheerajQblocks
Contributor Author

@VisargD Please review/approve this PR.

@VisargD
Collaborator

VisargD commented Apr 6, 2024

@Dheeraj-Bhandari - Is this correct doc for the API that you have integrated? https://developer.monsterapi.ai/reference/generate_v1_generate_post

@DheerajQblocks
Contributor Author

Yes, it's correct.

@DheerajQblocks
Contributor Author

@VisargD Please take a look.

@VisargD
Collaborator

VisargD commented Apr 9, 2024

Hey @DheerajQblocks - Sorry for the delay here. I will start the review later today.

api: MonsterAPIApiConfig,
generate: MonsterAPIChatCompleteConfig,
responseTransforms: {
generate: MonsterAPIChatCompleteResponseTransform,
Collaborator

Instead of generate, the name should be chatComplete for this.

if ('response' in response) {
return {
id: '',
object: '',
Collaborator

The object value can be chat.completion, as we are following the OpenAI spec.

return {
id: '',
object: '',
created: 0,
Collaborator

For created, you can use Math.floor(Date.now() / 1000) as value.


if ('response' in response) {
return {
id: '',
Collaborator

To have the same response structure across, you can generate a random id and send it in response so that none of the existing SDKs break. We currently use Date.now().toString() in some providers as response id.
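Taken together, the id, object, and created suggestions in these review comments amount to an envelope like the following sketch. The choices shape follows OpenAI's chat completion spec; the surrounding function and its parameter names are illustrative, not the PR's actual code.

```typescript
// Sketch of an OpenAI-style chat completion envelope built from a provider
// response, combining the three review suggestions above.
function toOpenAIChatCompletion(providerText: string, model: string) {
  return {
    id: Date.now().toString(),              // generated id so existing SDKs don't break
    object: 'chat.completion',              // per OpenAI spec
    created: Math.floor(Date.now() / 1000), // unix timestamp in seconds
    model,
    choices: [
      {
        index: 0,
        message: { role: 'assistant', content: providerText },
        finish_reason: 'stop',
      },
    ],
  };
}
```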

generateInvalidProviderResponseError,
} from '../utils';

export const MonsterAPIChatCompleteConfig: ProviderConfig = {
Collaborator

As per the API docs, some of the mappings are missing here. Can you please check this once?
If any param is similar to an existing OpenAI param, you can use the OpenAI param name as input and map it to Monster's param. If there is no OpenAI compatible param, then you can add them as a new mapping as it is.
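As an illustration of that convention, an OpenAI-named input param mapped onto a provider-specific wire name might look like this. The temp name appears in this PR's diff; max_length is an assumed example, and the mapping structure is a simplified sketch, not the gateway's actual ProviderConfig type.

```typescript
// Hypothetical param mapping: OpenAI-style names as input keys, mapped to
// provider-specific wire names via `param`. max_length is an assumed example.
const MonsterAPIChatCompleteConfig: Record<
  string,
  { param: string; min?: number; max?: number; default?: number }
> = {
  temperature: { param: 'temp', min: 0, max: 1 }, // OpenAI name -> Monster name
  max_tokens: { param: 'max_length', default: 256 }, // assumed Monster param
  top_p: { param: 'top_p', min: 0, max: 1 }, // same name on both sides
};

// Tiny transform applying the mapping to an incoming OpenAI-style body.
function mapParams(body: Record<string, number>): Record<string, number> {
  const out: Record<string, number> = {};
  for (const [key, value] of Object.entries(body)) {
    const mapping = MonsterAPIChatCompleteConfig[key];
    if (mapping) out[mapping.param] = value;
  }
  return out;
}
```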

Collaborator

As a part of naming convention that we follow in gateway, please use chatComplete instead of generate.

@VisargD
Collaborator

VisargD commented Apr 13, 2024

Hey @Dheeraj-Bhandari - Just checking up on this. Please let me know if there are any blockers.

@DheerajQblocks
Contributor Author

> Hey @Dheeraj-Bhandari - Just checking up on this. Please let me know if there are any blockers.

Sure, I'm working on the required changes. I'll update you once it's done.

@DheerajQblocks
Contributor Author

@VisargD Added the required changes as per your suggestions.

Please review/merge this PR ASAP.

},
getEndpoint: ({ fn }) => {
switch (fn) {
case 'generate':
Collaborator

This case should also use chatComplete instead of generate. And the default should return an empty string. Basically this:

switch (fn) {
  case 'chatComplete':
    return '/generate';
  default:
    return '';
}

Contributor Author

Done

min: 0,
max: 1,
},
temp: {
Collaborator

The OpenAI-compatible name is temperature, not temp. So the input should be temperature, which can then be mapped to the MonsterAPI-specific param name.

Contributor Author

Done

Collaborator

We can remove the comments from the file; they are not required. They are part of the GitHub PR review, so they should not be part of the actual code.

Contributor Author

Done

@DheerajQblocks
Contributor Author

@VisargD Required changes made. Please review the PR.

@VisargD
Collaborator

VisargD commented Apr 29, 2024

Looks good to me. There is one problem that I found; I am not sure if it's expected. The API error response structure that you have used in the chatComplete transformer is not aligned with the MonsterAPI docs. The docs show two types of error response:

{
  "detail": "\n    \"Invalid input payload, could be following.1 The specified model was not found, \"\n    1. Invalid Model\n    2. Bad payload\n"
}
{
  "detail": [
    {
      "loc": [
        "string",
        0
      ],
      "msg": "string",
      "type": "string"
    }
  ]
}

But the code that is written in this PR does not handle any of the above.
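A transformer covering both documented error shapes (a string detail and an array-of-objects detail) could look roughly like this sketch. The OpenAI-style output envelope and its field names are assumptions modeled on how other providers are typically normalized, not the PR's actual code.

```typescript
// Sketch: normalize MonsterAPI's two documented error shapes into a single
// OpenAI-style error object. Output field names are assumptions.
type MonsterAPIError =
  | { detail: string }
  | { detail: { loc: (string | number)[]; msg: string; type: string }[] };

function transformMonsterAPIError(response: MonsterAPIError) {
  const { detail } = response;
  // String detail passes through; the validation-error array is flattened
  // into one readable message.
  const message =
    typeof detail === 'string'
      ? detail
      : detail.map((d) => `${d.loc.join('.')}: ${d.msg}`).join('; ');
  return {
    error: { message, type: 'provider_error', code: null },
    provider: 'monsterapi',
  };
}
```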

@VisargD
Collaborator

VisargD commented May 1, 2024

Hey @DheerajQblocks - Can you please check the last comment and the merge conflicts. Everything else looks good to me.

@DheerajQblocks
Contributor Author

> Hey @DheerajQblocks - Can you please check the last comment and the merge conflicts? Everything else looks good to me.

Sure.

@DheerajQblocks
Contributor Author

> The API error response structure that you have used in the chatComplete transformer is not aligned with the MonsterAPI docs. The docs show two types of error response. [...]

Hi @VisargD,
I think our docs are not updated, so you can ignore this. I am passing through the exact error response that I receive from the API.

@DheerajQblocks
Contributor Author

> Hey @DheerajQblocks - Can you please check the last comment and the merge conflicts? Everything else looks good to me.

@VisargD I have resolved the conflicts. Please review/merge it on priority.

@VisargD VisargD merged commit c1c714b into Portkey-AI:main May 2, 2024
1 check passed