Deployed the ML model separately #196
Conversation
@BhoomiAgrawal12 is attempting to deploy a commit to Pratik0112's projects Team on Vercel. A member of the Team first needs to authorize it.
Hey @BhoomiAgrawal12, Welcome to 💖TelMedSphere !!! 🎊
Thanks for raising a PR! Your effort makes this project better. 🙌
Please wait for the PR to be reviewed.
Happy Coding!! ✨
@BhoomiAgrawal12 Thank you so much!!!!
@BhoomiAgrawal12 Congrats, your pull request has been successfully merged 🥳🎉
@BhoomiAgrawal12, thank you for your efforts.
Sure @PratikMane0112, I will work on the next steps and, over time, open new issues or propose changes to the model.
Yes, I see. @BhoomiAgrawal12 is working on it. Thanks for chiming in and for your suggestions, @RajKhanke.
Fixes Issue🛠️
Closes #169
Description👨💻
I have deployed the model separately on Render and then made the API call from the frontend. As checked in the local setup, the model works fine and is deployed properly. A separate app.py file has been created under models, and the setup was adjusted accordingly.
The hosted URL has already been added to the frontend file; however, it would be better to use an env variable so that future contributors can run the model locally as well.
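The env-variable suggestion above could look roughly like this (a minimal sketch, not code from this PR; `REACT_APP_MODEL_API_URL`, the port, and the `/predict` endpoint are assumed names for illustration):

```javascript
// Sketch: read the deployed model's base URL from an env variable instead of
// hard-coding the hosted URL, falling back to a local dev server so future
// contributors can run the model locally.
// REACT_APP_MODEL_API_URL is a hypothetical variable name.
const MODEL_API_URL =
  process.env.REACT_APP_MODEL_API_URL || "http://localhost:5000";

// Hypothetical helper: POST feature data to the model's /predict endpoint.
async function predict(features) {
  const res = await fetch(`${MODEL_API_URL}/predict`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(features),
  });
  if (!res.ok) throw new Error(`Model API returned ${res.status}`);
  return res.json();
}
```

With a bundler such as Create React App or Vite, `process.env.*` (or `import.meta.env.*`) is substituted at build time, so contributors only need a local `.env` entry to point the frontend at their own model instance.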
Type of Change📄
Checklist✅
Screenshots/GIF📷