We would like to support codemods that use Llama models hosted in Azure.

We have proposed a spec update here that introduces two new environment variables: pixee/codemodder-specs#39
If only one or the other variable is present, we should raise an exception.
If both of these variables are present, we need to create a new llama client using the Azure AI Inference package.
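For illustration, here is a minimal sketch of that validation, assuming hypothetical environment variable names (the actual names are defined in the spec update) and a generic exception type:

```python
import os

# Hypothetical names for the two new environment variables; the real
# names come from the spec update referenced above.
azure_llama_api_key = os.getenv("CODEMODDER_AZURE_LLAMA_API_KEY")
azure_llama_endpoint = os.getenv("CODEMODDER_AZURE_LLAMA_ENDPOINT")

# Setting exactly one of the two is a misconfiguration.
if bool(azure_llama_api_key) != bool(azure_llama_endpoint):
    raise ValueError(
        "Azure Llama API key and endpoint must be configured together"
    )
```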
Use of the client looks something like this:
```python
# pip install azure-ai-inference
import os

from azure.ai.inference import ChatCompletionsClient
from azure.core.credentials import AzureKeyCredential

api_key = os.getenv("<API key>", "")
if not api_key:
    raise Exception("A key should be provided to invoke the endpoint")

client = ChatCompletionsClient(
    endpoint="<endpoint URL>",
    credential=AzureKeyCredential(api_key),
)
```
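Once the client is constructed, a chat completion call looks roughly like the following sketch. The message types come from azure-ai-inference; the prompt contents here are purely illustrative:

```python
from azure.ai.inference.models import SystemMessage, UserMessage

# Illustrative messages only; real codemods would build their own prompts.
response = client.complete(
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="Suggest a fix for this code."),
    ],
)
print(response.choices[0].message.content)
```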
The behavior of this client is completely orthogonal to that of the OpenAI/Azure OpenAI client: either one, or both, can be configured at the same time. The client should live on the CodemodExecutionContext. Implementing this may lead to some breaking API changes.
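One possible shape for this, sketched with hypothetical attribute and environment variable names rather than the actual CodemodExecutionContext implementation:

```python
import os

from azure.ai.inference import ChatCompletionsClient
from azure.core.credentials import AzureKeyCredential


class CodemodExecutionContext:
    """Sketch only: holds both AI clients; either may be None if unconfigured."""

    def __init__(self):
        # The OpenAI/Azure OpenAI client is configured elsewhere, as today.
        self.openai_llm_client = None
        # Hypothetical env var names; the real ones come from the spec update.
        key = os.getenv("CODEMODDER_AZURE_LLAMA_API_KEY")
        endpoint = os.getenv("CODEMODDER_AZURE_LLAMA_ENDPOINT")
        if bool(key) != bool(endpoint):
            raise ValueError("Azure Llama key and endpoint must be set together")
        self.azure_llama_llm_client = (
            ChatCompletionsClient(
                endpoint=endpoint, credential=AzureKeyCredential(key)
            )
            if key and endpoint
            else None
        )
```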