[SDK] Improve error message when connection has scrubbed secrets & add CI for flow-as-func (#1088)

# Description

This PR adds a sample CI workflow for flow-as-function and improves the error
message raised when a connection contains scrubbed secrets.
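For context, a minimal sketch of the flow-as-function scenario this PR covers, pieced together from the notebook changes below (the flow path, node name, and all credential values are placeholders):

```python
# Sketch: load a flow as a callable and override a node's connection with an
# in-memory connection object (all values below are placeholders).
from promptflow import load_flow
from promptflow.entities import AzureOpenAIConnection

f = load_flow(source="examples/flows/standard/web-classification")

connection = AzureOpenAIConnection(
    name="new_ai_connection",
    api_key="<api-key>",  # must be a real, decrypted secret
    api_base="<endpoint>",
    api_type="azure",
    api_version="2023-07-01-preview",
)

# Route the classify_with_llm node to the in-memory connection.
f.context.connections = {"classify_with_llm": {"connection": connection}}

result = f(url="<a URL to classify>")
print(result)
```

If the connection object carries scrubbed (masked) secrets instead of real values, execution now fails early with a clear `UserErrorException` instead of an opaque executor error.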

# All Promptflow Contribution checklist:
- [ ] **The pull request does not introduce [breaking changes].**
- [ ] **CHANGELOG is updated for new features, bug fixes or other
significant changes.**
- [ ] **I have read the [contribution guidelines](../CONTRIBUTING.md).**
- [ ] **Create an issue and link to the pull request to get dedicated
review from promptflow team. Learn more: [suggested
workflow](../CONTRIBUTING.md#suggested-workflow).**

## General Guidelines and Best Practices
- [ ] Title of the pull request is clear and informative.
- [ ] There are a small number of commits, each of which has an
informative message. This means that previously merged commits do not
appear in the history of the PR. For more information on cleaning up the
commits in your PR, [see this
page](https://github.com/Azure/azure-powershell/blob/master/documentation/development-docs/cleaning-up-commits.md).

### Testing Guidelines
- [ ] Pull request includes test coverage for the included changes.
D-W- authored Nov 13, 2023
1 parent ae76b8d commit 01bb286
Showing 9 changed files with 108 additions and 54 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/samples_getstarted_flowasfunction.yml
@@ -43,7 +43,7 @@ jobs:
- name: Test Notebook
working-directory: examples/tutorials/get-started
run: |
papermill -k python flow-as-function.ipynb flow-as-function.output.ipynb
papermill -k python flow-as-function.ipynb flow-as-function.output.ipynb -p api_key ${{ secrets.AOAI_API_KEY_TEST }} -p api_base ${{ secrets.AOAI_API_ENDPOINT_TEST }} -p api_version 2023-07-01-preview
- name: Upload artifact
if: ${{ always() }}
uses: actions/upload-artifact@v3
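The `-p` flags inject values into the notebook cell tagged `parameters`. A rough equivalent using papermill's Python API (a sketch; in CI the secret values come from GitHub Actions secrets):

```python
# Sketch of the CI step via papermill's Python API; parameter values are
# placeholders for the AOAI_API_KEY_TEST / AOAI_API_ENDPOINT_TEST secrets.
import papermill as pm

pm.execute_notebook(
    "flow-as-function.ipynb",
    "flow-as-function.output.ipynb",
    kernel_name="python",
    parameters={
        "api_key": "<AOAI_API_KEY_TEST>",
        "api_base": "<AOAI_API_ENDPOINT_TEST>",
        "api_version": "2023-07-01-preview",
    },
)
```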
12 changes: 3 additions & 9 deletions examples/flows/standard/web-classification/flow.dag.yaml
@@ -1,4 +1,6 @@
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
environment:
python_requirements_txt: requirements.txt
inputs:
url:
type: string
@@ -32,8 +34,6 @@ nodes:
type: code
path: classify_with_llm.jinja2
inputs:
# This is to easily switch between openai and azure openai.
# deployment_name is required by azure openai, model is required by openai.
deployment_name: gpt-35-turbo
model: gpt-3.5-turbo
max_tokens: 128
@@ -52,6 +52,7 @@ nodes:
input_str: ${classify_with_llm.output}
node_variants:
summarize_text_content:
default_variant_id: variant_0
variants:
variant_0:
node:
@@ -60,8 +61,6 @@ node_variants:
type: code
path: summarize_text_content.jinja2
inputs:
# This is to easily switch between openai and azure openai.
# deployment_name is required by azure openai, model is required by openai.
deployment_name: gpt-35-turbo
model: gpt-3.5-turbo
max_tokens: 128
@@ -76,15 +75,10 @@ node_variants:
type: code
path: summarize_text_content__variant_1.jinja2
inputs:
# This is to easily switch between openai and azure openai.
# deployment_name is required by azure openai, model is required by openai.
deployment_name: gpt-35-turbo
model: gpt-3.5-turbo
max_tokens: 256
temperature: 0.3
text: ${fetch_text_content_from_url.output}
connection: open_ai_connection
api: chat
default_variant_id: variant_0
environment:
python_requirements_txt: requirements.txt
87 changes: 57 additions & 30 deletions examples/tutorials/get-started/flow-as-function.ipynb
@@ -4,7 +4,26 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Example1: Load flow as a function with inputs"
"# Execute flow as a function\n",
"\n",
"**Requirements** - In order to benefit from this tutorial, you will need:\n",
"- A python environment\n",
"- Installed PromptFlow SDK\n",
"\n",
"**Learning Objectives** - By the end of this tutorial, you should be able to:\n",
"- Execute a flow as a function\n",
"- Execute a flow function with in-memort connection object override\n",
"- Execute a flow function with fields override\n",
"- Execute a flow function with streaming output\n",
"\n",
"**Motivations** - This guide will walk you through the main scenarios of executing flow as a function. You will learn how to consume flow as a function in different scenarios for more pythonnic usage."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Example1: Load flow as a function with inputs"
]
},
{
@@ -29,7 +48,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Example2: Load flow as a function with connection overwrite"
"## Example2: Load flow as a function with in-memory connection override"
]
},
{
@@ -39,6 +58,24 @@
"You will need to have a connection named \"new_ai_connection\" to run flow with new connection."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"parameters"
]
},
"outputs": [],
"source": [
"# provide parameters to create connection\n",
"\n",
"conn_name = \"new_ai_connection\"\n",
"api_key = \"<user-input>\"\n",
"api_base = \"<user-input>\"\n",
"api_version = \"<user-input>\""
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -49,31 +86,21 @@
"import promptflow\n",
"from promptflow.entities import AzureOpenAIConnection, OpenAIConnection\n",
"\n",
"pf = promptflow.PFClient()\n",
"\n",
"try:\n",
" conn_name = \"new_ai_connection\"\n",
" conn = pf.connections.get(name=conn_name)\n",
" print(\"using existing connection\")\n",
"except:\n",
" # Follow https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/create-resource?pivots=web-portal to create an Azure Open AI resource.\n",
" connection = AzureOpenAIConnection(\n",
" name=conn_name,\n",
" api_key=\"<user-input>\",\n",
" api_base=\"<test_base>\",\n",
" api_type=\"azure\",\n",
" api_version=\"<test_version>\",\n",
" )\n",
"\n",
" # use this if you have an existing OpenAI account\n",
" # connection = OpenAIConnection(\n",
" # name=conn_name,\n",
" # api_key=\"<user-input>\",\n",
" # )\n",
" conn = pf.connections.create_or_update(connection)\n",
" print(\"successfully created connection\")\n",
"\n",
"print(conn)"
"\n",
"# Follow https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/create-resource?pivots=web-portal to create an Azure Open AI resource.\n",
"connection = AzureOpenAIConnection(\n",
" name=conn_name,\n",
" api_key=api_key,\n",
" api_base=api_base,\n",
" api_type=\"azure\",\n",
" api_version=api_version,\n",
")\n",
"\n",
"# use this if you have an existing OpenAI account\n",
"# connection = OpenAIConnection(\n",
"# name=conn_name,\n",
"# api_key=api_key,\n",
"# )\n"
]
},
{
@@ -85,8 +112,8 @@
"f = load_flow(\n",
" source=flow_path,\n",
")\n",
"# need to create the connection\n",
"f.context.connections = {\"classify_with_llm\": {\"connection\": \"new_ai_connection\"}}\n",
"# directly use connection created above\n",
"f.context.connections={\"classify_with_llm\": {\"connection\": connection}}\n",
"\n",
"result = f(url=sample_url)\n",
"\n",
@@ -97,7 +124,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Example 3: Local flow as a function with flow inputs overwrite"
"## Example 3: Local flow as a function with flow inputs override"
]
},
{
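The notebook's learning objectives also list streaming output, though no streaming cell appears in this hunk. A sketch of that scenario, assuming the `streaming` flag on `FlowContext` (visible in `_flow.py` below) is the toggle and that an LLM-backed output field then yields chunks:

```python
# Streaming sketch (assumptions: f.context.streaming enables streaming, and
# the LLM-backed output field becomes an iterator of chunks).
f = load_flow(source=flow_path)
f.context.streaming = True

result = f(url=sample_url)
for chunk in result["summary"]:  # hypothetical output field name
    print(chunk, end="")
```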
@@ -30,7 +30,7 @@ steps:
- name: Test Notebook
working-directory: {{ gh_working_dir }}
run: |
papermill -k python {{ name }}.ipynb {{ name }}.output.ipynb
papermill -k python {{ name }}.ipynb {{ name }}.output.ipynb -p api_key ${{ '{{' }} secrets.AOAI_API_KEY_TEST }} -p api_base ${{ '{{' }} secrets.AOAI_API_ENDPOINT_TEST }} -p api_version 2023-07-01-preview
- name: Upload artifact
if: ${{ '{{' }} always() }}
uses: actions/upload-artifact@v3
8 changes: 0 additions & 8 deletions src/promptflow/promptflow/_core/connection_manager.py
@@ -12,7 +12,6 @@
from promptflow._constants import CONNECTION_NAME_PROPERTY, CONNECTION_SECRET_KEYS, PROMPTFLOW_CONNECTIONS
from promptflow._sdk._constants import CustomStrongTypeConnectionConfigs
from promptflow._utils.utils import try_import
from promptflow.connections import _Connection
from promptflow.contracts.tool import ConnectionType
from promptflow.contracts.types import Secret

@@ -42,10 +41,6 @@ def _build_connections(cls, _dict: Dict[str, dict]):
cls.import_requisites(_dict)
connections = {} # key to connection object
for key, connection_dict in _dict.items():
if isinstance(connection_dict, _Connection):
# support directly pass connection object to executor
connections[key] = connection_dict
continue
typ = connection_dict.get("type")
if typ not in cls_mapping:
supported = [key for key in cls_mapping.keys() if not key.startswith("_")]
@@ -114,9 +109,6 @@ def import_requisites(cls, _dict: Dict[str, dict]):
"""Import connection required modules."""
modules = set()
for key, connection_dict in _dict.items():
if isinstance(connection_dict, _Connection):
# support directly pass connection object to executor
continue
module = connection_dict.get("module")
if module:
modules.add(module)
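With the direct-object path removed, the executor-side manager only ever receives plain dicts; the SDK now converts connection objects before submission (see `_test_submitter.py` below). A sketch of the dict shape, inferred from the `type` and `module` lookups above (the `value` key and its fields are assumptions):

```python
# Sketch of an executor-facing connection dict. "type" and "module" are read
# by _build_connections / import_requisites above; the rest is assumed.
connections = {
    "open_ai_connection": {
        "type": "AzureOpenAIConnection",
        "module": "promptflow.connections",
        "value": {
            "api_key": "<decrypted-key>",
            "api_base": "<endpoint>",
            "api_type": "azure",
            "api_version": "2023-07-01-preview",
        },
    }
}
```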
4 changes: 4 additions & 0 deletions src/promptflow/promptflow/_sdk/entities/_connection.py
@@ -271,6 +271,10 @@ def _from_execution_connection_dict(cls, name, data) -> "_Connection":
return CustomConnection(name=name, configs=configs, secrets=secrets)
return type_cls(name=name, **value_dict)

def _get_scrubbed_secrets(self):
"""Return the scrubbed secrets of connection."""
return {key: val for key, val in self.secrets.items() if self._is_scrubbed_value(val)}

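A standalone illustration of what `_get_scrubbed_secrets` filters for, assuming secrets scrubbed by the SDK are masked with asterisks as in the tests below (`is_scrubbed_value` here is a hypothetical stand-in for the real `_Connection._is_scrubbed_value`):

```python
# Hypothetical stand-in for _Connection._is_scrubbed_value, assuming an
# all-asterisk mask like "*****" marks a scrubbed secret.
def is_scrubbed_value(value: str) -> bool:
    return bool(value) and set(value) == {"*"}

secrets = {"api_key": "*****", "other": "real-secret-value"}
scrubbed = {k: v for k, v in secrets.items() if is_scrubbed_value(v)}
print(scrubbed)  # {'api_key': '*****'} -> such a connection is rejected at execution time
```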

class _StrongTypeConnection(_Connection):
def _to_orm_object(self):
4 changes: 2 additions & 2 deletions src/promptflow/promptflow/_sdk/entities/_flow.py
@@ -71,9 +71,9 @@ def __init__(
self.environment_variables = environment_variables or {}
self.overrides = overrides or {}
self.streaming = streaming
# self.connection_provider = connection_provider
# TODO: introduce connection provider support

def resolve_connections(self):
def _resolve_connections(self):
# resolve connections and create placeholder for connection objects
for _, v in self.connections.items():
if isinstance(v, dict):
15 changes: 12 additions & 3 deletions src/promptflow/promptflow/_sdk/operations/_test_submitter.py
@@ -51,7 +51,7 @@ def init(self):
tuning_node, node_variant = parse_variant(self.flow_context.variant)
else:
tuning_node, node_variant = None, None
self.flow_context.resolve_connections()
self.flow_context._resolve_connections()
with variant_overwrite_context(
flow_path=self._origin_flow.code,
tuning_node=tuning_node,
@@ -231,14 +231,23 @@ def node_test(

def exec_with_inputs(self, inputs):
# TODO: unify all exec_line calls here

from promptflow.executor.flow_executor import FlowExecutor

# validate connection objs
connection_obj_dict = {}
for key, connection_obj in self.flow_context.connection_objs.items():
scrubbed_secrets = connection_obj._get_scrubbed_secrets()
if scrubbed_secrets:
raise UserErrorException(
f"Connection {connection_obj} contains scrubbed secrets with key {scrubbed_secrets.keys()}, "
"please make sure connection has decrypted secrets to use in flow execution. "
)
connection_obj_dict[key] = connection_obj._to_execution_connection_dict()
connections = SubmitterHelper.resolve_connections(
flow=self.flow, client=self._client, connections_to_ignore=self.flow_context.connection_objs.keys()
)
# update connections with connection objs
connections.update(self.flow_context.connection_objs)
connections.update(connection_obj_dict)
# resolve environment variables
SubmitterHelper.resolve_environment_variables(
environment_variables=self.flow_context.environment_variables, client=self._client
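How the new validation surfaces to a user, mirroring `test_non_scrubbed_connection` in the e2e tests below (flow path and node name are taken from that test):

```python
# A connection whose secrets are still masked is rejected before execution.
from promptflow import load_flow
from promptflow.entities import CustomConnection

f = load_flow("flows/flow_with_custom_connection")
f.context.connections = {
    "hello_node": {"connection": CustomConnection(secrets={"k": "*****"})}
}
# Raises UserErrorException: "... please make sure connection has decrypted
# secrets to use in flow execution."
f(text="hello")
```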
28 changes: 28 additions & 0 deletions src/promptflow/tests/sdk_cli_test/e2etests/test_flow_as_func.py
@@ -116,3 +116,31 @@ def test_flow_as_a_func_with_variant(self):
with pytest.raises(InvalidFlowError) as e:
f(key="a")
assert "Variant variant_2 not found for node print_val" in str(e.value)

def test_non_scrubbed_connection(self):
f = load_flow(f"{FLOWS_DIR}/flow_with_custom_connection")
f.context.connections = {"hello_node": {"connection": CustomConnection(secrets={"k": "*****"})}}

with pytest.raises(UserErrorException) as e:
f(text="hello")
assert "please make sure connection has decrypted secrets to use in flow execution." in str(e)

def test_local_connection_object(self, pf, azure_open_ai_connection):
f = load_flow(f"{FLOWS_DIR}/web_classification")
f.context.connections = {"classify_with_llm": {"connection": azure_open_ai_connection}}
f()

# local connection without secret will lead to error
connection = pf.connections.get("azure_open_ai_connection", with_secrets=False)
f.context.connections = {"classify_with_llm": {"connection": connection}}
with pytest.raises(UserErrorException) as e:
f()
assert "please make sure connection has decrypted secrets to use in flow execution." in str(e)

@pytest.mark.skipif(RecordStorage.is_replaying_mode(), reason="Returning dict is not supported for now.")
def test_non_secret_connection(self):
f = load_flow(f"{FLOWS_DIR}/flow_with_custom_connection")
# executing with a connection that has no secrets doesn't raise, since there are no scrubbed secrets
# an error is raised only when the connection contains scrubbed secrets
f.context.connections = {"hello_node": {"connection": CustomConnection(secrets={})}}
f(text="hello")
