
Commit
Merge branch 'main' into jiazeng/meta_icon
jiazengcindy authored Oct 18, 2023
2 parents a243dca + 48904a3 commit a9e78e7
Showing 57 changed files with 1,441 additions and 274 deletions.
1 change: 1 addition & 0 deletions .cspell.json
@@ -25,6 +25,7 @@
"scripts/docs/_build/**",
"src/promptflow/promptflow/azure/_restclient/flow/**",
"src/promptflow/tests/**",
"src/promptflow-tools/tests/**",
"**/flow.dag.yaml",
"**/setup.py"
],
2 changes: 1 addition & 1 deletion .github/pipelines/compliance_check.yml
@@ -17,7 +17,7 @@ trigger:
- releases/*

pool:
-  vmImage: windows-latest
+  name: promptflow-1ES-win

steps:
- checkout: self
1 change: 1 addition & 0 deletions docs/how-to-guides/develop-a-tool/add-a-tool-icon.md
@@ -5,6 +5,7 @@ Adding a custom tool icon is optional. If you do not provide one, the system uses a default icon.

## Prerequisites

+- Ensure that your [Prompt flow for VS Code](https://marketplace.visualstudio.com/items?itemName=prompt-flow.prompt-flow) extension is updated to version 1.0.10 or later.
- Create a tool package as described in [Create and Use Tool Package](create-and-use-tool-package.md).
- Prepare a custom icon image that meets these requirements:

Expand Down
docs/how-to-guides/develop-a-tool/create-and-use-tool-package.md
@@ -162,5 +162,4 @@ Alternatively, you can test your tool package using the script below to ensure t
* If you encounter a `403 Forbidden Error`, it's likely due to a naming conflict with an existing package. You will need to choose a different name. Package names must be unique on PyPI to avoid confusion and conflicts among users. Before creating a new package, it's recommended to search PyPI (https://pypi.org/) to verify that your chosen name is not already taken. If the name you want is unavailable, consider selecting an alternative name or a variation that clearly differentiates your package from the existing one.
## Advanced features
[Customize your tool icon](add-a-tool-icon.md)
-[Use file path as tool input](use-file-path-as-tool-input.md)
[Customize your tool icon](add-a-tool-icon.md)
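The naming-conflict guidance in the hunk above is easy to automate before you publish. A small helper using PyPI's public JSON API, which returns 200 for existing projects and 404 otherwise (the package name below is a made-up example):

```python
import requests

def pypi_name_taken(package_name: str) -> bool:
    """Return True if `package_name` already exists on PyPI."""
    # PyPI's JSON API serves project metadata at /pypi/<name>/json;
    # a 404 means the name is still available.
    resp = requests.get(f"https://pypi.org/pypi/{package_name}/json", timeout=10)
    return resp.status_code == 200

print(pypi_name_taken("my-unique-tools-package"))  # hypothetical name
```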
1 change: 0 additions & 1 deletion docs/how-to-guides/develop-a-tool/index.md
@@ -7,5 +7,4 @@ We provide guides on how to develop a tool and use it.
create-and-use-tool-package
add-a-tool-icon
-use-file-path-as-tool-input
```
77 changes: 0 additions & 77 deletions docs/how-to-guides/develop-a-tool/use-file-path-as-tool-input.md

This file was deleted.

(2 files could not be displayed in the diff view.)
6 changes: 5 additions & 1 deletion docs/reference/tools-reference/open_source_llm_tool.md
@@ -52,7 +52,11 @@ The Open Source LLM tool has a number of parameters, some of which are required.
| Name | Type | Description | Required |
|------|------|-------------|----------|
| api | string | The API mode; it depends on the model used and the scenario selected. *Supported values: (Completion \| Chat)* | Yes |
-| connection | CustomConnection | The name of the connection that points to the Online Inferencing endpoint. | Yes |
+| endpoint_name | string | Name of an Online Inferencing endpoint with a supported model deployed on it. Takes priority over connection. | No |
+| connection | CustomConnection | The name of the connection that points to the Online Inferencing endpoint. | No |
+| temperature | float | The randomness of the generated text. Default is 1. | No |
+| max_new_tokens | integer | The maximum number of tokens to generate in the completion. Default is 500. | No |
+| top_p | float | The probability of using the top choice from the generated tokens. Default is 1. | No |
| model_kwargs | dictionary | Configuration specific to the model used. For example, the Llama-02 model may use {\"temperature\":0.4}. *Default: {}* | No |
| deployment_name | string | The name of the deployment to target on the Online Inferencing endpoint. If no value is passed, the endpoint's load balancer traffic settings are used. | No |
| prompt | string | The text prompt that the language model uses to generate its response. | Yes |
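To make these parameters concrete, here is a minimal sketch of the kind of scoring request the tool conceptually assembles for a Llama deployment behind an Online Inferencing endpoint. The endpoint URL and payload shape are assumptions for illustration, not the tool's actual request contract; consult your deployment's scoring schema.

```python
import json
import requests

# Hypothetical sketch only: field names and payload shape are assumptions,
# not the Open Source LLM tool's implementation.
endpoint_url = "https://<your-endpoint>.<region>.inference.ml.azure.com/score"  # assumed URL
api_key = "<endpoint_api_key>"

payload = {
    "input_data": {
        "input_string": ["What is Prompt flow?"],  # corresponds to the `prompt` input
        "parameters": {                            # defaults taken from the table above
            "temperature": 1.0,
            "max_new_tokens": 500,
            "top_p": 1.0,
        },
    },
}

response = requests.post(
    endpoint_url,
    headers={
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
        # On Azure ML online endpoints this header routes the request to a
        # specific deployment, mirroring the `deployment_name` input; omit it
        # to fall back to the endpoint's load balancer traffic settings.
        "azureml-model-deployment": "<deployment_name>",
    },
    data=json.dumps(payload),
)
print(response.json())
```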
2 changes: 1 addition & 1 deletion examples/tools/tool-package-quickstart/setup.py
@@ -4,7 +4,7 @@

setup(
    name=PACKAGE_NAME,
-    version="0.0.1",
+    version="0.0.2",
    description="This is my tools package",
    packages=find_packages(),
    entry_points={
12 changes: 12 additions & 0 deletions src/promptflow-tools/connections.json.example
@@ -39,6 +39,18 @@
"endpoint_api_key"
]
},
"llama_chat_connection": {
"type": "CustomConnection",
"value": {
"endpoint_url": "llama-chat-endpoint-url",
"model_family": "LLAMA",
"endpoint_api_key": "llama-chat-endpoint-api-key"
},
"module": "promptflow.connections",
"secret_keys": [
"endpoint_api_key"
]
},
"open_ai_connection": {
"type": "OpenAIConnection",
"value": {
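For context, each entry in connections.json.example describes a connection object that the tests appear to construct at runtime. A rough Python equivalent of the new entry, assuming promptflow's `CustomConnection` type with its `configs`/`secrets` split, might look like this (the placeholder values mirror the example above):

```python
from promptflow.connections import CustomConnection

# Sketch of the connection object the "llama_chat_connection" entry describes.
# Keys listed under "secret_keys" become secrets; the rest are plain configs.
llama_chat_connection = CustomConnection(
    configs={
        "endpoint_url": "llama-chat-endpoint-url",  # placeholder, as in the example
        "model_family": "LLAMA",
    },
    secrets={
        "endpoint_api_key": "llama-chat-endpoint-api-key",  # placeholder
    },
)
```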
12 changes: 9 additions & 3 deletions src/promptflow-tools/promptflow/tools/common.py
@@ -82,7 +82,9 @@ def validate_functions(functions):


def parse_function_role_prompt(function_str):
-    pattern = r"\n*name:\n\s*(\S+)\s*\n*content:\n(.*)"
+    # Customers can add "##" in front of name/content for markdown highlighting.
+    # name/content without the "##" prefix is still supported for backward compatibility.
+    pattern = r"\n*#{0,2}\s*name:\n\s*(\S+)\s*\n*#{0,2}\s*content:\n(.*)"
    match = re.search(pattern, function_str, re.DOTALL)
    if match:
        return match.group(1), match.group(2)
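The widened pattern is easiest to verify on concrete inputs. A standalone check (the sample strings are invented for illustration):

```python
import re

pattern = r"\n*#{0,2}\s*name:\n\s*(\S+)\s*\n*#{0,2}\s*content:\n(.*)"

# The markdown-highlighted form and the legacy form parse identically.
with_prefix = '## name:\nget_weather\n## content:\n{"city": "Seattle"}'
without_prefix = 'name:\nget_weather\ncontent:\n{"city": "Seattle"}'

for function_str in (with_prefix, without_prefix):
    match = re.search(pattern, function_str, re.DOTALL)
    print(match.group(1), match.group(2))  # -> get_weather {"city": "Seattle"}
```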
@@ -98,8 +100,12 @@ def parse_function_role_prompt(function_str):

def parse_chat(chat_str):
    # The OpenAI chat API only supports the roles below.
-    separator = r"(?i)\n*(system|user|assistant|function)\s*:\s*\n"
-    chunks = re.split(separator, chat_str)
+    # Customers can add a single "#" in front of the role name for markdown highlighting;
+    # role names without the "#" prefix are still supported for backward compatibility.
+    separator = r"(?i)\n+\s*#?\s*(system|user|assistant|function)\s*:\s*\n"
+    # Add a newline at the beginning so role lines are matched consistently;
+    # the extra newline is removed when appending to the chat list.
+    chunks = re.split(separator, '\n' + chat_str)
    chat_list = []
    for chunk in chunks:
        last_message = chat_list[-1] if len(chat_list) > 0 else None
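Similarly, the new separator can be sanity-checked in isolation; note how the prepended newline lets the first role line match the `\n+` at the start of the pattern (the chat text is invented for the demo):

```python
import re

separator = r"(?i)\n+\s*#?\s*(system|user|assistant|function)\s*:\s*\n"

chat_str = "# system:\nYou are a helpful assistant.\n# user:\nWhat is Prompt flow?"

# The capturing group keeps the role names in the split result,
# which is what the rest of parse_chat relies on.
chunks = re.split(separator, "\n" + chat_str)
print(chunks)
# -> ['', 'system', 'You are a helpful assistant.', 'user', 'What is Prompt flow?']
```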