Updating doc strings for evaluate API #3283

Merged · 7 commits · May 16, 2024
Changes from 2 commits
27 changes: 21 additions & 6 deletions src/promptflow-evals/promptflow/evals/evaluate/_evaluate.py
@@ -269,12 +269,13 @@ def evaluate(
     target: Optional[Callable] = None,
     data: Optional[str] = None,
     evaluators: Optional[Dict[str, Callable]] = None,
-    evaluator_config: Optional[Dict[str, Dict[str, str]]] = {},
+    evaluator_config: Optional[Dict[str, Dict[str, str]]] = None,
     azure_ai_project: Optional[Dict] = None,
     output_path: Optional[str] = None,
     **kwargs,
 ):
"""Evaluates target or data with built-in evaluation metrics
"""Evaluates target or data with built-in or custom evaluators. If both target and data are provided,
data will be run through target function and then results will be evaluated.

:keyword evaluation_name: Display name of the evaluation.
:paramtype evaluation_name: Optional[str]
@@ -283,21 +284,35 @@ def evaluate(
     :keyword data: Path to the data to be evaluated or passed to target if target is set.
         Only .jsonl format files are supported. `target` and `data` both cannot be None.
     :paramtype data: Optional[str]
-    :keyword evaluator_config: Configuration for evaluators.
+    :keyword evaluators: Evaluators to be used for evaluation. It should be a dictionary with the evaluator
+        alias as the key and the evaluator function as the value.
+    :paramtype evaluators: Optional[Dict[str, Callable]]
+    :keyword evaluator_config: Configuration for evaluators. The configuration should be a dictionary with
+        evaluator names as keys and dictionaries of column mappings as values. Each column mapping should be a
+        dictionary with the column names in the evaluator input as keys and the column names in the input data,
+        or in the data generated by target, as values.
+    :paramtype evaluator_config: Optional[Dict[str, Dict[str, str]]]
-    :keyword output_path: The local folder path to save evaluation artifacts to if set
+    :keyword output_path: The local folder or file path to save evaluation results to, if set. If a folder path
+        is provided, the results will be saved to a file named `evaluation_results.json` in that folder.
     :paramtype output_path: Optional[str]
     :keyword azure_ai_project: Logs evaluation results to AI Studio.
         Example: {
            "subscription_id": "<subscription_id>",
            "resource_group_name": "<resource_group_name>",
            "project_name": "<project_name>"
         }
     :paramtype azure_ai_project: Optional[Dict]
-    :return: A EvaluationResult object.
-    :rtype: ~azure.ai.generative.evaluate.EvaluationResult
+    :return: Evaluation results.
+    :rtype: dict
     """

     trace_destination = _trace_destination_from_project_scope(azure_ai_project) if azure_ai_project else None

     input_data_df = _validate_and_load_data(target, data, evaluators, output_path, azure_ai_project, evaluation_name)

     # Process evaluator config to replace ${target.} with ${data.}
+    if evaluator_config is None:
+        evaluator_config = {}
     evaluator_config = _process_evaluator_config(evaluator_config)
     _validate_columns(input_data_df, evaluators, target, evaluator_config)
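
The comment above says `_process_evaluator_config` rewrites `${target.}` references to `${data.}`, since by this point the target's outputs have been merged into the input data frame. The helper's implementation is not shown in this diff; the snippet below is only a hypothetical illustration of that rewrite, and of why changing the default to `None` requires the explicit `{}` fallback added above:

    # Hypothetical sketch; the real _process_evaluator_config may differ.
    def _process_evaluator_config_sketch(evaluator_config):
        if evaluator_config is None:  # mirrors the added None-handling above
            evaluator_config = {}
        processed = {}
        for evaluator_name, mapping in evaluator_config.items():
            processed[evaluator_name] = {
                # After the target runs, its outputs live in the data,
                # so "${target.answer}" becomes "${data.answer}".
                column: value.replace("${target.", "${data.")
                for column, value in mapping.items()
            }
        return processed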
