Releases: IBM/unitxt
1.11.1
Non backward compatible changes
- The input_format field of the InputOutputTemplate class is now required; templates that do not use it must explicitly set it to None. by @elronbandel in #982
- Fix the MRR RAG metric: fix the MRR wiring and allow context_ids to be a list of strings instead of a list[list[str]], so the list of predicted context ids can be passed directly, as in unitxt version 1.7. Added corresponding tests. This change may change the scores of the MRR metric. by @matanor
New Features
- Add the option to specify the number of processes to use for parallel dataset loading by @csrajmohan in #974
- Add option for lazy load hf inference engine by @elronbandel in #980
- Added a format based on Huggingface format by @yoavkatz in #988
New Assets
- Add code mixing metric, add language identification task, add format for Starling model by @arielge in #956
Bug Fixes
- Fix llama_3_ibm_genai_generic_template by @lga-zurich in #978
Documentation
- Add an example that shows how to use LLM as a judge that takes the references into account… by @eladven in #981
- Improve the examples table documentation by @eladven in #976
Refactoring
- Delete empty metrics folder by @elronbandel in #984
Testing and CI/CD
New Contributors
- @lga-zurich made their first contribution in #978
Full Changelog: 1.10.1...1.10.2
1.11.0 (#996)
Non backward compatible changes
- The input_format field of the InputOutputTemplate class is now required; templates that do not use it must explicitly set it to None. by @elronbandel in #982
- Fix the MRR RAG metric: fix the MRR wiring and allow context_ids to be a list of strings instead of a list[list[str]], so the list of predicted context ids can be passed directly, as in unitxt version 1.7. Added corresponding tests. This change may change the scores of the MRR metric. by @matanor
New Features
- Add the option to specify the number of processes to use for parallel dataset loading by @csrajmohan in #974
- Add option for lazy load hf inference engine by @elronbandel in #980
- Added a format based on Huggingface format by @yoavkatz in #988
New Assets
- Add code mixing metric, add language identification task, add format for Starling model by @arielge in #956
Bug Fixes
- Fix llama_3_ibm_genai_generic_template by @lga-zurich in #978
Documentation
- Add an example that shows how to use LLM as a judge that takes the references into account… by @eladven in #981
- Improve the examples table documentation by @eladven in #976
Refactoring
- Delete empty metrics folder by @elronbandel in #984
Testing and CI/CD
New Contributors
- @lga-zurich made their first contribution in #978
Full Changelog: 1.10.1...1.10.2
1.10.3
Non backward compatible changes
- The input_format field of the InputOutputTemplate class is now required; templates that do not use it must explicitly set it to None. by @elronbandel in #982
- Fix the MRR RAG metric: fix the MRR wiring and allow context_ids to be a list of strings instead of a list[list[str]], so the list of predicted context ids can be passed directly, as in unitxt version 1.7. Added corresponding tests. This change may change the scores of the MRR metric. by @matanor
New Features
- Add the option to specify the number of processes to use for parallel dataset loading by @csrajmohan in #974
- Add option for lazy load hf inference engine by @elronbandel in #980
- Added a format based on Huggingface format by @yoavkatz in #988
New Assets
- Add code mixing metric, add language identification task, add format for Starling model by @arielge in #956
Bug Fixes
- Fix llama_3_ibm_genai_generic_template by @lga-zurich in #978
Documentation
- Add an example that shows how to use LLM as a judge that takes the references into account… by @eladven in #981
- Improve the examples table documentation by @eladven in #976
Refactoring
- Delete empty metrics folder by @elronbandel in #984
Testing and CI/CD
New Contributors
- @lga-zurich made their first contribution in #978
Full Changelog: 1.10.1...1.10.2
1.10.2
Non backward compatible changes
- None - this release is fully compatible with the previous release.
New Features
- Added the num_proc parameter, an optional integer specifying the number of processes to use for parallel dataset loading by @csrajmohan in #974
- Add option to lazy load hf inference engine and fix requirements mechanism by @elronbandel in #980
- Add code mixing metric, add language identification task, add format for Starling model by @arielge in #956
- Add metrics: domesticated safety and regard by @dafnapension in #983
- Make input_format required field in InputOutputTemplate by @elronbandel in #982
- Added a format based on Huggingface format by @yoavkatz in #988
Bug Fixes
- Fix the error at the examples table by @eladven in #976
- Fix the MRR RAG metric: fix the MRR wiring and allow context_ids to be a list of strings instead of a list[list[str]], so the list of predicted context ids can be passed directly, as in unitxt version 1.7. Added corresponding tests. by @matanor in #969
- Fix llama_3_ibm_genai_generic_template by @lga-zurich in #978
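The MRR fix above can be illustrated with a minimal sketch of mean reciprocal rank over ranked context ids. This is a simplified stand-in to show the list-of-strings input shape, not unitxt's actual metric implementation:

```python
def reciprocal_rank(predicted_ids, reference_ids):
    # Rank (1-based) of the first predicted context id found in the references.
    for rank, cid in enumerate(predicted_ids, start=1):
        if cid in reference_ids:
            return 1.0 / rank
    return 0.0

def mean_reciprocal_rank(predictions, references):
    # predictions: one ranked list of context-id strings per instance,
    # passed directly rather than wrapped in an extra list.
    scores = [reciprocal_rank(p, r) for p, r in zip(predictions, references)]
    return sum(scores) / len(scores)
```

For example, mean_reciprocal_rank([["c2", "c1"], ["c9"]], [["c1"], ["c1"]]) averages ranks 1/2 and 0 to give 0.25.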
Documentation
- Add an example that shows how to use LLM as a judge that takes the references into account… by @eladven in #981
Refactoring
- Delete empty metrics folder by @elronbandel in #984
Testing and CI/CD
New Contributors
- @lga-zurich made their first contribution in #978
Full Changelog: 1.10.1...1.10.2
1.10.1
Main Changes
- Continued with major improvements to the documentation, including a new code examples section with standalone Python code that shows how to perform evaluation, add new datasets, compare formats, use LLMs as judges, and more. Cards for datasets from Hugging Face now have detailed descriptions, and there is new documentation of RAG tasks and metrics.
- load_dataset can now load cards defined in a Python file (and not only in the catalog). See example.
- The evaluation results returned from evaluate now include two fields: predictions and processed_predictions. See example.
- Task fields can have defaults, so if they are not specified in the card, they get a default value. For example, multi-class classification has text as the default text_type. See example.
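The defaults mechanism can be sketched in plain Python. This is only an illustration of the behavior described above; apply_task_defaults is a hypothetical helper, not unitxt's API:

```python
def apply_task_defaults(instance, defaults):
    # Fields present in the card instance win; fields missing from it
    # fall back to the task-level defaults.
    return {**defaults, **instance}

# Hypothetical multi-class classification instance that omits text_type:
instance = apply_task_defaults(
    {"text": "A gripping, well-acted film.", "classes": ["positive", "negative"]},
    {"text_type": "text"},
)
# instance now carries text_type == "text"
```

An instance that does specify text_type keeps its own value, since instance fields override the defaults.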
Non backward compatible changes
- You need to recreate any cards/metrics you added by running the corresponding prepare//.py file. You can create all cards simply by running python utils/prepare_all_artifacts.py. This will avoid the type error.
- The AddFields operator was renamed to Set, and the CopyFields operator was renamed to Copy. Previous code should continue to work, but we renamed all existing code in the unitxt and fm-eval repos.
- Change Artifact.type to Artifact.type by @elronbandel in #933
- change CopyFields operators name to Copy by @duckling69 in #876
- Rename AddFields to Set, a name that represent its role better and concisely by @elronbandel in #903
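The behavior of the renamed operators can be sketched with hypothetical standalone functions. These are illustrative only; the real Set and Copy are unitxt operator classes, not plain functions:

```python
def set_fields(instance, fields):
    # Rough analogue of Set (formerly AddFields): assign fixed values to fields.
    out = dict(instance)
    out.update(fields)
    return out

def copy_field(instance, field, to_field):
    # Rough analogue of Copy (formerly CopyFields): duplicate one field into another.
    out = dict(instance)
    out[to_field] = out[field]
    return out

example = {"text": "hello"}
example = set_fields(example, {"label": "greeting"})
example = copy_field(example, "text", "source")
# example == {"text": "hello", "label": "greeting", "source": "hello"}
```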
New Features
- Allow eager execution by @elronbandel in #888
- Add view option for Task definitions in UI explorer. by @yoavkatz in #891
- Add input type checking in LoadFromDictionary by @yoavkatz in #900
- Add TokensSlice operator by @elronbandel in #902
- Make some logs critical by @elronbandel in #973
- Add LogProbInferenceEngines API and implement for OpenAI by @lilacheden in #909
- Added support for ibm-watsonx-ai inference by @pawelknes in #961
- load_dataset supports loading cards not present in local catalog by @pawelknes in #929
- Added defaults to tasks by @pawelknes in #921
- Add raw predictions and references to results by @yoavkatz in #934
- Allow ad-hoc metrics and templates (and add a first version of a standalone example of a dataset with LLM as a judge) by @eladven in #922
- Add infer() function for end to end inference pipeline by @elronbandel in #952
Bug Fixes
- LLMaaJ implementation of MLCommons' simple-safety-tests by @bnayahu in #873
- Update gradio version on website by @elronbandel in #896
- Improve demo by @elronbandel in #898
- Fix demo and organize files by @elronbandel in #897
- Make sacrebleu robust by @yoavkatz in #892
- Fix huggingface assets to have versions and up to date readme by @elronbandel in #895
- fix(cos loader): account for slashes in cos file name by @jezekra1 in #904
- llama3 instruct and chat system prompts by @oktie in #950
- Added trust_remote_code to HF dataset query operations by @yoavkatz in #911
Documentation
- Update llm_as_judge.rst by @yoavkatz in #970
- Michal Jacovi's completed manual review of the card descriptions by @dafnapension in #883
- In card preparers, generate the tags with "singletons" rather than values paired with True by @dafnapension in #874
- Improved documentation by @yoavkatz in #886
- Update glossary.rst by @yoavkatz in #899
- Add example section to documentation by @yoavkatz in #917
- Added example of open qa using catalog by @yoavkatz in #919
- Update example intro and simplified WNLI cards by @yoavkatz in #923
- Update adding_metric.rst by @yoavkatz in #955
- RAG documentation by @yoavkatz in #928
- docs: update adding_dataset.rst by @eltociear in #927
- Prepare for description= that is different from those embedded automatically by @dafnapension in #937
- Add a simple example of using LLM as a judge without installation by @eladven in #968
- Add example of using LLM as a judge for summarization dataset. by @eladven in #965
- Improve operators documentation by @elronbandel in #942
New Assets
- Add numeric nlg dataset by @ShirApp in #882
- Add to_list_by_hyphen_space processor by @marukaz in #872
- Added tags and descriptions to safety cards by @bnayahu in #887
- Add Mt-Bench datasets + add operators by @OfirArviv in #870
- Touch up numeric nlg by @elronbandel in #889
- split train to train and validation sets in billsum by @alonh in #901
- modified wikitq, tab_fact taskcards by @ShirApp in #963
- Implementation of TruthfulQA by @bnayahu in #931
- Add bluebench cards by @perlitz in #918
- Add LlamaIndex faithfulness metric by @arielge in #971
- Expanded template support for safety cards by @bnayahu in #943
Testing and CI/CD
- Add end to end realistic test to fusion by @elronbandel in #940
- Moved test_examples to run the actual examples by @yoavkatz in #913
- Use uv for installing requirements in actions by @elronbandel in #960
- Add ability to print_dict to print selected fields by @yoavkatz in #947
- Get rid of pkg_resources dependency by @elronbandel in #932
- adapt filtering lambda to datasets 2.20 by @dafnapension in #930
- Increase preparation log to error. by @elronbandel in #959
New Contributors
Full Changelog: 1.10.0...1.10.1
Unitxt 1.10.0
Main changes
- Added support for handling sensitive data. When data is loaded from a data source using a Loader, the user can specify the classification of the data (e.g. "public" or "proprietary"). Unitxt components such as metrics and inference engines then check whether they are allowed to process the data based on their configuration. For example, an LLM as judge that sends data to remote services can be configured to only send "public" data to them. This replaces the UNITXT_ALLOW_PASSING_DATA_TO_REMOTE_API option, which was a general flag that was not data dependent and hence error prone. See more details in https://unitxt.readthedocs.io/en/latest/docs/data_classification_policy.html
- Added support for a metric prefix. Each metric has a new optional string attribute "score_prefix" that is prepended to the names of all scores it generates. This allows the same metric to be used on different fields of a task while keeping the output scores distinguishable.
- New Operators tutorial and Loaders documentation
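The data classification check and the score_prefix attribute described above can be sketched in plain Python. These are hypothetical helpers illustrating the behavior, not unitxt's implementation:

```python
class DataClassificationError(Exception):
    pass

def check_allowed(instance_classification, allowed_policy):
    # A component (e.g. an inference engine calling a remote service) configured
    # with allowed_policy=["public"] refuses instances marked "proprietary".
    if allowed_policy is None:
        return True  # component places no restriction on the data it processes
    if not set(instance_classification).issubset(allowed_policy):
        raise DataClassificationError(
            f"Data classified as {instance_classification} may not be processed "
            f"by a component restricted to {allowed_policy}."
        )
    return True

def add_score_prefix(scores, score_prefix):
    # Prepend score_prefix to every score name, so the same metric can be
    # applied to different task fields without name clashes.
    return {score_prefix + name: value for name, value in scores.items()}
```

Here check_allowed(["proprietary"], ["public"]) raises, while add_score_prefix({"f1": 0.5}, "context_") yields {"context_f1": 0.5}.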
Non backward compatible changes
- StreamInstanceOperator was renamed to InstanceOperator
New Features
- Support for handling sensitive data sent to remote services by @pawelknes in #806 , @yoavkatz in #868
- Added new NER metric using fuzzywuzzy logic by @sarathsgvr in #808
- Added loader from HF spaces by @pawelknes in #860
- Add metric prefix in main by @yoavkatz in #878
- Add MinimumOneExamplePerLabelRefiner to ensure at least one example of each label appears in the training data by @alonh in #867
Bug Fixes
- Explorer UI crashed when no templates were defined in card by @yoavkatz in #855
- Fix operator and metrics data by @yoavkatz in #878
- Improved testing of cards by @yoavkatz in #861
- FormTask deprecation by @yoavkatz in #856
New Assets
- Adding go emotions dataset by @shaigrt in #865
- Implementation of select safety benchmarks by @bnayahu in #854
Documentation
- Update CONTRIBUTING.md by @elronbandel in #859
- Adding operator tutorial and standardizing operator names by @elronbandel in #863
- Fix code blocks in loaders docs by @elronbandel in #866
- Typo fix in unitxt operators docs by @duckling69 in #877
- Add documentation to loaders by @elronbandel in #864
- Changes to introduction page by @yoavkatz in #852
New Contributors
- @sarathsgvr made their first contribution in #808
- @bnayahu made their first contribution in #854
- @shaigrt made their first contribution in #865
- @duckling69 made their first contribution in #877
Full Changelog: 1.9.0...1.10.0
Unitxt 1.9.0
What's Changed
The most important things are:
- Addition of LLM as a Judge Metrics and Tasks for both evaluating LLMs as judge and using them for evaluation of other tasks. Read more in the LLM as a Judge Tutorial
- Addition of RAG response generation tasks and datasets as part of an effort to add comprehensive RAG evaluation to unitxt.
- Renaming FormTask to Task for simplicity
- Major improvements to documentation and tutorials
Breaking Changes 🚨
- Ensure consistent evaluation of CI across implementations [Might change previous results] by @dafnapension in #844
- Fix default format so it will be the same as formats.empty in catalog. Impacts runs that did not specify a format by @yoavkatz in #848
- The LoadJson operator moved from unitxt.processors to unitxt.struct_data_operators
- Fixed YesNoTemplate and Diverse LabelSampler to support binary task typing. YesNoTemplate now expects the class field to contain a string rather than a list with one string by @yoavkatz in #836
Bug Fixes
- Change processor type for to_list_by_comma_from_references by @antonpibm in #815
- Handle empty text in Literal Eval by @antonpibm in #819
- Fix clash between dir names and artifact names in catalog website by @elronbandel in #825
- Fix mistake in NER typing by @yoavkatz in #832
- Fix catalog reference by @elronbandel in #838
- Fix default format by @yoavkatz in #848
- Fixed YesNoTemplate and Diverse LabelSampler to support binary task typing by @yoavkatz in #836
New Features
- Support prediction regex match by setting the operator as a postproce… by @antonpibm in #792
- Add sample score output in test card by @yoavkatz in #803
- Support for loading dictionaries by @pawelknes in #784
- Add ability to fuse, split, MultiStreamScoreMean, and merge all by @dafnapension in #767
- Changed default log verbosity to "info" instead of "debug" by @yoavkatz in #822
- Skip artifact prepare and verify in catalog consistency tests by @elronbandel in #839
- Add separation between eager streams and regular streams by @elronbandel in #846
- Add precision and recall scores to f1_binary, max_f1_binary by @lilacheden in #824
- Rename task by @elronbandel in #850
New Assets
- Add basic format for llama3 models by @arielge in #812
- Adding literal eval processor by @antonpibm in #813
- Add RAG (response generation part) tasks and datasets by @perlitz in #811
- Add 5 legalbench tasks (the 5 existing in HELM) by @perlitz in #827
- Add financebench by @perlitz in #828
- Add billsum dataset by @perlitz in #830
- Add tldr dataset by @perlitz in #831
- Add Attaq500 by @naamaz in #835
- Add llm as judge mt-bench dataset and metrics by @OfirArviv in #791
Documentation
- Documentation review by @yoavkatz in #805
- Added documentation for global and huggingface metrics by @yoavkatz in #807
- Touch up docs by @elronbandel in #809
- Remove the contents from main menu by @elronbandel in #810
- Add tags docs by @elronbandel in #814
- Reviewing Unitxt tutorials by @michal-jacovi in #817
- Fix the link to the operators tutorial by @elronbandel in #821
- More documentation changes in metrics by @yoavkatz in #820
- Update adding_task.rst by @michal-jacovi in #823
- Fix missing mandatory new line at the beginning of a code block in documentation by @elronbandel in #829
- Add description, homepage, and citation obtained from HF with datasets.load_dataset_builder by @dafnapension in #818
- Updated documentation by @yoavkatz in #849
New Contributors
- @antonpibm made their first contribution in #792
- @michal-jacovi made their first contribution in #817
Full Changelog: 1.8.1...1.9.0
1.8.1
Unitxt 1.8.0
What's Changed
In this release, the main improvement focuses on introducing type checking within Unitxt tasks. Tasks are fundamental to the Unitxt protocol, acting as standardized blueprints for those integrating new datasets into Unitxt. They facilitate the use of task-specific templates and metrics. To guarantee precise dataset processing in line with the task schema, we've introduced explicit types to the task fields.
For example, consider the NER task in Unitxt, previously defined as follows:
add_to_catalog(
    FormTask(
        inputs=["text", "entity_types"],
        outputs=["spans_starts", "spans_ends", "text", "labels"],
        metrics=["metrics.ner"],
    ),
    "tasks.ner",
)
Now, the NER task definition includes explicit types:
add_to_catalog(
    FormTask(
        inputs={"text": "str", "entity_types": "List[str]"},
        outputs={
            "spans_starts": "List[int]",
            "spans_ends": "List[int]",
            "text": "List[str]",
            "labels": "List[str]",
        },
        prediction_type="List[Tuple[str,str]]",
        metrics=["metrics.ner"],
    ),
    "tasks.ner",
)
This enhancement aligns with Unitxt's goal that definitions should be easily understandable and capable of facilitating validation processes with appropriate error messages to guide developers in identifying and solving issues.
Right now, using the original definition format without typing will continue to work, but it generates a warning message. You should begin adapting your task definitions by adding types.
'inputs' field of Task should be a dictionary of field names and their types. For example, {'text': 'str', 'classes': 'List[str]'}. Instead only '['question', 'question_id', 'topic']' was passed. All types will be assumed to be 'Any'. In future version of unitxt this will raise an exception.
'outputs' field of Task should be a dictionary of field names and their types. For example, {'text': 'str', 'classes': 'List[str]'}. Instead only '['reference_answers', 'reference_contexts', 'reference_context_ids', 'is_answerable_label']' was passed. All types will be assumed to be 'Any'. In future version of unitxt this will raise an exception.
Special thanks to @pawelknes who implemented this important feature. It truly demonstrates the collective power of the Unitxt community and the invaluable contributions made by Unitxt users beyond the core development team. Such contributions are highly appreciated and encouraged.
- For more detailed information, please refer to #710
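The old-format fallback described by the warnings above can be sketched as follows. normalize_task_fields is a hypothetical helper illustrating the behavior, not unitxt's code:

```python
import warnings

def normalize_task_fields(fields):
    # New format: a dict mapping field names to type strings - use as-is.
    if isinstance(fields, dict):
        return fields
    # Old format: a plain list of names - warn and assume 'Any' for each,
    # mirroring the deprecation message quoted above.
    warnings.warn(
        "'inputs'/'outputs' should be a dictionary of field names and their "
        f"types. Instead only '{fields}' was passed. All types will be "
        "assumed to be 'Any'."
    )
    return {name: "Any" for name in fields}
```

For example, normalize_task_fields(["text", "entity_types"]) returns {"text": "Any", "entity_types": "Any"} after warning.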
Breaking Changes
"metrics.spearman", "metrics.kendalltau_b", "metrics.roc_auc": prediction type is float.
"metrics.f1_binary","metrics.accuracy_binary", "metrics.precision_binary", "metrics.recall_binary", "metrics.max_f1_binary", "metrics.max_accuracy_binary": prediction type is Union[float, int], references must be equal to 0 or 1
Bug Fixes
- Set empty list if preprocess_steps is None by @marukaz in #780
- Fix UI load failure due to typo by @yoavkatz in #785
- Fix huggingface uploads by @elronbandel in #793
- Fix typo in error message by @marukaz in #777
New Assets
- add perplexity with Mistral model by @lilacheden in #713
New Features
- Type checking for task definition by @pawelknes in #710
- Add open and ibm_genai to llm as judge inference engine by @OfirArviv in #782
- Add negative class score for binary precision, recall, f1 and max f1 by @lilacheden in #788
- Add negative class score for binary precision, recall, f1 and max f1, e.g. f1_binary now also returns "f1_binary_neg".
- Support Unions in metric prediction_type
- Add processor cast_to_float_return_nan_if_failed
- Breaking change: Make prediction_type of metrics numeric:
A. "metrics.kendalltau_b", "metrics.roc_auc": prediction type is float.
B. "metrics.f1_binary","metrics.accuracy_binary", "metrics.precision_binary", "metrics.recall_binary", "metrics.max_f1_binary", "metrics.max_accuracy_binary": prediction type is Union[float, int], references must be equal to 0 or 1
- Group shuffle by @sam-data-guy-iam in #639
Documentation
- Fix a small typo by @dafnapension in #779
- Update instructions to install HELM from PyPI by @yifanmai in #783
- Update few-shot instructions in Unitxt with HELM by @yifanmai in #774
Full Changelog: 1.7.7...1.8.0
Full Changelog: 1.8.0...1.8.1
Unitxt 1.7.9
What's Changed
- Set empty list if preprocess_steps is None by @marukaz in #780
- fix a small typo by @dafnapension in #779
- Fix typo by @marukaz in #777
- Group shuffle by @sam-data-guy-iam in #639
- add perplexity with Mistral model by @lilacheden in #713
- Fix UI load failure due to typo by @yoavkatz in #785
- Type checking for task definition by @pawelknes in #710
- Add open and ibm_genai to llm as judge inference engine by @OfirArviv in #782
- Avoid creating a demo pool if num_demos is 0. by @yoavkatz in #787
- Update test_helm.yml by @elronbandel in #789
- Update instructions to install HELM from PyPI by @yifanmai in #783
- Update few-shot instructions in Unitxt with HELM by @yifanmai in #774
- Update version to 1.7.8 by @elronbandel in #790
- Fix huggingface uploads by @elronbandel in #793
- Update version to 1.7.9 by @elronbandel in #794
Full Changelog: 1.7.7...1.7.9