# Metadata to KG Track Round 2 Datasets

In this round, a JSONL file is provided with each line representing a column in a table, along with the table name, the column name, and the other columns in the same table. The goal is to map each such column to one "business glossary" item. We have also provided the metadata as well as the glossary in the form of an OWL ontology, to facilitate the mapping using ontology matching tools.

Sample data:
- [Sample Metadata File in JSONL](r2_sample_metadata.jsonl)
- [Sample Metadata Ontology in OWL](r2_sample_metadata.owl)
- [Sample Ground Truth](r2_sample_GT.csv)
- [Sample Output of Mapping](r2_sample_output.jsonl)
- Note: The `mappings` array should be sorted in descending order by score.
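The record structure the evaluation script reads can be sketched as follows: each output line pairs a column `id` with a ranked `mappings` list of candidate glossary items. The IDs and scores below are made-up illustrations, not real glossary entries:

```python
import json

# Hypothetical example of one output line: a column "id" plus candidate
# glossary mappings sorted by score, highest first.
record = {
    "id": "table1.column1",
    "mappings": [
        {"id": "glossary:term_a", "score": 0.92},
        {"id": "glossary:term_b", "score": 0.41},
    ],
}

line = json.dumps(record)
print(line)
```

One such JSON object per line, with no enclosing array, makes a valid JSONL submission.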
Test data (what we expect you to map to the glossary):
- [Metadata File in JSONL](r2_test_metadata.jsonl)
- [Metadata Ontology in OWL](r2_test_metadata.owl)

Glossary:
- [Glossary in JSONL](r2_glossary.jsonl)
- [Glossary in OWL](r2_glossary.owl)
- Note that unlike the Round 1 data, this is a custom glossary and not derived from an existing publicly available KG.
Evaluation script:
- To try the evaluation script on the provided sample input/output, go to the data folder and run:
```
python evaluate.py -m r2_sample_output.jsonl -g r2_sample_GT.csv
```
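The script reports Hit@1 and Hit@5: the fraction of columns whose ground-truth glossary item appears as the top-ranked mapping, or anywhere in the top five. As a toy illustration of how the metric behaves (IDs invented for the example):

```python
# Toy illustration of Hit@1 / Hit@k; the column and glossary IDs are made up.
predictions = [
    {"id": "col1", "mappings": [{"id": "g1", "score": 0.9}, {"id": "g2", "score": 0.5}]},
    {"id": "col2", "mappings": [{"id": "g3", "score": 0.8}, {"id": "g4", "score": 0.7}]},
]
ground_truth = {"col1": "g1", "col2": "g4"}  # col2's true item is ranked second

hit1 = hit5 = 0
for obj in predictions:
    ranked = sorted(obj["mappings"], key=lambda m: m["score"], reverse=True)
    truth = ground_truth.get(obj["id"])
    top_ids = [m["id"] for m in ranked[:5]]
    if truth in top_ids:
        hit5 += 1
        if top_ids[0] == truth:
            hit1 += 1

print(hit1 / len(predictions), hit5 / len(predictions))  # 0.5 1.0
```

Here col2's correct item is within the top five but not first, so it counts toward Hit@5 only.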
`evaluate.py`:
import argparse
import json
import csv


def read_jsonl_file(file_path):
    """
    Reads a JSONL file and returns a list of dictionaries (one dictionary per line).
    """
    data = []
    with open(file_path, 'r') as f:
        for line in f:
            data.append(json.loads(line))
    return data


def read_csv_file(file_path):
    """
    Reads a CSV file and returns a dictionary mapping source IDs to target IDs.
    Assumes the CSV has two columns: 'source_id' and 'target_id'.
    """
    id_mapping = {}
    with open(file_path, 'r') as f:
        reader = csv.reader(f)
        for row in reader:
            id_mapping[row[0]] = row[1]
    return id_mapping


def calculate_score(mapping_data, ground_truth_mapping, k=5):
    """
    Calculates Hit@1 and Hit@k scores.
    """
    total_objects = len(mapping_data)
    correct_top_k = 0
    correct_top_1 = 0

    for obj in mapping_data:
        # Sort mappings by score (highest to lowest)
        sorted_mappings = sorted(obj['mappings'], key=lambda x: x['score'], reverse=True)
        mapping_id = obj['id']
        ground_truth_id = ground_truth_mapping.get(mapping_id)

        if ground_truth_id:
            # Check whether the ground truth ID is among the top-k mappings
            top_k_ids = [m['id'] for m in sorted_mappings[:k]]
            if ground_truth_id in top_k_ids:
                correct_top_k += 1
                if top_k_ids.index(ground_truth_id) == 0:
                    correct_top_1 += 1

    hit_at_k = correct_top_k / total_objects
    hit_at_1 = correct_top_1 / total_objects

    return hit_at_1, hit_at_k


def main():
    parser = argparse.ArgumentParser(description="Calculate accuracy and hit scores for mapping_file given the ground_truth file.")
    parser.add_argument("-m", "--mapping_file", required=True, help="Path to the mapping JSONL file")
    parser.add_argument("-g", "--ground_truth", required=True, help="Path to the ground truth CSV file")
    args = parser.parse_args()

    # Read the JSONL and CSV files
    mapping_data = read_jsonl_file(args.mapping_file)
    ground_truth_mapping = read_csv_file(args.ground_truth)

    # Calculate scores
    hit_at_1, hit_at_5 = calculate_score(mapping_data, ground_truth_mapping)

    print(f"Hit@1: {hit_at_1:.2f}")
    print(f"Hit@5: {hit_at_5:.2f}")


if __name__ == "__main__":
    main()
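For reference, `read_csv_file` above expects two-column rows mapping a source ID to a target ID. A minimal sketch of a compatible ground-truth file, built and parsed in memory with hypothetical IDs:

```python
import csv
import io

# Two-column ground-truth rows: source_id, target_id (IDs are hypothetical).
rows = [("table1.column1", "glossary:term_a"),
        ("table1.column2", "glossary:term_b")]

buf = io.StringIO()
csv.writer(buf).writerows(rows)

# Parse it back the same way the script does: first column maps to second.
mapping = {r[0]: r[1] for r in csv.reader(io.StringIO(buf.getvalue()))}
print(mapping["table1.column1"])  # glossary:term_a
```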