Metadata2KG Round 2 dataset
oktie committed May 24, 2024
1 parent 17b8ca7 commit ba06709
Showing 12 changed files with 13,316 additions and 4 deletions.
6 changes: 2 additions & 4 deletions data/metadata2kg/round1/README.md
@@ -1,8 +1,6 @@
# Metadata to KG Track Datasets
# Metadata to KG Track Round 1 Datasets

## Round 1

In this round, a JSONL file is provided with each line representing a column in a table, along with table name, column name, and other columns in the same table. The goal is to map each such column to one DBpedia ontology property. We have also provided the metadata in the form of an OWL ontology, to facilitate the mapping using ontology matching tools.
In this round, a JSONL file is provided with each line representing a column in a table, along with table name, column name, and other columns in the same table. The goal is to map each such column to one DBpedia ontology property (which we also refer to as a "glossary" item). We have also provided the metadata and the glossary in the form of an OWL ontology, to facilitate the mapping using ontology matching tools.

Sample data:
- [Sample Metadata File in JSONL](r1_sample_metadata.jsonl)
25 changes: 25 additions & 0 deletions data/metadata2kg/round2/README.md
@@ -0,0 +1,25 @@
# Metadata to KG Track Round 2 Datasets

In this round, a JSONL file is provided with each line representing a column in a table, along with table name, column name, and other columns in the same table. The goal is to map each such column to one "business glossary" item. We have also provided the metadata as well as the glossary in the form of an OWL ontology, to facilitate the mapping using ontology matching tools.

Sample data:
- [Sample Metadata File in JSONL](r2_sample_metadata.jsonl)
- [Sample Metadata Ontology in OWL](r2_sample_metadata.owl)
- [Sample Ground Truth](r2_sample_GT.csv)
- [Sample Output of Mapping](r2_sample_output.jsonl)
- Note: the mappings array should be sorted in descending order by score.
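Judging from the evaluation script in this commit, each output line is a JSON object with an `id` and a `mappings` array of candidate `{id, score}` pairs. A minimal sketch of producing one correctly sorted output line (the column and glossary IDs here are made up for illustration):

```python
import json

# Hypothetical candidate mappings for one table column (IDs are made up).
record = {
    "id": "table_customers.col_country",
    "mappings": [
        {"id": "glossary:Country", "score": 0.91},
        {"id": "glossary:Region", "score": 0.42},
        {"id": "glossary:City", "score": 0.17},
    ],
}
# The mappings array must be sorted in descending order by score.
record["mappings"].sort(key=lambda m: m["score"], reverse=True)
line = json.dumps(record)  # one line of the output JSONL file
print(line)
```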

Test data (what we expect you to map to the glossary):
- [Metadata File in JSONL](r2_test_metadata.jsonl)
- [Metadata Ontology in OWL](r2_test_metadata.owl)

Glossary:
- [Glossary in JSONL](r2_glossary.jsonl)
- [Glossary in OWL](r2_glossary.owl)
- Note that unlike the Round 1 data, this is a custom glossary and is not derived from an existing publicly available KG.

Evaluation script:
- To try the evaluation script over the provided sample input/output, go to the data folder and run:
```
python evaluate.py -m r2_sample_output.jsonl -g r2_sample_metadata_GT.csv
```
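Per the docstring of `read_csv_file` in `evaluate.py`, the ground truth is a two-column CSV mapping each source ID to its target ID. A small sketch of how such a file is parsed into a dictionary (the IDs are made up):

```python
import csv
import io

# Toy ground-truth content in the two-column (source_id, target_id) format
# that evaluate.py expects; the IDs are fabricated for illustration.
toy_csv = "c1,g1\nc2,g4\n"
gt = {row[0]: row[1] for row in csv.reader(io.StringIO(toy_csv))}
print(gt)  # {'c1': 'g1', 'c2': 'g4'}
```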
71 changes: 71 additions & 0 deletions data/metadata2kg/round2/evaluate.py
@@ -0,0 +1,71 @@
import argparse
import json
import csv

def read_jsonl_file(file_path):
    """
    Reads a JSONL file and returns a list of dictionaries (one dictionary per line).
    """
    data = []
    with open(file_path, 'r') as f:
        for line in f:
            data.append(json.loads(line))
    return data

def read_csv_file(file_path):
    """
    Reads a CSV file and returns a dictionary mapping IDs.
    Assumes the CSV has two columns: 'source_id' and 'target_id'.
    """
    id_mapping = {}
    with open(file_path, 'r') as f:
        reader = csv.reader(f)
        for row in reader:
            id_mapping[row[0]] = row[1]
    return id_mapping

def calculate_score(mapping_data, ground_truth_mapping, k=5):
    """
    Calculates Hit@1 and Hit@k scores.
    """
    total_objects = len(mapping_data)
    correct_top_k = 0
    correct_top_1 = 0

    for obj in mapping_data:
        # Sort mappings by score (highest to lowest)
        sorted_mappings = sorted(obj['mappings'], key=lambda x: x['score'], reverse=True)
        mapping_id = obj['id']
        ground_truth_id = ground_truth_mapping.get(mapping_id)

        if ground_truth_id:
            # Check if the ground truth ID is in the top-k mappings
            top_k_ids = [m['id'] for m in sorted_mappings[:k]]
            if ground_truth_id in top_k_ids:
                correct_top_k += 1
                if top_k_ids.index(ground_truth_id) == 0:
                    correct_top_1 += 1

    hit_at_k = correct_top_k / total_objects
    hit_at_1 = correct_top_1 / total_objects

    return hit_at_1, hit_at_k

def main():
    parser = argparse.ArgumentParser(description="Calculate accuracy and hit scores for mapping_file given the ground_truth file.")
    parser.add_argument("-m", "--mapping_file", required=True, help="Path to the mapping JSONL file")
    parser.add_argument("-g", "--ground_truth", required=True, help="Path to the ground truth CSV file")
    args = parser.parse_args()

    # Read the JSONL and CSV files
    mapping_data = read_jsonl_file(args.mapping_file)
    ground_truth_mapping = read_csv_file(args.ground_truth)

    # Calculate scores
    hit_at_1, hit_at_5 = calculate_score(mapping_data, ground_truth_mapping)

    print(f"Hit@1: {hit_at_1:.2f}")
    print(f"Hit@5: {hit_at_5:.2f}")

if __name__ == "__main__":
    main()
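The script's metrics can be sanity-checked on fabricated data. The helper below copies the Hit@k logic of `calculate_score` so the snippet runs on its own; with the first column correct at rank 1 and the second only at rank 2, Hit@1 is 0.5 and Hit@5 is 1.0 (all IDs are made up):

```python
def calculate_score(mapping_data, ground_truth_mapping, k=5):
    # Same logic as calculate_score in evaluate.py above.
    total = len(mapping_data)
    correct_top_k = correct_top_1 = 0
    for obj in mapping_data:
        ranked = sorted(obj["mappings"], key=lambda m: m["score"], reverse=True)
        truth = ground_truth_mapping.get(obj["id"])
        if truth:
            top_k_ids = [m["id"] for m in ranked[:k]]
            if truth in top_k_ids:
                correct_top_k += 1
                if top_k_ids.index(truth) == 0:
                    correct_top_1 += 1
    return correct_top_1 / total, correct_top_k / total

# Two fabricated columns: c1 is correct at rank 1, c2 only at rank 2.
mapping_data = [
    {"id": "c1", "mappings": [{"id": "g1", "score": 0.9}, {"id": "g2", "score": 0.1}]},
    {"id": "c2", "mappings": [{"id": "g3", "score": 0.8}, {"id": "g4", "score": 0.6}]},
]
hit1, hit5 = calculate_score(mapping_data, {"c1": "g1", "c2": "g4"})
print(hit1, hit5)  # 0.5 1.0
```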
