docs: Always print sample predictions when computing metrics
saattrupdan committed Dec 5, 2023
1 parent bc07ea0 commit e6f3f43
Showing 1 changed file with 1 addition and 1 deletion.
src/coral_models/compute_metrics.py: 1 addition & 1 deletion
@@ -67,7 +67,7 @@ def compute_wer_metrics(pred: EvalPrediction, processor: Processor) -> dict[str,
     # Decode the ground truth labels
     labels_str = tokenizer.batch_decode(sequences=labels, group_tokens=False)
 
-    # TEMP: Log both the predictions and the ground truth labels
+    # Log both the predictions and the ground truth labels
     is_main_process = os.getenv("RANK", "0") == "0"
     if is_main_process:
         random_idx = np.random.randint(0, len(predictions_str))
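For context, here is a minimal sketch of the sample-logging block that the updated comment describes, rewritten as a standalone helper. The `log_sample_prediction` name and the two `logger.info` calls are assumptions for illustration only, since the hunk above ends at the `random_idx` line; `predictions_str` and `labels_str` correspond to the decoded predictions and ground truth labels inside `compute_wer_metrics`.

```python
import logging
import os

import numpy as np

logger = logging.getLogger(__name__)


def log_sample_prediction(predictions_str: list[str], labels_str: list[str]) -> None:
    """Log one randomly chosen prediction/label pair on the main process only.

    Hypothetical helper mirroring the block touched by this commit; the actual
    log statements sit below the lines shown in the hunk, so the ones here are
    illustrative.
    """
    # Only rank 0 logs, to avoid duplicated output in distributed training
    is_main_process = os.getenv("RANK", "0") == "0"
    if is_main_process:
        random_idx = np.random.randint(0, len(predictions_str))
        logger.info("Sample prediction: %r", predictions_str[random_idx])
        logger.info("Sample ground truth: %r", labels_str[random_idx])
```

With the `TEMP:` marker dropped from the comment, the sample logging is documented as a permanent part of metric computation rather than a temporary debugging aid.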
