
Commit 1618dc7
[Minor] Fix max new tokens error in video llava (#237)
kcz358 authored Sep 10, 2024
1 parent 35d221f commit 1618dc7
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion lmms_eval/models/video_llava.py
@@ -203,7 +203,7 @@ def generate_until(self, requests) -> List[str]:
 if "num_beams" not in gen_kwargs:
     gen_kwargs["num_beams"] = 1

-generate_ids = self.model.generate(**inputs, max_length=gen_kwargs["max_new_tokens"], temperature=gen_kwargs["temperature"])
+generate_ids = self.model.generate(**inputs, max_new_tokens=gen_kwargs["max_new_tokens"], temperature=gen_kwargs["temperature"])

 outputs = self._processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0].split("ASSISTANT:")[-1].strip()
 res.append(outputs)
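
The root cause is that in Hugging Face transformers' generate(), max_length caps the total sequence length (prompt tokens plus generated tokens), while max_new_tokens caps only the newly generated continuation. Passing the intended "max new tokens" value as max_length therefore cuts generation short whenever the prompt is long, which is common for video prompts. Below is a minimal sketch of the difference; it uses a small text-only model ("gpt2") and an illustrative prompt as stand-ins for the actual Video-LLaVA checkpoint and inputs, which are not loaded here.

# Sketch (assumption: "gpt2" stands in for the Video-LLaVA model) showing that
# max_length and max_new_tokens are not interchangeable in generate().
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "USER: Describe the video in detail. ASSISTANT:"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt")
prompt_len = inputs["input_ids"].shape[1]

# Buggy pattern: max_length counts the prompt too, so only
# (16 - prompt_len) new tokens can be produced at most.
short = model.generate(**inputs, max_length=16)

# Fixed pattern: up to 16 newly generated tokens, regardless of prompt length.
full = model.generate(**inputs, max_new_tokens=16)

print(f"prompt tokens: {prompt_len}")
print(f"max_length=16 produced {short.shape[1] - prompt_len} new tokens")
print(f"max_new_tokens=16 produced {full.shape[1] - prompt_len} new tokens")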
