
After installing lmms-eval, I cannot evaluate videoxl directly. #18

Open
SplendidYuan opened this issue Nov 28, 2024 · 4 comments

@SplendidYuan

Error during evaluation: No module named 'lmms_eval.models.videoxl'. I did see longva in the list of models supported by lmms-eval, but it currently doesn't seem possible to evaluate videoxl directly.
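As a quick sanity check (not part of lmms-eval itself), you can verify whether a model module is importable in the current environment before launching the evaluator. `module_available` below is a hypothetical helper built on the standard library's `importlib.util.find_spec`:

```python
import importlib.util


def module_available(name: str) -> bool:
    """Return True if `name` can be imported in this environment."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # Raised when a parent package (e.g. lmms_eval) is not installed at all.
        return False


print(module_available("json"))  # True: json is in the standard library
# False unless an lmms-eval version that ships a videoxl model is installed:
print(module_available("lmms_eval.models.videoxl"))
```

If the second check prints `False`, the installed lmms-eval simply does not contain a `videoxl` model module, which matches the error above.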

@shuyansy
Collaborator

Hi, please check the latest README. If you have further issues, let me know.

@SplendidYuan
Author

> Hi, Please check the latest readme. If you have further issues, please let me know.

```
Traceback (most recent call last):
  File "/home/shengy/miniconda3/envs/videoxl/lib/python3.10/site-packages/lmms_eval/main.py", line 329, in cli_evaluate
    results, samples = cli_evaluate_single(args)
  File "/home/shengy/miniconda3/envs/videoxl/lib/python3.10/site-packages/lmms_eval/main.py", line 470, in cli_evaluate_single
    results = evaluator.simple_evaluate(
  File "/home/shengy/miniconda3/envs/videoxl/lib/python3.10/site-packages/lmms_eval/utils.py", line 533, in _wrapper
    return fn(*args, **kwargs)
  File "/home/shengy/miniconda3/envs/videoxl/lib/python3.10/site-packages/lmms_eval/evaluator.py", line 169, in simple_evaluate
    lm = ModelClass.create_from_arg_string(
  File "/home/shengy/miniconda3/envs/videoxl/lib/python3.10/site-packages/lmms_eval/api/model.py", line 110, in create_from_arg_string
    return cls(**args, **args2)
TypeError: Can't instantiate abstract class Videoxl with abstract method generate_until_multi_round

2024-11-29 16:05:34.047 | ERROR | main:cli_evaluate:348 - Error during evaluation: Can't instantiate abstract class Videoxl with abstract method generate_until_multi_round. Please set --verbosity=DEBUG to get more information.
```

Thank you for your reply and the fix, but there still seem to be some issues.
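For context, this `TypeError` comes from Python's standard ABC machinery: a subclass cannot be instantiated until every `@abstractmethod` of its base class is implemented. The sketch below is a toy reproduction, not the actual lmms-eval code; the class and method names only mirror the traceback, and the method bodies are hypothetical:

```python
from abc import ABC, abstractmethod


class LMM(ABC):
    """Stand-in for the lmms-eval model base class (hypothetical)."""

    @abstractmethod
    def generate_until(self, requests):
        ...

    @abstractmethod
    def generate_until_multi_round(self, requests):
        ...


class Videoxl(LMM):
    # generate_until is implemented, but generate_until_multi_round is not,
    # so instantiating Videoxl raises the same TypeError as in the traceback.
    def generate_until(self, requests):
        return ["dummy output" for _ in requests]


try:
    Videoxl()
except TypeError as e:
    print(e)  # Can't instantiate abstract class Videoxl ...


class VideoxlPatched(Videoxl):
    # One workaround: add a stub for the missing abstract method. Single-turn
    # evaluation works; multi-round tasks would raise NotImplementedError.
    def generate_until_multi_round(self, requests):
        raise NotImplementedError("multi-round generation not supported")


model = VideoxlPatched()  # instantiates successfully
```

This suggests the installed lmms-eval version added `generate_until_multi_round` to its model interface after the `Videoxl` class was written; adding such a stub to the `Videoxl` model class (or pinning an older lmms-eval) should get past this error.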

@shuyansy
Collaborator

shuyansy commented Dec 2, 2024

Currently I have no idea about that; it works fine for me. Maybe you can try running other models to figure out what is wrong.

@SplendidYuan
Author

This seems to be a problem with the code, but I don't know how to fix it specifically. Could you try it in a fresh conda environment to see whether you get the same result? Thank you very much.

> Currently I have no idea about that. It is OK for me. Maybe you can try to run other models and figure out what is wrong.
