ValueError: The global train batch size (1 x 1) must be evenly divisible by the number of generations per prompt (4). Given the current train batch size, the valid values for the number of generations are: [].
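For context, the check behind this error is simple arithmetic: the global train batch size is num_processes × per_device_train_batch_size, and it must be divisible by num_generations. A minimal sketch that reproduces the failing condition (illustrative only, not TRL's actual source):

```python
# Illustrative reproduction of the divisibility check, not TRL's internal code.
num_processes = 1                 # single GPU
per_device_train_batch_size = 1
num_generations = 4

global_batch_size = num_processes * per_device_train_batch_size  # 1 x 1 = 1
if global_batch_size % num_generations != 0:
    # Valid generation counts are the divisors of the global batch size (> 1);
    # with a global batch of 1 this list is empty, matching the error above.
    valid = [n for n in range(2, global_batch_size + 1)
             if global_batch_size % n == 0]
    raise ValueError(
        f"The global train batch size ({num_processes} x "
        f"{per_device_train_batch_size}) must be evenly divisible by the "
        f"number of generations per prompt ({num_generations}). Given the "
        f"current train batch size, the valid values for the number of "
        f"generations are: {valid}."
    )
```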
This is the output of trl env:
INFO 02-16 21:48:30 __init__.py:190] Automatically detected platform cuda.
Copy-paste the following information when reporting an issue:
- Platform: Linux-5.15.0-122-generic-x86_64-with-glibc2.35
- Python version: 3.11.10
- PyTorch version: 2.5.1
- CUDA device(s): NVIDIA RTX A6000
- Transformers version: 4.48.3
- Accelerate version: 1.3.0
- Accelerate config: not found
- Datasets version: 3.3.0
- HF Hub version: 0.28.1
- TRL version: 0.15.0
- bitsandbytes version: 0.45.2
- DeepSpeed version: not installed
- Diffusers version: 0.32.2
- Liger-Kernel version: not installed
- LLM-Blender version: not installed
- OpenAI version: 1.63.0
- PEFT version: 0.14.0
This error didn't happen to me a couple of days ago, and I didn't change anything in my code.
@leonardtang The problem is that when I set per_device_batch_size to 4, so that it would be divisible by the number of generations, it gave me another shape error.
I have the same problem, and per_device_batch_size is not an argument of GRPOConfig.__init__(): TypeError: GRPOConfig.__init__() got an unexpected keyword argument 'per_device_batch_size'
Edit: After reading the error message one more time, I changed per_device_train_batch_size to the same value as num_generations, and this fixed the problem.
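For anyone hitting the same thing, a minimal config along these lines should satisfy the check on a single GPU (output_dir here is a placeholder, and the exact values are just an example):

```python
from trl import GRPOConfig

# Setting per_device_train_batch_size equal to num_generations makes the
# global batch size (1 process x 4 = 4) divisible by the 4 generations
# per prompt, so the ValueError above no longer triggers.
config = GRPOConfig(
    output_dir="grpo-output",        # placeholder path
    per_device_train_batch_size=4,
    num_generations=4,
)
```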