Scalene compatibility with DeepSpeed Python scripts #770
I have no experience with DeepSpeed, so I do not know why you are getting this error. I would suggest putting …
Hi. Thank you for the quick response. Can you elaborate with an example?
You can't use Scalene to profile Bash scripts. It has to precede or replace invocations of Python. As for an example, I mean: …
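The quoted example in the reply above did not survive extraction. As a hedged sketch of what "precede or replace invocations of Python" means (the rewrite below is only illustrative; the command fragments are taken from the thread):

```shell
# The `deepspeed` launcher is itself a script that spawns Python worker
# processes, so `scalene deepspeed ...` cannot work. Scalene must stand
# where `python` would stand on the command line.
# Illustrative rewrite using bash pattern substitution:
python_cmd="python main.py --data_path databricks/databricks-dolly-15k"
profiled_cmd="${python_cmd/python/scalene --gpu}"
echo "$profiled_cmd"   # scalene --gpu main.py --data_path databricks/databricks-dolly-15k
```

Whether this works end to end depends on whether `main.py` can run as a single process without the launcher; profiling the distributed worker processes that DeepSpeed spawns is a separate problem.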
Hi.
I wanted to know whether it is possible to use Scalene with DeepSpeed-optimized Python scripts. If so, please let me know the procedure to follow. I have tried putting scalene in front of my deepspeed command, but it did not work. A screenshot of the error is attached.
Command used:
scalene --gpu deepspeed --num_nodes 1 --num_gpus 1 main.py \
  --data_path databricks/databricks-dolly-15k --data_split 2,4,4 \
  --model_name_or_path meta-llama/Llama-2-7b-chat-hf \
  --per_device_train_batch_size 32 --per_device_eval_batch_size 32 \
  --max_seq_len 512 --learning_rate 9.65e-6 --weight_decay 0. \
  --num_train_epochs 2 --gradient_accumulation_steps 1 \
  --lr_scheduler_type cosine --num_warmup_steps 0 --seed 1234 \
  --gradient_checkpointing --zero_stage 2 --deepspeed --offload \
  --lora_dim 128 --lora_module_name "layers." \
  --output_dir ./output_LLaMa2_scalene
Output: