
[BUG] FAILED: multi_tensor_adam.cuda.o with #6912

Open
XueruiSu opened this issue Dec 24, 2024 · 7 comments
Labels: bug, training

Comments

@XueruiSu

Describe the bug
When I use DeepSpeed to train Llama 3 with ZeRO stage 2, an error about 'fused_adam' is raised right after the checkpoint shards are loaded. When I use stage 3 with the optimizer offloaded ('offload_optimizer'), there is no error after the checkpoint shards are loaded, but the program gets stuck at 'self.actor_model.backward(loss)', and self.actor_model.optimizer.device=cpu.

So I am actually describing two issues. The first is the error under stage 2; the Expected behavior section below is about that error. The second is the hang under stage 3 with the optimizer offload.
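
For reference, the stage-3 optimizer-offload setup referred to above is usually expressed with a DeepSpeed config along these lines; this is only a minimal sketch using the standard config fields, since the exact config used in this run is not shown in the issue:

# Minimal sketch of a ZeRO stage-3 config with the optimizer offloaded to CPU.
# Field names follow the standard DeepSpeed config schema; the values are
# illustrative, not the ones used in this run.
ds_train_config = {
    "train_micro_batch_size_per_gpu": 1,
    "gradient_accumulation_steps": 1,
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {"device": "cpu", "pin_memory": True},
    },
    "bf16": {"enabled": True},
}
# engine, optimizer, _, _ = deepspeed.initialize(
#     model=model, model_parameters=model.parameters(), config=ds_train_config)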

To Reproduce
The backward code is below:
print("loss-------", loss)
print(self.actor_model.optimizer.device)
self.actor_model.backward(loss)
print("loss backward-------", loss)

Expected behavior
Instead of the expected behavior, I got this error:
Loading checkpoint shards: 100%|██████████| 4/4 [00:00<00:00, 4.90it/s]
Loading checkpoint shards: 100%|██████████| 4/4 [00:00<00:00, 5.07it/s]
Loading checkpoint shards: 100%|██████████| 4/4 [00:00<00:00, 4.29it/s]
Loading checkpoint shards: 100%|██████████| 4/4 [00:00<00:00, 4.22it/s]
Loading checkpoint shards: 100%|██████████| 4/4 [00:00<00:00, 5.46it/s]
Loading checkpoint shards: 100%|██████████| 4/4 [00:00<00:00, 5.14it/s]
Loading checkpoint shards: 100%|██████████| 4/4 [00:00<00:00, 4.31it/s]
Loading checkpoint shards: 100%|██████████| 4/4 [00:00<00:00, 4.21it/s]
Using /home/msrai4srl4s/.cache/torch_extensions/py310_cu118 as PyTorch extensions root...Using /home/msrai4srl4s/.cache/torch_extensions/py310_cu118 as PyTorch extensions root...

Using /home/msrai4srl4s/.cache/torch_extensions/py310_cu118 as PyTorch extensions root...
Using /home/msrai4srl4s/.cache/torch_extensions/py310_cu118 as PyTorch extensions root...
Detected CUDA files, patching ldflags
Emitting ninja build file /home/msrai4srl4s/.cache/torch_extensions/py310_cu118/fused_adam/build.ninja...
/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/utils/cpp_extension.py:1964: UserWarning: TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation.
If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST'].
warnings.warn(
Building extension module fused_adam...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/3] /home/msrai4srl4s/miniconda3/envs/mcts-dpo/bin/nvcc --generate-dependencies-with-compile --dependency-output multi_tensor_adam.cuda.o.d -ccbin /home/msrai4srl4s/miniconda3/bin/x86_64-conda-linux-gnu-cc -DTORCH_EXTENSION_NAME=fused_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="gcc" -DPYBIND11_STDLIB="libstdcpp" -DPYBIND11_BUILD_ABI="cxxabi1011" -I/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/deepspeed/ops/csrc/includes -I/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/deepspeed/ops/csrc/adam -isystem /home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/include -isystem /home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/include/TH -isystem /home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/include/THC -isystem /home/msrai4srl4s/miniconda3/envs/mcts-dpo/include -isystem /home/msrai4srl4s/miniconda3/envs/mcts-dpo/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS -D__CUDA_NO_HALF_CONVERSIONS_ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_80,code=compute_80 -gencode=arch=compute_80,code=sm_80 --compiler-options '-fPIC' -O3 -DVERSION_GE_1_1 -DVERSION_GE_1_3 -DVERSION_GE_1_5 -lineinfo --use_fast_math -gencode=arch=compute_80,code=sm_80 -gencode=arch=compute_80,code=compute_80 -DBF16_AVAILABLE -U__CUDA_NO_BFLOAT16_OPERATORS__ -U__CUDA_NO_BFLOAT162_OPERATORS__ -U__CUDA_NO_BFLOAT16_CONVERSIONS__ -std=c++17 -c /home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/deepspeed/ops/csrc/adam/multi_tensor_adam.cu -o multi_tensor_adam.cuda.o
FAILED: multi_tensor_adam.cuda.o
/home/msrai4srl4s/miniconda3/envs/mcts-dpo/bin/nvcc --generate-dependencies-with-compile --dependency-output multi_tensor_adam.cuda.o.d -ccbin /home/msrai4srl4s/miniconda3/bin/x86_64-conda-linux-gnu-cc -DTORCH_EXTENSION_NAME=fused_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="gcc" -DPYBIND11_STDLIB="libstdcpp" -DPYBIND11_BUILD_ABI="cxxabi1011" -I/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/deepspeed/ops/csrc/includes -I/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/deepspeed/ops/csrc/adam -isystem /home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/include -isystem /home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/include/TH -isystem /home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/include/THC -isystem /home/msrai4srl4s/miniconda3/envs/mcts-dpo/include -isystem /home/msrai4srl4s/miniconda3/envs/mcts-dpo/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS -D__CUDA_NO_HALF_CONVERSIONS_ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_80,code=compute_80 -gencode=arch=compute_80,code=sm_80 --compiler-options '-fPIC' -O3 -DVERSION_GE_1_1 -DVERSION_GE_1_3 -DVERSION_GE_1_5 -lineinfo --use_fast_math -gencode=arch=compute_80,code=sm_80 -gencode=arch=compute_80,code=compute_80 -DBF16_AVAILABLE -U__CUDA_NO_BFLOAT16_OPERATORS__ -U__CUDA_NO_BFLOAT162_OPERATORS__ -U__CUDA_NO_BFLOAT16_CONVERSIONS__ -std=c++17 -c /home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/deepspeed/ops/csrc/adam/multi_tensor_adam.cu -o multi_tensor_adam.cuda.o
x86_64-conda-linux-gnu-cc: fatal error: cannot execute 'cc1plus': execvp: No such file or directory
compilation terminated.
nvcc fatal : Failed to preprocess host compiler properties.
[2/3] c++ -MMD -MF fused_adam_frontend.o.d -DTORCH_EXTENSION_NAME=fused_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -I/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/deepspeed/ops/csrc/includes -I/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/deepspeed/ops/csrc/adam -isystem /home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/include -isystem /home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/include/TH -isystem /home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/include/THC -isystem /home/msrai4srl4s/miniconda3/envs/mcts-dpo/include -isystem /home/msrai4srl4s/miniconda3/envs/mcts-dpo/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++17 -O3 -std=c++17 -g -Wno-reorder -DVERSION_GE_1_1 -DVERSION_GE_1_3 -DVERSION_GE_1_5 -DBF16_AVAILABLE -c /home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/deepspeed/ops/csrc/adam/fused_adam_frontend.cpp -o fused_adam_frontend.o
ninja: build stopped: subcommand failed.
Loading extension module fused_adam...
[rank0]: Traceback (most recent call last):
[rank0]: File "/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 2104, in _run_ninja_build
[rank0]: subprocess.run(
[rank0]: File "/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/subprocess.py", line 526, in run
[rank0]: raise CalledProcessError(retcode, process.args,
[rank0]: subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

[rank0]: The above exception was the direct cause of the following exception:

[rank0]: Traceback (most recent call last):
[rank0]: File "/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/runpy.py", line 196, in _run_module_as_main
[rank0]: return _run_code(code, main_globals, None,
[rank0]: File "/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/runpy.py", line 86, in _run_code
[rank0]: exec(code, run_globals)
[rank0]: File "/home/msrai4srl4s/xuerui/LLM_Reasoning/reasoning/mcts/main.py", line 9, in
[rank0]: sys.exit(main())
[rank0]: File "/home/msrai4srl4s/xuerui/LLM_Reasoning/reasoning/mcts/main.py", line 601, in main
[rank0]: trainer = MCTSTrainer(args, ds_train_config, ds_eval_config)
[rank0]: File "/home/msrai4srl4s/xuerui/LLM_Reasoning/reasoning/mcts/tools/trainers/tsrl_trainer.py", line 83, in init
[rank0]: self.init_engines()
[rank0]: File "/home/msrai4srl4s/xuerui/LLM_Reasoning/reasoning/mcts/tools/trainers/tsrl_trainer.py", line 307, in init_engines
[rank0]: self.actor_model = self._init_train_engine(
[rank0]: File "/home/msrai4srl4s/xuerui/LLM_Reasoning/reasoning/mcts/tools/trainers/tsrl_trainer.py", line 259, in _init_train_engine
[rank0]: optimizer = FusedAdam(optimizer_grouped_parameters, lr=lr, betas=ADAM_BETAS)
[rank0]: File "/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/deepspeed/ops/adam/fused_adam.py", line 94, in init
[rank0]: fused_adam_cuda = FusedAdamBuilder().load()
[rank0]: File "/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/deepspeed/ops/op_builder/builder.py", line 531, in load
[rank0]: return self.jit_load(verbose)
[rank0]: File "/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/deepspeed/ops/op_builder/builder.py", line 578, in jit_load
[rank0]: op_module = load(name=self.name,
[rank0]: File "/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1314, in load
[rank0]: return _jit_compile(
[rank0]: File "/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1721, in _jit_compile
[rank0]: _write_ninja_file_and_build_library(
[rank0]: File "/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1833, in _write_ninja_file_and_build_library
[rank0]: _run_ninja_build(
[rank0]: File "/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 2120, in _run_ninja_build
[rank0]: raise RuntimeError(message) from e
[rank0]: RuntimeError: Error building extension 'fused_adam'
[rank2]: Traceback (most recent call last):
[rank2]: File "/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/runpy.py", line 196, in _run_module_as_main
[rank2]: return _run_code(code, main_globals, None,
[rank2]: File "/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/runpy.py", line 86, in _run_code
[rank2]: exec(code, run_globals)
[rank2]: File "/home/msrai4srl4s/xuerui/LLM_Reasoning/reasoning/mcts/main.py", line 9, in
[rank2]: sys.exit(main())
[rank2]: File "/home/msrai4srl4s/xuerui/LLM_Reasoning/reasoning/mcts/main.py", line 601, in main
[rank2]: trainer = MCTSTrainer(args, ds_train_config, ds_eval_config)
[rank2]: File "/home/msrai4srl4s/xuerui/LLM_Reasoning/reasoning/mcts/tools/trainers/tsrl_trainer.py", line 83, in init
[rank2]: self.init_engines()
[rank2]: File "/home/msrai4srl4s/xuerui/LLM_Reasoning/reasoning/mcts/tools/trainers/tsrl_trainer.py", line 307, in init_engines
[rank2]: self.actor_model = self._init_train_engine(
[rank2]: File "/home/msrai4srl4s/xuerui/LLM_Reasoning/reasoning/mcts/tools/trainers/tsrl_trainer.py", line 259, in _init_train_engine
[rank2]: optimizer = FusedAdam(optimizer_grouped_parameters, lr=lr, betas=ADAM_BETAS)
[rank2]: File "/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/deepspeed/ops/adam/fused_adam.py", line 94, in init
[rank2]: fused_adam_cuda = FusedAdamBuilder().load()
[rank2]: File "/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/deepspeed/ops/op_builder/builder.py", line 531, in load
[rank2]: return self.jit_load(verbose)
[rank2]: File "/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/deepspeed/ops/op_builder/builder.py", line 578, in jit_load
[rank2]: op_module = load(name=self.name,
[rank2]: File "/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1314, in load
[rank2]: return _jit_compile(
[rank2]: File "/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1746, in _jit_compile
[rank2]: return _import_module_from_library(name, build_directory, is_python_module)
[rank2]: File "/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 2140, in _import_module_from_library
[rank2]: module = importlib.util.module_from_spec(spec)
[rank2]: File "", line 571, in module_from_spec
[rank2]: File "", line 1176, in create_module
[rank2]: File "", line 241, in _call_with_frames_removed
[rank2]: ImportError: /home/msrai4srl4s/.cache/torch_extensions/py310_cu118/fused_adam/fused_adam.so: cannot open shared object file: No such file or directory
Loading extension module fused_adam...
Loading extension module fused_adam...
[rank3]: Traceback (most recent call last):
[rank3]: File "/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/runpy.py", line 196, in _run_module_as_main
[rank3]: return _run_code(code, main_globals, None,
[rank3]: File "/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/runpy.py", line 86, in _run_code
[rank3]: exec(code, run_globals)
[rank3]: File "/home/msrai4srl4s/xuerui/LLM_Reasoning/reasoning/mcts/main.py", line 9, in
[rank3]: sys.exit(main())
[rank3]: File "/home/msrai4srl4s/xuerui/LLM_Reasoning/reasoning/mcts/main.py", line 601, in main
[rank3]: trainer = MCTSTrainer(args, ds_train_config, ds_eval_config)
[rank3]: File "/home/msrai4srl4s/xuerui/LLM_Reasoning/reasoning/mcts/tools/trainers/tsrl_trainer.py", line 83, in init
[rank3]: self.init_engines()
[rank3]: File "/home/msrai4srl4s/xuerui/LLM_Reasoning/reasoning/mcts/tools/trainers/tsrl_trainer.py", line 307, in init_engines
[rank3]: self.actor_model = self._init_train_engine(
[rank3]: File "/home/msrai4srl4s/xuerui/LLM_Reasoning/reasoning/mcts/tools/trainers/tsrl_trainer.py", line 259, in _init_train_engine
[rank3]: optimizer = FusedAdam(optimizer_grouped_parameters, lr=lr, betas=ADAM_BETAS)
[rank3]: File "/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/deepspeed/ops/adam/fused_adam.py", line 94, in init
[rank3]: fused_adam_cuda = FusedAdamBuilder().load()
[rank3]: File "/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/deepspeed/ops/op_builder/builder.py", line 531, in load
[rank3]: return self.jit_load(verbose)
[rank3]: File "/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/deepspeed/ops/op_builder/builder.py", line 578, in jit_load
[rank3]: op_module = load(name=self.name,
[rank3]: File "/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1314, in load
[rank3]: return _jit_compile(
[rank3]: File "/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1746, in _jit_compile
[rank3]: return _import_module_from_library(name, build_directory, is_python_module)
[rank3]: File "/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 2140, in _import_module_from_library
[rank3]: module = importlib.util.module_from_spec(spec)
[rank3]: File "", line 571, in module_from_spec
[rank3]: File "", line 1176, in create_module
[rank3]: File "", line 241, in _call_with_frames_removed
[rank3]: ImportError: /home/msrai4srl4s/.cache/torch_extensions/py310_cu118/fused_adam/fused_adam.so: cannot open shared object file: No such file or directory
[rank1]: Traceback (most recent call last):
[rank1]: File "/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/runpy.py", line 196, in _run_module_as_main
[rank1]: return _run_code(code, main_globals, None,
[rank1]: File "/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/runpy.py", line 86, in _run_code
[rank1]: exec(code, run_globals)
[rank1]: File "/home/msrai4srl4s/xuerui/LLM_Reasoning/reasoning/mcts/main.py", line 9, in
[rank1]: sys.exit(main())
[rank1]: File "/home/msrai4srl4s/xuerui/LLM_Reasoning/reasoning/mcts/main.py", line 601, in main
[rank1]: trainer = MCTSTrainer(args, ds_train_config, ds_eval_config)
[rank1]: File "/home/msrai4srl4s/xuerui/LLM_Reasoning/reasoning/mcts/tools/trainers/tsrl_trainer.py", line 83, in init
[rank1]: self.init_engines()
[rank1]: File "/home/msrai4srl4s/xuerui/LLM_Reasoning/reasoning/mcts/tools/trainers/tsrl_trainer.py", line 307, in init_engines
[rank1]: self.actor_model = self._init_train_engine(
[rank1]: File "/home/msrai4srl4s/xuerui/LLM_Reasoning/reasoning/mcts/tools/trainers/tsrl_trainer.py", line 259, in _init_train_engine
[rank1]: optimizer = FusedAdam(optimizer_grouped_parameters, lr=lr, betas=ADAM_BETAS)
[rank1]: File "/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/deepspeed/ops/adam/fused_adam.py", line 94, in init
[rank1]: fused_adam_cuda = FusedAdamBuilder().load()
[rank1]: File "/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/deepspeed/ops/op_builder/builder.py", line 531, in load
[rank1]: return self.jit_load(verbose)
[rank1]: File "/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/deepspeed/ops/op_builder/builder.py", line 578, in jit_load
[rank1]: op_module = load(name=self.name,
[rank1]: File "/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1314, in load
[rank1]: return _jit_compile(
[rank1]: File "/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1746, in _jit_compile
[rank1]: return _import_module_from_library(name, build_directory, is_python_module)
[rank1]: File "/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 2140, in _import_module_from_library
[rank1]: module = importlib.util.module_from_spec(spec)
[rank1]: File "", line 571, in module_from_spec
[rank1]: File "", line 1176, in create_module
[rank1]: File "", line 241, in _call_with_frames_removed
[rank1]: ImportError: /home/msrai4srl4s/.cache/torch_extensions/py310_cu118/fused_adam/fused_adam.so: cannot open shared object file: No such file or directory
[rank0]:[W1224 08:55:42.050474378 ProcessGroupNCCL.cpp:1250] Warning: WARNING: process group has NOT been destroyed before we destruct ProcessGroupNCCL. On normal program exit, the application should call destroy_process_group to ensure that any pending NCCL operations have finished in this process. In rare cases this process can exit before this point and block the progress of another member of the process group. This constraint has always been present, but this warning has only been added since PyTorch 2.4 (function operator())
[2024-12-24 08:55:44,055] [INFO] [launch.py:319:sigkill_handler] Killing subprocess 83558
[2024-12-24 08:55:44,110] [INFO] [launch.py:319:sigkill_handler] Killing subprocess 83559
[2024-12-24 08:55:44,154] [INFO] [launch.py:319:sigkill_handler] Killing subprocess 83560
[2024-12-24 08:55:44,155] [INFO] [launch.py:319:sigkill_handler] Killing subprocess 83561

ds_report output

[2024-12-24 09:10:09,687] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cuda (auto detect)

DeepSpeed C++/CUDA extension op report

NOTE: Ops not installed will be just-in-time (JIT) compiled at
runtime if needed. Op compatibility means that your system
meet the required dependencies to JIT install the op.

JIT compiled ops requires ninja
ninja .................. [OKAY]

op name ................ installed .. compatible

[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io: please install the libaio-dev package with apt
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
async_io ............... [NO] ....... [NO]
fused_adam ............. [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_lion ............... [NO] ....... [OKAY]
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
evoformer_attn ......... [NO] ....... [NO]
[WARNING] FP Quantizer is using an untested triton version (3.1.0), only 2.3.(0, 1) and 3.0.0 are known to be compatible with these kernels
fp_quantizer ........... [NO] ....... [NO]
fused_lamb ............. [NO] ....... [OKAY]
fused_lion ............. [NO] ....... [OKAY]
/home/msrai4srl4s/miniconda3/bin/../lib/gcc/x86_64-conda-linux-gnu/11.2.0/../../../../x86_64-conda-linux-gnu/bin/ld: cannot find -lcufile: No such file or directory
collect2: error: ld returned 1 exit status
gds .................... [NO] ....... [NO]
transformer_inference .. [NO] ....... [OKAY]
inference_core_ops ..... [NO] ....... [OKAY]
cutlass_ops ............ [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
ragged_device_ops ...... [NO] ....... [OKAY]
ragged_ops ............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.5
[WARNING] using untested triton version (3.1.0), only 1.0.0 is known to be compatible
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]

DeepSpeed general environment info:
torch install path ............... ['/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch']
torch version .................... 2.5.1+cu118
deepspeed install path ........... ['/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/deepspeed']
deepspeed info ................... 0.16.2, unknown, unknown
torch cuda version ............... 11.8
torch hip version ................ None
nvcc version ..................... 11.7
deepspeed wheel compiled w. ...... torch 2.5, cuda 11.8
shared memory (/dev/shm) size .... 433.03 GB

System info (please complete the following information):

  • OS: [Ubuntu 20.04.6 LTS]
  • GPU count and types [4 machines with 80GB A100s each]
  • Interconnects : {
    GPU0 GPU1 GPU2 GPU3 NIC0 CPU Affinity NUMA Affinity GPU NUMA ID
    GPU0 X NV12 SYS SYS NODE 0-23 0 N/A
    GPU1 NV12 X SYS SYS SYS 24-47 1 N/A
    GPU2 SYS SYS X NV12 SYS 48-71 2 N/A
    GPU3 SYS SYS NV12 X SYS 72-95 3 N/A
    NIC0 NODE SYS SYS SYS X
    Legend:
    X = Self
    SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
    NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
    PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
    PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
    PIX = Connection traversing at most a single PCIe bridge
    NV# = Connection traversing a bonded set of # NVLinks
    NIC Legend:
    NIC0: mlx5_0 }
  • Python version 3.10.16
  • deepspeed 0.16.2
  • torch 2.5.1+cu118
  • transformers 4.45.2
  • gcc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
    Copyright (C) 2019 Free Software Foundation, Inc.
    This is free software; see the source for copying conditions. There is NO
    warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
  • nvcc: NVIDIA (R) Cuda compiler driver
    Copyright (c) 2005-2022 NVIDIA Corporation
    Built on Wed_Jun__8_16:49:14_PDT_2022
    Cuda compilation tools, release 11.7, V11.7.99
    Build cuda_11.7.r11.7/compiler.31442593_0

Could anyone help me solve this problem?

XueruiSu added the bug and training labels on Dec 24, 2024
@XueruiSu (Author)

When I run the following code from #6892 (comment), I get the same error.

Code

import torch, deepspeed
from deepspeed.ops.adam.fused_adam import FusedAdam
x = FusedAdam([torch.empty(100)])

Error

python test_fuse_adam.py 
[2024-12-24 09:47:23,577] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Using /home/user/.cache/torch_extensions/py310_cu118 as PyTorch extensions root...
Detected CUDA files, patching ldflags
Emitting ninja build file /home/user/.cache/torch_extensions/py310_cu118/fused_adam/build.ninja...
/home/user/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/utils/cpp_extension.py:1964: UserWarning: TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation. 
If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST'].
  warnings.warn(
Building extension module fused_adam...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/2] /home/user/miniconda3/envs/mcts-dpo/bin/nvcc --generate-dependencies-with-compile --dependency-output multi_tensor_adam.cuda.o.d -ccbin /home/user/miniconda3/bin/x86_64-conda-linux-gnu-cc -DTORCH_EXTENSION_NAME=fused_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/home/user/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/deepspeed/ops/csrc/includes -I/home/user/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/deepspeed/ops/csrc/adam -isystem /home/user/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/include -isystem /home/user/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /home/user/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/include/TH -isystem /home/user/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/include/THC -isystem /home/user/miniconda3/envs/mcts-dpo/include -isystem /home/user/miniconda3/envs/mcts-dpo/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_80,code=compute_80 -gencode=arch=compute_80,code=sm_80 --compiler-options '-fPIC' -O3 -DVERSION_GE_1_1 -DVERSION_GE_1_3 -DVERSION_GE_1_5 -lineinfo --use_fast_math -gencode=arch=compute_80,code=sm_80 -gencode=arch=compute_80,code=compute_80 -DBF16_AVAILABLE -U__CUDA_NO_BFLOAT16_OPERATORS__ -U__CUDA_NO_BFLOAT162_OPERATORS__ -U__CUDA_NO_BFLOAT16_CONVERSIONS__ -std=c++17 -c /home/user/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/deepspeed/ops/csrc/adam/multi_tensor_adam.cu -o multi_tensor_adam.cuda.o 
FAILED: multi_tensor_adam.cuda.o 
/home/user/miniconda3/envs/mcts-dpo/bin/nvcc --generate-dependencies-with-compile --dependency-output multi_tensor_adam.cuda.o.d -ccbin /home/user/miniconda3/bin/x86_64-conda-linux-gnu-cc -DTORCH_EXTENSION_NAME=fused_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/home/user/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/deepspeed/ops/csrc/includes -I/home/user/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/deepspeed/ops/csrc/adam -isystem /home/user/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/include -isystem /home/user/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /home/user/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/include/TH -isystem /home/user/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/include/THC -isystem /home/user/miniconda3/envs/mcts-dpo/include -isystem /home/user/miniconda3/envs/mcts-dpo/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_80,code=compute_80 -gencode=arch=compute_80,code=sm_80 --compiler-options '-fPIC' -O3 -DVERSION_GE_1_1 -DVERSION_GE_1_3 -DVERSION_GE_1_5 -lineinfo --use_fast_math -gencode=arch=compute_80,code=sm_80 -gencode=arch=compute_80,code=compute_80 -DBF16_AVAILABLE -U__CUDA_NO_BFLOAT16_OPERATORS__ -U__CUDA_NO_BFLOAT162_OPERATORS__ -U__CUDA_NO_BFLOAT16_CONVERSIONS__ -std=c++17 -c /home/user/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/deepspeed/ops/csrc/adam/multi_tensor_adam.cu -o multi_tensor_adam.cuda.o 
x86_64-conda-linux-gnu-cc: fatal error: cannot execute 'cc1plus': execvp: No such file or directory
compilation terminated.
nvcc fatal   : Failed to preprocess host compiler properties.
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
  File "/home/user/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 2104, in _run_ninja_build
    subprocess.run(
  File "/home/user/miniconda3/envs/mcts-dpo/lib/python3.10/subprocess.py", line 526, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/user/xuerui/LLM_Reasoning/test_fuse_adam.py", line 3, in <module>
    x = FusedAdam([torch.empty(100)])
  File "/home/user/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/deepspeed/ops/adam/fused_adam.py", line 94, in __init__
    fused_adam_cuda = FusedAdamBuilder().load()
  File "/home/user/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/deepspeed/ops/op_builder/builder.py", line 531, in load
    return self.jit_load(verbose)
  File "/home/user/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/deepspeed/ops/op_builder/builder.py", line 578, in jit_load
    op_module = load(name=self.name,
  File "/home/user/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1314, in load
    return _jit_compile(
  File "/home/user/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1721, in _jit_compile
    _write_ninja_file_and_build_library(
  File "/home/user/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1833, in _write_ninja_file_and_build_library
    _run_ninja_build(
  File "/home/user/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 2120, in _run_ninja_build
    raise RuntimeError(message) from e
RuntimeError: Error building extension 'fused_adam'

@tjruwase (Contributor)

@XueruiSu, there seems to be a problem with the C++ compiler in your environment, as highlighted below:

[Screenshot of the build log with the failing host-compiler error highlighted]
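
A quick way to confirm this (a sketch, not from the issue; it reuses the -ccbin compiler path from the log above) is to ask that host compiler to build a trivial C++ program; a missing cc1plus makes it fail with the same execvp error:

# Sketch: invoke the host compiler that nvcc was given via -ccbin on a trivial
# C++ program read from stdin; if cc1plus is missing, the execvp error repeats.
import subprocess
cc = "/home/msrai4srl4s/miniconda3/bin/x86_64-conda-linux-gnu-cc"
result = subprocess.run([cc, "-x", "c++", "-", "-o", "/tmp/cc_check"],
                        input="int main() { return 0; }", text=True,
                        capture_output=True)
print(result.returncode, result.stderr)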

@XueruiSu (Author) commented Dec 25, 2024

@tjruwase Thanks for your response. I changed my g++ & gcc version to 11.2.0; before that, my gcc & g++ version was 9.4.0.

g++ --version

g++ (conda-forge gcc 11.2.0-16) 11.2.0
Copyright (C) 2021 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

gcc --version

gcc (conda-forge gcc 11.2.0-16) 11.2.0
Copyright (C) 2021 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Rerunning the code:

import torch, deepspeed
from deepspeed.ops.adam.fused_adam import FusedAdam
x = FusedAdam([torch.empty(100)])

I got another error, shown below:

[2024-12-25 05:16:48,739] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Using /home/msrai4srl4s/.cache/torch_extensions/py310_cu118 as PyTorch extensions root...
Detected CUDA files, patching ldflags
Emitting ninja build file /home/msrai4srl4s/.cache/torch_extensions/py310_cu118/fused_adam/build.ninja...
/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/utils/cpp_extension.py:1964: UserWarning: TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation. 
If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST'].
  warnings.warn(
Building extension module fused_adam...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/2] /home/msrai4srl4s/miniconda3/envs/mcts-dpo/bin/nvcc --generate-dependencies-with-compile --dependency-output multi_tensor_adam.cuda.o.d -ccbin /home/msrai4srl4s/miniconda3/envs/mcts-dpo/bin/x86_64-conda-linux-gnu-cc -DTORCH_EXTENSION_NAME=fused_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/deepspeed/ops/csrc/includes -I/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/deepspeed/ops/csrc/adam -isystem /home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/include -isystem /home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/include/TH -isystem /home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/include/THC -isystem /home/msrai4srl4s/miniconda3/envs/mcts-dpo/include -isystem /home/msrai4srl4s/miniconda3/envs/mcts-dpo/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_80,code=compute_80 -gencode=arch=compute_80,code=sm_80 --compiler-options '-fPIC' -O3 -DVERSION_GE_1_1 -DVERSION_GE_1_3 -DVERSION_GE_1_5 -lineinfo --use_fast_math -gencode=arch=compute_80,code=sm_80 -gencode=arch=compute_80,code=compute_80 -DBF16_AVAILABLE -U__CUDA_NO_BFLOAT16_OPERATORS__ -U__CUDA_NO_BFLOAT162_OPERATORS__ -U__CUDA_NO_BFLOAT16_CONVERSIONS__ -std=c++17 -c /home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/deepspeed/ops/csrc/adam/multi_tensor_adam.cu -o multi_tensor_adam.cuda.o 
[2/2] /home/msrai4srl4s/miniconda3/envs/mcts-dpo/bin/x86_64-conda-linux-gnu-c++ fused_adam_frontend.o multi_tensor_adam.cuda.o -shared -L/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/lib -lc10 -lc10_cuda -ltorch_cpu -ltorch_cuda -ltorch -ltorch_python -L/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib -lcudart -o fused_adam.so
FAILED: fused_adam.so 
/home/msrai4srl4s/miniconda3/envs/mcts-dpo/bin/x86_64-conda-linux-gnu-c++ fused_adam_frontend.o multi_tensor_adam.cuda.o -shared -L/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/lib -lc10 -lc10_cuda -ltorch_cpu -ltorch_cuda -ltorch -ltorch_python -L/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib -lcudart -o fused_adam.so
/home/msrai4srl4s/miniconda3/envs/mcts-dpo/bin/../lib/gcc/x86_64-conda-linux-gnu/11.2.0/../../../../x86_64-conda-linux-gnu/bin/ld: cannot find -lcudart
collect2: error: ld returned 1 exit status
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
  File "/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 2104, in _run_ninja_build
    subprocess.run(
  File "/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/subprocess.py", line 526, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/msrai4srl4s/xuerui/LLM_Reasoning/test_fuse_adam.py", line 3, in <module>
    x = FusedAdam([torch.empty(100)])
  File "/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/deepspeed/ops/adam/fused_adam.py", line 94, in __init__
    fused_adam_cuda = FusedAdamBuilder().load()
  File "/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/deepspeed/ops/op_builder/builder.py", line 531, in load
    return self.jit_load(verbose)
  File "/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/deepspeed/ops/op_builder/builder.py", line 578, in jit_load
    op_module = load(name=self.name,
  File "/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1314, in load
    return _jit_compile(
  File "/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1721, in _jit_compile
    _write_ninja_file_and_build_library(
  File "/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1833, in _write_ninja_file_and_build_library
    _run_ninja_build(
  File "/home/msrai4srl4s/miniconda3/envs/mcts-dpo/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 2120, in _run_ninja_build
    raise RuntimeError(message) from e
RuntimeError: Error building extension 'fused_adam'

I installed my gcc & g++ with the following commands. It seems there is another error here that I do not understand.

# Install the corresponding version
conda install -c conda-forge gxx_linux-64=11.2.0
# Create symbolic links to your installed gcc/g++
ln -s /home/ms/anaconda3/envs/env_name/bin/x86_64-conda-linux-gnu-g++ g++
ln -s /home/ms/anaconda3/envs/env_name/bin/x86_64-conda-linux-gnu-gcc gcc

@XueruiSu (Author)

Oh I see. I should find the right 'libcudart.so' file.
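
A small helper along these lines (a sketch; it assumes libcudart should live under the active conda environment or under CUDA_HOME) can show where libcudart.so actually is, so it can be compared with the -L path in the failing link command:

# Sketch: look for libcudart.so under the active conda environment and under
# CUDA_HOME, and report what the dynamic loader would resolve by default.
import ctypes.util, glob, os

roots = [os.environ.get("CONDA_PREFIX", ""), os.environ.get("CUDA_HOME", "/usr/local/cuda")]
hits = []
for root in filter(None, roots):
    hits += glob.glob(os.path.join(root, "lib*", "libcudart.so*"))
print("\n".join(sorted(set(hits))) or "libcudart.so not found under CONDA_PREFIX / CUDA_HOME")
print("loader default:", ctypes.util.find_library("cudart"))

Once the real location is known, adding it to LIBRARY_PATH (or symlinking libcudart.so into the env's lib directory that the link line already passes via -L) should let the -lcudart step succeed.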

@XueruiSu (Author)

However, the second problem ("the hang under stage 3 with the optimizer offload") is still being debugged. Upset, upset, upset.

@XueruiSu (Author)

The backward code is below:

actor_model.backward(loss)
actor_model.step()

where actor_model is a 'DeepSpeedEngine' instance and loss is 'tensor(0.6945, device='cuda:1', grad_fn=...)'. The actor_model is:

DeepSpeedEngine(
  (module): LlamaForCausalLM(
    (model): LlamaModel(
      (embed_tokens): Embedding(128256, 4096)
      (layers): ModuleList(
        (0-31): 32 x LlamaDecoderLayer(
          (self_attn): LlamaSdpaAttention(
            (q_proj): Linear(in_features=4096, out_features=4096, bias=False)
            (k_proj): Linear(in_features=4096, out_features=1024, bias=False)
            (v_proj): Linear(in_features=4096, out_features=1024, bias=False)
            (o_proj): Linear(in_features=4096, out_features=4096, bias=False)
            (rotary_emb): LlamaRotaryEmbedding()
          )
          (mlp): LlamaMLP(
            (gate_proj): Linear(in_features=4096, out_features=14336, bias=False)
            (up_proj): Linear(in_features=4096, out_features=14336, bias=False)
            (down_proj): Linear(in_features=14336, out_features=4096, bias=False)
            (act_fn): SiLU()
          )
          (input_layernorm): LlamaRMSNorm((4096,), eps=1e-05)
          (post_attention_layernorm): LlamaRMSNorm((4096,), eps=1e-05)
        )
      )
      (norm): LlamaRMSNorm((4096,), eps=1e-05)
      (rotary_emb): LlamaRotaryEmbedding()
    )
    (lm_head): Linear(in_features=4096, out_features=128256, bias=False)
  )
)

When the program reaches the line 'actor_model.backward(loss)', it stops and does not exit until the multi-threaded timeout mechanism is triggered.
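
One way to see where each rank is blocked (a debugging sketch, not part of the original script; actor_model and loss are the same objects as above) is to arm faulthandler around the call so all Python thread stacks are dumped if backward() does not return within a timeout:

# Debugging sketch: if backward() hangs, dump the Python stacks of all threads
# on this rank after 120 seconds instead of waiting for the NCCL timeout.
import faulthandler, sys

faulthandler.dump_traceback_later(timeout=120, exit=False, file=sys.stderr)
actor_model.backward(loss)
faulthandler.cancel_dump_traceback_later()
actor_model.step()

Comparing the dumps across ranks can show whether one rank is stuck in a different collective than the others, which is a common cause of this kind of hang under ZeRO-3 with optimizer offload.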

@XueruiSu (Author)

The GPU utilization is below:
[Screenshot of GPU utilization]
