
a Python script error #140

Open
bilallamal07 opened this issue Aug 6, 2024 · 5 comments

bilallamal07 commented Aug 6, 2024

I'm experiencing an error with a Python script. The error message indicates that a subprocess call to conda run failed with a non-zero exit status of 1. The command that failed was trying to run a Python script (train.py) inside a conda environment named praison_env.
The virtual environment is activated.

This is the error:

ERROR conda.cli.main_run:execute(125): conda run python -u /usr/local/lib/python3.10/dist-packages/praisonai/train.py train failed. (See above for error)

subprocess.CalledProcessError: Command '['conda', 'run', '--no-capture-output', '--name', 'praison_env', 'python', '-u', '/usr/local/lib/python3.10/dist-packages/praisonai/train.py', 'train']' returned non-zero exit status 1.
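A first debugging step, before re-running the wrapper, is to confirm that the praison_env environment actually exists as far as conda is concerned. A minimal sketch (the helper names here are mine, not part of PraisonAI):

```python
import json
import os
import subprocess

def env_names_from_json(text):
    # `conda env list --json` returns {"envs": ["/path/to/base", ...]};
    # the environment name is the last path component.
    return [os.path.basename(p) for p in json.loads(text)["envs"]]

def conda_env_names():
    # Ask conda itself which environments it knows about.
    out = subprocess.run(
        ["conda", "env", "list", "--json"],
        capture_output=True, text=True, check=True,
    ).stdout
    return env_names_from_json(out)
```

Calling `"praison_env" in conda_env_names()` should print/return True; if it doesn't, `conda run --name praison_env` can only fail.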

VoTranThi commented Aug 7, 2024

Try installing Miniconda instead of Anaconda:

wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh

bilallamal07 (Author) commented

Miniconda is already installed instead of Anaconda. It looks like I have found the conda executable in multiple locations.
I have added the correct location to my PATH environment variable.
I need to try PraisonAI Train now and will let you know. Thanks for your support.
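When several conda executables are installed, the one that wins is the first hit on PATH. A small sketch to list every `conda` on PATH in lookup order (assuming a POSIX-style PATH; the function name is mine):

```python
import os

def find_all_on_path(name):
    # Walk PATH in order and collect every executable file with the
    # given name; the first entry is the one the shell will resolve.
    hits = []
    for d in os.environ.get("PATH", "").split(os.pathsep):
        candidate = os.path.join(d, name)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            hits.append(candidate)
    return hits
```

Running `find_all_on_path("conda")` makes it obvious whether the Miniconda copy or a stale Anaconda copy comes first.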

bilallamal07 (Author) commented

I have run praisonai train, and this is the error I got:

ERROR conda.cli.main_run:execute(125): conda run python -u /usr/local/lib/python3.10/dist-packages/praisonai/train.py train failed. (See above for error)
Traceback (most recent call last):
  File "/usr/local/bin/praisonai", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.10/dist-packages/praisonai/main.py", line 7, in main
    praison_ai.main()
  File "/usr/local/lib/python3.10/dist-packages/praisonai/cli.py", line 180, in main
    stream_subprocess(['conda', 'run', '--no-capture-output', '--name', 'praison_env', 'python', '-u', train_script_path, 'train'], env=env)
  File "/usr/local/lib/python3.10/dist-packages/praisonai/cli.py", line 59, in stream_subprocess
    raise subprocess.CalledProcessError(return_code, command)
subprocess.CalledProcessError: Command '['conda', 'run', '--no-capture-output', '--name', 'praison_env', 'python', '-u', '/usr/local/lib/python3.10/dist-packages/praisonai/train.py', 'train']' returned non-zero exit status 1.
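For context on what this traceback means: the `stream_subprocess` frame is just a wrapper that echoes the child's output and re-raises any non-zero exit as `CalledProcessError`, so the real failure is whatever train.py printed above it. A minimal re-creation of such a helper (a hypothetical sketch, not PraisonAI's actual code):

```python
import subprocess

def stream_subprocess(command, env=None):
    # Run the command, echoing stdout/stderr line by line as it arrives,
    # then raise CalledProcessError on a non-zero exit status -- the
    # exception seen at the bottom of the traceback above.
    process = subprocess.Popen(
        command,
        env=env,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        text=True,
    )
    for line in process.stdout:
        print(line, end="")
    return_code = process.wait()
    if return_code != 0:
        raise subprocess.CalledProcessError(return_code, command)
```

The practical consequence: the `CalledProcessError` itself carries no diagnostics, so the lines printed just before it are the ones to read.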

VoTranThi commented

I had the same situation, and in my case the protobuf package was missing. Try installing:
pip install google protobuf google-cloud
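To verify whether the missing-package theory applies, you can check importability directly with the same interpreter that `conda run --name praison_env` would use, rather than re-reading pip output. A small sketch (the function name is mine):

```python
import importlib.util

def check_modules(names):
    # Return {name: importable?} for each module, without importing it.
    # find_spec raises ModuleNotFoundError when a parent package is
    # absent (e.g. "google.protobuf" with no "google"), so catch that.
    results = {}
    for name in names:
        try:
            results[name] = importlib.util.find_spec(name) is not None
        except ModuleNotFoundError:
            results[name] = False
    return results
```

For example, `check_modules(["google.protobuf", "google.cloud"])` shows at a glance which of the suggested packages is actually visible to that environment.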

bilallamal07 (Author) commented

OK, thanks. pip install google protobuf google-cloud didn't solve the issue. In any case, it was a conda env problem when running it from RunPod: there is no need to create an env, just install Miniconda and run it from root.

Now I'm in this situation: the process is taking a long time to upload the model to Ollama. Is this normal?
Please help, I'm nearly there :-)

This is where it stops for quite some time <<--------

Saved GGUF to https://huggingface.co/MLShare/Meta-Llama-3.1-8B-Instruct
2024/08/08 06:42:07 routes.go:1108: INFO server config env="map[CUDA_VISIBLE_DEVICES:0 GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES:
HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false
OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models
OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS: OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false
OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-08-08T06:42:07.954Z level=INFO source=images.go:781 msg="total blobs: 0"
time=2024-08-08T06:42:07.954Z level=INFO source=images.go:788 msg="total unused blobs removed: 0"
time=2024-08-08T06:42:07.955Z level=INFO source=routes.go:1155 msg="Listening on [::]:11434 (version 0.3.4)"
time=2024-08-08T06:42:07.955Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama3933888161/runners
time=2024-08-08T06:42:12.089Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries "
time=2024-08-08T06:42:12.089Z level=INFO source=gpu.go:204 msg="looking for compatible GPUs"
time=2024-08-08T06:42:12.241Z level=INFO source=types.go:105 msg="inference compute" id=GPU-5c1fc2b2-80d1-42be-5acf-61d17c94f065
library=cuda compute=8.6 driver=12.2 name="NVIDIA RTX A6000" total="47.5 GiB" available="40.8 GiB"

#### Overall understanding

Based on my analysis, this log output is from a system that is:

- Configuring its environment and settings
- Managing blobs (binary large objects)
- Starting up and listening on a specific port
- Using Large Language Models (LLMs) and dynamically loading libraries
- Detecting and utilizing GPUs for computation

The system seems to be preparing for some kind of computation or processing task, possibly related to natural language processing or machine learning.
