code | Notes |
---|---|
src/ML-try-007.ipynb | The final submission code.<br>Sets up the notebook to use TensorBoard; TensorBoard likely does not work in watsonx, so remove it.<br>Uses ./csv/usgs_gsvb_v2.csv for data; the input has roughly 9K rows and 8 columns.<br>Adds new columns and normalizes the data. This cleaning step should be split out into a single, separate file that generates the clean data and includes a YAML data contract (see the sketch after this table).<br>Everything up to that point is data cleaning; the rest of the code defines, trains, and tests the model. |
csv/usgs_gsvb_v2.csv | Initial set of raw data.<br>Originally this data came from two sources, USGS and Greenstream sensor data; it was massaged and combined previously to make the current file. |
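A minimal sketch of what that standalone cleaning step could look like, assuming pandas and PyYAML are available. The min-max normalization, the output paths, and the contract fields are illustrative assumptions, not taken from the notebook.

```python
# clean_data.py - hypothetical standalone cleaning step (sketch, not the notebook's code).
# Output paths, the normalization choice, and the contract fields are assumptions.
import pandas as pd
import yaml

RAW_CSV = "./csv/usgs_gsvb_v2.csv"
CLEAN_CSV = "./csv/usgs_gsvb_clean.csv"          # assumed output path
CONTRACT_YAML = "./csv/usgs_gsvb_contract.yaml"  # assumed contract path

def clean(df: pd.DataFrame) -> pd.DataFrame:
    df = df.dropna()                              # drop incomplete rows
    # Min-max normalize every numeric column; the notebook's actual derived
    # columns and normalization may differ.
    numeric = df.select_dtypes(include="number").columns
    df[numeric] = (df[numeric] - df[numeric].min()) / (df[numeric].max() - df[numeric].min())
    return df

def write_contract(df: pd.DataFrame) -> None:
    # Record what downstream code can rely on: source file, row count, column dtypes.
    contract = {
        "source": RAW_CSV,
        "rows": int(len(df)),
        "columns": [{"name": c, "dtype": str(df[c].dtype)} for c in df.columns],
    }
    with open(CONTRACT_YAML, "w") as f:
        yaml.safe_dump(contract, f, sort_keys=False)

if __name__ == "__main__":
    raw = pd.read_csv(RAW_CSV)
    cleaned = clean(raw)
    cleaned.to_csv(CLEAN_CSV, index=False)
    write_contract(cleaned)
```

The contract here only records source, row count, and column dtypes; the real contract would list whatever guarantees the training code depends on.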
Water is listed as one of the objectives. The specific water entry is "Address issues of water scarcity and quality".
In this case, the project measures water level (scarcity) at a position based on readings from nearby sensors.
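As a rough illustration of the modeling half of the notebook (define, train, and test a model that predicts water level at a position from nearby sensor readings), here is a minimal scikit-learn sketch. The target column name `water_level`, the cleaned CSV path, and the choice of a plain linear regression are assumptions; the notebook's actual model is presumably different.

```python
# Hypothetical sketch: predict water level at a target position from nearby sensor readings.
# Column names, the file path, and the model choice are assumptions, not from ML-try-007.ipynb.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

df = pd.read_csv("./csv/usgs_gsvb_clean.csv")         # assumed cleaned data from the step above
target = "water_level"                                # assumed target column
features = [c for c in df.columns if c != target]     # nearby-sensor readings as features

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df[target], test_size=0.2, random_state=42
)

model = LinearRegression()                            # the notebook likely uses something richer
model.fit(X_train, y_train)
print("test MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```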
This is a synopsis of the guide:
- download this pax file on your laptop
https://github.com/ZOSOpenTools/meta/releases/download/v0.6.2/meta-0.6.2.pax.Z
- upload it to USS on the z/OS system
scp meta-0.6.2.pax.Z [email protected]:/netskin/.
- update your shell with the proper settings
- add this to ~/.bashrc
export _BPXK_AUTOCVT=ON
export _CEE_RUNOPTS="$_CEE_RUNOPTS FILETAG(AUTOCVT,AUTOTAG) POSIX(ON)"
export _TAG_REDIR_ERR=txt
export _TAG_REDIR_IN=txt
export _TAG_REDIR_OUT=txt
- log out and back in, or source your ~/.bashrc, then run `export` to see that these settings are in effect
- Unpack the pax file
cd /netskin
pax -rf meta-0.6.2.pax.Z
cd meta-0.6.2
- source the .env file into your current environment
. ./.env
- install the bootstrap
zopen init
- specify /netskin/zopen as the destination - NOTE: if it fails due to a cacert.pem error, just redo
. ./.env
zopen init
- add zopen-config to ~/.profile
. /netskin/zopen/etc/zopen-config
- install llamacpp with zopen
zopen install llamacpp
- download the model
curl -L -O https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGML/resolve/main/llama-2-7b-chat.ggmlv3.q4_0.bin
- set the path to the driver; the llama.cpp binaries live here
ls /netskin/zopen/usr/local/zopen/llamacpp/llamacpp/bin/
For background, see Igor's blog post on porting LLaMa to USS.
Create a script that times how long LLaMa on USS takes to generate some summary text. Quote the heredoc delimiter so the command substitutions are written into the script rather than expanded when the file is created.
$ cat << 'EOF' > doit.sh
start=`date +%s`
main -m ./llama-2-7b-chat.ggmlv3.q4_0.bin -n 125 -i -p "[INST] <<SYS>> You are a helpful, respectful and honest assistant. <</SYS>> Write a markdown file that summarizes the following text: call for code IBM TechXchange This is a mini two day hackathon for prizes. It is due at 3pm today.[/INST]"
end=`date +%s`
runtime=$((end-start))
echo $runtime
EOF