Capture CUDA logging output #7298

Merged
merged 6 commits into from May 18, 2024

Conversation

fraxy-v (Contributor) commented May 15, 2024

Addresses the logging capture issues reported in #5797.

This PR adds a CUDA logging callback similar to that in ggml-metal.h:

GGML_API void ggml_backend_metal_log_set_callback(ggml_log_callback log_callback, void * user_data);
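For context, a minimal usage sketch of how an application could route backend log output into its own logger; the CUDA setter name below is an assumption, modeled on the Metal function above, and the final API of this PR may differ:

#include <cstdio>
#include "ggml.h"       // ggml_log_level, ggml_log_callback
#include "ggml-cuda.h"  // assumed to declare the CUDA log setter added by this PR

static void my_logger(ggml_log_level level, const char * text, void * user_data) {
    (void) level;
    (void) user_data;
    fprintf(stderr, "MY LOGGER SAYS: %s", text); // most messages already end with '\n'
}

int main() {
    // hypothetical name, by analogy with ggml_backend_metal_log_set_callback
    ggml_backend_cuda_log_set_callback(my_logger, /*user_data=*/nullptr);
    // ... load the model and create the context as usual
    return 0;
}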

However, the OP of that issue has almost certainly run into other log messages that are still not captured. For example:

MY LOGGER SAYS: ggml_cuda_host_malloc: failed to allocate 256.00 MiB of pinned memory: no CUDA-capable device is detected
MY LOGGER SAYS: llama_kv_cache_init:        CPU KV buffer size =   256.00 MiB
MY LOGGER SAYS: llama_new_context_with_model: KV self size  =  256.00 MiB, K (f16):  128.00 MiB, V (f16):  128.00 MiB
MY LOGGER SAYS: ggml_cuda_host_malloc: failed to allocate 0.12 MiB of pinned memory: no CUDA-capable device is detected
MY LOGGER SAYS: llama_new_context_with_model:        CPU  output buffer size =     0.12 MiB
ggml_gallocr_reserve_n: reallocating CUDA_Host buffer from size 0.00 MiB to 70.50 MiB
MY LOGGER SAYS: ggml_cuda_host_malloc: failed to allocate 70.50 MiB of pinned memory: no CUDA-capable device is detected
MY LOGGER SAYS: llama_new_context_with_model:  CUDA_Host compute buffer size =    70.50 MiB
MY LOGGER SAYS: llama_new_context_with_model: graph nodes  = 1030
MY LOGGER SAYS: llama_new_context_with_model: graph splits = 1

system_info: n_threads = 4 / 8 | AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | 
sampling: 
	repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000
	top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.800
	mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampling order: 
CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temperature 
generate: n_ctx = 512, n_batch = 2048, n_predict = -1, n_keep = 1


<s> Tell me a story about a llama
MY LOGGER SAYS: 
MY LOGGER SAYS: llama_print_timings:   

As is evident, methods in ggml-alloc.c, e.g. ggml_gallocr_reserve_n, write directly to stderr. That raises the question of whether the internal logging in llama.cpp should be made commonly available, e.g. in common.h.
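To make this concrete, below is a purely illustrative sketch (generic names, not the actual llama.cpp/ggml helpers) of the difference: output routed through a shared log helper reaches a user-installed callback, while a raw fprintf(stderr, ...) such as the one in ggml_gallocr_reserve_n never does.

#include <cstdarg>
#include <cstdio>

typedef void (*log_callback_t)(const char * text, void * user_data);

static log_callback_t g_log_cb   = nullptr;
static void *         g_log_data = nullptr;

void log_set_callback(log_callback_t cb, void * user_data) {
    g_log_cb   = cb;
    g_log_data = user_data;
}

// shared helper: reaches the callback when one is installed
void log_internal(const char * fmt, ...) {
    char buf[1024];
    va_list args;
    va_start(args, fmt);
    vsnprintf(buf, sizeof(buf), fmt, args);
    va_end(args);
    if (g_log_cb) {
        g_log_cb(buf, g_log_data);
    } else {
        fputs(buf, stderr);
    }
}

void example(void) {
    log_internal("reallocating buffer from %.2f MiB to %.2f MiB\n", 0.0, 70.5);    // captured
    fprintf(stderr, "reallocating buffer from %.2f MiB to %.2f MiB\n", 0.0, 70.5); // bypasses the callback
}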

github-actions bot commented May 15, 2024

📈 llama.cpp server for bench-server-baseline on Standard_NC4as_T4_v3 for phi-2-q4_0: 547 iterations 🚀

Expand details for performance related PR only
  • Concurrent users: 8, duration: 10m
  • HTTP request : avg=8584.51ms p(95)=20980.66ms fails=, finish reason: stop=488 truncated=59
  • Prompt processing (pp): avg=102.17tk/s p(95)=445.57tk/s
  • Token generation (tg): avg=59.06tk/s p(95)=46.91tk/s
  • ggml-org/models/phi-2/ggml-model-q4_0.gguf parallel=8 ctx-size=16384 ngl=33 batch-size=2048 ubatch-size=256 pp=1024 pp+tg=2048 branch=master commit=89b3236152d97e0361939091b467d4530226e9fc

prompt_tokens_seconds

[chart: llama.cpp bench-server-baseline on Standard_NC4as_T4_v3, duration=10m, 547 iterations; y-axis: llamacpp:prompt_tokens_seconds]
                    
predicted_tokens_seconds

[chart: llama.cpp bench-server-baseline on Standard_NC4as_T4_v3, duration=10m, 547 iterations; y-axis: llamacpp:predicted_tokens_seconds]
                    

Details

kv_cache_usage_ratio

[chart: llama.cpp bench-server-baseline on Standard_NC4as_T4_v3, duration=10m, 547 iterations; y-axis: llamacpp:kv_cache_usage_ratio]
                    
requests_processing

[chart: llama.cpp bench-server-baseline on Standard_NC4as_T4_v3, duration=10m, 547 iterations; y-axis: llamacpp:requests_processing]
                    

mofosyne added the enhancement (New feature or request), Nvidia GPU (Issues specific to Nvidia GPUs), and Review Complexity: Medium (Generally require more time to grok but manageable by beginner to medium expertise level) labels on May 15, 2024
ggerganov (Owner) commented:

> As is evident, methods in ggml-alloc.c, e.g. ggml_gallocr_reserve_n, write directly to stderr. That raises the question of whether the internal logging in llama.cpp should be made commonly available, e.g. in common.h.

Most of the printfs there are only active in Debug builds - these are not a problem.
For the rest, though, I'm not sure - I'm wondering if we should just keep using fprintf(stderr, ...) for errors and only propagate a log function when we want to print info/warning messages (such as the ones addressed in the current issue).
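A minimal sketch of that split, under hypothetical names (this is not the code merged in this PR): errors keep going straight to stderr, and only info/warning messages are forwarded to an installed callback.

#include <cstdio>
#include "ggml.h" // ggml_log_level, ggml_log_callback

// hypothetical dispatch helper illustrating the suggested policy
static void emit_log(ggml_log_level level, const char * text,
                     ggml_log_callback cb, void * user_data) {
    if (level == GGML_LOG_LEVEL_ERROR || cb == nullptr) {
        fprintf(stderr, "%s", text);  // errors (or no callback installed): plain stderr
    } else {
        cb(level, text, user_data);   // info/warnings: the application decides what to do
    }
}

// usage:
//   emit_log(GGML_LOG_LEVEL_INFO,  "graph splits = 1\n",                            cb, ud); // captured
//   emit_log(GGML_LOG_LEVEL_ERROR, "failed to allocate pinned memory: no device\n", cb, ud); // stderr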

Outdated review comments on ggml-cuda.cu and llama.cpp (resolved)
ggerganov requested a review from slaren on May 16, 2024, 08:31
Further outdated review comments on llama.cpp (resolved)
fraxy-v and others added 2 commits May 18, 2024 09:42
Co-authored-by: slaren <[email protected]>
Co-authored-by: slaren <[email protected]>
github-actions bot added the ggml (changes relating to the ggml tensor library for machine learning) label on May 18, 2024
slaren merged commit f5bf761 into ggerganov:master on May 18, 2024
62 of 68 checks passed