« Listen, we were asking why openwebui won't handle functions. I don't need this model by itself. » Yes, it hangs. Here is the log file:
print_info: file type = Q4_0
print_info: file size = 5.06 GiB (4.71 BPW)
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: printing all EOG tokens:
load: - 1 ('<eos>')
load: - 107 ('<end_of_turn>')
load: special tokens cache size = 108
load: token to piece cache size = 1.6014 MB
print_info: arch = gemma2
print_info: vocab_only = 0
print_info: no_alloc = 0
print_info: n_ctx_train = 8192
print_info: n_embd = 3584
print_info: n_embd_inp = 3584
print_info: n_layer = 42
print_info: n_head = 16
print_info: n_head_kv = 8
print_info: n_rot = 256
print_info: n_swa = 4096
print_info: is_swa_any = 1
print_info: n_embd_head_k = 256
print_info: n_embd_head_v = 256
print_info: n_gqa = 2
print_info: n_embd_k_gqa = 2048
print_info: n_embd_v_gqa = 2048
print_info: f_norm_eps = 0.0e+00
print_info: f_norm_rms_eps = 1.0e-06
print_info: f_clamp_kqv = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale = 0.0e+00
print_info: f_attn_scale = 6.2e-02
print_info: n_ff = 14336
print_info: n_expert = 0
print_info: n_expert_used = 0
print_info: n_expert_groups = 0
print_info: n_group_used = 0
print_info: causal attn = 1
print_info: pooling type = 0
print_info: rope type = 2
print_info: rope scaling = linear
print_info: freq_base_train = 10000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn = 8192
print_info: rope_yarn_log_mul = 0.0000
print_info: rope_finetuned = unknown
print_info: model type = 9B
print_info: model params = 9.24 B
print_info: general.name = gemma-2-9b-it
print_info: vocab type = SPM
print_info: n_vocab = 256000
print_info: n_merges = 0
print_info: BOS token = 2 '<bos>'
print_info: EOS token = 1 '<eos>'
print_info: EOT token = 107 '<end_of_turn>'
print_info: UNK token = 3 '<unk>'
print_info: PAD token = 0 '<pad>'
print_info: LF token = 227 '<0x0A>'
print_info: EOG token = 1 '<eos>'
print_info: EOG token = 107 '<end_of_turn>'
print_info: max token length = 93
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors: CPU model buffer size = 5185.21 MiB
llama_context: constructing llama_context
llama_context: n_seq_max = 1
llama_context: n_ctx = 4096
llama_context: n_ctx_seq = 4096
llama_context: n_batch = 512
llama_context: n_ubatch = 512
llama_context: causal_attn = 1
llama_context: flash_attn = auto
llama_context: kv_unified = false
llama_context: freq_base = 10000.0
llama_context: freq_scale = 1
llama_context: n_ctx_seq (4096) < n_ctx_train (8192) -- the full capacity of the model will not be utilized
llama_context: CPU output buffer size = 0.99 MiB
llama_kv_cache_iswa: using full-size SWA cache (ref: https://github.com/ggml-org/llama.cpp/pull/13194#issuecomment-2868343055)
llama_kv_cache_iswa: creating non-SWA KV cache, size = 4096 cells
llama_kv_cache: CPU KV buffer size = 672.00 MiB
llama_kv_cache: size = 672.00 MiB ( 4096 cells, 21 layers, 1/1 seqs), K (f16): 336.00 MiB, V (f16): 336.00 MiB
llama_kv_cache_iswa: creating SWA KV cache, size = 4096 cells
llama_kv_cache: CPU KV buffer size = 672.00 MiB
llama_kv_cache: size = 672.00 MiB ( 4096 cells, 21 layers, 1/1 seqs), K (f16): 336.00 MiB, V (f16): 336.00 MiB
llama_context: Flash Attention was auto, set to enabled
llama_context: CPU compute buffer size = 514.00 MiB
llama_context: graph nodes = 1524
llama_context: graph splits = 1
time=2026-04-10T08:06:12.829Z level=INFO source=server.go:1364 msg="waiting for llama runner to start responding"
time=2026-04-10T08:06:12.829Z level=INFO source=server.go:1402 msg="llama runner started in 12.10 seconds"
time=2026-04-10T08:06:12.829Z level=INFO source=sched.go:561 msg="loaded runners" count=1
[GIN] 2026/04/10 - 08:06:12 | 200 | 12.79224431s | 127.0.0.1 | POST "/api/generate"
[GIN] 2026/04/10 - 08:06:37 | 200 | 4.707431766s | 127.0.0.1 | POST "/api/chat"
[GIN] 2026/04/10 - 08:06:45 | 200 | 463.959µs | 172.17.0.1 | GET "/api/tags"
[GIN] 2026/04/10 - 08:06:45 | 200 | 28.211µs | 172.17.0.1 | GET "/api/ps"
[GIN] 2026/04/10 - 08:06:45 | 200 | 43.38µs | 172.17.0.1 | GET "/api/version"
[GIN] 2026/04/10 - 08:07:26 | 200 | 78.747µs | 172.17.0.1 | GET "/api/version"
[GIN] 2026/04/10 - 08:07:28 | 200 | 398.629µs | 172.17.0.1 | GET "/api/tags"
[GIN] 2026/04/10 - 08:07:34 | 200 | 52.56µs | 172.17.0.1 | GET "/api/ps"
[GIN] 2026/04/10 - 08:07:50 | 200 | 444.119µs | 172.17.0.1 | GET "/api/tags"
[GIN] 2026/04/10 - 08:07:53 | 200 | 55.61µs | 172.17.0.1 | GET "/api/ps"
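For reference: the log above shows POST "/api/chat" returning 200 in about 4.7 s, and the only warning is n_ctx_seq (4096) < n_ctx_train (8192), i.e. the context window is set below what the model was trained on. Below is a minimal sketch for testing the model against Ollama directly, bypassing openwebui, with num_ctx raised to the trained 8192. The endpoint 127.0.0.1:11434 is Ollama's default, and the model tag "gemma2:9b" is an assumption based on general.name = gemma-2-9b-it; adjust both to your install.

import json
import urllib.request

# Request body for Ollama's /api/chat. "gemma2:9b" is an assumed tag;
# check the actual name with `ollama list`.
payload = {
    "model": "gemma2:9b",
    "messages": [{"role": "user", "content": "Reply with one word: ping"}],
    "stream": False,
    # Raise the context to the trained size; this removes the
    # "n_ctx_seq (4096) < n_ctx_train (8192)" warning from the log.
    "options": {"num_ctx": 8192},
}

req = urllib.request.Request(
    "http://127.0.0.1:11434/api/chat",  # default Ollama address (assumption)
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# A generous timeout so a genuine hang fails loudly instead of blocking forever.
with urllib.request.urlopen(req, timeout=120) as resp:
    print(json.load(resp)["message"]["content"])

If this call returns promptly while the same model hangs in openwebui, the problem is on the openwebui side (or in its function/tool plumbing) rather than in the model or Ollama itself.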