
Models path change behaviour #61

@prawel

Description

When I change the models path in Settings, the app overwrites any existing models.ini file in the new location without asking for confirmation. This is wrong; it should prompt before overwriting.

Also, the app doesn't scan subdirectories for GGUF files.
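A recursive scan is a small change. A minimal sketch in Python (the app's actual implementation language is unknown, and `find_ggufs` is a hypothetical helper name, not part of the app):

```python
from pathlib import Path

def find_ggufs(root):
    """Recursively collect all .gguf files under root, sorted by path.

    Hypothetical helper: illustrates the requested subdirectory scan,
    not the app's real code.
    """
    return sorted(p for p in Path(root).rglob("*.gguf"))
```

`Path.rglob("*.gguf")` matches files both directly in the root and in any nested subdirectory, which is the behaviour the issue asks for.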

Ideally, when pointed to a directory that contains a models.ini file (in llama-server --models-preset syntax), the menu bar should list those models and allow loading/unloading them, along with their loaded/unloaded status, just like the llama-server web UI does.
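llama-server exposes an OpenAI-compatible /v1/models endpoint, so a menu bar app could poll it to populate the model list. A minimal sketch, assuming the standard OpenAI-style response shape (`{"data": [{"id": ...}]}`); the base URL is illustrative, and any load/unload endpoints are outside this sketch:

```python
import json
from urllib.request import urlopen

def model_ids(payload):
    """Extract model ids from an OpenAI-style /v1/models response body."""
    return [m["id"] for m in json.loads(payload).get("data", [])]

def poll_models(base_url="http://127.0.0.1:8080"):
    """Hypothetical poll of a running llama-server; returns [] if unreachable."""
    try:
        with urlopen(f"{base_url}/v1/models", timeout=2) as resp:
            return model_ids(resp.read())
    except OSError:
        return []
```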

It should also follow the settings from the models.ini file, e.g.:

version = 1

[*]
fit = on ; Automatic memory fitting
ctk = q8_0 ; KV cache key quantization
ctv = q8_0 ; KV cache value quantization
fa = on ; Enable flash attention
mlock = on ; Lock model in RAM
kvu = on ; Unified KV cache buffer
ctx-size = 0
jinja = on

[GPT-OSS-20B]
m = /Users/pk/test/gguf/ggml-org/gpt-oss-20b-GGUF/gpt-oss-20b-mxfp4.gguf
grammar-file = /Users/pk/test/gpt-oss-grammar.gbnf
temp = 1.0
top-p = 1.0
top-k = 0
chat-template-kwargs = {"reasoning_effort":"low"}
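The `[*]` section holds defaults that each named model section should inherit, with per-model keys taking precedence. A minimal sketch of that merge in Python, assuming stock configparser with `;` inline comments; the sample string and function name are illustrative, not the app's real code:

```python
import configparser

SAMPLE = """\
version = 1

[*]
fit = on ; Automatic memory fitting
ctk = q8_0

[GPT-OSS-20B]
m = /path/to/gpt-oss-20b-mxfp4.gguf
temp = 1.0
"""

def parse_models_ini(text):
    """Parse a models.ini-style file, merging [*] defaults into each model."""
    # The leading "version = 1" line sits outside any section, which stock
    # configparser rejects, so capture it separately before parsing.
    version = None
    body = []
    for line in text.splitlines():
        stripped = line.strip()
        if version is None and stripped.startswith("version"):
            version = stripped.split("=", 1)[1].split(";")[0].strip()
            continue
        body.append(line)
    cp = configparser.ConfigParser(inline_comment_prefixes=(";",))
    cp.optionxform = str  # preserve key case
    cp.read_string("\n".join(body))
    defaults = dict(cp["*"]) if cp.has_section("*") else {}
    models = {}
    for name in cp.sections():
        if name == "*":
            continue
        models[name] = {**defaults, **dict(cp[name])}  # model keys win
    return version, models
```

The dict merge `{**defaults, **dict(cp[name])}` is what gives per-model keys precedence over the `[*]` defaults.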
