
Question : Have you tried gemma 4 - 31b ? And can I use the latest llama cpp with this project ? #8

@x4080

Description


Nice work! In the article below, someone said that just by using --mmap, llama.cpp can run a big model directly (I tried but failed):

https://thoughts.jock.pl/p/local-llm-35b-mac-mini-gemma-swap-production-2026

That's why your project seems interesting.

Have you tried gemma 4 - 31b or qwen 3.5 35b a3b? And can I use the latest llama.cpp with this project?

Thanks
