Replies: 2 comments 4 replies
@Fenris159 you are highlighting a very good point. As Manifest is a router, the context window can change drastically based on the chosen model, and this is decided on the fly. 16k is very low, so we changed it to 2M to avoid problems. How do you feel about that?
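In case it helps, the corresponding client-side setting would look roughly like this (the key layout around `contextWindow` is an assumption; only the value change is the point):

```json
{
  "id": "manifest/auto",
  "contextWindow": 2000000
}
```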
Hi, I also have this problem with the context window. So the correct value to enter is 2M?
Hi — I’m using Manifest as an OpenClaw provider, and I’m trying to understand the intended behavior of manifest/auto for context budgeting.
Right now, my OpenClaw config for the Manifest provider declares manifest/auto with contextWindow: 16000, and OpenClaw then treats sessions as 16k-context sessions. That affects compaction and the session meter, even when Manifest may route the request to models with much larger native context windows.
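Concretely, the relevant fragment of my provider config looks roughly like this (the surrounding key structure is paraphrased from memory; `contextWindow` is the parameter in question):

```json
{
  "provider": "manifest",
  "models": [
    {
      "id": "manifest/auto",
      "contextWindow": 16000
    }
  ]
}
```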
From the README and architecture docs, my understanding is that:
- manifest/auto is a router, not a single fixed model
- Manifest discovers models from connected providers and routes requests to the best fit
- the actual backing model may vary by request
So I’m trying to understand what the correct client behavior is here.
Questions:
1. Is manifest/auto intended to expose a single fixed contextWindow to OpenAI-compatible clients?
2. If yes, should that value be conservative (for example, the minimum guaranteed context among eligible routed models)?
3. If no, is there a mechanism or planned mechanism for clients like OpenClaw to learn the effective context window dynamically?
4. Is the current expectation that users manually set a safe cap in the client config, even though the routed model may often support much more?
5. More broadly: is the 16k value intentional design, or just a placeholder/default for compatibility?
The reason I’m asking is that a hard 16k client cap seems like it may under-utilize Manifest’s routing benefits when larger-context models are available, while setting it arbitrarily high seems unsafe if Manifest can still choose a smaller-context model.
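To make the conservative-cap option concrete, here is a minimal sketch of what I mean by picking the minimum guaranteed context among eligible routed models (the model names and token counts below are made up, not Manifest's actual catalog):

```python
# Hypothetical catalog of models that manifest/auto might route to,
# mapped to their native context windows (token counts are invented).
ELIGIBLE_MODELS = {
    "provider-a/small": 16_000,
    "provider-b/medium": 128_000,
    "provider-c/large": 2_000_000,
}

def conservative_context_window(models: dict[str, int]) -> int:
    """Return the minimum guaranteed context among eligible routed models."""
    if not models:
        raise ValueError("no eligible models")
    return min(models.values())

print(conservative_context_window(ELIGIBLE_MODELS))  # -> 16000
```

A client advertising that minimum can never overflow whichever model the router actually picks, at the cost of under-using the larger-context models.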
I’m mainly looking for clarity on the intended contract between Manifest’s auto route and OpenAI-compatible clients such as OpenClaw.
Thanks. @brunobuddy