How to use LlamaBarn with VSCode #66
I see VSCode listed in the "works with" section of the readme, so I figured I'd need to install the https://github.com/ggml-org/llama.vscode extension to use it with VSCode, but I can't figure out where to set the LlamaBarn URL in the VSCode settings.
Can someone give me pointers? Thanks.

Replies: 2 comments

- I also tried using the VS Code Insiders edition to use LlamaBarn as an OpenAI-compatible provider, but I was unable to run any models. Any help would be appreciated.
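
One way to narrow down "unable to run any models" is to confirm the server is reachable at all, independent of the editor. Below is a minimal sketch, assuming LlamaBarn exposes llama.cpp's OpenAI-compatible HTTP API on localhost; the port is a placeholder for whatever your LlamaBarn install actually reports.

```python
# Sanity check: can we reach LlamaBarn's embedded llama.cpp server and list
# the models it serves? llama.cpp's server exposes an OpenAI-compatible
# /v1/models endpoint.
import json
import urllib.request

# ASSUMPTION: 8080 is a placeholder -- use the port shown in LlamaBarn.
BASE_URL = "http://127.0.0.1:8080"

try:
    with urllib.request.urlopen(f"{BASE_URL}/v1/models", timeout=5) as resp:
        print(json.dumps(json.load(resp), indent=2))
except OSError as err:
    # Connection refused usually means the port is wrong or no model is
    # currently being served.
    print(f"Could not reach {BASE_URL}: {err}")
```

If this lists a model, the server side is working and the problem is likely the extension's endpoint configuration (see the port suggestion below).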
- @mikeholczer Have you tried checking the Extension Settings and searching for "port"? You'll likely need to update the port to match the LlamaBarn port for the embedded llama.cpp server. The VSCode extension looks to have several port options for chat, completion, embeddings, and tools. YMMV, but maybe try setting all of them to the same port found in the LlamaBarn modal?
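
Concretely, that advice might look something like the following in settings.json. This is a sketch only: the `llama-vscode.*` setting IDs and the port are assumptions, not verified names; search for "llama" or "port" in the Extension Settings UI to find the extension's actual keys.

```jsonc
// settings.json -- illustrative sketch, setting IDs unverified.
{
  // ASSUMPTION: key names and port are placeholders; the idea is simply
  // to point every endpoint (chat, completion, embeddings, tools) at the
  // same host/port shown in the LlamaBarn modal.
  "llama-vscode.endpoint": "http://127.0.0.1:8080",
  "llama-vscode.endpoint_chat": "http://127.0.0.1:8080",
  "llama-vscode.endpoint_embeddings": "http://127.0.0.1:8080",
  "llama-vscode.endpoint_tools": "http://127.0.0.1:8080"
}
```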