Llama object has no attribute 'ctx' #1615
Replies: 10 comments 11 replies
-
I had a similar error message, but the source of the issue doesn't seem to be the same. In my case I had specified the wrong path to my model. However, it led me to investigate the code right before the error: it is attempting to load the model into self.ctx. Maybe there is something wrong with your bin file so it can't load it? Have you tried to load that model with llama.cpp, or tried other models?
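A quick way to rule out a bad path before the Llama constructor runs (a minimal sketch; the model path below is just a placeholder):
import os
from llama_cpp import Llama

model_path = "models/ggml-model-q4_0.bin"  # placeholder; point this at your actual .bin file
if not os.path.isfile(model_path):
    raise FileNotFoundError(f"Model file not found: {model_path}")
llm = Llama(model_path=model_path)  # fails inside the constructor if the file can't be loaded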
-
I just discovered llama.cpp, or whatever, and I'm trying to wrap my head around what the difference between that, Kobold, and Tavern really is. It seems like this technology has evolved so fast, and some tools have come out to make it easier, but some people are still using mix-and-matched methods for one reason or another. I'm so confused.
I think my issue with this error is because of permissions on this stupid @ss work computer and the fact that I cannot run things in administrator mode. So it's probably trying to do some read or write operations and doesn't have the proper permissions. Either way, I give up trying to run this locally.
What I'm really trying to do is run the best role-playing chatbot simulator that can utilize text-to-speech functions like Silero. So I figure the Pygmalion model is what I should try to run. So far I'm having a hell of a time getting the character to remember anything about itself compared to running that same character with Vicuna.
Can anyone suggest a YouTuber who talks primarily about role playing and these chat models? I really want to make some cool game, like a MUD or something.
-
@HorrySheet I'm getting a similar error to the above. Did you manage to find a solution?
-
I'm on Windows and had a similar-looking problem. Instead of passing the path like this:
model_path="F:\LLMs\alpaca_7B\ggml-model-q4_0.bin"
I passed it like this:
model_path="F:\\LLMs\\alpaca_7B\\ggml-model-q4_0.bin"
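(As an aside, a raw string avoids the escaping problem entirely, e.g. model_path=r"F:\LLMs\alpaca_7B\ggml-model-q4_0.bin", and forward slashes also work in Python on Windows: model_path="F:/LLMs/alpaca_7B/ggml-model-q4_0.bin".)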
-
Thanks, but it seems there is a whole other issue going on with it. I'm taking a break for now.
-
Provide an absolute path to your models directory:
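For example (the path here is hypothetical):
llm = Llama(model_path="C:\\Users\\me\\models\\ggml-model-q4_0.bin")  # absolute path, not relative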
-
I tried all kinds of configurations of the file path, along the lines of the following, but I'm still getting the same error. Note the first line has two backslashes at each folder level, but they don't show in the comment here.
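Illustrated with hypothetical paths, the configurations were along these lines:
model_path = "F:\\LLMs\\alpaca_7B\\ggml-model-q4_0.bin"  # escaped backslashes
model_path = r"F:\LLMs\alpaca_7B\ggml-model-q4_0.bin"    # raw string
model_path = "F:/LLMs/alpaca_7B/ggml-model-q4_0.bin"     # forward slashes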
-
To fix this you need to modify this line in your source code for the Vicuna 7B model: set CTX_MAX = 8192 where the llm = Llama( call is made.
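Presumably the larger context is then passed into the constructor through n_ctx; a minimal sketch, with a hypothetical model path:
CTX_MAX = 8192
llm = Llama(
    model_path="models/ggml-vicuna-7b-q4_0.bin",  # hypothetical path to the Vicuna 7B GGML file
    n_ctx=CTX_MAX,  # context window size handed to llama.cpp
)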
-
This worked for us: Issue #0. When using it, we also changed the name from 'ggml-model-q4_0.bin' to 'ggml_model_q4_0.bin'.
-
You have to reinstall llama-cpp-python using the command below to fix it:
pip install llama-cpp-python --force-reinstall --upgrade --no-cache-dir
-
Um.. so I thought I followed the instructions, and I can't seem to get this thing to run any models I stick in the folder or have it download via Hugging Face.. it always gives something along the lines of this error.. What did I do wrong? Something in the installation process, or what?
Gradio HTTP request redirected to localhost :)
bin C:\Oobabooga\installer_files\env\lib\site-packages\bitsandbytes\libbitsandbytes_cpu.dll
C:\Oobabooga\installer_files\env\lib\site-packages\bitsandbytes\cextension.py:33: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
warn("The installed version of bitsandbytes was compiled without GPU support. "
Loading xzuyn_pygmalion-6B-v3-ggml-q4_3...
llama.cpp weights detected: models\xzuyn_pygmalion-6B-v3-ggml-q4_3\ggml-model-q4_3.bin
Traceback (most recent call last):
File "C:\Oobabooga\text-generation-webui\server.py", line 914, in
shared.model, shared.tokenizer = load_model(shared.model_name)
File "C:\Oobabooga\text-generation-webui\modules\models.py", line 141, in load_model
model, tokenizer = LlamaCppModel.from_pretrained(model_file)
File "C:\Oobabooga\text-generation-webui\modules\llamacpp_model_alternative.py", line 30, in from_pretrained
self.model = Llama(**params)
File "C:\Oobabooga\installer_files\env\lib\site-packages\llama_cpp\llama.py", line 111, in init
self.ctx = llama_cpp.llama_init_from_file(
File "C:\Oobabooga\installer_files\env\lib\site-packages\llama_cpp\llama_cpp.py", line 156, in llama_init_from_file
return _lib.llama_init_from_file(path_model, params)
OSError: [WinError -1073741795] Windows Error 0xc000001d
Exception ignored in: <function Llama.__del__ at 0x0000016462BEBD00>
Traceback (most recent call last):
File "C:\Oobabooga\installer_files\env\lib\site-packages\llama_cpp\llama.py", line 804, in del
if self.ctx is not None:
AttributeError: 'Llama' object has no attribute 'ctx'
Done!
Press any key to continue . . .