I have installed llmtune according to the README, and
llmtune generate --model llama-13b-4bit --weights llama-13b-4bit.pt --prompt "the pyramids were built by"
works fine, but loading the LoRA adapter fails:
llmtune generate --model llama-13b-4bit --weights llama-13b-4bit.pt --adapter alpaca-lora-13b-4bit --instruction "Write a well-thought out recipe for a blueberry lasagna dish." --max-length 500
/usr/local/python3.9.16/lib/python3.9/site-packages/peft/tuners/lora.py:161 in add_adapter │
│ │
│ 158 │ │ │ model_config = self.model.config.to_dict() if hasattr(self.model.config, "to │
│ 159 │ │ │ config = self._prepare_lora_config(config, model_config) │
│ 160 │ │ │ self.peft_config[adapter_name] = config │
│ ❱ 161 │ │ self._find_and_replace(adapter_name) │
│ 162 │ │ if len(self.peft_config) > 1 and self.peft_config[adapter_name].bias != "none": │
│ 163 │ │ │ raise ValueError( │
│ 164 │ │ │ │ "LoraModel supports only 1 adapter with bias. When using multiple adapte │
│ │
│ /usr/local/python3.9.16/lib/python3.9/site-packages/peft/tuners/lora.py:246 in _find_and_replace │
│ │
│ 243 │ │ │ │ │ │ │ │ ) │
│ 244 │ │ │ │ │ │ │ │ kwargs["fan_in_fan_out"] = lora_config.fan_in_fan_out = │
│ 245 │ │ │ │ │ │ else: │
│ ❱ 246 │ │ │ │ │ │ │ raise ValueError( │
│ 247 │ │ │ │ │ │ │ │ f"Target module {target} is not supported. " │
│ 248 │ │ │ │ │ │ │ │ f"Currently, only torch.nn.Linear and Conv1D are sup │
│ 249 │ │ │ │ │ │ │ ) │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
ValueError: Target module QuantLinear() is not supported. Currently, only torch.nn.Linear and
Conv1D are supported.
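
For reference, here is a minimal toy sketch of the type check the traceback points at (my own illustration, not llmtune's or peft's actual code; the QuantLinear class below is just a hypothetical stand-in for llmtune's 4-bit layer). peft's _find_and_replace only wraps torch.nn.Linear and transformers' Conv1D, so a quantized linear module falls through to this ValueError:

import torch.nn as nn
from transformers.pytorch_utils import Conv1D


class QuantLinear(nn.Module):
    # Hypothetical stand-in for llmtune's 4-bit quantized linear layer.
    def __init__(self, in_features, out_features):
        super().__init__()
        self.in_features = in_features
        self.out_features = out_features


def check_lora_target(target: nn.Module) -> None:
    # Mirrors the branch in peft/tuners/lora.py shown in the traceback above.
    if isinstance(target, (nn.Linear, Conv1D)):
        print(f"{target.__class__.__name__}: OK, would be replaced by a LoRA layer")
    else:
        raise ValueError(
            f"Target module {target} is not supported. "
            f"Currently, only torch.nn.Linear and Conv1D are supported."
        )


check_lora_target(nn.Linear(16, 16))      # fine
try:
    check_lora_target(QuantLinear(16, 16))
except ValueError as err:
    print(err)                             # same error message as above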
Can anyone help?