
The true weight file not found? #131

Open
fengmingfeng opened this issue Nov 20, 2023 · 4 comments

@fengmingfeng

When I run "python demo.py", the weight file is not the same as the provided one:

100%|███████████████████████████████████████| 241M/241M [01:38<00:00, 2.55MiB/s]
Loading LLaMA-Adapter from ckpts/7fa55208379faf2dd862565284101b0e4a2a72114d6490a95e432cf9d9b6c813_BIAS-7B.pth
Traceback (most recent call last):
  File "demo.py", line 11, in <module>
    model, preprocess = llama.load("BIAS-7B", llama_dir, device)
  File "/root/autodl-tmp/LLaMA-Adapter/llama_adapter_v2_multimodal7b/llama/llama_adapter.py", line 309, in load
    model = LLaMA_adapter(
  File "/root/autodl-tmp/LLaMA-Adapter/llama_adapter_v2_multimodal7b/llama/llama_adapter.py", line 30, in __init__
    with open(os.path.join(llama_ckpt_dir, "params.json"), "r") as f:
FileNotFoundError: [Errno 2] No such file or directory: '/path/to/LLaMA/cpu/params.json'

@csuhan
Collaborator

csuhan commented Nov 20, 2023

It seems that you forgot to pass the llama_type param:

def load(name, llama_dir, llama_type="7B", device="cuda" if torch.cuda.is_available() else "cpu", download_root='ckpts', max_seq_len=512, phase="finetune")

https://github.com/OpenGVLab/LLaMA-Adapter/blob/b82e90bd31dc75d6fdca0a35b53ff00d1b96bc47/llama_adapter_v2_multimodal7b/llama/llama_adapter.py#L290C1-L291C26
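For reference, here is a minimal sketch of a corrected call. The traceback path '/path/to/LLaMA/cpu/params.json' suggests the device string was passed positionally into the llama_type slot, so both are passed as keywords below (llama_dir is a placeholder for wherever the base LLaMA weights live):

import torch
import llama  # the llama package in llama_adapter_v2_multimodal7b

llama_dir = "/path/to/llama_model_weights"  # placeholder; root of the base LLaMA weights
device = "cuda" if torch.cuda.is_available() else "cpu"

# Keywords keep device from landing in the llama_type parameter of
# load(name, llama_dir, llama_type="7B", device=...).
model, preprocess = llama.load("BIAS-7B", llama_dir, llama_type="7B", device=device)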

@fengmingfeng
Author

This code just downloads the BIAS-7B checkpoint; it does not provide tokenizer.model, params.json, and so on!
Could you please provide these files to me? The expected layout is:
/path/to/llama_model_weights
├── 7B
│   ├── checklist.chk
│   ├── consolidated.00.pth
│   └── params.json
└── tokenizer.model
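As a quick sanity check before calling llama.load, one could verify that layout with a short sketch (llama_dir is a placeholder, matching the tree above):

import os

llama_dir = "/path/to/llama_model_weights"  # placeholder, as in the tree above
required = [
    os.path.join(llama_dir, "tokenizer.model"),
    os.path.join(llama_dir, "7B", "params.json"),
    os.path.join(llama_dir, "7B", "consolidated.00.pth"),
]
for path in required:
    status = "found" if os.path.isfile(path) else "MISSING"
    print(f"{status:8s} {path}")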

_MODELS = {
    "BIAS-7B": "https://github.com/OpenGVLab/LLaMA-Adapter/releases/download/v.2.0.0/7fa55208379faf2dd862565284101b0e4a2a72114d6490a95e432cf9d9b6c813_BIAS-7B.pth",
    "LORA-BIAS-7B": "https://github.com/OpenGVLab/LLaMA-Adapter/releases/download/v.2.0.0/1bcbffc43484332672092e0024a8699a6eb5f558161aebf98a7c6b1db67224d1_LORA-BIAS-7B.pth",
    "CAPTION-7B": "https://github.com/OpenGVLab/LLaMA-Adapter/releases/download/v.2.0.0/5088aeb63a89746b90bcfd5cb819e1c7411b2771b267c6d131ce73e250a8abf0_CAPTION-7B.pth",
    "LORA-BIAS-7B-v21": "https://github.com/OpenGVLab/LLaMA-Adapter/releases/download/v.2.1.0/d26d107eec32127ac86ef1997cf7169de1c56a59c539fc1258c6798b969e289c_LORA-BIAS-7B-v21.pth",
    # "LORA16-7B": "",
    # "PARTIAL-7B": ""
}

def available_models():
    return list(_MODELS.keys())

def load(name, llama_dir, llama_type="7B", device="cuda" if torch.cuda.is_available() else "cpu",
         download_root='ckpts', max_seq_len=512, phase="finetune"):
    if name in _MODELS:
        model_path = _download(_MODELS[name], download_root)
    elif os.path.isfile(name):
        model_path = name
    else:
        # raising (rather than returning the exception) makes the failure explicit
        raise RuntimeError(f"Model {name} not found; available models = {available_models()}")

    # BIAS-7B or https://xxx/sha256_BIAS-7B.pth -> 7B
    # llama_type = name.split('.')[0].split('-')[-1]
    llama_ckpt_dir = os.path.join(llama_dir, llama_type)
    llama_tokenzier_path = os.path.join(llama_dir, 'tokenizer.model')

    # load llama_adapter weights and model_cfg
    print(f'Loading LLaMA-Adapter from {model_path}')
    ckpt = torch.load(model_path, map_location='cpu')
    model_cfg = ckpt.get('config', {})

    model = LLaMA_adapter(
        llama_ckpt_dir, llama_tokenzier_path,
        max_seq_len=512, max_batch_size=1,
        clip_model='ViT-L/14',
        v_embed_dim=768, v_depth=8,
        v_num_heads=16, v_mlp_ratio=4.0,
        query_len=10, query_layer=31,
        w_bias=model_cfg.get('w_bias', False),
        w_lora=model_cfg.get('w_lora', False),
        lora_rank=model_cfg.get('lora_rank', 16),
        w_new_gate=model_cfg.get('w_lora', False),  # for compatibility
        phase=phase)
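Incidentally, each release filename in _MODELS is prefixed with what looks like a SHA-256 digest of the checkpoint. Assuming that naming convention holds (the thread does not confirm it), a downloaded file can be sanity-checked like so:

import hashlib
import os

def sha256_of(path, chunk_size=1 << 20):
    # Stream the file in 1 MiB chunks rather than loading the 241M checkpoint at once.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            h.update(block)
    return h.hexdigest()

ckpt = "ckpts/7fa55208379faf2dd862565284101b0e4a2a72114d6490a95e432cf9d9b6c813_BIAS-7B.pth"
expected = os.path.basename(ckpt).split("_")[0]  # hex prefix from the filename
print("checksum ok:", sha256_of(ckpt) == expected)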

@Cheng909865905

Hello, I ran into the same problem: I cannot find the tokenizer.model and params.json files for the BIAS-7B model. Have you solved it? My QQ is 909865905. I would be very grateful.

@fengmingfeng
Author

Hello, I ran into the same problem: I cannot find the tokenizer.model and params.json files for the BIAS-7B model. Have you solved it? My QQ is 909865905. I would be very grateful.

Sorry, I have not solved it either.
