Hello, first of all, thank you for the excellent work.
From my understanding of the paper, in LLaMA-Adapter v1 the adaption prompt is inserted into the topmost L layers of the transformer.
However, in the code linked below, if self.adapter_layer is 30, doesn't it insert the adapter into layers 3 through 32 of the transformer (1-indexed)?
Could you please explain why -1 * self.adapter_layer was used here? (I've added a small indexing sketch after the link to make the question concrete.)
https://github.com/OpenGVLab/LLaMA-Adapter/blob/8c50ee5d5d393c9bee5fcfda6aaea31d3ca3c40c/alpaca_finetuning_v1/llama/model.py
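Here is a minimal sketch of what I believe the negative index does; the names `n_layers`, `adapter_layer`, and the loop are illustrative stand-ins I wrote for this question, not the repository's actual code:

```python
# Illustrative stand-in for the indexing in model.py, not the actual code.
n_layers = 32        # assumed total transformer layers (e.g. LLaMA-7B)
adapter_layer = 30   # assumed value of self.adapter_layer from the question

layers = list(range(n_layers))  # placeholders for the transformer blocks

# Negative slicing: layers[-adapter_layer:] == layers[n_layers - adapter_layer:]
adapted = layers[-adapter_layer:]
print(adapted[0] + 1, adapted[-1] + 1)  # -> 3 32 (1-indexed)

# Equivalent per-layer condition, as it might appear in a forward pass:
for idx in range(n_layers):
    uses_adapter = idx >= n_layers - adapter_layer  # True for the last 30 layers
```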
I really appreciate any help you can provide.