The model I am using is intfloat/multilingual-e5-small.
I want to fine-tune it on my custom jargon for a sentence similarity task (mapping an acronym to its full form) without disturbing the previously trained weights of the e5-small model.
It would be a great help if you could point me to some resources on how to achieve this.
In which format should I prepare my dataset, and how do I fine-tune the model without disturbing the previous weights?
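To make the question concrete, this is roughly the setup I have in mind: a minimal sketch using the sentence-transformers `fit` API, where the example pairs, the choice to freeze all but the last encoder layer, and the output path are just placeholders rather than anything I am sure is correct.

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("intfloat/multilingual-e5-small")

# One way to avoid touching most of the pretrained weights: freeze the
# whole encoder and only train its last layer. The attribute path below
# assumes a BERT-style encoder inside the first Transformer module.
auto_model = model[0].auto_model
for param in auto_model.parameters():
    param.requires_grad = False
for param in auto_model.encoder.layer[-1].parameters():
    param.requires_grad = True

# E5 models expect "query: " / "passage: " prefixes. Each example pairs an
# acronym with its expansion; these two pairs are made-up placeholders.
train_examples = [
    InputExample(texts=["query: NLP", "passage: natural language processing"]),
    InputExample(texts=["query: GPU", "passage: graphics processing unit"]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)

# MultipleNegativesRankingLoss uses the other expansions in the batch as
# in-batch negatives, so only positive pairs are needed in the dataset.
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=10,
)
model.save("e5-small-acronyms")  # hypothetical output path
```

Is freezing layers like this a reasonable way to preserve the original weights, or would adapters (e.g. LoRA) be the recommended approach here?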
Thanks