v9.0

@ParisNeo ParisNeo released this 27 Jan 21:10
· 1190 commits to main since this release

Codebase fully moved to FastAPI.

New bindings, faster than ever.
Improved binding installation parameters with pinned versions.

Created a new install method for Hugging Face, ExLlamaV2, and llama-cpp-python.
Added Conda support so that more complex dependencies can be installed directly from lollms.

New multi-tool paradigm to solve library version conflicts and incompatibilities between bindings.
Added Ollama client and server
Added vLLM client and server
Added Petals client and server

Full compatibility with the OpenAI API, allowing any OpenAI-compatible client application to work with lollms. For example, you can use the Google Gemini binding and route it through lollms to AutoGen or any other OpenAI-compatible tool, simply by configuring that tool to use the lollms server as its endpoint.
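To illustrate the routing described above, here is a minimal sketch of the OpenAI-style request an existing client would send to a lollms server instead of api.openai.com. The host and port (9600) and the model name are assumptions for illustration; use whatever address your lollms server actually listens on. The request is built but not sent, so the sketch just shows the shape on the wire.

```python
import json
import urllib.request

# Assumed lollms server address; adjust to your configuration.
BASE_URL = "http://localhost:9600/v1"

payload = {
    # Placeholder model name: lollms routes the request to whichever
    # binding (e.g. the Google Gemini binding) is currently active.
    "model": "lollms-active-binding",
    "messages": [{"role": "user", "content": "Hello through lollms"}],
}

# Build the request without sending it, following the standard
# OpenAI chat-completions route that compatible clients use.
request = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

print(request.full_url)
```

Any OpenAI-compatible client library produces requests of exactly this shape, which is why pointing its base URL at lollms is all the configuration needed.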

New lollms generation interface that lets you build your own apps on top of raw generation or persona-augmented generation through lollms.
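As a hedged sketch of building an app on top of lollms raw generation: since lollms exposes an OpenAI-compatible API, a raw-generation call can be expressed as a completions request. The endpoint path, port, and parameter names below are assumptions for illustration, not the documented lollms interface; the request is only constructed, not sent.

```python
import json
import urllib.request


def build_generation_request(prompt: str,
                             base_url: str = "http://localhost:9600") -> urllib.request.Request:
    """Build (but do not send) a raw text-generation HTTP request.

    The /v1/completions path and the payload fields follow the
    OpenAI-compatible surface; port 9600 is an assumed default.
    """
    payload = {
        "model": "lollms-active-binding",  # placeholder model name
        "prompt": prompt,
        "max_tokens": 128,
    }
    return urllib.request.Request(
        f"{base_url}/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = build_generation_request("Write a haiku about local LLMs.")
print(req.full_url)
```

A real application would send the request with urllib.request.urlopen(req) and parse the JSON response; persona-augmented generation would go through lollms' persona system rather than this raw route.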

An Unreal Engine plugin will be released to bring your lollms characters to life.