
vndee/local-assistant-examples


Local Assistant Examples

Welcome to the Local Assistant Examples repository — a collection of educational examples built on top of large language models (LLMs). This repository was initially created as part of my blog post, Build your own RAG and run it locally: Langchain + Ollama + Streamlit.

Previously named local-rag-example, this project was renamed to local-assistant-examples to reflect the broader scope of its content. Over time, I decided to expand it to include more examples and educational material, consolidating everything in one place rather than maintaining multiple repositories. Each example now lives in its own folder, with a dedicated README that explains the example and provides instructions for running it. The first example, originally from the blog post, can now be found in the simple-rag folder.

Available Examples

  • Simple RAG: Demonstrates how to build and run a Retrieval-Augmented Generation (RAG) pipeline locally with Langchain, Ollama, and Streamlit.

More examples will be added soon, so stay tuned!
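To give a feel for what the Simple RAG example does, here is a minimal conceptual sketch of the RAG loop: retrieve the chunks most relevant to a question, then assemble them into an augmented prompt for the model. This is not the repository's actual code — the real example uses Langchain, Ollama, and Streamlit — and the bag-of-words similarity below is a stand-in for a proper embedding model.

```python
# Conceptual sketch of a Retrieval-Augmented Generation (RAG) loop.
# NOT the repository's code: the real example uses Langchain + Ollama +
# Streamlit. Retrieval here is a toy bag-of-words cosine similarity,
# and "generation" is just prompt assembly for a local LLM.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: lowercase bag-of-words token counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most similar to the question."""
    q = embed(question)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

def build_prompt(question: str, context: list[str]) -> str:
    """Assemble the augmented prompt a local model would receive."""
    ctx = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{ctx}\n\nQuestion: {question}"

chunks = [
    "Ollama runs large language models locally on your machine.",
    "Streamlit builds simple web UIs for Python scripts.",
    "RAG retrieves relevant documents and feeds them to the model.",
]
question = "How does RAG work?"
prompt = build_prompt(question, retrieve(question, chunks))
print(prompt)
```

In the simple-rag example, the retrieval step is backed by real vector embeddings and the prompt is sent to a model served by Ollama; the structure of the loop, however, is the same.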

Note: This repository is not intended for production use. It is designed to be as simple as possible to help newcomers understand the concepts behind building LLM applications.