Commit bef117c (parent 5e37bfe): Readme updates for LiteLLM.
dylanhogg committed Dec 5, 2023
1 changed file: README.md (25 additions, 5 deletions)

Create knowledge graphs with LLMs.

![example machine learning output](https://github.com/dylanhogg/llmgraph/blob/main/docs/img/header.jpg?raw=true)

llmgraph enables you to create knowledge graphs in [GraphML](http://graphml.graphdrawing.org/), [GEXF](https://gexf.net/), and HTML formats (generated via [pyvis](https://github.com/WestHealth/pyvis)), starting from the Wikipedia page of a given source entity. The knowledge graphs are generated by extracting world knowledge from ChatGPT or other large language models (LLMs), as supported by [LiteLLM](https://github.com/BerriAI/litellm).
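A graph in GraphML format can be loaded for further analysis with, for example, [networkx](https://networkx.org/). Below is a minimal round-trip sketch using a small hand-built graph; the filename and edges are illustrative, and networkx is assumed to be installed:

```python
import networkx as nx

# Build a tiny graph by hand and round-trip it through GraphML,
# the same format llmgraph writes its graph outputs in.
G = nx.Graph()
G.add_edge("Artificial intelligence", "Machine learning")
G.add_edge("Machine learning", "Deep learning")

nx.write_graphml(G, "example.graphml")
H = nx.read_graphml("example.graphml")
print(H.number_of_nodes(), H.number_of_edges())  # 3 2
```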

For a background on knowledge graphs, see this [YouTube overview by Computerphile](https://www.youtube.com/watch?v=PZBm7M0HGzw).

## Features
- Many entity types and relationships supported by [customised prompts](https://github.com/dylanhogg/llmgraph/blob/main/llmgraph/prompts.yaml).
- Cache support to iteratively grow a knowledge graph, efficiently.
- Outputs `total tokens` used to understand LLM costs (even though a default run is only about 1 cent).
- Customisable model (default is OpenAI `gpt-3.5-turbo` for speed and cost).

## Installation

## Usage

```bash
llmgraph machine-learning "https://en.wikipedia.org/wiki/Artificial_intelligence" --levels 3
```

This example creates a 3 level graph, based on the given start node `Artificial Intelligence`.

By default OpenAI is used, and you will need to set an `OPENAI_API_KEY` environment variable prior to running. See the [OpenAI docs](https://platform.openai.com/docs/quickstart/step-2-setup-your-api-key) for more info. The `total tokens used` is output as the run progresses. For reference, this 3 level example used a total of 7,650 gpt-3.5-turbo tokens, which is approx 1.5 cents as of Oct 2023.
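For example, in a shell (the key shown is a placeholder; substitute your own):

```shell
# Placeholder value; substitute your real OpenAI API key.
export OPENAI_API_KEY="sk-your-key"
```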

You can also specify a different LLM provider, including a locally running [ollama](https://github.com/jmorganca/ollama) model. You should be able to specify anything supported by [LiteLLM](https://github.com/BerriAI/litellm), as described in the [LiteLLM provider docs](https://docs.litellm.ai/docs/providers). Note that the prompts used to extract related entities were tested with OpenAI and may not work as well with other models.

Local [ollama/llama2](https://ollama.ai/library/llama2) model example:

```bash
llmgraph machine-learning "https://en.wikipedia.org/wiki/Artificial_intelligence" --levels 3 --llm-model ollama/llama2 --llm-base-url http://localhost:<your_port>
```

The `entity_type` sets the LLM prompt used to find related entities to include in the graph. The full list can be seen in [prompts.yaml](https://github.com/dylanhogg/llmgraph/blob/main/llmgraph/prompts.yaml) and include the following entity types:

The following command line options are available:
- `--output-folder` (TEXT): Folder location to write outputs [default: ./_output/]
- `--llm-model` (TEXT): The model name [default: gpt-3.5-turbo]
- `--llm-temp` (FLOAT): LLM temperature value [default: 0.0]
- `--llm-base-url` (TEXT): Custom base URL for the LLM API (e.g. a local server) instead of the provider default [default: None]
- `--version`: Display llmgraph version and exit.
- `--help`: Show this message and exit.

## More Examples of HTML Output
Each call to the LLM API (and Wikipedia) is cached locally in a `.joblib_cache` folder.
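The caching pattern can be sketched with [joblib](https://joblib.readthedocs.io/). This is a minimal illustration of the idea, not llmgraph's actual internals; the function name and return values are made up for the example:

```python
from joblib import Memory

# Cache results on disk so repeated calls with the same arguments
# are served from the cache instead of recomputed (or re-queried).
memory = Memory(".joblib_cache", verbose=0)

@memory.cache
def related_entities(entity: str) -> list[str]:
    # Illustrative stand-in for an expensive LLM call.
    return [f"{entity} (related {i})" for i in range(3)]

first = related_entities("Machine learning")   # computed
second = related_entities("Machine learning")  # served from cache
```

This is what makes it cheap to iteratively grow a knowledge graph: re-running with more levels only pays for the new calls.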

## Future Improvements

- Contrast graph output from different LLM models (e.g. [Llama2](https://huggingface.co/docs/transformers/model_doc/llama2) vs [Mistral](https://huggingface.co/docs/transformers/model_doc/mistral) vs [ChatGPT-4](https://openai.com/chatgpt))
- Investigate the hypothesis that this approach provides insight into how an LLM views the world.
- Include more examples in this documentation and make examples available for easy browsing.
## Contributing

Contributions to llmgraph are welcome. Please follow these steps:

1. Fork the repository.
2. Create a new branch for your feature or bug fix.
3. Make your changes and commit them.
4. Create a pull request with a description of your changes.

## Thanks 🙏

Thanks to @breitburg for implementing the LiteLLM updates.

## References

- [Knowledge Graph Generation From Text](https://arxiv.org/abs/2211.10511)
- [Towards Foundation Models for Knowledge Graph Reasoning](https://arxiv.org/abs/2310.04562)
- [BertNet: Harvesting Knowledge Graphs with Arbitrary Relations from Pretrained Language Models](https://arxiv.org/abs/2206.14268)
- [Graph Notebook](https://github.com/aws/graph-notebook): easily query and visualize graphs
- [NBFNet-PyG](https://github.com/KiddoZhu/NBFNet-PyG): PyG re-implementation of Neural Bellman-Ford Networks
