diff --git a/W&B_Prompts_with_Custom_Columns.ipynb b/W&B_Prompts_with_Custom_Columns.ipynb
new file mode 100644
index 00000000..f1252f0c
--- /dev/null
+++ b/W&B_Prompts_with_Custom_Columns.ipynb
@@ -0,0 +1,632 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "9f7yMKLwGVmA"
+ },
+ "source": [
+ "**[Weights & Biases Prompts](https://docs.wandb.ai/guides/prompts?utm_source=code&utm_medium=colab&utm_campaign=prompts)** is a suite of LLMOps tools built for the development of LLM-powered applications.\n",
+ "\n",
+ "Use W&B Prompts to visualize and inspect the execution flow of your LLMs, analyze the inputs and outputs of your LLMs, view the intermediate results and securely store and manage your prompts and LLM chain configurations.\n",
+ "\n",
+ "#### [🪄 View Prompts In Action](https://wandb.ai/timssweeney/prompts-demo/)\n",
+ "\n",
+ "**In this notebook we will demostrate W&B Prompts:**\n",
+ "\n",
+ "- Using our 1-line LangChain integration\n",
+ "- Using our Trace class when building your own LLM Pipelines\n",
+ "\n",
+ "See here for the full [W&B Prompts documentation](https://docs.wandb.ai/guides/prompts)\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "A4wI3b_8GVmB"
+ },
+ "source": [
+ "## Installation"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {
+ "id": "nDoIqQ8_GVmB"
+ },
+ "outputs": [],
+ "source": [
+ "!pip install \"wandb>=0.15.4\" -qqq\n",
+ "!pip install \"langchain>=0.0.218\" openai -qqq"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {
+ "id": "PcGiSWBSGVmB"
+ },
+ "outputs": [],
+ "source": [
+ "import langchain\n",
+ "assert langchain.__version__ >= \"0.0.218\", \"Please ensure you are using LangChain v0.0.188 or higher\""
+ ]
+ },
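+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Note that comparing version strings lexically can misorder releases (for example, `\"0.0.22\" >= \"0.0.218\"` is `True` as a string comparison, even though 0.0.22 is the older release). A more robust check is sketched below using the `packaging` library, which is installed alongside pip in most environments:\n",
+ "```python\n",
+ "from packaging.version import Version\n",
+ "import langchain\n",
+ "\n",
+ "# Version() parses the string into comparable release components\n",
+ "assert Version(langchain.__version__) >= Version(\"0.0.218\"), \"Please use LangChain v0.0.218 or higher\"\n",
+ "```"
+ ]
+ },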
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "pbmQIsjJGVmB"
+ },
+ "source": [
+ "## Setup\n",
+ "\n",
+ "This demo requires that you have an [OpenAI key](https://platform.openai.com)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {
+ "id": "ZH4g2B0lGVmB",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "outputId": "22295db6-5369-474d-a8ea-fb45c4c92085"
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Paste your OpenAI key from: https://platform.openai.com/account/api-keys\n",
+ "··········\n",
+ "OpenAI API key configured\n"
+ ]
+ }
+ ],
+ "source": [
+ "import os\n",
+ "from getpass import getpass\n",
+ "\n",
+ "if os.getenv(\"OPENAI_API_KEY\") is None:\n",
+ " os.environ[\"OPENAI_API_KEY\"] = getpass(\"Paste your OpenAI key from: https://platform.openai.com/account/api-keys\\n\")\n",
+ "assert os.getenv(\"OPENAI_API_KEY\", \"\").startswith(\"sk-\"), \"This doesn't look like a valid OpenAI API key\"\n",
+ "print(\"OpenAI API key configured\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "79KOB2EhGVmB"
+ },
+ "source": [
+ "# W&B Prompts\n",
+ "\n",
+ "W&B Prompts consists of three main components:\n",
+ "\n",
+ "**Trace table**: Overview of the inputs and outputs of a chain.\n",
+ "\n",
+ "**Trace timeline**: Displays the execution flow of the chain and is color-coded according to component types.\n",
+ "\n",
+ "**Model architecture**: View details about the structure of the chain and the parameters used to initialize each component of the chain.\n",
+ "\n",
+ "After running this section, you will see a new panel automatically created in your workspace, showing each execution, the trace, and the model architecture"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "9u97K5vVGVmC"
+ },
+ "source": [
+ "## Maths with LangChain"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "oneRFmv6GVmC"
+ },
+ "source": [
+ "Set the `LANGCHAIN_WANDB_TRACING` environment variable as well as any other relevant [W&B environment variables](https://docs.wandb.ai/guides/track/environment-variables). This could includes a W&B project name, team name, and more. See [wandb.init](https://docs.wandb.ai/ref/python/init) for a full list of arguments."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {
+ "id": "ACl-rMtAGVmC"
+ },
+ "outputs": [],
+ "source": [
+ "os.environ[\"LANGCHAIN_WANDB_TRACING\"] = \"true\"\n",
+ "os.environ[\"WANDB_PROJECT\"] = \"langchain-testing\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {
+ "id": "csp3MXG4GVmC"
+ },
+ "outputs": [],
+ "source": [
+ "from langchain.chat_models import ChatOpenAI\n",
+ "from langchain.agents import load_tools, initialize_agent, AgentType"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "2hWU2GcAGVmC"
+ },
+ "source": [
+ "Create a standard math Agent using LangChain"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "metadata": {
+ "id": "l_JkVMlRGVmC"
+ },
+ "outputs": [],
+ "source": [
+ "llm = ChatOpenAI(temperature=0)\n",
+ "tools = load_tools([\"llm-math\"], llm=llm)\n",
+ "math_agent = initialize_agent(tools,\n",
+ " llm,\n",
+ " agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "9FFviwCPGVmC"
+ },
+ "source": [
+ "Use LangChain as normal by calling your Agent.\n",
+ "\n",
+ " You will see a Weights & Biases run start and you will be asked for your [Weights & Biases API key](wwww.wandb.ai/authorize). Once your enter your API key, the inputs and outputs of your Agent calls will start to be streamed to the Weights & Biases App."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "metadata": {
+ "id": "y-RHjVN4GVmC",
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 178
+ },
+ "outputId": "5ccd5f32-6137-46c3-9abd-d458dbdbacca"
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stderr",
+ "text": [
+ "\u001b[34m\u001b[1mwandb\u001b[0m: Streaming LangChain activity to W&B at https://wandb.ai/carey/langchain-testing/runs/lcznj5lg\n",
+ "\u001b[34m\u001b[1mwandb\u001b[0m: `WandbTracer` is currently in beta.\n",
+ "\u001b[34m\u001b[1mwandb\u001b[0m: Please report any issues to https://github.com/wandb/wandb/issues with the tag `langchain`.\n"
+ ]
+ },
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "LLMMathChain._evaluate(\"\n",
+ "import math\n",
+ "math.sqrt(5.4)\n",
+ "\") raised error: invalid syntax (, line 1). Please try again with a valid numerical expression\n",
+ "0.005720801417544866\n",
+ "0.15096209512635608\n"
+ ]
+ }
+ ],
+ "source": [
+ "# some sample maths questions\n",
+ "questions = [\n",
+ " \"Find the square root of 5.4.\",\n",
+ " \"What is 3 divided by 7.34 raised to the power of pi?\",\n",
+ " \"What is the sin of 0.47 radians, divided by the cube root of 27?\"\n",
+ "]\n",
+ "\n",
+ "for question in questions:\n",
+ " try:\n",
+ " # call your Agent as normal\n",
+ " answer = math_agent.run(question)\n",
+ " print(answer)\n",
+ " except Exception as e:\n",
+ " # any errors will be also logged to Weights & Biases\n",
+ " print(e)\n",
+ " pass"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "SNYFSaUrGVmC"
+ },
+ "source": [
+ "Once each Agent execution completes, all calls in your LangChain object will be logged to Weights & Biases"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "m0bL1xpkGVmC"
+ },
+ "source": [
+ "### LangChain Context Manager\n",
+ "Depending on your use case, you might instead prefer to use a context manager to manage your logging to W&B.\n",
+ "\n",
+ "**✨ New: Custom columns** can be logged directly to W&B to display in the same Trace Table with this snippet:\n",
+ "```python\n",
+ "import wandb\n",
+ "wandb.log(custom_metrics_dict, commit=False})\n",
+ "```\n",
+ "Use `commit=False` to make sure that metadata is logged to the same row of the Trace Table as the LangChain output."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 10,
+ "metadata": {
+ "id": "7i9Pj1NKGVmC",
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 35
+ },
+ "outputId": "b44f3ae7-fd49-437f-af7b-fb8f82056bd0"
+ },
+ "outputs": [
+ {
+ "output_type": "execute_result",
+ "data": {
+ "text/plain": [
+ "'1.0891804557407723'"
+ ],
+ "application/vnd.google.colaboratory.intrinsic+json": {
+ "type": "string"
+ }
+ },
+ "metadata": {},
+ "execution_count": 10
+ }
+ ],
+ "source": [
+ "from langchain.callbacks import wandb_tracing_enabled\n",
+ "import wandb # To enable custom column logging with wandb.run.log()\n",
+ "\n",
+ "# unset the environment variable and use a context manager instead\n",
+ "if \"LANGCHAIN_WANDB_TRACING\" in os.environ:\n",
+ " del os.environ[\"LANGCHAIN_WANDB_TRACING\"]\n",
+ "\n",
+ "# enable tracing using a context manager\n",
+ "with wandb_tracing_enabled():\n",
+ " for i in range (10):\n",
+ " # Log any custom columns you'd like to add to the Trace Table\n",
+ " wandb.log({\"custom_column\": i}, commit=False)\n",
+ " try:\n",
+ " math_agent.run(f\"What is {i} raised to .123243 power?\") # this should be traced\n",
+ " except:\n",
+ " pass\n",
+ "\n",
+ "math_agent.run(\"What is 2 raised to .123243 power?\") # this should not be traced"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "JDLzoorhGVmC"
+ },
+ "source": [
+ "# Non-Lang Chain Implementation\n",
+ "\n",
+ "\n",
+ "A W&B Trace is created by logging 1 or more \"spans\". A root span is expected, which can accept nested child spans, which can in turn accept their own child spans. A Span represents a unit of work, Spans can have type `AGENT`, `TOOL`, `LLM` or `CHAIN`\n",
+ "\n",
+ "When logging with Trace, a single W&B run can have multiple calls to a LLM, Tool, Chain or Agent logged to it, there is no need to start a new W&B run after each generation from your model or pipeline, instead each call will be appended to the Trace Table.\n",
+ "\n",
+ "In this quickstart, we will how to log a single call to an OpenAI model to W&B Trace as a single span. Then we will show how to log a more complex series of nested spans.\n",
+ "\n",
+ "## Logging with W&B Trace\n",
+ "A high-level Trace api is available from the [`wandb-addon`](https://github.com/soumik12345/wandb-addons) community library from [@soumik12345](https://github.com/soumik12345). This will be replaced by a wandb-native integration shortly."
+ ]
+ },
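+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As a minimal sketch of this structure (using the `Trace` class from `wandb_addons.prompts` that is installed and demonstrated in the cells below, with illustrative names), a nested trace is built by attaching child spans to a root span and then logging the root:\n",
+ "```python\n",
+ "from wandb_addons.prompts import Trace\n",
+ "\n",
+ "# each span is a unit of work with a kind: \"agent\", \"chain\", \"llm\" or \"tool\"\n",
+ "root_span = Trace(name=\"MyAgent\", kind=\"agent\")\n",
+ "llm_span = Trace(name=\"OpenAI\", kind=\"llm\")\n",
+ "\n",
+ "# nest the LLM span under the root, then log the whole tree via the root\n",
+ "root_span.add_child(llm_span)\n",
+ "root_span.log(name=\"example_trace\")\n",
+ "```"
+ ]
+ },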
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "FO3Kf2ngGVmC"
+ },
+ "outputs": [],
+ "source": [
+ "# Install wandb-addons\n",
+ "!git clone https://github.com/soumik12345/wandb-addons.git\n",
+ "!pip install ./wandb-addons[prompts] openai wandb -qqq"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "7z98yfoqGVmD"
+ },
+ "source": [
+ "Call wandb.init to start a W&B run. Here you can pass a W&B project name as well as an entity name (if logging to a W&B Team), as well as a config and more. See wandb.init for the full list of arguments.\n",
+ "\n",
+ "You will see a Weights & Biases run start and be asked for your [Weights & Biases API key](wwww.wandb.ai/authorize). Once your enter your API key, the inputs and outputs of your Agent calls will start to be streamed to the Weights & Biases App.\n",
+ "\n",
+ "**Note:** A W&B run supports logging as many traces you needed to a single run, i.e. you can make multiple calls of `run.log` without the need to create a new run each time"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "ZcvgzZ55GVmD"
+ },
+ "outputs": [],
+ "source": [
+ "import wandb\n",
+ "\n",
+ "# start a wandb run to log to\n",
+ "wandb.init(project=\"trace-example\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "4_3Wrg2YGVmD"
+ },
+ "source": [
+ "You can also set the entity argument in wandb.init if logging to a W&B Team.\n",
+ "\n",
+ "### Logging a single Span\n",
+ "Now we will query OpenAI times and log the results to a W&B Trace. We will log the inputs and outputs, start and end times, whether the OpenAI call was successful, the token usage, and additional metadata.\n",
+ "\n",
+ "You can see the full description of the arguments to the Trace class [here](https://soumik12345.github.io/wandb-addons/prompts/tracer/)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "q2pkMhpMGVmD"
+ },
+ "outputs": [],
+ "source": [
+ "import openai\n",
+ "import datetime\n",
+ "from wandb_addons.prompts import Trace\n",
+ "\n",
+ "openai.api_key = os.environ[\"OPENAI_API_KEY\"]\n",
+ "\n",
+ "# define your conifg\n",
+ "model_name = \"gpt-3.5-turbo\"\n",
+ "temperature = 0.7\n",
+ "system_message = \"You are a helpful assistant that always replies in 3 concise bullet points using markdown.\"\n",
+ "\n",
+ "queries_ls = [\n",
+ " \"What is the capital of France?\",\n",
+ " \"How do I boil an egg?\" * 10000, # deliberately trigger an openai error\n",
+ " \"What to do if the aliens arrive?\"\n",
+ "]\n",
+ "\n",
+ "for query in queries_ls:\n",
+ " messages=[\n",
+ " {\"role\": \"system\", \"content\": system_message},\n",
+ " {\"role\": \"user\", \"content\": query}\n",
+ " ]\n",
+ "\n",
+ " start_time_ms = datetime.datetime.now().timestamp() * 1000\n",
+ " try:\n",
+ " response = openai.ChatCompletion.create(model=model_name,\n",
+ " messages=messages,\n",
+ " temperature=temperature\n",
+ " )\n",
+ "\n",
+ " end_time_ms = round(datetime.datetime.now().timestamp() * 1000) # logged in milliseconds\n",
+ " status=\"success\"\n",
+ " status_message=None,\n",
+ " response_text = response[\"choices\"][0][\"message\"][\"content\"]\n",
+ " token_usage = response[\"usage\"].to_dict()\n",
+ "\n",
+ "\n",
+ " except Exception as e:\n",
+ " end_time_ms = round(datetime.datetime.now().timestamp() * 1000) # logged in milliseconds\n",
+ " status=\"error\"\n",
+ " status_message=str(e)\n",
+ " response_text = \"\"\n",
+ " token_usage = {}\n",
+ "\n",
+ " # create a span in wandb\n",
+ " root_span = Trace(\n",
+ " name=\"root_span\",\n",
+ " kind=\"llm\", # kind can be \"llm\", \"chain\", \"agent\" or \"tool\"\n",
+ " status_code=status,\n",
+ " status_message=status_message,\n",
+ " metadata={\"temperature\": temperature,\n",
+ " \"token_usage\": token_usage,\n",
+ " \"model_name\": model_name},\n",
+ " start_time_ms=start_time_ms,\n",
+ " end_time_ms=end_time_ms,\n",
+ " inputs={\"system_prompt\": system_message, \"query\": query},\n",
+ " outputs={\"response\": response_text},\n",
+ " )\n",
+ "\n",
+ " # log the span to wandb\n",
+ " root_span.log(name=\"openai_trace\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "XFcwFgaDGVmD"
+ },
+ "source": [
+ "### Logging a LLM pipeline using nested Spans\n",
+ "\n",
+ "In this example we will simulate an Agent being called, which then calls a LLM Chain, which calls an OpenAI LLM and then the Agent \"calls\" a Calculator tool.\n",
+ "\n",
+ "The inputs, outputs and metadata for each step in the execution of our \"Agent\" is logged in its own span. Spans can have child"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "ACMaGuYUGVmD"
+ },
+ "outputs": [],
+ "source": [
+ "import time\n",
+ "\n",
+ "openai.api_key = os.environ[\"OPENAI_API_KEY\"]\n",
+ "\n",
+ "# The query our agent has to answer\n",
+ "query = \"How many days until the next US election?\"\n",
+ "\n",
+ "# part 1 - an Agent is started...\n",
+ "start_time_ms = round(datetime.datetime.now().timestamp() * 1000)\n",
+ "\n",
+ "root_span = Trace(\n",
+ " name=\"MyAgent\",\n",
+ " kind=\"agent\",\n",
+ " start_time_ms=start_time_ms,\n",
+ " metadata={\"user\": \"optimus_12\"})\n",
+ "\n",
+ "\n",
+ "# part 2 - The Agent calls into a LLMChain..\n",
+ "chain_span = Trace(\n",
+ " name=\"LLMChain\",\n",
+ " kind=\"chain\",\n",
+ " start_time_ms=start_time_ms)\n",
+ "\n",
+ "# add the Chain span as a child of the root\n",
+ "root_span.add_child(chain_span)\n",
+ "\n",
+ "\n",
+ "# part 3 - the LLMChain calls an OpenAI LLM...\n",
+ "messages=[\n",
+ " {\"role\": \"system\", \"content\": system_message},\n",
+ " {\"role\": \"user\", \"content\": query}\n",
+ "]\n",
+ "\n",
+ "response = openai.ChatCompletion.create(model=model_name,\n",
+ " messages=messages,\n",
+ " temperature=temperature)\n",
+ "\n",
+ "llm_end_time_ms = round(datetime.datetime.now().timestamp() * 1000)\n",
+ "response_text = response[\"choices\"][0][\"message\"][\"content\"]\n",
+ "token_usage = response[\"usage\"].to_dict()\n",
+ "\n",
+ "llm_span = Trace(\n",
+ " name=\"OpenAI\",\n",
+ " kind=\"llm\",\n",
+ " status_code=\"success\",\n",
+ " metadata={\"temperature\":temperature,\n",
+ " \"token_usage\": token_usage,\n",
+ " \"model_name\":model_name},\n",
+ " start_time_ms=start_time_ms,\n",
+ " end_time_ms=llm_end_time_ms,\n",
+ " inputs={\"system_prompt\":system_message, \"query\":query},\n",
+ " outputs={\"response\": response_text},\n",
+ " )\n",
+ "\n",
+ "# add the LLM span as a child of the Chain span...\n",
+ "chain_span.add_child(llm_span)\n",
+ "\n",
+ "# update the end time of the Chain span\n",
+ "chain_span.add_inputs_and_outputs(\n",
+ " inputs={\"query\":query},\n",
+ " outputs={\"response\": response_text})\n",
+ "\n",
+ "# update the Chain span's end time\n",
+ "chain_span._span.end_time_ms = llm_end_time_ms\n",
+ "\n",
+ "\n",
+ "# part 4 - the Agent then calls a Tool...\n",
+ "time.sleep(3)\n",
+ "days_to_election = 117\n",
+ "tool_end_time_ms = round(datetime.datetime.now().timestamp() * 1000)\n",
+ "\n",
+ "# create a Tool span\n",
+ "tool_span = Trace(\n",
+ " name=\"Calculator\",\n",
+ " kind=\"tool\",\n",
+ " status_code=\"success\",\n",
+ " start_time_ms=llm_end_time_ms,\n",
+ " end_time_ms=tool_end_time_ms,\n",
+ " inputs={\"input\": response_text},\n",
+ " outputs={\"result\": days_to_election})\n",
+ "\n",
+ "# add the TOOL span as a child of the root\n",
+ "root_span.add_child(tool_span)\n",
+ "\n",
+ "\n",
+ "# part 5 - the final results from the tool are added\n",
+ "root_span.add_inputs_and_outputs(inputs={\"query\": query},\n",
+ " outputs={\"result\": days_to_election})\n",
+ "root_span._span.end_time_ms = tool_end_time_ms\n",
+ "\n",
+ "\n",
+ "# part 6 - log all spans to W&B by logging the root span\n",
+ "root_span.log(name=\"openai_trace\")"
+ ]
+ },
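+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "When you have finished logging traces to the run, you can optionally close it. A minimal sketch using the standard wandb API:\n",
+ "```python\n",
+ "import wandb\n",
+ "\n",
+ "# mark the current run as finished and flush any remaining data to W&B\n",
+ "wandb.finish()\n",
+ "```"
+ ]
+ },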
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "nBFVwawPGVmD"
+ },
+ "source": [
+ "Once each Agent execution completes, all calls in your LangChain object will be logged to Weights & Biases"
+ ]
+ }
+ ],
+ "metadata": {
+ "accelerator": "GPU",
+ "colab": {
+ "provenance": [],
+ "include_colab_link": true
+ },
+ "kernelspec": {
+ "display_name": "Python 3",
+ "name": "python3"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 0
+}