diff --git a/tutorials/43_Building_a_Tool_Calling_Agent.ipynb b/tutorials/43_Building_a_Tool_Calling_Agent.ipynb index ee18af3..ea3ee2a 100644 --- a/tutorials/43_Building_a_Tool_Calling_Agent.ipynb +++ b/tutorials/43_Building_a_Tool_Calling_Agent.ipynb @@ -5,15 +5,7 @@ "metadata": { "id": "2OvkPji9O-qX" }, - "source": [ - "# Tutorial: Building a Tool-Calling Agent\n", - "\n", - "- **Level**: Beginner\n", - "- **Time to complete**: 15 minutes\n", - "- **Components Used**: [`Agent`](https://docs.haystack.deepset.ai/docs/agent), [`OpenAIChatGenerator`](https://docs.haystack.deepset.ai/docs/openaichatgenerator), [`SerperDevWebSearch`](https://docs.haystack.deepset.ai/docs/serperdevwebsearch), [`ComponentTool`](https://docs.haystack.deepset.ai/docs/componenttool), [`SuperComponent`](https://docs.haystack.deepset.ai/docs/supercomponents)\n", - "- **Prerequisites**: You must have an [OpenAI API Key](https://platform.openai.com/api-keys) and a [SerperDev API Key](https://serper.dev/api-key)\n", - "- **Goal**: After completing this tutorial, you'll have learned how to create an Agent that can use tools both components and pipelines to answer questions and perform tasks." 
- ] + "source": "# Tutorial: Building a Tool-Calling Agent\n\n- **Level**: Beginner\n- **Time to complete**: 15 minutes\n- **Components Used**: [`Agent`](https://docs.haystack.deepset.ai/docs/agent), [`OpenAIChatGenerator`](https://docs.haystack.deepset.ai/docs/openaichatgenerator), [`SerperDevWebSearch`](https://docs.haystack.deepset.ai/docs/serperdevwebsearch), [`ComponentTool`](https://docs.haystack.deepset.ai/docs/componenttool), [`PipelineTool`](https://docs.haystack.deepset.ai/docs/pipelinetool)\n- **Prerequisites**: You must have an [OpenAI API Key](https://platform.openai.com/api-keys) and a [SerperDev API Key](https://serper.dev/api-key)\n- **Goal**: After completing this tutorial, you'll have learned how to create an Agent that can use both components and pipelines as tools to answer questions and perform tasks." }, { "cell_type": "markdown", @@ -203,7 +195,6 @@ }, "outputs": [], "source": [ - "from haystack.components.builders.answer_builder import AnswerBuilder\n", "from haystack.components.converters.html import HTMLToDocument\n", "from haystack.components.converters.output_adapter import OutputAdapter\n", "from haystack.components.fetchers.link_content import LinkContentFetcher\n", @@ -242,17 +233,7 @@ "metadata": { "id": "wLIcnWl-66QA" }, - "source": [ - "### Creating a Tool from a Pipeline\n", - "\n", - "Next, wrap the `search_pipeline` inside a [`SuperComponent`](https://docs.haystack.deepset.ai/docs/supercomponents) and turn it into a tool using `ComponentTool`. The `ComponentTool` automatically creates LLM-compatible tool schemas based on the component’s input sockets. \n", - "\n", - "To control what data the `ComponentTool` should receive and returns, you can optionally define `input_mapping` and `output_mapping`. 
For example, this lets you ensure that only the `\"query\"` input of the `search_pipeline` is mentioned in LLM-compatible tool schema, and only `\"search_result\"` is returned from the `SuperComponent`.\n", - "\n", - "Finally, you can initialize the Agent with the resulting `search_tool`.\n", - "\n", - "> 💡 Learn alternative ways of creating tools in [`Tool`](https://docs.haystack.deepset.ai/docs/tool) and [`MCPTool`](https://docs.haystack.deepset.ai/docs/mcptool) documentation pages." - ] + "source": "### Creating a Tool from a Pipeline\n\nNext, wrap the `search_pipeline` in a [`PipelineTool`](https://docs.haystack.deepset.ai/docs/pipelinetool). `PipelineTool` directly exposes a pipeline as an LLM-callable tool, replacing the older pattern of wrapping a pipeline in a `SuperComponent` and then passing it to `ComponentTool`.\n\nUse `input_mapping` and `output_mapping` to control which pipeline inputs and outputs are exposed. Here, `input_mapping` ensures only `\"query\"` is surfaced in the tool schema, and `output_mapping` extracts the formatted string produced by `output_adapter`.\n\nFinally, you can initialize the Agent with the resulting `search_tool`.\n\n> 💡 Learn alternative ways of creating tools in [`Tool`](https://docs.haystack.deepset.ai/docs/tool) and [`MCPTool`](https://docs.haystack.deepset.ai/docs/mcptool) documentation pages." 
}, { "cell_type": "code", @@ -261,42 +242,7 @@ "id": "yxaN3KBo65pv" }, "outputs": [], - "source": [ - "from haystack.core.super_component import SuperComponent\n", - "from haystack.tools import ComponentTool\n", - "from haystack.components.agents import Agent\n", - "from haystack.components.generators.chat import OpenAIChatGenerator\n", - "\n", - "search_component = SuperComponent(\n", - " pipeline=search_pipeline,\n", - " input_mapping={\"query\": [\"search.query\"]},\n", - " output_mapping={\"output_adapter.output\": \"search_result\"},\n", - ")\n", - "\n", - "search_tool = ComponentTool(\n", - " name=\"search\",\n", - " description=\"Use this tool to search for information on the internet.\",\n", - " component=search_component,\n", - " outputs_to_string={\"source\": \"search_result\"},\n", - ")\n", - "\n", - "agent = Agent(\n", - " chat_generator=OpenAIChatGenerator(model=\"gpt-4o-mini\"),\n", - " tools=[search_tool],\n", - " system_prompt=\"\"\"\n", - " You are a deep research assistant.\n", - " You create comprehensive research reports to answer the user's questions.\n", - " You use the 'search'-tool to answer any questions.\n", - " You perform multiple searches until you have the information you need to answer the question.\n", - " Make sure you research different aspects of the question.\n", - " Use markdown to format your response.\n", - " When you use information from the websearch results, cite your sources using markdown links.\n", - " It is important that you cite accurately.\n", - " \"\"\",\n", - " exit_conditions=[\"text\"],\n", - " max_agent_steps=20,\n", - ")" - ] + "source": "from haystack.tools import PipelineTool\nfrom haystack.components.agents import Agent\nfrom haystack.components.generators.chat import OpenAIChatGenerator\n\nsearch_tool = PipelineTool(\n name=\"search\",\n description=\"Use this tool to search for information on the internet.\",\n pipeline=search_pipeline,\n input_mapping={\"query\": [\"search.query\"]},\n 
output_mapping={\"output_adapter.output\": \"search_result\"},\n outputs_to_string={\"source\": \"search_result\"},\n)\n\nagent = Agent(\n chat_generator=OpenAIChatGenerator(model=\"gpt-4o-mini\"),\n tools=[search_tool],\n system_prompt=\"\"\"\n You are a deep research assistant.\n You create comprehensive research reports to answer the user's questions.\n You use the 'search'-tool to answer any questions.\n You perform multiple searches until you have the information you need to answer the question.\n Make sure you research different aspects of the question.\n Use markdown to format your response.\n When you use information from the websearch results, cite your sources using markdown links.\n It is important that you cite accurately.\n \"\"\",\n exit_conditions=[\"text\"],\n max_agent_steps=20,\n)" }, { "cell_type": "markdown", @@ -320,7 +266,6 @@ "query = \"What are the latest updates on the Artemis moon mission?\"\n", "messages = [ChatMessage.from_user(query)]\n", "\n", - "agent.warm_up()\n", "agent_output = agent.run(messages=messages)\n", "\n", "print(agent_output[\"messages\"][-1].text)" @@ -353,22 +298,7 @@ "metadata": { "id": "czMjWwnxPA-3" }, - "source": [ - "Let's break down this last example in the tutorial.\n", - "The **Agent** is the main component that orchestrates the interaction between the LLM and tools.\n", - "We use **ComponentTool** as a wrapper that allows Haystack components to be used as tools by the agent.\n", - "The **SuperComponent** wraps entire pipelines so that they can be used as components and thus also as tools.\n", - "\n", - "We created a sophisticated search pipeline that:\n", - "1. Searches the web using SerperDevWebSearch\n", - "2. Fetches content from the found links\n", - "3. Converts HTML content to Documents\n", - "4. 
Formats the results for the Agent\n", - "\n", - "The Agent then uses this pipeline as a tool to gather information and generate comprehensive answers.\n", - "\n", - "By the way, did you know that the Agent is a Haystack component itself? That means you can use and combine an Agent in your pipelines just like any other component!" - ] + "source": "Let's break down this last example in the tutorial.\nThe **Agent** is the main component that orchestrates the interaction between the LLM and tools.\nWe use **ComponentTool** as a wrapper that allows individual Haystack components to be used as tools by the agent.\nThe **PipelineTool** wraps entire pipelines so that they can be used as tools directly, without needing an intermediate `SuperComponent`.\n\nWe created a sophisticated search pipeline that:\n1. Searches the web using SerperDevWebSearch\n2. Fetches content from the found links\n3. Converts HTML content to Documents\n4. Formats the results for the Agent\n\nThe Agent then uses this pipeline as a tool to gather information and generate comprehensive answers.\n\nBy the way, did you know that the Agent is a Haystack component itself? That means you can use and combine an Agent in your pipelines just like any other component!" 
}, { "cell_type": "markdown", diff --git a/tutorials/45_Creating_a_Multi_Agent_System.ipynb b/tutorials/45_Creating_a_Multi_Agent_System.ipynb index 8e6e783..2b62b7b 100644 --- a/tutorials/45_Creating_a_Multi_Agent_System.ipynb +++ b/tutorials/45_Creating_a_Multi_Agent_System.ipynb @@ -55,10 +55,10 @@ "name": "stdout", "output_type": "stream", "text": [ - "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m514.7/514.7 kB\u001b[0m \u001b[31m8.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", - "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m96.7/96.7 kB\u001b[0m \u001b[31m8.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", - "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m74.5/74.5 kB\u001b[0m \u001b[31m5.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", - "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m3.3/3.3 MB\u001b[0m \u001b[31m55.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", + "\u001b[2K \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m514.7/514.7 kB\u001b[0m \u001b[31m8.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", + "\u001b[2K \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m96.7/96.7 kB\u001b[0m \u001b[31m8.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", + "\u001b[2K 
\u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m74.5/74.5 kB\u001b[0m \u001b[31m5.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", + "\u001b[2K \u001b[90m\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u001b[0m \u001b[32m3.3/3.3 MB\u001b[0m \u001b[31m55.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", "\u001b[?25h" ] } @@ -462,7 +462,7 @@ "id": "OHA3nnEJIWEb" }, "source": [ - "> 💡 When turning a custom component into a tool using `ComponentTool`, make sure its input parameters are well-defined. You can do this in one of two ways:\n", + "> \ud83d\udca1 When turning a custom component into a tool using `ComponentTool`, make sure its input parameters are well-defined. You can do this in one of two ways:\n", "1. Pass a `properties` dictionary to `ComponentTool`, or\n", "2. Use parameter annotations in the `run` method's docstring, like so:\n", "```python\n", @@ -523,9 +523,9 @@ "\n", "Let's now build the other tool for the writer agent, this one will save content to an [InMemoryDocumentStore](https://docs.haystack.deepset.ai/docs/inmemorydocumentstore).\n", "\n", - "To make this work, start by creating a pipeline that includes the custom `DocumentAdapter` component along with the [DocumentWriter](https://docs.haystack.deepset.ai/docs/documentwriter). Once the pipeline is ready, wrap it in a `SuperComponent` and then convert it into a tool using `ComponentTool`.\n", + "To make this work, start by creating a pipeline that includes the custom `DocumentAdapter` component along with the [DocumentWriter](https://docs.haystack.deepset.ai/docs/documentwriter). 
Once the pipeline is ready, wrap it in a `PipelineTool` to expose it as an LLM-callable tool.\n", "\n", - "> 💡 Tip: You could also [create a tool](https://docs.haystack.deepset.ai/docs/tool#tool-initialization) from a simple function that runs the pipeline. However, the recommended approach is to use `SuperComponent` together with `ComponentTool`, especially if you plan to deploy the tool with [Hayhooks](https://docs.haystack.deepset.ai/docs/hayhooks), since this method supports better serialization. Learn more about `SuperComponents` in [Tutorial: Creating Custom SuperComponents](https://haystack.deepset.ai/tutorials/44_creating_custom_supercomponents)" + "> \ud83d\udca1 **Tip**: You could also wrap the pipeline in a `@tool`-decorated function for full control over what the LLM sees \u2014 including simplified parameters, formatted output, and custom error handling. Use `PipelineTool` when you prefer **declarative configuration**: the pipeline is captured in one serializable object. Learn more in the [PipelineTool docs](https://docs.haystack.deepset.ai/docs/pipelinetool)." 
] }, { @@ -554,42 +554,7 @@ "output_type": "execute_result" } ], - "source": [ - "from haystack import Pipeline, component, Document, SuperComponent\n", - "from haystack.components.writers import DocumentWriter\n", - "from haystack.document_stores.in_memory import InMemoryDocumentStore\n", - "from typing import List\n", - "\n", - "\n", - "@component\n", - "class DocumentAdapter:\n", - " @component.output_types(documents=List[Document])\n", - " def run(self, content: str, title: str):\n", - " return {\"documents\": [Document(content=content, meta={\"title\": title})]}\n", - "\n", - "\n", - "document_store = InMemoryDocumentStore()\n", - "\n", - "doc_store_writer_pipeline = Pipeline()\n", - "doc_store_writer_pipeline.add_component(\"adapter\", DocumentAdapter())\n", - "doc_store_writer_pipeline.add_component(\"writer\", DocumentWriter(document_store=document_store))\n", - "doc_store_writer_pipeline.connect(\"adapter\", \"writer\")\n", - "\n", - "doc_store_writer = ComponentTool(\n", - " component=SuperComponent(doc_store_writer_pipeline),\n", - " name=\"doc_store_writer\",\n", - " description=\"Use this tool to write/save content to document store\",\n", - " parameters={\n", - " \"type\": \"object\",\n", - " \"properties\": {\n", - " \"title\": {\"type\": \"string\", \"description\": \"The title of the Document\"},\n", - " \"content\": {\"type\": \"string\", \"description\": \"The content of the Document\"},\n", - " },\n", - " \"required\": [\"title\", \"content\"],\n", - " },\n", - ")\n", - "doc_store_writer.parameters" - ] + "source": "from haystack import Pipeline, component, Document\nfrom haystack.components.writers import DocumentWriter\nfrom haystack.document_stores.in_memory import InMemoryDocumentStore\nfrom haystack.tools import PipelineTool\nfrom typing import List\n\n\n@component\nclass DocumentAdapter:\n @component.output_types(documents=List[Document])\n def run(self, content: str, title: str):\n return {\"documents\": [Document(content=content, 
meta={\"title\": title})]}\n\n\ndocument_store = InMemoryDocumentStore()\n\ndoc_store_writer_pipeline = Pipeline()\ndoc_store_writer_pipeline.add_component(\"adapter\", DocumentAdapter())\ndoc_store_writer_pipeline.add_component(\"writer\", DocumentWriter(document_store=document_store))\ndoc_store_writer_pipeline.connect(\"adapter\", \"writer\")\n\ndoc_store_writer = PipelineTool(\n pipeline=doc_store_writer_pipeline,\n name=\"doc_store_writer\",\n description=\"Use this tool to write/save content to document store\",\n parameters={\n \"type\": \"object\",\n \"properties\": {\n \"title\": {\"type\": \"string\", \"description\": \"The title of the Document\"},\n \"content\": {\"type\": \"string\", \"description\": \"The content of the Document\"},\n },\n \"required\": [\"title\", \"content\"],\n },\n)\ndoc_store_writer.parameters" }, { "cell_type": "markdown", @@ -783,7 +748,7 @@ "\n", " The Silk Road was a network of Asian trade routes active from the second century BCE until the mid-15th century. 
Spanning over 6,400 km (4,000 mi), ...File Content for https://en.wikipedia.org/wiki/Silk_Road_(marketplace)\n", "\n", - " The name \"Silk Road\" comes from a historical network of trade routes started during the Han Dynasty (206 BCE – 220 CE) between Europe, India, China, and many ...File Content for https://en.wikipedia.org/wiki/The_Silk_Roads\n", + " The name \"Silk Road\" comes from a historical network of trade routes started during the Han Dynasty (206 BCE \u2013 220 CE) between Europe, India, China, and many ...File Content for https://en.wikipedia.org/wiki/The_Silk_Roads\n", "\n", " The Silk Roads: A New History of the World is a 2015 non-fiction book written by English historian Peter Frankopan, a historian at the University of Oxford.File Content for https://en.wikipedia.org/wiki/Cities_along_the_Silk_Road\n", "\n", @@ -793,7 +758,7 @@ "\n", "The Silk Road was a network of trade routes that facilitated commerce and cultural exchange between various civilizations, particularly between Europe and Asia. It was active from the 2nd century BCE until the mid-15th century and spanned over 6,400 kilometers (approximately 4,000 miles).\n", "\n", - "The name \"Silk Road\" is derived from the lucrative silk trade that was carried out along these routes, starting during the Han Dynasty (206 BCE – 220 CE) under Emperor Wu. The routes connected China with India, Persia, and further to Europe, allowing for the exchange of goods, ideas, and cultures.\n", + "The name \"Silk Road\" is derived from the lucrative silk trade that was carried out along these routes, starting during the Han Dynasty (206 BCE \u2013 220 CE) under Emperor Wu. The routes connected China with India, Persia, and further to Europe, allowing for the exchange of goods, ideas, and cultures.\n", "\n", "Historically, the Silk Road comprised various routes, including both overland and maritime paths. 
It played a significant role in the development of the civilizations that it connected by facilitating trade in not only silk but also other commodities like spices, textiles, and precious stones.\n", "\n", @@ -804,13 +769,13 @@ "\n", "\n", "[TOOL RESULT]\n", - "{'messages': [{'role': 'system', 'meta': {}, 'name': None, 'content': [{'text': '\\n You are a research agent that can find information on web or specifically on wikipedia. \\n Use wiki_search tool if you need facts and use web_search tool for latest news on topics.\\n Use one tool at a time. Try different queries if you need more information.\\n Only use the retrieved context, do not use your own knowledge.\\n Summarize the all retrieved information before returning response to the user.\\n '}]}, {'role': 'user', 'meta': {}, 'name': None, 'content': [{'text': 'Can you provide an overview of the history of the Silk Road?'}]}, {'role': 'assistant', 'meta': {'model': 'gpt-4o-mini-2024-07-18', 'index': 0, 'finish_reason': 'tool_calls', 'completion_start_time': '2025-05-28T15:16:49.904424', 'usage': None}, 'name': None, 'content': [{'tool_call': {'tool_name': 'wiki_search', 'arguments': {'query': 'History of the Silk Road'}, 'id': 'call_pxDTCHYeS27s9drCvQ507ie8'}}]}, {'role': 'tool', 'meta': {}, 'name': None, 'content': [{'tool_call_result': {'result': 'File Content for https://en.wikipedia.org/wiki/Silk_Road\\n\\n The Silk Road was a network of Asian trade routes active from the second century BCE until the mid-15th century. 
Spanning over 6,400 km (4,000 mi), ...File Content for https://en.wikipedia.org/wiki/Silk_Road_(marketplace)\\n\\n The name \"Silk Road\" comes from a historical network of trade routes started during the Han Dynasty (206 BCE – 220 CE) between Europe, India, China, and many ...File Content for https://en.wikipedia.org/wiki/The_Silk_Roads\\n\\n The Silk Roads: A New History of the World is a 2015 non-fiction book written by English historian Peter Frankopan, a historian at the University of Oxford.File Content for https://en.wikipedia.org/wiki/Cities_along_the_Silk_Road\\n\\n It came into existence in the 2nd century BCE, when Emperor Wu of the Han dynasty was in power, and lasted until the 15th century CE, when the Ottoman Empire ...File Content for https://en.wikipedia.org/wiki/Northern_Silk_Road\\n\\n The Northern Silk Road is a historic inland trade route in Northwest China and Central Asia originating in the ancient Chinese capital of Chang\\'an (modern ...', 'origin': {'tool_name': 'wiki_search', 'arguments': {'query': 'History of the Silk Road'}, 'id': 'call_pxDTCHYeS27s9drCvQ507ie8'}, 'error': False}}]}, {'role': 'assistant', 'meta': {'model': 'gpt-4o-mini-2024-07-18', 'index': 0, 'finish_reason': 'stop', 'completion_start_time': '2025-05-28T15:16:51.627946', 'usage': None}, 'name': None, 'content': [{'text': 'The Silk Road was a network of trade routes that facilitated commerce and cultural exchange between various civilizations, particularly between Europe and Asia. It was active from the 2nd century BCE until the mid-15th century and spanned over 6,400 kilometers (approximately 4,000 miles).\\n\\nThe name \"Silk Road\" is derived from the lucrative silk trade that was carried out along these routes, starting during the Han Dynasty (206 BCE – 220 CE) under Emperor Wu. 
The routes connected China with India, Persia, and further to Europe, allowing for the exchange of goods, ideas, and cultures.\\n\\nHistorically, the Silk Road comprised various routes, including both overland and maritime paths. It played a significant role in the development of the civilizations that it connected by facilitating trade in not only silk but also other commodities like spices, textiles, and precious stones.\\n\\nThe importance of the Silk Road diminished in the late 15th century due to the rise of maritime trade routes and the expansion of empires, such as the Ottoman Empire, which changed the dynamics of trade and cultural exchange in the regions it linked.\\n\\nOverall, the Silk Road is a crucial part of world history, representing an extensive system of trade that fostered interactions among diverse cultures over centuries.'}]}], 'last_message': {'role': 'assistant', 'meta': {'model': 'gpt-4o-mini-2024-07-18', 'index': 0, 'finish_reason': 'stop', 'completion_start_time': '2025-05-28T15:16:51.627946', 'usage': None}, 'name': None, 'content': [{'text': 'The Silk Road was a network of trade routes that facilitated commerce and cultural exchange between various civilizations, particularly between Europe and Asia. It was active from the 2nd century BCE until the mid-15th century and spanned over 6,400 kilometers (approximately 4,000 miles).\\n\\nThe name \"Silk Road\" is derived from the lucrative silk trade that was carried out along these routes, starting during the Han Dynasty (206 BCE – 220 CE) under Emperor Wu. The routes connected China with India, Persia, and further to Europe, allowing for the exchange of goods, ideas, and cultures.\\n\\nHistorically, the Silk Road comprised various routes, including both overland and maritime paths. 
It played a significant role in the development of the civilizations that it connected by facilitating trade in not only silk but also other commodities like spices, textiles, and precious stones.\\n\\nThe importance of the Silk Road diminished in the late 15th century due to the rise of maritime trade routes and the expansion of empires, such as the Ottoman Empire, which changed the dynamics of trade and cultural exchange in the regions it linked.\\n\\nOverall, the Silk Road is a crucial part of world history, representing an extensive system of trade that fostered interactions among diverse cultures over centuries.'}]}}\n", + "{'messages': [{'role': 'system', 'meta': {}, 'name': None, 'content': [{'text': '\\n You are a research agent that can find information on web or specifically on wikipedia. \\n Use wiki_search tool if you need facts and use web_search tool for latest news on topics.\\n Use one tool at a time. Try different queries if you need more information.\\n Only use the retrieved context, do not use your own knowledge.\\n Summarize the all retrieved information before returning response to the user.\\n '}]}, {'role': 'user', 'meta': {}, 'name': None, 'content': [{'text': 'Can you provide an overview of the history of the Silk Road?'}]}, {'role': 'assistant', 'meta': {'model': 'gpt-4o-mini-2024-07-18', 'index': 0, 'finish_reason': 'tool_calls', 'completion_start_time': '2025-05-28T15:16:49.904424', 'usage': None}, 'name': None, 'content': [{'tool_call': {'tool_name': 'wiki_search', 'arguments': {'query': 'History of the Silk Road'}, 'id': 'call_pxDTCHYeS27s9drCvQ507ie8'}}]}, {'role': 'tool', 'meta': {}, 'name': None, 'content': [{'tool_call_result': {'result': 'File Content for https://en.wikipedia.org/wiki/Silk_Road\\n\\n The Silk Road was a network of Asian trade routes active from the second century BCE until the mid-15th century. 
Spanning over 6,400 km (4,000 mi), ...File Content for https://en.wikipedia.org/wiki/Silk_Road_(marketplace)\\n\\n The name \"Silk Road\" comes from a historical network of trade routes started during the Han Dynasty (206 BCE \u2013 220 CE) between Europe, India, China, and many ...File Content for https://en.wikipedia.org/wiki/The_Silk_Roads\\n\\n The Silk Roads: A New History of the World is a 2015 non-fiction book written by English historian Peter Frankopan, a historian at the University of Oxford.File Content for https://en.wikipedia.org/wiki/Cities_along_the_Silk_Road\\n\\n It came into existence in the 2nd century BCE, when Emperor Wu of the Han dynasty was in power, and lasted until the 15th century CE, when the Ottoman Empire ...File Content for https://en.wikipedia.org/wiki/Northern_Silk_Road\\n\\n The Northern Silk Road is a historic inland trade route in Northwest China and Central Asia originating in the ancient Chinese capital of Chang\\'an (modern ...', 'origin': {'tool_name': 'wiki_search', 'arguments': {'query': 'History of the Silk Road'}, 'id': 'call_pxDTCHYeS27s9drCvQ507ie8'}, 'error': False}}]}, {'role': 'assistant', 'meta': {'model': 'gpt-4o-mini-2024-07-18', 'index': 0, 'finish_reason': 'stop', 'completion_start_time': '2025-05-28T15:16:51.627946', 'usage': None}, 'name': None, 'content': [{'text': 'The Silk Road was a network of trade routes that facilitated commerce and cultural exchange between various civilizations, particularly between Europe and Asia. It was active from the 2nd century BCE until the mid-15th century and spanned over 6,400 kilometers (approximately 4,000 miles).\\n\\nThe name \"Silk Road\" is derived from the lucrative silk trade that was carried out along these routes, starting during the Han Dynasty (206 BCE \u2013 220 CE) under Emperor Wu. 
The routes connected China with India, Persia, and further to Europe, allowing for the exchange of goods, ideas, and cultures.\\n\\nHistorically, the Silk Road comprised various routes, including both overland and maritime paths. It played a significant role in the development of the civilizations that it connected by facilitating trade in not only silk but also other commodities like spices, textiles, and precious stones.\\n\\nThe importance of the Silk Road diminished in the late 15th century due to the rise of maritime trade routes and the expansion of empires, such as the Ottoman Empire, which changed the dynamics of trade and cultural exchange in the regions it linked.\\n\\nOverall, the Silk Road is a crucial part of world history, representing an extensive system of trade that fostered interactions among diverse cultures over centuries.'}]}], 'last_message': {'role': 'assistant', 'meta': {'model': 'gpt-4o-mini-2024-07-18', 'index': 0, 'finish_reason': 'stop', 'completion_start_time': '2025-05-28T15:16:51.627946', 'usage': None}, 'name': None, 'content': [{'text': 'The Silk Road was a network of trade routes that facilitated commerce and cultural exchange between various civilizations, particularly between Europe and Asia. It was active from the 2nd century BCE until the mid-15th century and spanned over 6,400 kilometers (approximately 4,000 miles).\\n\\nThe name \"Silk Road\" is derived from the lucrative silk trade that was carried out along these routes, starting during the Han Dynasty (206 BCE \u2013 220 CE) under Emperor Wu. The routes connected China with India, Persia, and further to Europe, allowing for the exchange of goods, ideas, and cultures.\\n\\nHistorically, the Silk Road comprised various routes, including both overland and maritime paths. 
It played a significant role in the development of the civilizations that it connected by facilitating trade in not only silk but also other commodities like spices, textiles, and precious stones.\\n\\nThe importance of the Silk Road diminished in the late 15th century due to the rise of maritime trade routes and the expansion of empires, such as the Ottoman Empire, which changed the dynamics of trade and cultural exchange in the regions it linked.\\n\\nOverall, the Silk Road is a crucial part of world history, representing an extensive system of trade that fostered interactions among diverse cultures over centuries.'}]}}\n", "\n", - "The Silk Road was a vast network of trade routes that facilitated commerce and cultural exchange among various civilizations, primarily between Europe and Asia. Here’s an overview of its history:\n", + "The Silk Road was a vast network of trade routes that facilitated commerce and cultural exchange among various civilizations, primarily between Europe and Asia. 
Here\u2019s an overview of its history:\n", "\n", "- **Time Period**: The Silk Road was active from the 2nd century BCE until the mid-15th century, spanning over 6,400 kilometers (approximately 4,000 miles).\n", "\n", - "- **Origin of the Name**: The term \"Silk Road\" comes from the lucrative silk trade that flourished along these routes, a practice that began during the Han Dynasty (206 BCE – 220 CE) under Emperor Wu.\n", + "- **Origin of the Name**: The term \"Silk Road\" comes from the lucrative silk trade that flourished along these routes, a practice that began during the Han Dynasty (206 BCE \u2013 220 CE) under Emperor Wu.\n", "\n", "- **Geographical Connections**: The routes connected China with India, Persia, and extended to Europe, allowing not only for the trade of silk but also various commodities such as spices, textiles, and precious stones.\n", "\n", @@ -855,7 +820,7 @@ "type": "string" }, "text/plain": [ - "'The Silk Road was a vast network of trade routes that facilitated commerce and cultural exchange among various civilizations, primarily between Europe and Asia. 
Here’s an overview of its history:\\n\\n- **Time Period**: The Silk Road was active from the 2nd century BCE until the mid-15th century, spanning over 6,400 kilometers (approximately 4,000 miles).\\n\\n- **Origin of the Name**: The term \"Silk Road\" comes from the lucrative silk trade that flourished along these routes, a practice that began during the Han Dynasty (206 BCE – 220 CE) under Emperor Wu.\\n\\n- **Geographical Connections**: The routes connected China with India, Persia, and extended to Europe, allowing not only for the trade of silk but also various commodities such as spices, textiles, and precious stones.\\n\\n- **Cultural Exchange**: The Silk Road served as a conduit for the exchange of goods, ideas, technologies, and cultures, significantly impacting the development of the civilizations involved.\\n\\n- **Dynamics Changes**: The significance of the Silk Road diminished in the late 15th century due to the advent of maritime trade routes and the rise of empires like the Ottoman Empire, which altered trade dynamics in the regions connected by the Silk Road.\\n\\nOverall, the Silk Road represents a crucial chapter in world history, illustrating how trade can foster interactions among diverse cultures over centuries.'" + "'The Silk Road was a vast network of trade routes that facilitated commerce and cultural exchange among various civilizations, primarily between Europe and Asia. 
Here\u2019s an overview of its history:\\n\\n- **Time Period**: The Silk Road was active from the 2nd century BCE until the mid-15th century, spanning over 6,400 kilometers (approximately 4,000 miles).\\n\\n- **Origin of the Name**: The term \"Silk Road\" comes from the lucrative silk trade that flourished along these routes, a practice that began during the Han Dynasty (206 BCE \u2013 220 CE) under Emperor Wu.\\n\\n- **Geographical Connections**: The routes connected China with India, Persia, and extended to Europe, allowing not only for the trade of silk but also various commodities such as spices, textiles, and precious stones.\\n\\n- **Cultural Exchange**: The Silk Road served as a conduit for the exchange of goods, ideas, technologies, and cultures, significantly impacting the development of the civilizations involved.\\n\\n- **Dynamics Changes**: The significance of the Silk Road diminished in the late 15th century due to the advent of maritime trade routes and the rise of empires like the Ottoman Empire, which altered trade dynamics in the regions connected by the Silk Road.\\n\\nOverall, the Silk Road represents a crucial chapter in world history, illustrating how trade can foster interactions among diverse cultures over centuries.'" ] }, "execution_count": 66, @@ -922,13 +887,13 @@ "\n", " Retrieval-augmented generation (RAG) is a technique that enables large language models (LLMs) to retrieve and incorporate new information.File Content for https://en.wikipedia.org/wiki/Rag\n", "\n", - " Retrieval-augmented generation, generative AI with the addition of information retrieval capabilities. See also. edit Β· All pages with titles beginning with Rag ...File Content for https://en.wikipedia.org/wiki/Prompt_engineering\n", + " Retrieval-augmented generation, generative AI with the addition of information retrieval capabilities. See also. 
edit \u00b7 All pages with titles beginning with Rag ...File Content for https://en.wikipedia.org/wiki/Prompt_engineering\n", "\n", " Retrieval-augmented generation (RAG) is a technique that enables generative ... Graph retrieval-augmented generation. edit. GraphRAG with a knowledge ...File Content for https://en.wikipedia.org/wiki/Large_language_model\n", "\n", " Retrieval-augmented generation (RAG) is another approach that enhances LLMs by integrating them with document retrieval systems. Given a query, a document ...File Content for https://en.wikipedia.org/wiki/Category:Information_retrieval_systems\n", "\n", - " Retrieval-augmented generation Β· RetrievalWare Β· Reverse DNS lookup Β· Reverse telephone directory. S. List of search appliance vendors Β· Search engine ( ...\n", + " Retrieval-augmented generation \u00b7 RetrievalWare \u00b7 Reverse DNS lookup \u00b7 Reverse telephone directory. S. List of search appliance vendors \u00b7 Search engine ( ...\n", "\n", "\n", "\n", @@ -972,7 +937,7 @@ "\n", " Confidential computing is a security and privacy-enhancing computational technique focused on protecting data in use.\n", "\n", - "Retrieval-Augmented Generation (RAG) is a method that enhances large language models (LLMs) by integrating the capability to retrieve external information before generating responses. Here’s a summary of how RAG pipelines work:\n", + "Retrieval-Augmented Generation (RAG) is a method that enhances large language models (LLMs) by integrating the capability to retrieve external information before generating responses. Here\u2019s a summary of how RAG pipelines work:\n", "\n", "1. **Information Retrieval**: The first step in a RAG pipeline involves querying an external document retrieval system. 
Given an input query, the system retrieves relevant documents or pieces of information from a larger corpus.\n", "\n", @@ -985,19 +950,19 @@ "\n", "\n", "[TOOL RESULT]\n", - "{'messages': [{'role': 'system', 'meta': {}, 'name': None, 'content': [{'text': '\\n You are a research agent that can find information on web or specifically on wikipedia. \\n Use wiki_search tool if you need facts and use web_search tool for latest news on topics.\\n Use one tool at a time. Try different queries if you need more information.\\n Only use the retrieved context, do not use your own knowledge.\\n Summarize the all retrieved information before returning response to the user.\\n '}]}, {'role': 'user', 'meta': {}, 'name': None, 'content': [{'text': 'Summarize how RAG pipelines work.'}]}, {'role': 'assistant', 'meta': {'model': 'gpt-4o-mini-2024-07-18', 'index': 0, 'finish_reason': 'tool_calls', 'completion_start_time': '2025-05-28T15:18:33.573590', 'usage': None}, 'name': None, 'content': [{'tool_call': {'tool_name': 'wiki_search', 'arguments': {'query': 'RAG pipelines'}, 'id': 'call_xnAEMUy7iCVrRSDbViUJHqXG'}}]}, {'role': 'tool', 'meta': {}, 'name': None, 'content': [{'tool_call_result': {'result': 'File Content for https://en.wikipedia.org/wiki/Retrieval-augmented_generation\\n\\n Retrieval-augmented generation (RAG) is a technique that enables large language models (LLMs) to retrieve and incorporate new information. With RAG, LLMs do ...File Content for https://en.wikipedia.org/wiki/Prompt_engineering\\n\\n GraphRAG (coined by Microsoft Research) is a technique that extends RAG with the use of a knowledge graph (usually, LLM-generated) to allow the model to ...File Content for https://en.wikipedia.org/wiki/Large_language_model\\n\\n Retrieval-augmented generation (RAG) is another approach that enhances LLMs by integrating them with document retrieval systems. 
Given a query, a document ...File Content for https://en.wikipedia.org/wiki/RagTime\\n\\n RagTime is a frame-oriented business publishing software which combines word processing, spreadsheets, simple drawings, image processing, and chartsFile Content for https://en.wikipedia.org/wiki/Zelten_oil_field\\n\\n SOC operates the Raguba field in the central part of the Sirte Basin. The field is connected by pipeline to the main line between the Zelten field and Brega.', 'origin': {'tool_name': 'wiki_search', 'arguments': {'query': 'RAG pipelines'}, 'id': 'call_xnAEMUy7iCVrRSDbViUJHqXG'}, 'error': False}}]}, {'role': 'assistant', 'meta': {'model': 'gpt-4o-mini-2024-07-18', 'index': 0, 'finish_reason': 'tool_calls', 'completion_start_time': '2025-05-28T15:18:35.182539', 'usage': None}, 'name': None, 'content': [{'tool_call': {'tool_name': 'wiki_search', 'arguments': {'query': 'Retrieval-augmented generation'}, 'id': 'call_ECjlLz2bcH0JO0po00qG50LT'}}]}, {'role': 'tool', 'meta': {}, 'name': None, 'content': [{'tool_call_result': {'result': 'File Content for https://en.wikipedia.org/wiki/Retrieval-augmented_generation\\n\\n Retrieval-augmented generation (RAG) is a technique that enables large language models (LLMs) to retrieve and incorporate new information.File Content for https://en.wikipedia.org/wiki/Rag\\n\\n Retrieval-augmented generation, generative AI with the addition of information retrieval capabilities. See also. edit Β· All pages with titles beginning with Rag ...File Content for https://en.wikipedia.org/wiki/Prompt_engineering\\n\\n Retrieval-augmented generation (RAG) is a technique that enables generative ... Graph retrieval-augmented generation. edit. GraphRAG with a knowledge ...File Content for https://en.wikipedia.org/wiki/Large_language_model\\n\\n Retrieval-augmented generation (RAG) is another approach that enhances LLMs by integrating them with document retrieval systems. 
Given a query, a document ...File Content for https://en.wikipedia.org/wiki/Category:Information_retrieval_systems\\n\\n Retrieval-augmented generation Β· RetrievalWare Β· Reverse DNS lookup Β· Reverse telephone directory. S. List of search appliance vendors Β· Search engine ( ...', 'origin': {'tool_name': 'wiki_search', 'arguments': {'query': 'Retrieval-augmented generation'}, 'id': 'call_ECjlLz2bcH0JO0po00qG50LT'}, 'error': False}}]}, {'role': 'assistant', 'meta': {'model': 'gpt-4o-mini-2024-07-18', 'index': 0, 'finish_reason': 'tool_calls', 'completion_start_time': '2025-05-28T15:18:36.804910', 'usage': None}, 'name': None, 'content': [{'tool_call': {'tool_name': 'wiki_search', 'arguments': {'query': 'how RAG pipelines work'}, 'id': 'call_fIBCEMoLaUUDoJnZw137JVSP'}}]}, {'role': 'tool', 'meta': {}, 'name': None, 'content': [{'tool_call_result': {'result': 'File Content for https://en.wikipedia.org/wiki/Retrieval-augmented_generation\\n\\n RAG improves large language models (LLMs) by incorporating information retrieval before generating responses. Unlike traditional LLMs that rely on static ...File Content for https://en.wikipedia.org/wiki/Prompt_engineering\\n\\n Prompt engineering is the process of structuring or crafting an instruction in order to produce the best possible output from a generative artificial ...File Content for https://en.wikipedia.org/wiki/RNA_therapeutics\\n\\n The main types of RNA therapeutics are those based on messenger RNA (mRNA), antisense RNA (asRNA), RNA interference (RNAi), RNA activation (RNAa) and RNA ...File Content for https://en.wikipedia.org/wiki/Aviation_Cadet_Training_Program_(USN)\\n\\n The US Navy had four programs (NavCad, NAP, AVMIDN, and MarCad) for the training of naval aviators.File Content for https://en.wikipedia.org/wiki/DataStax\\n\\n DataStax, Inc. is a real-time data for AI company based in Santa Clara, California. 
Its product Astra DB is a cloud database-as-a-service based on Apache ...', 'origin': {'tool_name': 'wiki_search', 'arguments': {'query': 'how RAG pipelines work'}, 'id': 'call_fIBCEMoLaUUDoJnZw137JVSP'}, 'error': False}}]}, {'role': 'assistant', 'meta': {'model': 'gpt-4o-mini-2024-07-18', 'index': 0, 'finish_reason': 'tool_calls', 'completion_start_time': '2025-05-28T15:18:38.455416', 'usage': None}, 'name': None, 'content': [{'tool_call': {'tool_name': 'wiki_search', 'arguments': {'query': 'Retrieval-augmented generation RAG mechanism'}, 'id': 'call_3GRCPA5myd3uwqFhqbvXn1qk'}}]}, {'role': 'tool', 'meta': {}, 'name': None, 'content': [{'tool_call_result': {'result': 'File Content for https://en.wikipedia.org/wiki/Retrieval-augmented_generation\\n\\n Retrieval-augmented generation (RAG) is a technique that enables large language models (LLMs) to retrieve and incorporate new information. With RAG ...File Content for https://en.wikipedia.org/wiki/Prompt_engineering\\n\\n Retrieval-augmented generation (RAG) is a technique that enables generative artificial intelligence (Gen AI) models to retrieve and incorporate new information.File Content for https://en.wikipedia.org/wiki/Large_language_model\\n\\n Retrieval-augmented generation (RAG) is another approach that enhances LLMs by integrating them with document retrieval systems. 
Given a query, a document ...File Content for https://en.wikipedia.org/wiki/Generative_artificial_intelligence\\n\\n Generative artificial intelligence is a subfield of artificial intelligence that uses generative models to produce text, images, videos, or other forms of ...File Content for https://en.wikipedia.org/wiki/Confidential_computing\\n\\n Confidential computing is a security and privacy-enhancing computational technique focused on protecting data in use.', 'origin': {'tool_name': 'wiki_search', 'arguments': {'query': 'Retrieval-augmented generation RAG mechanism'}, 'id': 'call_3GRCPA5myd3uwqFhqbvXn1qk'}, 'error': False}}]}, {'role': 'assistant', 'meta': {'model': 'gpt-4o-mini-2024-07-18', 'index': 0, 'finish_reason': 'stop', 'completion_start_time': '2025-05-28T15:18:41.265652', 'usage': None}, 'name': None, 'content': [{'text': 'Retrieval-Augmented Generation (RAG) is a method that enhances large language models (LLMs) by integrating the capability to retrieve external information before generating responses. Here’s a summary of how RAG pipelines work:\\n\\n1. **Information Retrieval**: The first step in a RAG pipeline involves querying an external document retrieval system. Given an input query, the system retrieves relevant documents or pieces of information from a larger corpus.\\n\\n2. **Response Generation**: After retrieving the relevant documents, the RAG leverages the LLM to generate a response. The LLM uses the external documents as context, allowing it to produce more accurate and informed outputs compared to traditional LLMs that rely purely on their pre-existing knowledge.\\n\\n3. **Integration of New Information**: By incorporating real-time data retrieved from documents, RAG systems can provide answers that are more relevant, up-to-date, and comprehensive. 
This bridging between retrieval systems and generative models allows for better-informed and nuanced responses.\\n\\nIn essence, RAG works by utilizing a two-step process, combining the strengths of information retrieval with the generative capabilities of LLMs, to enhance the quality of generated content.'}]}], 'last_message': {'role': 'assistant', 'meta': {'model': 'gpt-4o-mini-2024-07-18', 'index': 0, 'finish_reason': 'stop', 'completion_start_time': '2025-05-28T15:18:41.265652', 'usage': None}, 'name': None, 'content': [{'text': 'Retrieval-Augmented Generation (RAG) is a method that enhances large language models (LLMs) by integrating the capability to retrieve external information before generating responses. Here’s a summary of how RAG pipelines work:\\n\\n1. **Information Retrieval**: The first step in a RAG pipeline involves querying an external document retrieval system. Given an input query, the system retrieves relevant documents or pieces of information from a larger corpus.\\n\\n2. **Response Generation**: After retrieving the relevant documents, the RAG leverages the LLM to generate a response. The LLM uses the external documents as context, allowing it to produce more accurate and informed outputs compared to traditional LLMs that rely purely on their pre-existing knowledge.\\n\\n3. **Integration of New Information**: By incorporating real-time data retrieved from documents, RAG systems can provide answers that are more relevant, up-to-date, and comprehensive. This bridging between retrieval systems and generative models allows for better-informed and nuanced responses.\\n\\nIn essence, RAG works by utilizing a two-step process, combining the strengths of information retrieval with the generative capabilities of LLMs, to enhance the quality of generated content.'}]}}\n", + "{'messages': [{'role': 'system', 'meta': {}, 'name': None, 'content': [{'text': '\\n You are a research agent that can find information on web or specifically on wikipedia. 
\\n Use wiki_search tool if you need facts and use web_search tool for latest news on topics.\\n Use one tool at a time. Try different queries if you need more information.\\n Only use the retrieved context, do not use your own knowledge.\\n Summarize the all retrieved information before returning response to the user.\\n '}]}, {'role': 'user', 'meta': {}, 'name': None, 'content': [{'text': 'Summarize how RAG pipelines work.'}]}, {'role': 'assistant', 'meta': {'model': 'gpt-4o-mini-2024-07-18', 'index': 0, 'finish_reason': 'tool_calls', 'completion_start_time': '2025-05-28T15:18:33.573590', 'usage': None}, 'name': None, 'content': [{'tool_call': {'tool_name': 'wiki_search', 'arguments': {'query': 'RAG pipelines'}, 'id': 'call_xnAEMUy7iCVrRSDbViUJHqXG'}}]}, {'role': 'tool', 'meta': {}, 'name': None, 'content': [{'tool_call_result': {'result': 'File Content for https://en.wikipedia.org/wiki/Retrieval-augmented_generation\\n\\n Retrieval-augmented generation (RAG) is a technique that enables large language models (LLMs) to retrieve and incorporate new information. With RAG, LLMs do ...File Content for https://en.wikipedia.org/wiki/Prompt_engineering\\n\\n GraphRAG (coined by Microsoft Research) is a technique that extends RAG with the use of a knowledge graph (usually, LLM-generated) to allow the model to ...File Content for https://en.wikipedia.org/wiki/Large_language_model\\n\\n Retrieval-augmented generation (RAG) is another approach that enhances LLMs by integrating them with document retrieval systems. Given a query, a document ...File Content for https://en.wikipedia.org/wiki/RagTime\\n\\n RagTime is a frame-oriented business publishing software which combines word processing, spreadsheets, simple drawings, image processing, and chartsFile Content for https://en.wikipedia.org/wiki/Zelten_oil_field\\n\\n SOC operates the Raguba field in the central part of the Sirte Basin. 
The field is connected by pipeline to the main line between the Zelten field and Brega.', 'origin': {'tool_name': 'wiki_search', 'arguments': {'query': 'RAG pipelines'}, 'id': 'call_xnAEMUy7iCVrRSDbViUJHqXG'}, 'error': False}}]}, {'role': 'assistant', 'meta': {'model': 'gpt-4o-mini-2024-07-18', 'index': 0, 'finish_reason': 'tool_calls', 'completion_start_time': '2025-05-28T15:18:35.182539', 'usage': None}, 'name': None, 'content': [{'tool_call': {'tool_name': 'wiki_search', 'arguments': {'query': 'Retrieval-augmented generation'}, 'id': 'call_ECjlLz2bcH0JO0po00qG50LT'}}]}, {'role': 'tool', 'meta': {}, 'name': None, 'content': [{'tool_call_result': {'result': 'File Content for https://en.wikipedia.org/wiki/Retrieval-augmented_generation\\n\\n Retrieval-augmented generation (RAG) is a technique that enables large language models (LLMs) to retrieve and incorporate new information.File Content for https://en.wikipedia.org/wiki/Rag\\n\\n Retrieval-augmented generation, generative AI with the addition of information retrieval capabilities. See also. edit \u00b7 All pages with titles beginning with Rag ...File Content for https://en.wikipedia.org/wiki/Prompt_engineering\\n\\n Retrieval-augmented generation (RAG) is a technique that enables generative ... Graph retrieval-augmented generation. edit. GraphRAG with a knowledge ...File Content for https://en.wikipedia.org/wiki/Large_language_model\\n\\n Retrieval-augmented generation (RAG) is another approach that enhances LLMs by integrating them with document retrieval systems. Given a query, a document ...File Content for https://en.wikipedia.org/wiki/Category:Information_retrieval_systems\\n\\n Retrieval-augmented generation \u00b7 RetrievalWare \u00b7 Reverse DNS lookup \u00b7 Reverse telephone directory. S. 
List of search appliance vendors \u00b7 Search engine ( ...', 'origin': {'tool_name': 'wiki_search', 'arguments': {'query': 'Retrieval-augmented generation'}, 'id': 'call_ECjlLz2bcH0JO0po00qG50LT'}, 'error': False}}]}, {'role': 'assistant', 'meta': {'model': 'gpt-4o-mini-2024-07-18', 'index': 0, 'finish_reason': 'tool_calls', 'completion_start_time': '2025-05-28T15:18:36.804910', 'usage': None}, 'name': None, 'content': [{'tool_call': {'tool_name': 'wiki_search', 'arguments': {'query': 'how RAG pipelines work'}, 'id': 'call_fIBCEMoLaUUDoJnZw137JVSP'}}]}, {'role': 'tool', 'meta': {}, 'name': None, 'content': [{'tool_call_result': {'result': 'File Content for https://en.wikipedia.org/wiki/Retrieval-augmented_generation\\n\\n RAG improves large language models (LLMs) by incorporating information retrieval before generating responses. Unlike traditional LLMs that rely on static ...File Content for https://en.wikipedia.org/wiki/Prompt_engineering\\n\\n Prompt engineering is the process of structuring or crafting an instruction in order to produce the best possible output from a generative artificial ...File Content for https://en.wikipedia.org/wiki/RNA_therapeutics\\n\\n The main types of RNA therapeutics are those based on messenger RNA (mRNA), antisense RNA (asRNA), RNA interference (RNAi), RNA activation (RNAa) and RNA ...File Content for https://en.wikipedia.org/wiki/Aviation_Cadet_Training_Program_(USN)\\n\\n The US Navy had four programs (NavCad, NAP, AVMIDN, and MarCad) for the training of naval aviators.File Content for https://en.wikipedia.org/wiki/DataStax\\n\\n DataStax, Inc. is a real-time data for AI company based in Santa Clara, California. 
Its product Astra DB is a cloud database-as-a-service based on Apache ...', 'origin': {'tool_name': 'wiki_search', 'arguments': {'query': 'how RAG pipelines work'}, 'id': 'call_fIBCEMoLaUUDoJnZw137JVSP'}, 'error': False}}]}, {'role': 'assistant', 'meta': {'model': 'gpt-4o-mini-2024-07-18', 'index': 0, 'finish_reason': 'tool_calls', 'completion_start_time': '2025-05-28T15:18:38.455416', 'usage': None}, 'name': None, 'content': [{'tool_call': {'tool_name': 'wiki_search', 'arguments': {'query': 'Retrieval-augmented generation RAG mechanism'}, 'id': 'call_3GRCPA5myd3uwqFhqbvXn1qk'}}]}, {'role': 'tool', 'meta': {}, 'name': None, 'content': [{'tool_call_result': {'result': 'File Content for https://en.wikipedia.org/wiki/Retrieval-augmented_generation\\n\\n Retrieval-augmented generation (RAG) is a technique that enables large language models (LLMs) to retrieve and incorporate new information. With RAG ...File Content for https://en.wikipedia.org/wiki/Prompt_engineering\\n\\n Retrieval-augmented generation (RAG) is a technique that enables generative artificial intelligence (Gen AI) models to retrieve and incorporate new information.File Content for https://en.wikipedia.org/wiki/Large_language_model\\n\\n Retrieval-augmented generation (RAG) is another approach that enhances LLMs by integrating them with document retrieval systems. 
Given a query, a document ...File Content for https://en.wikipedia.org/wiki/Generative_artificial_intelligence\\n\\n Generative artificial intelligence is a subfield of artificial intelligence that uses generative models to produce text, images, videos, or other forms of ...File Content for https://en.wikipedia.org/wiki/Confidential_computing\\n\\n Confidential computing is a security and privacy-enhancing computational technique focused on protecting data in use.', 'origin': {'tool_name': 'wiki_search', 'arguments': {'query': 'Retrieval-augmented generation RAG mechanism'}, 'id': 'call_3GRCPA5myd3uwqFhqbvXn1qk'}, 'error': False}}]}, {'role': 'assistant', 'meta': {'model': 'gpt-4o-mini-2024-07-18', 'index': 0, 'finish_reason': 'stop', 'completion_start_time': '2025-05-28T15:18:41.265652', 'usage': None}, 'name': None, 'content': [{'text': 'Retrieval-Augmented Generation (RAG) is a method that enhances large language models (LLMs) by integrating the capability to retrieve external information before generating responses. Here\u2019s a summary of how RAG pipelines work:\\n\\n1. **Information Retrieval**: The first step in a RAG pipeline involves querying an external document retrieval system. Given an input query, the system retrieves relevant documents or pieces of information from a larger corpus.\\n\\n2. **Response Generation**: After retrieving the relevant documents, the RAG leverages the LLM to generate a response. The LLM uses the external documents as context, allowing it to produce more accurate and informed outputs compared to traditional LLMs that rely purely on their pre-existing knowledge.\\n\\n3. **Integration of New Information**: By incorporating real-time data retrieved from documents, RAG systems can provide answers that are more relevant, up-to-date, and comprehensive. 
This bridging between retrieval systems and generative models allows for better-informed and nuanced responses.\\n\\nIn essence, RAG works by utilizing a two-step process, combining the strengths of information retrieval with the generative capabilities of LLMs, to enhance the quality of generated content.'}]}], 'last_message': {'role': 'assistant', 'meta': {'model': 'gpt-4o-mini-2024-07-18', 'index': 0, 'finish_reason': 'stop', 'completion_start_time': '2025-05-28T15:18:41.265652', 'usage': None}, 'name': None, 'content': [{'text': 'Retrieval-Augmented Generation (RAG) is a method that enhances large language models (LLMs) by integrating the capability to retrieve external information before generating responses. Here\u2019s a summary of how RAG pipelines work:\\n\\n1. **Information Retrieval**: The first step in a RAG pipeline involves querying an external document retrieval system. Given an input query, the system retrieves relevant documents or pieces of information from a larger corpus.\\n\\n2. **Response Generation**: After retrieving the relevant documents, the RAG leverages the LLM to generate a response. The LLM uses the external documents as context, allowing it to produce more accurate and informed outputs compared to traditional LLMs that rely purely on their pre-existing knowledge.\\n\\n3. **Integration of New Information**: By incorporating real-time data retrieved from documents, RAG systems can provide answers that are more relevant, up-to-date, and comprehensive. 
This bridging between retrieval systems and generative models allows for better-informed and nuanced responses.\\n\\nIn essence, RAG works by utilizing a two-step process, combining the strengths of information retrieval with the generative capabilities of LLMs, to enhance the quality of generated content.'}]}}\n", "\n", "\n", "\n", "[TOOL CALL]\n", "Tool: writer_tool \n", - "Arguments: {\"messages\":[{\"role\":\"user\",\"content\":[{\"text\":\"Summary of how RAG pipelines work:\\n\\nRetrieval-Augmented Generation (RAG) is a method that enhances large language models (LLMs) by integrating the capability to retrieve external information before generating responses. Here’s a summary of how RAG pipelines work:\\n\\n1. **Information Retrieval**: The first step in a RAG pipeline involves querying an external document retrieval system. Given an input query, the system retrieves relevant documents or pieces of information from a larger corpus.\\n\\n2. **Response Generation**: After retrieving the relevant documents, the RAG leverages the LLM to generate a response. The LLM uses the external documents as context, allowing it to produce more accurate and informed outputs compared to traditional LLMs that rely purely on their pre-existing knowledge.\\n\\n3. **Integration of New Information**: By incorporating real-time data retrieved from documents, RAG systems can provide answers that are more relevant, up-to-date, and comprehensive. 
This bridging between retrieval systems and generative models allows for better-informed and nuanced responses.\\n\\nIn essence, RAG works by utilizing a two-step process, combining the strengths of information retrieval with the generative capabilities of LLMs, to enhance the quality of generated content.\"}]}]}\n", + "Arguments: {\"messages\":[{\"role\":\"user\",\"content\":[{\"text\":\"Summary of how RAG pipelines work:\\n\\nRetrieval-Augmented Generation (RAG) is a method that enhances large language models (LLMs) by integrating the capability to retrieve external information before generating responses. Here\u2019s a summary of how RAG pipelines work:\\n\\n1. **Information Retrieval**: The first step in a RAG pipeline involves querying an external document retrieval system. Given an input query, the system retrieves relevant documents or pieces of information from a larger corpus.\\n\\n2. **Response Generation**: After retrieving the relevant documents, the RAG leverages the LLM to generate a response. The LLM uses the external documents as context, allowing it to produce more accurate and informed outputs compared to traditional LLMs that rely purely on their pre-existing knowledge.\\n\\n3. **Integration of New Information**: By incorporating real-time data retrieved from documents, RAG systems can provide answers that are more relevant, up-to-date, and comprehensive. 
This bridging between retrieval systems and generative models allows for better-informed and nuanced responses.\\n\\nIn essence, RAG works by utilizing a two-step process, combining the strengths of information retrieval with the generative capabilities of LLMs, to enhance the quality of generated content.\"}]}]}\n", "\n", "\n", "\n", "[TOOL CALL]\n", "Tool: notion_writer \n", - "Arguments: {\"title\":\"How RAG Pipelines Work\",\"content\":\"Retrieval-Augmented Generation (RAG) is a method that enhances large language models (LLMs) by integrating the capability to retrieve external information before generating responses. Here’s a summary of how RAG pipelines work:\\n\\n1. **Information Retrieval**: The first step in a RAG pipeline involves querying an external document retrieval system. Given an input query, the system retrieves relevant documents or pieces of information from a larger corpus.\\n\\n2. **Response Generation**: After retrieving the relevant documents, the RAG leverages the LLM to generate a response. The LLM uses the external documents as context, allowing it to produce more accurate and informed outputs compared to traditional LLMs that rely purely on their pre-existing knowledge.\\n\\n3. **Integration of New Information**: By incorporating real-time data retrieved from documents, RAG systems can provide answers that are more relevant, up-to-date, and comprehensive. This bridging between retrieval systems and generative models allows for better-informed and nuanced responses.\\n\\nIn essence, RAG works by utilizing a two-step process, combining the strengths of information retrieval with the generative capabilities of LLMs, to enhance the quality of generated content.\"}\n", + "Arguments: {\"title\":\"How RAG Pipelines Work\",\"content\":\"Retrieval-Augmented Generation (RAG) is a method that enhances large language models (LLMs) by integrating the capability to retrieve external information before generating responses. 
Here\u2019s a summary of how RAG pipelines work:\\n\\n1. **Information Retrieval**: The first step in a RAG pipeline involves querying an external document retrieval system. Given an input query, the system retrieves relevant documents or pieces of information from a larger corpus.\\n\\n2. **Response Generation**: After retrieving the relevant documents, the RAG leverages the LLM to generate a response. The LLM uses the external documents as context, allowing it to produce more accurate and informed outputs compared to traditional LLMs that rely purely on their pre-existing knowledge.\\n\\n3. **Integration of New Information**: By incorporating real-time data retrieved from documents, RAG systems can provide answers that are more relevant, up-to-date, and comprehensive. This bridging between retrieval systems and generative models allows for better-informed and nuanced responses.\\n\\nIn essence, RAG works by utilizing a two-step process, combining the strengths of information retrieval with the generative capabilities of LLMs, to enhance the quality of generated content.\"}\n", "\n", "\n", "\n", @@ -1007,7 +972,7 @@ "\n", "\n", "[TOOL RESULT]\n", - "{'messages': [{'role': 'system', 'meta': {}, 'name': None, 'content': [{'text': '\\n You are a writer agent that saves given information to different locations.\\n Do not change the provided content before saving.\\n Infer the title from the text if not provided. \\n When you need to save provided information to Notion, use notion_writer tool.\\n When you need to save provided information to document store, use doc_store_writer tool\\n If no location is mentioned, use notion_writer tool to save the information.\\n '}]}, {'role': 'user', 'meta': {}, 'name': None, 'content': [{'text': 'Summary of how RAG pipelines work:\\n\\nRetrieval-Augmented Generation (RAG) is a method that enhances large language models (LLMs) by integrating the capability to retrieve external information before generating responses. 
Here’s a summary of how RAG pipelines work:\\n\\n1. **Information Retrieval**: The first step in a RAG pipeline involves querying an external document retrieval system. Given an input query, the system retrieves relevant documents or pieces of information from a larger corpus.\\n\\n2. **Response Generation**: After retrieving the relevant documents, the RAG leverages the LLM to generate a response. The LLM uses the external documents as context, allowing it to produce more accurate and informed outputs compared to traditional LLMs that rely purely on their pre-existing knowledge.\\n\\n3. **Integration of New Information**: By incorporating real-time data retrieved from documents, RAG systems can provide answers that are more relevant, up-to-date, and comprehensive. This bridging between retrieval systems and generative models allows for better-informed and nuanced responses.\\n\\nIn essence, RAG works by utilizing a two-step process, combining the strengths of information retrieval with the generative capabilities of LLMs, to enhance the quality of generated content.'}]}, {'role': 'assistant', 'meta': {'model': 'gpt-4o-mini-2024-07-18', 'index': 0, 'finish_reason': 'tool_calls', 'completion_start_time': '2025-05-28T15:18:50.908456', 'usage': None}, 'name': None, 'content': [{'tool_call': {'tool_name': 'notion_writer', 'arguments': {'title': 'How RAG Pipelines Work', 'content': 'Retrieval-Augmented Generation (RAG) is a method that enhances large language models (LLMs) by integrating the capability to retrieve external information before generating responses. Here’s a summary of how RAG pipelines work:\\n\\n1. **Information Retrieval**: The first step in a RAG pipeline involves querying an external document retrieval system. Given an input query, the system retrieves relevant documents or pieces of information from a larger corpus.\\n\\n2. **Response Generation**: After retrieving the relevant documents, the RAG leverages the LLM to generate a response. 
The LLM uses the external documents as context, allowing it to produce more accurate and informed outputs compared to traditional LLMs that rely purely on their pre-existing knowledge.\\n\\n3. **Integration of New Information**: By incorporating real-time data retrieved from documents, RAG systems can provide answers that are more relevant, up-to-date, and comprehensive. This bridging between retrieval systems and generative models allows for better-informed and nuanced responses.\\n\\nIn essence, RAG works by utilizing a two-step process, combining the strengths of information retrieval with the generative capabilities of LLMs, to enhance the quality of generated content.'}, 'id': 'call_t8V2bODfsxlOiTeCdVWWznir'}}]}, {'role': 'tool', 'meta': {}, 'name': None, 'content': [{'tool_call_result': {'result': \"{'success': True, 'status_code': 200}\", 'origin': {'tool_name': 'notion_writer', 'arguments': {'title': 'How RAG Pipelines Work', 'content': 'Retrieval-Augmented Generation (RAG) is a method that enhances large language models (LLMs) by integrating the capability to retrieve external information before generating responses. Here’s a summary of how RAG pipelines work:\\n\\n1. **Information Retrieval**: The first step in a RAG pipeline involves querying an external document retrieval system. Given an input query, the system retrieves relevant documents or pieces of information from a larger corpus.\\n\\n2. **Response Generation**: After retrieving the relevant documents, the RAG leverages the LLM to generate a response. The LLM uses the external documents as context, allowing it to produce more accurate and informed outputs compared to traditional LLMs that rely purely on their pre-existing knowledge.\\n\\n3. **Integration of New Information**: By incorporating real-time data retrieved from documents, RAG systems can provide answers that are more relevant, up-to-date, and comprehensive. 
This bridging between retrieval systems and generative models allows for better-informed and nuanced responses.\\n\\nIn essence, RAG works by utilizing a two-step process, combining the strengths of information retrieval with the generative capabilities of LLMs, to enhance the quality of generated content.'}, 'id': 'call_t8V2bODfsxlOiTeCdVWWznir'}, 'error': False}}]}], 'last_message': {'role': 'tool', 'meta': {}, 'name': None, 'content': [{'tool_call_result': {'result': \"{'success': True, 'status_code': 200}\", 'origin': {'tool_name': 'notion_writer', 'arguments': {'title': 'How RAG Pipelines Work', 'content': 'Retrieval-Augmented Generation (RAG) is a method that enhances large language models (LLMs) by integrating the capability to retrieve external information before generating responses. Here’s a summary of how RAG pipelines work:\\n\\n1. **Information Retrieval**: The first step in a RAG pipeline involves querying an external document retrieval system. Given an input query, the system retrieves relevant documents or pieces of information from a larger corpus.\\n\\n2. **Response Generation**: After retrieving the relevant documents, the RAG leverages the LLM to generate a response. The LLM uses the external documents as context, allowing it to produce more accurate and informed outputs compared to traditional LLMs that rely purely on their pre-existing knowledge.\\n\\n3. **Integration of New Information**: By incorporating real-time data retrieved from documents, RAG systems can provide answers that are more relevant, up-to-date, and comprehensive. 
This bridging between retrieval systems and generative models allows for better-informed and nuanced responses.\\n\\nIn essence, RAG works by utilizing a two-step process, combining the strengths of information retrieval with the generative capabilities of LLMs, to enhance the quality of generated content.'}, 'id': 'call_t8V2bODfsxlOiTeCdVWWznir'}, 'error': False}}]}}\n", + "{'messages': [{'role': 'system', 'meta': {}, 'name': None, 'content': [{'text': '\\n You are a writer agent that saves given information to different locations.\\n Do not change the provided content before saving.\\n Infer the title from the text if not provided. \\n When you need to save provided information to Notion, use notion_writer tool.\\n When you need to save provided information to document store, use doc_store_writer tool\\n If no location is mentioned, use notion_writer tool to save the information.\\n '}]}, {'role': 'user', 'meta': {}, 'name': None, 'content': [{'text': 'Summary of how RAG pipelines work:\\n\\nRetrieval-Augmented Generation (RAG) is a method that enhances large language models (LLMs) by integrating the capability to retrieve external information before generating responses. Here\u2019s a summary of how RAG pipelines work:\\n\\n1. **Information Retrieval**: The first step in a RAG pipeline involves querying an external document retrieval system. Given an input query, the system retrieves relevant documents or pieces of information from a larger corpus.\\n\\n2. **Response Generation**: After retrieving the relevant documents, the RAG leverages the LLM to generate a response. The LLM uses the external documents as context, allowing it to produce more accurate and informed outputs compared to traditional LLMs that rely purely on their pre-existing knowledge.\\n\\n3. **Integration of New Information**: By incorporating real-time data retrieved from documents, RAG systems can provide answers that are more relevant, up-to-date, and comprehensive. 
This bridging between retrieval systems and generative models allows for better-informed and nuanced responses.\\n\\nIn essence, RAG works by utilizing a two-step process, combining the strengths of information retrieval with the generative capabilities of LLMs, to enhance the quality of generated content.'}]}, {'role': 'assistant', 'meta': {'model': 'gpt-4o-mini-2024-07-18', 'index': 0, 'finish_reason': 'tool_calls', 'completion_start_time': '2025-05-28T15:18:50.908456', 'usage': None}, 'name': None, 'content': [{'tool_call': {'tool_name': 'notion_writer', 'arguments': {'title': 'How RAG Pipelines Work', 'content': 'Retrieval-Augmented Generation (RAG) is a method that enhances large language models (LLMs) by integrating the capability to retrieve external information before generating responses. Here\u2019s a summary of how RAG pipelines work:\\n\\n1. **Information Retrieval**: The first step in a RAG pipeline involves querying an external document retrieval system. Given an input query, the system retrieves relevant documents or pieces of information from a larger corpus.\\n\\n2. **Response Generation**: After retrieving the relevant documents, the RAG leverages the LLM to generate a response. The LLM uses the external documents as context, allowing it to produce more accurate and informed outputs compared to traditional LLMs that rely purely on their pre-existing knowledge.\\n\\n3. **Integration of New Information**: By incorporating real-time data retrieved from documents, RAG systems can provide answers that are more relevant, up-to-date, and comprehensive. 
This bridging between retrieval systems and generative models allows for better-informed and nuanced responses.\\n\\nIn essence, RAG works by utilizing a two-step process, combining the strengths of information retrieval with the generative capabilities of LLMs, to enhance the quality of generated content.'}, 'id': 'call_t8V2bODfsxlOiTeCdVWWznir'}}]}, {'role': 'tool', 'meta': {}, 'name': None, 'content': [{'tool_call_result': {'result': \"{'success': True, 'status_code': 200}\", 'origin': {'tool_name': 'notion_writer', 'arguments': {'title': 'How RAG Pipelines Work', 'content': 'Retrieval-Augmented Generation (RAG) is a method that enhances large language models (LLMs) by integrating the capability to retrieve external information before generating responses. Here\u2019s a summary of how RAG pipelines work:\\n\\n1. **Information Retrieval**: The first step in a RAG pipeline involves querying an external document retrieval system. Given an input query, the system retrieves relevant documents or pieces of information from a larger corpus.\\n\\n2. **Response Generation**: After retrieving the relevant documents, the RAG leverages the LLM to generate a response. The LLM uses the external documents as context, allowing it to produce more accurate and informed outputs compared to traditional LLMs that rely purely on their pre-existing knowledge.\\n\\n3. **Integration of New Information**: By incorporating real-time data retrieved from documents, RAG systems can provide answers that are more relevant, up-to-date, and comprehensive. 
This bridging between retrieval systems and generative models allows for better-informed and nuanced responses.\\n\\nIn essence, RAG works by utilizing a two-step process, combining the strengths of information retrieval with the generative capabilities of LLMs, to enhance the quality of generated content.'}, 'id': 'call_t8V2bODfsxlOiTeCdVWWznir'}, 'error': False}}]}], 'last_message': {'role': 'tool', 'meta': {}, 'name': None, 'content': [{'tool_call_result': {'result': \"{'success': True, 'status_code': 200}\", 'origin': {'tool_name': 'notion_writer', 'arguments': {'title': 'How RAG Pipelines Work', 'content': 'Retrieval-Augmented Generation (RAG) is a method that enhances large language models (LLMs) by integrating the capability to retrieve external information before generating responses. Here\u2019s a summary of how RAG pipelines work:\\n\\n1. **Information Retrieval**: The first step in a RAG pipeline involves querying an external document retrieval system. Given an input query, the system retrieves relevant documents or pieces of information from a larger corpus.\\n\\n2. **Response Generation**: After retrieving the relevant documents, the RAG leverages the LLM to generate a response. The LLM uses the external documents as context, allowing it to produce more accurate and informed outputs compared to traditional LLMs that rely purely on their pre-existing knowledge.\\n\\n3. **Integration of New Information**: By incorporating real-time data retrieved from documents, RAG systems can provide answers that are more relevant, up-to-date, and comprehensive. 
This bridging between retrieval systems and generative models allows for better-informed and nuanced responses.\\n\\nIn essence, RAG works by utilizing a two-step process, combining the strengths of information retrieval with the generative capabilities of LLMs, to enhance the quality of generated content.'}, 'id': 'call_t8V2bODfsxlOiTeCdVWWznir'}, 'error': False}}]}}\n", "\n", "The summary of how Retrieval-Augmented Generation (RAG) pipelines work has been successfully saved in Notion. If you need anything else, feel free to ask!\n", "\n" @@ -1034,7 +999,7 @@ "source": [ "## What's next\n", "\n", - "🎉 Congratulations! You've just built a multi-agent system with Haystack, where specialized agents work together to research and write, each with their own tools and responsibilities. You now have a flexible foundation for building more complex, modular agent workflows.\n", + "\ud83c\udf89 Congratulations! You've just built a multi-agent system with Haystack, where specialized agents work together to research and write, each with their own tools and responsibilities. You now have a flexible foundation for building more complex, modular agent workflows.\n", "\n", "Curious to keep exploring? Here are a few great next steps:\n", "\n", @@ -1061,4 +1026,4 @@ }, "nbformat": 4, "nbformat_minor": 0 -} +} \ No newline at end of file