
LangChain is a framework for building applications with large language models (LLMs), and LangGraph is a library built on top of LangChain that enables cyclic workflows and agent creation.

LangGraph makes it much easier to build AI applications with "memory". Imagine a conversation in which past questions and answers are remembered; LangGraph lets an LLM do exactly that. Inspired by data-processing tools, it connects the different parts of an application in a cycle using plain functions (ordinary Python code). This "memory" enables much richer interactions with the LLM. Long-running business processes (LRBPs) can also be automated efficiently through features such as long pauses, resumable workflows, and multiple agents cooperating to complete a job.

In this post we focus on enabling persistence and shared state in LangGraph. To keep things simple, we will interact with the agent through a sequence of questions, where each answer depends on the previous one. Here are the questions and answers in order:

Question 1: Who won the 2022 FIFA World Cup?

Answer: Argentina won the 2022 FIFA World Cup, defeating France in the final.

Question 2: What is the capital city of the other finalist team?

Answer: Paris is the capital of France.

Question 3: Against whom did the other finalist team last win the FIFA World Cup? What is the population of that country?

Answer: France won the 2018 FIFA World Cup, defeating Croatia in the final. Croatia has a population of roughly 4 million.

For reference, France has a population of roughly 67 million.

Now let's look at the code. First, install the required packages:

pip install -U langgraph
pip install langchain_openai

We also need to export an environment variable for the OpenAI API key:

export OPENAI_API_KEY=sk-...

# Imports necessary libraries for LangGraph and message types

from langgraph.graph import StateGraph, END
from typing import TypedDict, Annotated
import operator
from langchain_core.messages import AnyMessage, SystemMessage, HumanMessage, ToolMessage
from langchain_openai import ChatOpenAI
from langchain_community.tools.tavily_search import TavilySearchResults
from langgraph.checkpoint.sqlite import SqliteSaver


# Configure Tavily search tool for a maximum of 2 results
tool = TavilySearchResults(max_results=2)


# Define the type of agent state with an accumulating list of messages
class AgentState(TypedDict):
    messages: Annotated[list[AnyMessage], operator.add]

# Use SqliteSaver for in-memory checkpointing
memory = SqliteSaver.from_conn_string(":memory:")
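The `Annotated[list[AnyMessage], operator.add]` annotation in `AgentState` tells LangGraph how to merge node updates into state: instead of replacing the `messages` list, each update is concatenated onto it. A minimal sketch of that reducer behavior, using plain Python lists (no LangGraph required):

```python
import operator

# Existing state and an update returned by a node
state_messages = ["question 1", "answer 1"]
node_update = ["question 2"]

# LangGraph applies the annotated reducer instead of overwriting the key:
merged = operator.add(state_messages, node_update)  # list concatenation
print(merged)  # ['question 1', 'answer 1', 'question 2']
```

This is why each node below can return just `{'messages': [message]}` and still grow the full conversation history.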

Here we use SqliteSaver for in-memory checkpointing:

SqliteSaver: a class from the langgraph.checkpoint module. It is responsible for saving and restoring the state of a LangGraph application.

The from_conn_string method: called on the SqliteSaver class to initialize the checkpointing mechanism.

The ":memory:" string: the argument passed to from_conn_string; it specifies the connection string for the database.
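The `":memory:"` value follows SQLite's own convention: it creates a throwaway database in RAM that disappears when the connection closes, whereas a file path would persist checkpoints across restarts. The distinction can be seen with the standard `sqlite3` module alone (the `"checkpoints.db"` path below is just an illustrative example):

```python
import sqlite3

# ":memory:" -> an ephemeral database living only in RAM
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE demo (id INTEGER)")
conn.execute("INSERT INTO demo VALUES (1)")
count = conn.execute("SELECT COUNT(*) FROM demo").fetchone()[0]
print(count)  # 1 -- the data exists while the connection is open
conn.close()  # ...and is gone once the connection closes

# A file path would instead survive process restarts, e.g.:
# memory = SqliteSaver.from_conn_string("checkpoints.db")  # hypothetical path
```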

 

Define the Agent class and its related methods:

class Agent:
    def __init__(self, model, tools, checkpointer, system=""):
        self.system = system
        graph = StateGraph(AgentState)
        graph.add_node("llm", self.call_openai)
        graph.add_node("action", self.take_action)
        graph.add_conditional_edges("llm", self.exists_action, {True: "action", False: END})
        graph.add_edge("action", "llm")
        graph.set_entry_point("llm")
        self.graph = graph.compile(checkpointer=checkpointer)
        self.tools = {t.name: t for t in tools}
        self.model = model.bind_tools(tools)

    def call_openai(self, state: AgentState):
        messages = state['messages']
        if self.system:
            messages = [SystemMessage(content=self.system)] + messages
        message = self.model.invoke(messages)
        return {'messages': [message]}

    def exists_action(self, state: AgentState):
        result = state['messages'][-1]
        return len(result.tool_calls) > 0

    def take_action(self, state: AgentState):
        tool_calls = state['messages'][-1].tool_calls
        results = []
        for t in tool_calls:
            print(f"Calling: {t}")
            result = self.tools[t['name']].invoke(t['args'])
            results.append(ToolMessage(tool_call_id=t['id'], name=t['name'], content=str(result)))
        print("Back to the model!")
        return {'messages': results}

Here is the same Agent class once more, this time documented with docstrings and refactored so that graph construction lives in a private _build_graph method:

class Agent:
    """
    This class defines an agent within the LangGraph framework. 

    Args:
        model (langchain.llms.base.AbstractLLM): The large language model to be used by the agent.
        tools (list): A list of LangChain tool objects used for various functionalities.
        checkpointer (langgraph.checkpoint.base.Checkpointer): The checkpointer object for persisting agent state.
        system (str, optional): An optional system message to prepend to the conversation history. Defaults to "".
    """


    def __init__(self, model, tools, checkpointer, system=""):
        self.system = system
        self.graph = self._build_graph(model, checkpointer)  # Encapsulate graph building in a private method
        self.tools = {t.name: t for t in tools}  # Create a dictionary for efficient tool access by name
        self.model = model.bind_tools(tools)  # Bind tools to the provided large language model


    def _build_graph(self, model, checkpointer):
        """
        This private method builds the LangGraph for the agent.

        Args:
            model (langchain.llms.base.AbstractLLM): The large language model to be used.
            checkpointer (langgraph.checkpoint.base.Checkpointer): The checkpointer object for state persistence.

        Returns:
            StateGraph: The compiled LangGraph for the agent.
        """

        graph = StateGraph(AgentState)
        graph.add_node("llm", self.call_openai)
        graph.add_node("action", self.take_action)
        graph.add_conditional_edges("llm", self.exists_action, {True: "action", False: END})
        graph.add_edge("action", "llm")
        graph.set_entry_point("llm")
        return graph.compile(checkpointer=checkpointer)


    def call_openai(self, state: AgentState):
        """
        This method interacts with the large language model.

        Args:
            state (AgentState): The current state of the agent, including message history.

        Returns:
            AgentState: The updated agent state with the latest message from the large language model.
        """

        messages = state['messages']
        if self.system:
            messages = [SystemMessage(content=self.system)] + messages
        message = self.model.invoke(messages)
        return {'messages': [message]}


    def exists_action(self, state: AgentState):
        """
        This method checks if the latest message from the large language model requires any tool actions.

        Args:
            state (AgentState): The current state of the agent.

        Returns:
            bool: True if the latest message contains tool calls, False otherwise.
        """

        result = state['messages'][-1]
        return len(result.tool_calls) > 0


    def take_action(self, state: AgentState):
        """
        This method executes tool actions requested by the large language model.

        Args:
            state (AgentState): The current state of the agent.

        Returns:
            AgentState: The updated agent state with the results of tool actions.
        """

        tool_calls = state['messages'][-1].tool_calls
        results = []
        for t in tool_calls:
            print(f"Calling: {t}")
            result = self.tools[t['name']].invoke(t['args'])
            results.append(ToolMessage(tool_call_id=t['id'], name=t['name'], content=str(result)))
        print("Back to the model!")
        return {'messages': results} 
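The compiled graph implements a simple loop: call the model, and if the reply contains tool calls, execute them and feed the results back; otherwise stop. Stripped of LangGraph, the control flow looks roughly like this sketch (the `fake_llm` and the lambda tool are hypothetical stand-ins, not real LangChain objects):

```python
def run_agent_loop(llm, run_tool, question, max_steps=5):
    """Minimal stand-in for the llm -> action -> llm cycle above."""
    messages = [question]
    for _ in range(max_steps):
        reply = llm(messages)               # the "llm" node
        messages.append(reply)
        if not reply.get("tool_calls"):     # exists_action is False -> END
            return reply["content"]
        for call in reply["tool_calls"]:    # the "action" node
            messages.append({"role": "tool", "content": run_tool(call)})
    return messages[-1]["content"]

# Toy model: asks for one search, then answers once a tool result is present.
def fake_llm(messages):
    if any(m.get("role") == "tool" for m in messages):
        return {"content": "Argentina", "tool_calls": []}
    return {"content": "", "tool_calls": [{"name": "search", "args": "world cup 2022"}]}

answer = run_agent_loop(fake_llm, lambda call: "Argentina won in 2022",
                        {"role": "user", "content": "Who won?"})
print(answer)  # Argentina
```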

Important note: the langgraph.checkpoint.base.Checkpointer object is responsible for state persistence.

prompt = """You are a smart research assistant. Use the search engine to look up information. \
You are allowed to make multiple calls (either together or in sequence). \
Only look up information when you are sure of what you want. \
If you need to look up some information before asking a follow up question, you are allowed to do that!
"""
model = ChatOpenAI(model="gpt-4o")
abot = Agent(model, [tool], system=prompt, checkpointer=memory) 

Now let's ask the first question:

# Define initial user message as a question about the FIFA World Cup winner in 2022
messages = [HumanMessage(content="Who won the FIFA World Cup in 2022?")]

# Set thread information for potential tracking or routing
thread = {"configurable": {"thread_id": "1"}}

# Stream messages through the agent's graph (abot.graph)
for event in abot.graph.stream({"messages": messages}, thread):
    # Iterate through each value in the streamed event
    for v in event.values():
        # Print the list of messages associated with the current value
        print(v['messages'])

Note that we configured {"thread_id": "1"}. This is the thread that will "remember" the conversation. The state data is persisted and made available by SqliteSaver, which stores it in a SQLite database, either in memory or at a given file location.
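Conceptually, the checkpointer stores one conversation history per thread_id, so a later call with the same id resumes with full context while a new id starts from scratch. A toy in-memory model of that behavior (this is an illustrative sketch, not the real SqliteSaver API):

```python
class ToyCheckpointer:
    """Keeps a message history per thread_id, as SqliteSaver conceptually does."""
    def __init__(self):
        self.threads = {}

    def append(self, thread_id, message):
        # Accumulate messages under the given thread, like the state reducer
        self.threads.setdefault(thread_id, []).append(message)

    def history(self, thread_id):
        # An unknown thread_id has no prior context
        return self.threads.get(thread_id, [])

ckpt = ToyCheckpointer()
ckpt.append("1", "Who won the FIFA World Cup in 2022?")
ckpt.append("1", "Argentina won, beating France.")
ckpt.append("1", "What is the capital city of the other finalist team?")

print(len(ckpt.history("1")))  # 3 -- thread 1 carries the whole conversation
print(ckpt.history("2"))       # [] -- a new thread_id starts with no context
```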

The AIMessage response is:

[AIMessage(content='Argentina won the 2022 FIFA World Cup. They defeated France in the final, which was held in Qatar. The match concluded with a dramatic penalty shootout after a thrilling game that ended in a 3-3 draw after extra time.', response_metadata={'token_usage': {'completion_tokens': 50, 'prompt_tokens': 1410, 'total_tokens': 1460}, 'model_name': 'gpt-4o', 'system_fingerprint': 'fp_319be4768e', 'finish_reason': 'stop', 'logprobs': None}, id='run-a2e85a23-3b46-4d76-8b9c-ab8cd3865ff5-0')]

So the answer is: Argentina won the 2022 FIFA World Cup, defeating France in the final held in Qatar.

Now we ask the second question:

What is the capital city of the other finalist team?

messages = [HumanMessage(content="What is the capital city of the other finalist team?")]
thread = {"configurable": {"thread_id": "1"}}
for event in abot.graph.stream({"messages": messages}, thread):
    for v in event.values():
        print(v)

Note that we keep {"thread_id": "1"} so the agent can give a contextual answer. The answer is:

[AIMessage(content='The other finalist team in the 2022 FIFA World Cup was France. The capital city of France is Paris.', response_metadata={'token_usage': {'completion_tokens': 24, 'prompt_tokens': 1113, 'total_tokens': 1137}, 'model_name': 'gpt-4o', 'system_fingerprint': 'fp_319be4768e', 'finish_reason': 'stop', 'logprobs': None}, id='run-9eeed0ef-52c2-4aee-8b19-b3a60749472c-0')]

The other finalist team in the 2022 FIFA World Cup was France. The capital city of France is Paris.

 

Now we ask a third question that depends closely on the second:

Against whom did the other finalist team last win the FIFA World Cup? What is the population of that country?

messages = [HumanMessage(content="Against whom did the other finalist team last win the FIFA World Cup? What is the population of that country?")]
thread = {"configurable": {"thread_id": "1"}}
for event in abot.graph.stream({"messages": messages}, thread):
    for v in event.values():
        print(v)

And the answer is:

[AIMessage(content='France last won the FIFA World Cup in 2018, defeating Croatia in the final.\n\nAs of 2023, the population of Croatia is approximately 3,988,566.', response_metadata={'token_usage': {'completion_tokens': 38, 'prompt_tokens': 2351, 'total_tokens': 2389}, 'model_name': 'gpt-4o', 'system_fingerprint': 'fp_319be4768e', 'finish_reason': 'stop', 'logprobs': None}, id='run-c41fa98d-e87f-4570-b127-2ab208ec25b1-0')]

France last won the FIFA World Cup in 2018, defeating Croatia in the final. As of 2023, the population of Croatia is approximately 3,988,566.

So you can see that thread={"configurable": {"thread_id": "1"}} ensures the whole conversation is maintained across interactions. The state can be persisted in a database so that it can be picked up later by the same agent, or by any other agent working on the same or a related task.

 

Now, purely for testing, let's change the thread_id and check the response:

messages = [HumanMessage(content="Against whom did the other finalist team last win the FIFA World Cup? What is the population of that country?")]
thread = {"configurable": {"thread_id": "2"}}
for event in abot.graph.stream({"messages": messages}, thread):
    for v in event.values():
        print(v)

Here we configured {"thread_id": "2"}, and the response is:

[AIMessage(content='### Last FIFA World Cup Final Result:\nThe last FIFA World Cup final was held on December 18, 2022. Argentina won their third crown by defeating France on penalties in a dramatic match, which ended 3-3 after extra time and was decided 4-2 in the penalty shootout.\n\n### Population of France:\nThe country that Argentina defeated in the last FIFA World Cup final is France. According to the most recent estimates, the current population of France is approximately 67 million people.', response_metadata={'token_usage': {'completion_tokens': 102, 'prompt_tokens': 3115, 'total_tokens': 3217}, 'model_name': 'gpt-4o', 'system_fingerprint': 'fp_319be4768e', 'finish_reason': 'stop', 'logprobs': None}, id='run-9dd216ff-71d4-4f97-8bbb-edb41120c06d-0')]

The last FIFA World Cup final was held on December 18, 2022. Argentina won their third title by defeating France in a dramatic penalty shootout... According to the most recent estimates, the current population of France is approximately 67 million.

 

This means the agent is somewhat confused: it treats the last question as the start of a new conversation and answers accordingly.

Conclusion:

LangGraph's persistence features are a game changer for LLM workflows. By enabling state saving, LangGraph lets you build advanced LLM agents that can automate even the most complex business processes. Imagine intelligent agents that handle recurring tasks, interact with a variety of systems, and learn over time, all while delivering tangible return on investment (ROI). LangGraph makes this a reality, paving the way for a new era of intelligent automation.

I encourage you to try LangGraph yourself and share your thoughts.

#AI, Artificial Intelligence, Machine Learning, Deep Learning, Software Agents, Intelligent Agents, Conversational AI, Chatbots, LangChain, LangGraph, LLM Workflows

Attribution: the LangGraph examples and documentation are taken from https://github.com/langchain-ai/langgraph/tree/main/examples