
Overview

Integrate LangChain agents with AgentFlow to build sophisticated AI systems that can use tools, access external data, and execute multi-step reasoning workflows.

Prerequisites

LangChain Account

API Key

Generate API key from LangChain settings

Agent Created

Build and deploy a LangChain agent

AgentFlow Access

Admin rights in AgentFlow

Step 1: Create LangChain Agent

Setup LangChain Project

1. Install LangChain

pip install langchain langchain-openai langsmith

2. Configure Environment

export LANGCHAIN_API_KEY="your_api_key"
export LANGCHAIN_TRACING_V2="true"
export OPENAI_API_KEY="your_openai_key"
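
If an agent starts without these variables, failures surface later and are harder to trace. A minimal stdlib pre-check (the variable names come from the exports above; `check_env` is an illustrative helper, not part of LangChain):

```python
import os

REQUIRED_VARS = ["LANGCHAIN_API_KEY", "LANGCHAIN_TRACING_V2", "OPENAI_API_KEY"]

def check_env(required=REQUIRED_VARS):
    """Raise early if any required environment variable is unset or empty."""
    missing = [v for v in required if not os.environ.get(v)]
    if missing:
        raise EnvironmentError(f"Missing environment variables: {', '.join(missing)}")
    return True
```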

3. Create Agent

from langchain.agents import create_openai_functions_agent
from langchain_openai import ChatOpenAI
from langchain.tools import Tool
from langchain.prompts import ChatPromptTemplate

# Define tools
tools = [
    Tool(
        name="web_search",
        func=web_search_function,
        description="Search the web for current information"
    ),
    Tool(
        name="calculator",
        func=calculator_function,
        description="Perform mathematical calculations"
    )
]

# Create agent
llm = ChatOpenAI(model="gpt-4", temperature=0.7)
agent = create_openai_functions_agent(
    llm=llm,
    tools=tools,
    prompt=ChatPromptTemplate.from_messages([
        ("system", "You are a helpful AI assistant with access to tools."),
        ("human", "{input}"),
        ("placeholder", "{agent_scratchpad}")
    ])
)

4. Deploy Agent

from langchain.agents import AgentExecutor

agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    max_iterations=10
)

# Create API endpoint
from langserve import add_routes
from fastapi import FastAPI

app = FastAPI()
add_routes(app, agent_executor, path="/agent")

Deploy to LangChain Hub

1. Push to Hub

from langchain import hub

hub.push(
    "your-username/research-agent",
    agent_executor,
    new_repo_description="Research agent with web search and calculation tools"
)

2. Get Agent ID

Copy the agent ID from LangSmith dashboard
agent_id: "abc123xyz789"

Step 2: Create AI Connection in AgentFlow

Manual Configuration

  1. Admin Dashboard → AI Models → Add Model
  2. Basic Info:
    • Name: LangChain Research Agent
    • Model ID: langchain-agent-executor
    • Description: Research agent with web search and calculation
  3. API Settings:
    • Endpoint: https://api.langchain.com/v1/agents/{{agent_id}}/invoke
    • Method: POST
  4. Headers:
    {
      "Authorization": "Bearer {{langchain_api_key}}",
      "Content-Type": "application/json",
      "X-LangChain-Version": "0.1.0"
    }
    
  5. Request Schema:
    {
      "agent_id": "{{agent_id}}",
      "input": {
        "message": "{{message}}",
        "user_id": "{{user_id}}",
        "session_id": "{{session_id}}",
        "timestamp": "{{timestamp}}"
      },
      "config": {
        "callbacks": [],
        "tags": ["chat-platform"],
        "metadata": {
          "user_id": "{{user_id}}",
          "session_id": "{{session_id}}"
        }
      },
      "tools": [
        {
          "name": "web_search",
          "description": "Search the web for current information"
        },
        {
          "name": "calculator",
          "description": "Perform mathematical calculations"
        },
        {
          "name": "code_executor",
          "description": "Execute code snippets"
        }
      ],
      "memory": {
        "type": "conversation_buffer",
        "max_token_limit": 2000
      }
    }
    
  6. Response Path: data.output
  7. Save
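
The schema and response path can be sanity-checked locally before saving. The sketch below mirrors the request body and the `data.output` response path from the steps above; `build_payload` and `extract_output` are illustrative helpers, not AgentFlow APIs:

```python
def build_payload(agent_id, message, user_id, session_id, timestamp):
    """Fill the Step 2 request schema with concrete values."""
    return {
        "agent_id": agent_id,
        "input": {
            "message": message,
            "user_id": user_id,
            "session_id": session_id,
            "timestamp": timestamp,
        },
        "config": {
            "callbacks": [],
            "tags": ["chat-platform"],
            "metadata": {"user_id": user_id, "session_id": session_id},
        },
    }

def extract_output(response_json, path="data.output"):
    """Walk a dotted response path the way AgentFlow's Response Path setting does."""
    node = response_json
    for key in path.split("."):
        node = node[key]
    return node
```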

Step 3: Import via YAML

Complete YAML Configuration

Create langchain-agent-config.yaml:
name: "Hosted LangChain Agent"
model_id: "langchain-agent-executor"
description: "Execute hosted LangChain agents for complex AI workflows and tool usage"
endpoint: "https://api.langchain.com/v1/agents/{{agent_id}}/invoke"
method: "POST"

headers:
  Authorization: "Bearer {{langchain_api_key}}"
  Content-Type: "application/json"
  X-LangChain-Version: "0.1.0"

request_schema:
  agent_id: "{{agent_id}}"
  input:
    message: "{{message}}"
    user_id: "{{user_id}}"
    session_id: "{{session_id}}"
    timestamp: "{{timestamp}}"
  config:
    callbacks: []
    tags: ["chat-platform"]
    metadata:
      user_id: "{{user_id}}"
      session_id: "{{session_id}}"
  tools:
    - name: "web_search"
      description: "Search the web for current information"
    - name: "calculator"
      description: "Perform mathematical calculations"
    - name: "code_executor"
      description: "Execute code snippets"
  memory:
    type: "conversation_buffer"
    max_token_limit: 2000

response_path: "data.output"

message_format:
  preset: "langchain_agent"
  mapping:
    role:
      source: "role"
      target: "input.role"
      transform: "lowercase"
    content:
      source: "content"
      target: "input.message"
      transform: "none"
    timestamp:
      source: "timestamp"
      target: "input.timestamp"
      transform: "iso8601"
    session_id:
      source: "session_id"
      target: "input.session_id"
      transform: "none"
  customFields:
    - name: "langchain_configuration"
      value:
        platform: "langchain"
        version: "0.1.0"
        agent_type: "conversational"
        capabilities: ["tool_usage", "memory", "reasoning"]
      type: "object"
    - name: "agent_settings"
      value:
        max_iterations: 10
        timeout: 300
        memory_type: "conversation_buffer"
        max_token_limit: 2000
      type: "object"

suggestion_prompts:
  - "Create an agent that can research topics and provide comprehensive summaries"
  - "Build an agent for automated data analysis and visualization"
  - "Set up an agent for code review and optimization suggestions"
  - "Create an agent for customer support with access to knowledge base"
  - "Build an agent for automated report generation from multiple data sources"

Import Steps

  1. Update agent_id with your actual ID
  2. Admin Dashboard → AI Models → Import Model
  3. Upload YAML file
  4. Enter LangChain API key
  5. Import
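
Before uploading, a quick local pre-check can catch the most common import failure (a forgotten `agent_id`). A sketch, assuming the YAML has already been parsed into a dict; the required-key list is inferred from the configuration above, and AgentFlow performs its own validation on import:

```python
REQUIRED_KEYS = {"name", "model_id", "endpoint", "method",
                 "headers", "request_schema", "response_path"}

def validate_config(config: dict) -> bool:
    """Check a parsed YAML config for missing keys and unreplaced placeholders."""
    missing = REQUIRED_KEYS - config.keys()
    if missing:
        raise ValueError(f"config missing keys: {sorted(missing)}")
    if "{{agent_id}}" in config["endpoint"]:
        raise ValueError("replace {{agent_id}} with your actual agent ID before importing")
    return True
```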

Step 4: Assign to Group

  1. Admin Dashboard → Groups
  2. Select/Create group (e.g., “Research Team”)
  3. Manage Models → Enable LangChain Agent
  4. Configure access:
    • Tool Access: All tools enabled
    • Max Iterations: 10
    • Timeout: 5 minutes
    • Memory: Enabled
  5. Save

Step 5: Use in Chat

Agent Interactions

  1. Chat → New Conversation
  2. Select LangChain Research Agent
  3. Ask questions that require tool usage

Example Prompts

Research the latest developments in quantum computing from 2024 and summarize the key breakthroughs.

Understanding Agent Responses

Agent shows its reasoning process:
Thought: I need to search for recent quantum computing developments.
Action: web_search
Action Input: "quantum computing breakthroughs 2024"
Observation: [Search results...]

Thought: I now have enough information to summarize.
Final Answer: [Comprehensive summary...]
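
The trace format above (Thought / Action / Action Input / Observation) is line-oriented, so tool calls can be pulled out of a raw trace with a small parser. A sketch, assuming the exact labels shown above:

```python
import re

def extract_tool_calls(trace: str):
    """Pair each Action line with its Action Input from a ReAct-style trace."""
    actions = re.findall(r"^Action: (.+)$", trace, re.MULTILINE)
    inputs = re.findall(r"^Action Input: (.+)$", trace, re.MULTILINE)
    return list(zip(actions, inputs))
```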

Building Custom Tools

Tool Definition Pattern

from langchain.tools import BaseTool
from pydantic import BaseModel, Field

class SearchInput(BaseModel):
    query: str = Field(description="The search query")
    num_results: int = Field(default=5, description="Number of results")

class WebSearchTool(BaseTool):
    name: str = "web_search"
    description: str = "Search the web for current information"
    args_schema: type[BaseModel] = SearchInput

    def _run(self, query: str, num_results: int = 5) -> str:
        # Implement search logic
        results = search_api(query, limit=num_results)
        return format_results(results)

    async def _arun(self, query: str, num_results: int = 5) -> str:
        # Async implementation
        results = await async_search_api(query, limit=num_results)
        return format_results(results)

Common Tool Types

Search Tools

  • Web search (Google, Bing)
  • Knowledge base search
  • Document search

Data Tools

  • SQL queries
  • API calls
  • CSV/Excel parsing

Computation Tools

  • Math calculations
  • Statistical analysis
  • Code execution

Integration Tools

  • CRM access
  • Email sending
  • File operations

Tool Implementation Examples

from langchain.tools import Tool

def query_database(query: str) -> str:
    """Execute SQL query and return results"""
    import psycopg2
    conn = psycopg2.connect(DATABASE_URL)
    cursor = conn.cursor()
    cursor.execute(query)
    results = cursor.fetchall()
    return str(results)

database_tool = Tool(
    name="database_query",
    func=query_database,
    description="Execute SQL queries on the customer database"
)

Advanced Agent Patterns

Multi-Agent Systems

Create specialized agents for different tasks:
# Research Agent
research_agent = create_agent(
    llm=ChatOpenAI(model="gpt-4"),
    tools=[web_search_tool, summarizer_tool]
)

# Analysis Agent
analysis_agent = create_agent(
    llm=ChatOpenAI(model="gpt-4"),
    tools=[calculator_tool, data_viz_tool]
)

# Coordinator Agent: sub-agents must be wrapped as Tools before
# the coordinator can invoke them
coordinator = create_agent(
    llm=ChatOpenAI(model="gpt-4"),
    tools=[
        Tool(name="research", func=research_agent.invoke,
             description="Delegate research tasks"),
        Tool(name="analysis", func=analysis_agent.invoke,
             description="Delegate analysis tasks"),
    ]
)
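
Independent of LangChain, the coordinator's job reduces to routing a task to the right specialist. The sketch below uses keyword routing as a stand-in for the LLM's tool selection; in a real system the coordinator's LLM decides which sub-agent to call:

```python
def make_coordinator(agents: dict):
    """agents maps a capability keyword to a callable specialist agent."""
    def coordinate(task: str) -> str:
        for keyword, agent in agents.items():
            if keyword in task.lower():
                return agent(task)
        return "No suitable agent found"
    return coordinate
```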

Memory Management

Configure different memory types:
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(
    memory_key="chat_history",
    return_messages=True
)

Custom Prompts

Optimize agent behavior with custom prompts:
from langchain.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("system", """You are an expert data analyst with access to various tools.
    When analyzing data:
    1. First, understand the question
    2. Gather necessary data using tools
    3. Perform calculations/analysis
    4. Provide clear insights
    5. Suggest actionable recommendations

    Be concise but thorough."""),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}")
])

Monitoring & Debugging

LangSmith Tracing

View detailed execution traces:
  1. Go to smith.langchain.com
  2. Navigate to Traces
  3. Filter by your agent
  4. View:
    • Input/Output
    • Tool calls
    • Token usage
    • Latency
    • Errors

Error Handling

Implement robust error handling:
from langchain.agents import AgentExecutor

agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    max_iterations=10,
    max_execution_time=300,
    handle_parsing_errors=True,
    return_intermediate_steps=True
)

Performance Optimization

1. Cache Responses

from langchain.cache import InMemoryCache
from langchain.globals import set_llm_cache

set_llm_cache(InMemoryCache())

2. Early Stopping

agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    max_iterations=10,
    # When the iteration limit is hit, generate a final answer from
    # the steps taken so far instead of returning a stop message.
    early_stopping_method="generate"
)

3. Optimize Token Usage

  • Use GPT-3.5 for simple tasks
  • Implement conversation summarization
  • Trim unnecessary context
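
Context trimming can be sketched without LangChain: keep the newest messages that fit a token budget. The 4-characters-per-token estimate is a rough assumption; production code should count tokens with the model's tokenizer:

```python
def trim_history(messages, max_tokens=2000):
    """Keep the newest messages that fit the budget, preserving order."""
    kept, used = [], 0
    for msg in reversed(messages):
        cost = len(msg) // 4 + 1  # rough chars-to-tokens approximation
        if used + cost > max_tokens:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))
```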

Troubleshooting

Agent Hits Max Iterations

Cause: Complex query requires many steps
Solutions:
  • Increase max_iterations to 15-20
  • Simplify the prompt
  • Break into multiple queries
  • Optimize tool descriptions

Tool Execution Errors

Check:
  • Tool function implementation
  • API credentials for external services
  • Network connectivity
  • Input parameter format
Debug: Check LangSmith trace logs

Memory Not Working

Symptoms: Agent forgets previous context
Fix:
  • Verify memory configuration
  • Increase token limit
  • Use ConversationSummaryMemory for long conversations

Slow Responses or Timeouts

Cause: Agent takes too long to respond
Solutions:
  • Increase timeout in AgentFlow config
  • Optimize tool performance
  • Use faster LLM (GPT-3.5)
  • Implement async execution

Security Best Practices

API Keys

Store securely, rotate regularly, use environment variables

Tool Access

Limit tools to necessary operations, implement access control

Input Validation

Validate all inputs before tool execution
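
For the database tool shown earlier, one concrete validation is rejecting anything but a single read-only statement before it reaches the database. A sketch; a keyword filter like this is a first line of defense, not a substitute for read-only database credentials:

```python
import re

FORBIDDEN = re.compile(r"\b(insert|update|delete|drop|alter|truncate|grant)\b",
                       re.IGNORECASE)

def is_safe_query(query: str) -> bool:
    """Allow only single SELECT statements; reject mutations and stacked queries."""
    stripped = query.strip().rstrip(";")
    if ";" in stripped:  # stacked statements like "SELECT 1; DROP ..."
        return False
    if not stripped.lower().startswith("select"):
        return False
    return not FORBIDDEN.search(stripped)
```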

Output Sanitization

Filter sensitive information from responses

Rate Limiting

Implement limits to prevent abuse
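
A token-bucket limiter is one simple way to enforce such limits per user or per API key. A stdlib-only sketch (`RateLimiter` is illustrative, not an AgentFlow feature):

```python
import time

class RateLimiter:
    """Token bucket: allow at most `rate` requests per `per` seconds."""

    def __init__(self, rate: int, per: float):
        self.rate, self.per = rate, per
        self.tokens = float(rate)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the bucket size
        self.tokens = min(self.rate,
                          self.tokens + (now - self.last) * self.rate / self.per)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```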

Audit Logging

Log all agent actions for compliance

Cost Optimization

Token Usage Strategies

  1. Model Selection:
    • GPT-4: Complex reasoning ($0.03/1K tokens)
    • GPT-3.5-Turbo: Simple tasks ($0.002/1K tokens)
  2. Prompt Engineering:
    • Concise system prompts
    • Efficient tool descriptions
    • Clear, specific queries
  3. Memory Management:
    • Use ConversationSummaryMemory
    • Implement conversation trimming
    • Set appropriate token limits
  4. Caching:
    • Cache identical queries
    • Store tool results
    • Reuse computations
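
Caching identical queries can be as simple as memoizing on a hash of the normalized query text. A sketch (`make_cached` is illustrative; LangChain's own LLM cache, shown earlier, caches at the LLM-call level instead):

```python
import hashlib

def make_cached(fn):
    """Cache agent responses keyed by a hash of the normalized query."""
    cache = {}

    def cached(query: str) -> str:
        key = hashlib.sha256(query.strip().lower().encode()).hexdigest()
        if key not in cache:
            cache[key] = fn(query)
        return cache[key]

    return cached
```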

Next Steps
