Overview
Integrate LangChain agents with AgentFlow to build sophisticated AI systems that can use tools, access external data, and execute multi-step reasoning workflows.
Prerequisites
API Key: Generate an API key from LangChain settings
Agent Created: Build and deploy a LangChain agent
AgentFlow Access: Admin rights in AgentFlow
Step 1: Create LangChain Agent
Setup LangChain Project
Install LangChain
pip install langchain langchain-openai langsmith
Configure Environment
export LANGCHAIN_API_KEY="your_api_key"
export LANGCHAIN_TRACING_V2="true"
export OPENAI_API_KEY="your_openai_key"
Create Agent
from langchain.agents import create_openai_functions_agent
from langchain_openai import ChatOpenAI
from langchain.tools import Tool
from langchain.prompts import ChatPromptTemplate

# Define tools (web_search_function and calculator_function are your own implementations)
tools = [
    Tool(
        name="web_search",
        func=web_search_function,
        description="Search the web for current information"
    ),
    Tool(
        name="calculator",
        func=calculator_function,
        description="Perform mathematical calculations"
    )
]

# Create agent
llm = ChatOpenAI(model="gpt-4", temperature=0.7)
agent = create_openai_functions_agent(
    llm=llm,
    tools=tools,
    prompt=ChatPromptTemplate.from_messages([
        ("system", "You are a helpful AI assistant with access to tools."),
        ("human", "{input}"),
        ("placeholder", "{agent_scratchpad}")
    ])
)
Deploy Agent
from langchain.agents import AgentExecutor

agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    max_iterations=10
)

# Create API endpoint
from langserve import add_routes
from fastapi import FastAPI

app = FastAPI()
add_routes(app, agent_executor, path="/agent")
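Once the route is mounted, clients call the agent at POST /agent/invoke. LangServe wraps the runnable's input under a top-level "input" key; a minimal sketch of building that request body (the inner "input" key matches the {input} prompt variable above, so adjust it if your prompt differs):

```python
import json

def build_invoke_payload(user_message: str) -> str:
    """Build the JSON body for POST /agent/invoke.

    LangServe expects the runnable's input under a top-level "input" key;
    the inner "input" mirrors the {input} variable in the prompt above.
    """
    return json.dumps({"input": {"input": user_message}})

print(build_invoke_payload("What is 2 + 2?"))
```

Any HTTP client can then POST this body with a Content-Type of application/json.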
Deploy to LangChain Hub
Push to Hub
from langchain.hub import push

push(
    "your-username/research-agent",
    agent_executor,
    new_repo_description="Research agent with web search and calculation tools"
)
Get Agent ID
Copy the agent ID from LangSmith dashboard
Step 2: Create AI Connection in AgentFlow
Manual Configuration
Admin Dashboard → AI Models → Add Model
Basic Info:
Name: LangChain Research Agent
Model ID: langchain-agent-executor
Description: Research agent with web search and calculation
API Settings:
Endpoint: https://api.langchain.com/v1/agents/{{agent_id}}/invoke
Method: POST
Headers:
{
  "Authorization": "Bearer {{langchain_api_key}}",
  "Content-Type": "application/json",
  "X-LangChain-Version": "0.1.0"
}
Request Schema:
{
  "agent_id": "{{agent_id}}",
  "input": {
    "message": "{{message}}",
    "user_id": "{{user_id}}",
    "session_id": "{{session_id}}",
    "timestamp": "{{timestamp}}"
  },
  "config": {
    "callbacks": [],
    "tags": ["chat-platform"],
    "metadata": {
      "user_id": "{{user_id}}",
      "session_id": "{{session_id}}"
    }
  },
  "tools": [
    {
      "name": "web_search",
      "description": "Search the web for current information"
    },
    {
      "name": "calculator",
      "description": "Perform mathematical calculations"
    },
    {
      "name": "code_executor",
      "description": "Execute code snippets"
    }
  ],
  "memory": {
    "type": "conversation_buffer",
    "max_token_limit": 2000
  }
}
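The `{{…}}` placeholders above are substituted by AgentFlow at request time. As a rough sketch of how that substitution behaves (a hypothetical helper, not AgentFlow's actual implementation), unknown placeholders are left intact so they surface in logs:

```python
import re

def render_template(template: str, variables: dict) -> str:
    """Substitute {{name}} placeholders; unknown names are left as-is."""
    def repl(match):
        key = match.group(1).strip()
        return str(variables.get(key, match.group(0)))
    return re.sub(r"\{\{([^{}]+)\}\}", repl, template)

print(render_template("Bearer {{langchain_api_key}}", {"langchain_api_key": "sk-123"}))
```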
Response Path: data.output
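The response path tells AgentFlow where the agent's answer sits in the JSON reply: "data.output" means the "output" key inside the "data" object. A dotted-path lookup of this kind reduces to walking nested dicts (sketch, assuming dict-only nesting):

```python
def extract_response_path(payload: dict, path: str):
    """Walk a dotted path like "data.output" through nested dicts.

    Returns None if any segment is missing along the way.
    """
    current = payload
    for segment in path.split("."):
        if not isinstance(current, dict) or segment not in current:
            return None
        current = current[segment]
    return current

print(extract_response_path({"data": {"output": "42"}}, "data.output"))
```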
Save
Step 3: Import via YAML
Complete YAML Configuration
Create langchain-agent-config.yaml:
name: "Hosted LangChain Agent"
model_id: "langchain-agent-executor"
description: "Execute hosted LangChain agents for complex AI workflows and tool usage"
endpoint: "https://api.langchain.com/v1/agents/{{agent_id}}/invoke"
method: "POST"
headers:
  Authorization: "Bearer {{langchain_api_key}}"
  Content-Type: "application/json"
  X-LangChain-Version: "0.1.0"
request_schema:
  agent_id: "{{agent_id}}"
  input:
    message: "{{message}}"
    user_id: "{{user_id}}"
    session_id: "{{session_id}}"
    timestamp: "{{timestamp}}"
  config:
    callbacks: []
    tags: ["chat-platform"]
    metadata:
      user_id: "{{user_id}}"
      session_id: "{{session_id}}"
  tools:
    - name: "web_search"
      description: "Search the web for current information"
    - name: "calculator"
      description: "Perform mathematical calculations"
    - name: "code_executor"
      description: "Execute code snippets"
  memory:
    type: "conversation_buffer"
    max_token_limit: 2000
response_path: "data.output"
message_format:
  preset: "langchain_agent"
  mapping:
    role:
      source: "role"
      target: "input.role"
      transform: "lowercase"
    content:
      source: "content"
      target: "input.message"
      transform: "none"
    timestamp:
      source: "timestamp"
      target: "input.timestamp"
      transform: "iso8601"
    session_id:
      source: "session_id"
      target: "input.session_id"
      transform: "none"
customFields:
  - name: "langchain_configuration"
    value:
      platform: "langchain"
      version: "0.1.0"
      agent_type: "conversational"
      capabilities: ["tool_usage", "memory", "reasoning"]
    type: "object"
  - name: "agent_settings"
    value:
      max_iterations: 10
      timeout: 300
      memory_type: "conversation_buffer"
      max_token_limit: 2000
    type: "object"
suggestion_prompts:
  - "Create an agent that can research topics and provide comprehensive summaries"
  - "Build an agent for automated data analysis and visualization"
  - "Set up an agent for code review and optimization suggestions"
  - "Create an agent for customer support with access to knowledge base"
  - "Build an agent for automated report generation from multiple data sources"
Import Steps
Update agent_id with your actual ID
Admin Dashboard → AI Models → Import Model
Upload YAML file
Enter LangChain API key
Import
Step 4: Assign to Group
Admin Dashboard → Groups
Select/Create group (e.g., “Research Team”)
Manage Models → Enable LangChain Agent
Configure access:
Tool Access: All tools enabled
Max Iterations: 10
Timeout: 5 minutes
Memory: Enabled
Save
Step 5: Use in Chat
Agent Interactions
Chat → New Conversation
Select LangChain Research Agent
Ask questions that require tool usage
Example Prompts
Prompts can target Research, Calculation, Data Analysis, or Code Execution. A research example:
Research the latest developments in quantum computing from 2024 and summarize the key breakthroughs.
Understanding Agent Responses
Agent shows its reasoning process:
Thought: I need to search for recent quantum computing developments.
Action: web_search
Action Input: "quantum computing breakthroughs 2024"
Observation: [Search results...]
Thought: I now have enough information to summarize.
Final Answer: [Comprehensive summary...]
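For logging or debugging it can be handy to pull the tool calls out of such a trace. A small sketch, assuming the Thought/Action/Action Input format shown above:

```python
import re

# Matches each "Action:" line followed by its "Action Input:" line
ACTION_PATTERN = re.compile(r"^Action: (.+)\nAction Input: (.+)$", re.MULTILINE)

def parse_tool_calls(trace: str) -> list:
    """Return (tool_name, tool_input) pairs from a ReAct-style trace."""
    return [(name.strip(), arg.strip()) for name, arg in ACTION_PATTERN.findall(trace)]

trace = (
    "Thought: I need to search for recent quantum computing developments.\n"
    "Action: web_search\n"
    'Action Input: "quantum computing breakthroughs 2024"\n'
    "Observation: [Search results...]\n"
)
print(parse_tool_calls(trace))
```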
Creating Custom Tools
from langchain.tools import BaseTool
from pydantic import BaseModel, Field

class SearchInput(BaseModel):
    query: str = Field(description="The search query")
    num_results: int = Field(default=5, description="Number of results")

class WebSearchTool(BaseTool):
    name: str = "web_search"
    description: str = "Search the web for current information"
    args_schema: type = SearchInput

    def _run(self, query: str, num_results: int = 5) -> str:
        # Implement search logic
        results = search_api(query, limit=num_results)
        return format_results(results)

    async def _arun(self, query: str, num_results: int = 5) -> str:
        # Async implementation
        results = await async_search_api(query, limit=num_results)
        return format_results(results)
Search Tools
Web search (Google, Bing)
Knowledge base search
Document search
Data Tools
SQL queries
API calls
CSV/Excel parsing
Computation Tools
Math calculations
Statistical analysis
Code execution
Integration Tools
CRM access
Email sending
File operations
Example: Database Tool (API and Email tools follow the same pattern)
from langchain.tools import Tool

def query_database(query: str) -> str:
    """Execute SQL query and return results"""
    import psycopg2
    conn = psycopg2.connect(DATABASE_URL)
    try:
        cursor = conn.cursor()
        cursor.execute(query)
        results = cursor.fetchall()
        return str(results)
    finally:
        conn.close()  # always release the connection

database_tool = Tool(
    name="database_query",
    func=query_database,
    description="Execute SQL queries on the customer database"
)
Advanced Agent Patterns
Multi-Agent Systems
Create specialized agents for different tasks:
# Research Agent
research_agent = create_agent(
    llm=ChatOpenAI(model="gpt-4"),
    tools=[web_search_tool, summarizer_tool]
)

# Analysis Agent
analysis_agent = create_agent(
    llm=ChatOpenAI(model="gpt-4"),
    tools=[calculator_tool, data_viz_tool]
)

# Coordinator Agent (sub-agents must be wrapped as Tools to be passed here)
coordinator = create_agent(
    llm=ChatOpenAI(model="gpt-4"),
    tools=[research_agent, analysis_agent]
)
Memory Management
Configure different memory types:
Buffer Memory
Summary Memory
Window Memory
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(
    memory_key="chat_history",
    return_messages=True
)
Custom Prompts
Optimize agent behavior with custom prompts:
from langchain.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("system", """You are an expert data analyst with access to various tools.
When analyzing data:
1. First, understand the question
2. Gather necessary data using tools
3. Perform calculations/analysis
4. Provide clear insights
5. Suggest actionable recommendations
Be concise but thorough."""),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}")
])
Monitoring & Debugging
LangSmith Tracing
View detailed execution traces:
Go to smith.langchain.com
Navigate to Traces
Filter by your agent
View:
Input/Output
Tool calls
Token usage
Latency
Errors
Error Handling
Implement robust error handling:
from langchain.agents import AgentExecutor

agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    max_iterations=10,
    max_execution_time=300,
    handle_parsing_errors=True,
    return_intermediate_steps=True
)
Cache Responses
import langchain
from langchain.cache import InMemoryCache

langchain.llm_cache = InMemoryCache()
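InMemoryCache performs exact-match caching on the prompt. The core idea reduces to a dict keyed by a hash of the prompt string; a sketch of that mechanism (not langchain's actual implementation):

```python
import hashlib

class SimpleLLMCache:
    """Cache LLM responses keyed by a hash of the prompt (exact-match only)."""

    def __init__(self):
        self._store = {}

    def _key(self, prompt: str) -> str:
        return hashlib.sha256(prompt.encode("utf-8")).hexdigest()

    def get(self, prompt: str):
        # Returns None on a cache miss
        return self._store.get(self._key(prompt))

    def set(self, prompt: str, response: str) -> None:
        self._store[self._key(prompt)] = response

cache = SimpleLLMCache()
cache.set("What is 2+2?", "4")
print(cache.get("What is 2+2?"))  # 4
```

Exact matching means any change in wording is a miss; semantic caching requires embedding-based lookup instead.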
Bound Agent Runs
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    max_iterations=10,
    early_stopping_method="generate"  # produce a final answer when the limit is hit
)
Optimize Token Usage
Use GPT-3.5 for simple tasks
Implement conversation summarization
Trim unnecessary context
Troubleshooting
Agent Exceeds Max Iterations
Cause: Complex query requires many steps
Solutions:
Increase max_iterations to 15-20
Simplify the prompt
Break into multiple queries
Optimize tool descriptions
Agent Loses Context
Symptoms: Agent forgets previous context
Fix:
Verify memory configuration
Increase token limit
Use ConversationSummaryMemory for long conversations
Timeout Errors
Cause: Agent takes too long to respond
Solutions:
Increase timeout in AgentFlow config
Optimize tool performance
Use faster LLM (GPT-3.5)
Implement async execution
Security Best Practices
API Keys: Store securely, rotate regularly, use environment variables
Tool Access: Limit tools to necessary operations, implement access control
Input Validation: Validate all inputs before tool execution
Output Sanitization: Filter sensitive information from responses
Rate Limiting: Implement limits to prevent abuse
Audit Logging: Log all agent actions for compliance
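As an illustration of the Input Validation item, a minimal pre-execution guard; the length limit and blocked patterns below are arbitrary examples to be tuned for your actual tools:

```python
import re

MAX_INPUT_LENGTH = 1000
# Example deny-list patterns: SQL statement chaining, comment markers, destructive shell
FORBIDDEN_PATTERNS = [r";\s*drop\b", r"--", r"\brm\s+-rf\b"]

def validate_tool_input(value: str) -> bool:
    """Reject oversized or obviously dangerous tool inputs before execution."""
    if len(value) > MAX_INPUT_LENGTH:
        return False
    lowered = value.lower()
    return not any(re.search(p, lowered) for p in FORBIDDEN_PATTERNS)

print(validate_tool_input("quantum computing 2024"))  # True
print(validate_tool_input("x; DROP TABLE users"))     # False
```

Deny-lists like this are a first line of defense only; pair them with per-tool allow-lists and least-privilege credentials.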
Cost Optimization
Token Usage Strategies
Model Selection:
GPT-4: Complex reasoning ($0.03/1K tokens)
GPT-3.5-Turbo: Simple tasks ($0.002/1K tokens)
Prompt Engineering:
Concise system prompts
Efficient tool descriptions
Clear, specific queries
Memory Management:
Use ConversationSummaryMemory
Implement conversation trimming
Set appropriate token limits
Caching:
Cache identical queries
Store tool results
Reuse computations
Next Steps
OpenAI Assistants: Alternative agent framework
Cloud Functions: Deploy custom logic
Workflow Automation: Combine with n8n
Analytics: Monitor agent performance