Documentation Index Fetch the complete documentation index at: https://docs.agentflow.live/llms.txt
Use this file to discover all available pages before exploring further.
Overview
Create sophisticated AI agents using OpenAI’s Assistants API. Build agents with code interpretation, file search, and custom function calling capabilities.
Prerequisites
API Key: Generate an API key with Assistants access
Assistant Created: Build an assistant via the OpenAI Playground or API
AgentFlow Admin: Admin access to AgentFlow is required
Step 1: Create OpenAI Assistant
Using OpenAI Playground
Configure Assistant
Name: Research & Analysis Agent
Model: gpt-4-turbo
Instructions:
You are a research and analysis expert. Use your tools to:
1. Search through provided files and documents
2. Execute code for data analysis
3. Call functions to retrieve external data
Always cite sources and show your reasoning.
Enable Tools
✅ Code Interpreter: For data analysis and calculations
✅ File Search: For document analysis
✅ Functions: For custom integrations
Add Functions
Define custom functions:
{
  "name": "get_customer_data",
  "description": "Retrieve customer information from CRM",
  "parameters": {
    "type": "object",
    "properties": {
      "customer_id": {
        "type": "string",
        "description": "The customer ID"
      }
    },
    "required": ["customer_id"]
  }
}
Save Assistant
Click Save and copy the Assistant ID
Using OpenAI API
Create programmatically:
from openai import OpenAI

client = OpenAI(api_key="your_api_key")

assistant = client.beta.assistants.create(
    name="Research & Analysis Agent",
    instructions="""You are a research and analysis expert.
Use tools to search documents, analyze data, and retrieve information.""",
    model="gpt-4-turbo",
    tools=[
        {"type": "code_interpreter"},
        {"type": "file_search"},
        {
            "type": "function",
            "function": {
                "name": "get_customer_data",
                "description": "Retrieve customer info",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "customer_id": {"type": "string"}
                    },
                    "required": ["customer_id"]
                }
            }
        }
    ]
)
print(f"Assistant ID: {assistant.id}")
Step 2: Create AI Connection in AgentFlow
Manual Setup
Admin Dashboard → AI Models → Add Model
Configuration:
Name: OpenAI Research Assistant
Model ID: openai-agent-builder
Description: Research agent with code interpretation and file search
API Settings:
Endpoint: https://api.openai.com/v1/assistants/{{assistant_id}}/runs
Method: POST
Headers:
{
  "Authorization": "Bearer {{openai_api_key}}",
  "Content-Type": "application/json",
  "OpenAI-Beta": "assistants=v2"
}
Request Schema:
{
  "assistant_id": "{{assistant_id}}",
  "thread_id": "{{thread_id}}",
  "instructions": "{{instructions}}",
  "additional_instructions": "{{additional_instructions}}",
  "tools": [
    {"type": "code_interpreter"},
    {"type": "file_search"},
    {
      "type": "function",
      "function": {
        "name": "custom_function",
        "description": "Custom function for specific tasks"
      }
    }
  ],
  "metadata": {
    "user_id": "{{user_id}}",
    "session_id": "{{session_id}}",
    "timestamp": "{{timestamp}}"
  }
}
Response Path: data.run_result
Save
Step 3: Import via YAML
YAML Configuration
Create openai-agent-builder-config.yaml:
name: "OpenAI Agent Builder"
model_id: "openai-agent-builder"
description: "Execute OpenAI Agent Builder agents for intelligent task automation"
endpoint: "https://api.openai.com/v1/assistants/{{assistant_id}}/runs"
method: "POST"
headers:
  Authorization: "Bearer {{openai_api_key}}"
  Content-Type: "application/json"
  OpenAI-Beta: "assistants=v2"
request_schema:
  assistant_id: "{{assistant_id}}"
  thread_id: "{{thread_id}}"
  instructions: "{{instructions}}"
  additional_instructions: "{{additional_instructions}}"
  tools:
    - type: "code_interpreter"
    - type: "file_search"
    - type: "function"
      function:
        name: "custom_function"
        description: "Custom function for specific tasks"
  metadata:
    user_id: "{{user_id}}"
    session_id: "{{session_id}}"
    timestamp: "{{timestamp}}"
response_path: "data.run_result"
message_format:
  preset: "openai_assistant"
  mapping:
    role:
      source: "role"
      target: "messages[0].role"
      transform: "openai_role"
    content:
      source: "content"
      target: "messages[0].content"
      transform: "none"
    timestamp:
      source: "timestamp"
      target: "metadata.timestamp"
      transform: "iso8601"
customFields:
  - name: "agent_configuration"
    value:
      platform: "openai"
      model: "gpt-4-turbo"
      version: "assistants-v2"
      capabilities: ["code_interpreter", "file_search", "function_calling"]
    type: "object"
  - name: "execution_settings"
    value:
      max_completion_tokens: 4000
      temperature: 0.7
      timeout: 300
      stream: false
    type: "object"
suggestion_prompts:
  - "Create an agent to analyze customer feedback and generate insights"
  - "Build an agent for automated code review and suggestions"
  - "Set up an agent for data analysis and report generation"
  - "Create an agent for content moderation and classification"
  - "Build an agent for automated customer support responses"
Import Process
Update assistant_id with your actual ID
Admin Dashboard → AI Models → Import Model
Upload YAML
Enter OpenAI API key
Import
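Before importing, it is easy to miss step 1 (replacing `assistant_id` with your actual ID). A minimal pre-flight check can catch leftover placeholders; this is a hypothetical helper, not part of AgentFlow, and the list of runtime variables it ignores is an assumption based on the schema above:

```python
import re

# Placeholders AgentFlow fills per request; anything else still wrapped in
# {{...}} (e.g. {{assistant_id}}) must be replaced before import.
# NOTE: the runtime-variable list below is an assumption, not documented API.
RUNTIME_VARS = {"thread_id", "instructions", "additional_instructions",
                "user_id", "session_id", "timestamp", "openai_api_key"}

def unfilled_placeholders(yaml_text):
    """Return placeholder names that still need a concrete value."""
    found = set(re.findall(r"\{\{(\w+)\}\}", yaml_text))
    return sorted(found - RUNTIME_VARS)

config = 'endpoint: "https://api.openai.com/v1/assistants/{{assistant_id}}/runs"'
print(unfilled_placeholders(config))  # ['assistant_id'] -> the ID was not filled in
```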
Step 4: Assign to Group
Admin Dashboard → Groups
Select group (e.g., “Analytics Team”)
Manage Models → Enable OpenAI Assistant
Configure:
Code Interpreter: Enabled
File Upload: 20 files max
Custom Functions: Enabled
Max Tokens: 4000
Save
Step 5: Use in Chat
Interacting with Assistant
Chat → New Conversation
Select OpenAI Research Assistant
Start conversation
Example Interactions
Analyze this sales data and create visualizations:
Q1: $1.2M, Q2: $1.8M, Q3: $2.1M, Q4: $2.5M
Calculate growth rate, create a trend chart, and forecast Q1 next year.
Code Interpreter
Execute Python code for analysis:
import pandas as pd
import matplotlib.pyplot as plt

# Your data
data = pd.DataFrame({
    'Quarter': ['Q1', 'Q2', 'Q3', 'Q4'],
    'Revenue': [1200000, 1800000, 2100000, 2500000]
})

# Calculate quarter-over-quarter growth
data['Growth'] = data['Revenue'].pct_change() * 100

# Create visualization
plt.figure(figsize=(10, 6))
plt.plot(data['Quarter'], data['Revenue'], marker='o')
plt.title('Quarterly Revenue Trend')
plt.savefig('revenue_trend.png')
File Search
Upload and analyze documents:
Upload Files
Via API or Playground:
file = client.files.create(
    file=open("customer_feedback.pdf", "rb"),
    purpose="assistants"
)
# Assistants v2 attaches files through tool resources rather than the
# v1 file_ids parameter: File Search files go into a vector store.
vector_store = client.beta.vector_stores.create(
    name="Research Files",
    file_ids=[file.id]
)
client.beta.assistants.update(
    assistant_id="asst_abc123",
    tool_resources={"file_search": {"vector_store_ids": [vector_store.id]}}
)
Query Documents
Ask questions about uploaded files: "What are the main themes in the customer feedback?"
"Summarize the key findings from the Q4 report"
"Find all mentions of pricing concerns"
Get Citations
Assistant provides source references: According to page 12 of the feedback report...
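Citations arrive as annotations on the message content. A sketch of pulling them out, assuming the Assistants v2 message shape (text parts carrying `text.annotations` with `file_citation` entries):

```python
def extract_citations(message):
    """Collect (marker_text, file_id) pairs from a message's
    file_citation annotations (Assistants v2 message shape)."""
    citations = []
    for part in message.content:
        if part.type != "text":
            continue
        for ann in part.text.annotations:
            if ann.type == "file_citation":
                # ann.text is the marker embedded in the response;
                # file_citation.file_id identifies the source file
                citations.append((ann.text, ann.file_citation.file_id))
    return citations
```

You would call this on each message returned by `client.beta.threads.messages.list(...)` and map the file IDs back to filenames via the Files API.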
Function Calling
Integrate with external systems:
Define Function
{
  "name": "get_customer_data",
  "description": "Retrieve customer information from CRM",
  "parameters": {
    "type": "object",
    "properties": {
      "customer_id": {
        "type": "string",
        "description": "Customer ID"
      },
      "include_history": {
        "type": "boolean",
        "description": "Include purchase history"
      }
    },
    "required": ["customer_id"]
  }
}
Handle Function Calls
import json

if run.status == "requires_action":
    tool_call = run.required_action.submit_tool_outputs.tool_calls[0]
    if tool_call.function.name == "get_customer_data":
        args = json.loads(tool_call.function.arguments)
        customer_data = fetch_from_crm(args["customer_id"])
        client.beta.threads.runs.submit_tool_outputs(
            thread_id=thread.id,
            run_id=run.id,
            tool_outputs=[{
                "tool_call_id": tool_call.id,
                "output": json.dumps(customer_data)
            }]
        )
Advanced Configuration
Thread Management
Manage conversation threads:
import time

# Create thread
thread = client.beta.threads.create()

# Add message
message = client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Analyze this data..."
)

# Run assistant
run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id=assistant.id
)

# Poll until the run leaves the queued/in_progress states
# (looping only until "completed" would spin forever on a failed run)
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(
        thread_id=thread.id,
        run_id=run.id
    )

# Get response
messages = client.beta.threads.messages.list(
    thread_id=thread.id
)
Streaming Responses
Enable real-time streaming:
with client.beta.threads.runs.stream(
    thread_id=thread.id,
    assistant_id=assistant.id
) as stream:
    for event in stream:
        if event.type == "thread.message.delta":
            print(event.data.delta.content[0].text.value, end="")
Custom Instructions
Dynamic instruction override:
run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id=assistant.id,
    additional_instructions="""
    For this analysis:
    - Focus on trends from the last quarter
    - Highlight any anomalies
    - Provide actionable recommendations
    """
)
Use Cases
1. Data Analysis Agent
Assistant Config:
Name: "Data Analysis Expert"
Model: gpt-4-turbo
Tools: [code_interpreter]
Instructions: |
  Analyze data using Python, pandas, numpy.
  Create visualizations with matplotlib.
  Provide statistical insights and predictions.
Example Query:
"Analyze customer churn data and identify key factors"
2. Document Research Agent
Assistant Config:
Name: "Research Assistant"
Model: gpt-4-turbo
Tools: [file_search]
Instructions: |
  Search through documents to answer questions.
  Always cite sources.
  Provide comprehensive summaries.
Example Query:
"What does our compliance policy say about data retention?"
3. Customer Support Agent
Assistant Config:
Name: "Support Bot"
Model: gpt-4-turbo
Tools: [file_search, function(get_ticket_info)]
Instructions: |
  Help customers with their issues.
  Search knowledge base for solutions.
  Escalate complex issues to humans.
Example Query:
"Customer CUST-123 is having login issues"
Monitoring & Debugging
Run Status Tracking
Monitor assistant execution:
run = client.beta.threads.runs.retrieve(
    thread_id=thread.id,
    run_id=run.id
)
print(f"Status: {run.status}")
# queued, in_progress, requires_action, completed, failed
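These status values suggest a small polling helper that stops on any terminal state (not just "completed") and enforces a timeout. A sketch built on the retrieve call above; the interval and timeout defaults are arbitrary:

```python
import time

TERMINAL_STATUSES = {"completed", "failed", "cancelled", "expired"}

def wait_for_run(client, thread_id, run_id, interval=1.0, timeout=300):
    """Poll a run until it finishes, needs tool output, or times out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        run = client.beta.threads.runs.retrieve(
            thread_id=thread_id, run_id=run_id
        )
        # "requires_action" also needs the caller's attention
        # (submit tool outputs), so return it as well
        if run.status in TERMINAL_STATUSES or run.status == "requires_action":
            return run
        time.sleep(interval)
    raise TimeoutError(f"Run {run_id} did not finish within {timeout}s")
```

Callers should still branch on `run.status` afterwards, since "failed" and "requires_action" both come back from this helper.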
Error Handling
Handle common errors:
import openai

try:
    run = client.beta.threads.runs.create(
        thread_id=thread.id,
        assistant_id=assistant.id
    )
except openai.RateLimitError as e:
    # Catch the more specific error first: RateLimitError is a subclass
    # of APIError and would otherwise never be reached
    print(f"Rate limit exceeded: {e}")
except openai.APIError as e:
    print(f"API Error: {e}")
except Exception as e:
    print(f"Unexpected error: {e}")
Usage Tracking
Monitor token usage:
run = client.beta.threads.runs.retrieve(
    thread_id=thread.id,
    run_id=run.id
)
print(f"Tokens used: {run.usage.total_tokens}")
print(f"Prompt tokens: {run.usage.prompt_tokens}")
print(f"Completion tokens: {run.usage.completion_tokens}")
Troubleshooting
Assistant not responding
Check:
Assistant is not deleted
API key has Assistants beta access
Thread ID is valid
No rate limits hit
Fix: Recreate assistant or thread
Functions not being called
Check:
Function definition is correct
Description is clear
Parameters match expected format
Fix: Improve function description
File Search returning nothing
Check:
Files are uploaded successfully
File IDs are attached to assistant
Supported file types (PDF, TXT, DOCX)
Fix: Verify file upload and attachment
Code Interpreter errors
Cause: Invalid Python code or timeout
Fix:
Validate code syntax
Reduce computation complexity
Check for infinite loops
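For the "validate code syntax" step, Python code can be checked locally with the standard-library `ast` module before it is ever sent to the interpreter. This catches only parse errors, not runtime failures or timeouts:

```python
import ast

def syntax_error_of(code: str):
    """Return 'line N: message' if the snippet will not parse, else None.
    Catches only syntax errors, not runtime failures or timeouts."""
    try:
        ast.parse(code)
        return None
    except SyntaxError as e:
        return f"line {e.lineno}: {e.msg}"

print(syntax_error_of("data['Growth'] = data['Revenue'].pct_change()"))  # None
print(syntax_error_of("plt.plot(data['Quarter']"))  # reports the unclosed parenthesis
```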
Cost Management
Pricing Structure
| Component        | Cost                            |
|------------------|---------------------------------|
| GPT-4-turbo      | $0.01/1K input, $0.03/1K output |
| Code Interpreter | $0.03/session                   |
| File Search      | $0.10/GB/day                    |
| Storage          | $0.20/GB/month                  |
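Combined with the `run.usage` fields from the previous section, these rates give a rough per-run cost estimate. A sketch using the GPT-4-turbo rates above as defaults (treat them as illustrative and check current OpenAI pricing):

```python
def estimate_run_cost(prompt_tokens, completion_tokens,
                      input_per_1k=0.01, output_per_1k=0.03,
                      code_interpreter_sessions=0):
    """Rough USD cost of one run at the rates listed in the table above."""
    cost = (prompt_tokens / 1000) * input_per_1k
    cost += (completion_tokens / 1000) * output_per_1k
    cost += code_interpreter_sessions * 0.03  # $0.03 per session
    return round(cost, 4)

# e.g. a run reporting 2500 prompt and 800 completion tokens,
# with one Code Interpreter session:
print(estimate_run_cost(2500, 800, code_interpreter_sessions=1))  # 0.079
```

File Search and storage costs accrue per GB per day/month, so they are tracked per workspace rather than per run.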
Optimization Tips
Model Selection: Use GPT-3.5-turbo for simple tasks
File Management: Delete unused files regularly
Thread Cleanup: Archive old threads
Token Limits: Set max_tokens appropriately
Best Practices
Clear Instructions: Be specific about assistant behavior
Tool Selection: Only enable necessary tools
Error Handling: Implement robust error handling
Testing: Test with various inputs before deployment
Monitoring: Track usage and costs regularly
Security: Validate all function inputs/outputs
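For the security practice, model-produced function arguments should be validated against the declared parameter schema before anything touches the CRM. A minimal sketch using plain checks instead of a schema library (it handles only the `string`/`boolean`/`number` types used in this guide):

```python
import json

def validate_args(raw_arguments, schema):
    """Check model-produced function arguments against a JSON-schema-like
    parameter spec before executing anything. Returns (args, errors)."""
    type_map = {"string": str, "boolean": bool, "number": (int, float)}
    try:
        args = json.loads(raw_arguments)
    except json.JSONDecodeError as e:
        return None, [f"arguments are not valid JSON: {e}"]
    errors = []
    for field in schema.get("required", []):
        if field not in args:
            errors.append(f"missing required field: {field}")
    for field, spec in schema.get("properties", {}).items():
        expected = type_map.get(spec.get("type"))
        if field in args and expected and not isinstance(args[field], expected):
            errors.append(f"{field}: expected {spec['type']}")
    return args, errors

schema = {
    "type": "object",
    "properties": {"customer_id": {"type": "string"},
                   "include_history": {"type": "boolean"}},
    "required": ["customer_id"],
}
args, errors = validate_args('{"customer_id": "CUST-123"}', schema)
print(errors)  # [] -> safe to call fetch_from_crm(args["customer_id"])
```

Only execute the function when `errors` is empty; otherwise submit the error list back as the tool output so the model can retry with corrected arguments.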
Next Steps
LangChain Agents: Alternative agent framework
Cloud Functions: Custom serverless logic
Workflow Automation: Combine with Make.com
Analytics: Monitor assistant performance