2025-04-16
- Before jumping onto the Agentic AI bandwagon, let's reflect on the evolution that led to the current Agentic AI era.
Figure: batch processing (data collected and processed in batches) vs. stream processing (data processed in real-time)
Image Inspiration: Jay Alammar’s Hands-on Large Language Models
import json
from openai import OpenAI

client = OpenAI()

# 0. Implement the tool
def search_web(query):
    ...  # call a real search API here and build tool_answer
    return tool_answer

# 1. Describe the tool
tools = [
    {
        "type": "function",
        "function": {
            "name": "search_web",
            "description": "Searches the web for a factual answer to a question.",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "The question or term to search for"
                    }
                },
                "required": ["query"]
            }
        }
    }
]

# 2. Pass the prompt to OpenAI and let the model decide
#    whether it wants to use the tool
messages = [
    {"role": "user", "content": "What is the capital of Japan?"}
]
response = client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    tools=tools,
    tool_choice="auto"
)

# 3. Execute the tool call, if the model requested one
tool_calls = response.choices[0].message.tool_calls
tool_call = tool_calls[0] if tool_calls else None
if tool_call:
    function_name = tool_call.function.name
    arguments = json.loads(tool_call.function.arguments)  # never eval() model output

    # 4. Call the function implemented in step 0
    if function_name == "search_web":
        query = arguments["query"]
        tool_result = search_web(query)

    # 5. Append the tool response and ask the model to finish
    messages += [
        response.choices[0].message,
        {
            "role": "tool",
            "tool_call_id": tool_call.id,
            "name": function_name,
            "content": tool_result
        }
    ]
    final_response = client.chat.completions.create(
        model="gpt-4o",
        messages=messages
    )
    print("🧠 Final Answer:", final_response.choices[0].message.content)
else:
    print("💬 Direct Answer:", response.choices[0].message.content)
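The argument-parsing and dispatch logic of step 3 can be exercised without an API key. Below is a minimal sketch: the `TOOL_REGISTRY` mapping and the `dispatch_tool_call` helper are illustrative assumptions, not part of the OpenAI SDK, but the `json.loads` parsing mirrors what the step above does with `tool_call.function.arguments`.

```python
import json

# Hypothetical local registry: maps tool names (as declared in the
# `tools` schema) to Python callables. Names here are illustrative.
TOOL_REGISTRY = {
    "search_web": lambda query: f"Search results for: {query}",
}

def dispatch_tool_call(name, arguments_json):
    """Parse the model-supplied JSON arguments and run the matching tool."""
    if name not in TOOL_REGISTRY:
        raise ValueError(f"Model requested unknown tool: {name}")
    arguments = json.loads(arguments_json)  # safe parsing, never eval()
    return TOOL_REGISTRY[name](**arguments)

# Simulate what the API returns in tool_call.function.arguments
result = dispatch_tool_call("search_web", '{"query": "capital of Japan"}')
print(result)  # → Search results for: capital of Japan
```

Keeping dispatch in a registry like this means adding a tool is one schema entry plus one dictionary entry, and unknown tool names fail loudly instead of silently.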
| Feature | ReAct (Prompt Text) | Function Calling (Structured JSON) |
|---|---|---|
| Output Format | `Action: Search('capital of Japan')` | Structured JSON with function + args |
| Parsing Required? | ❌ You parse the text manually | ✅ Handled by OpenAI, LangChain toolkit |
| Execution Clarity | ❌ Model can hallucinate tool syntax | ✅ Only valid, defined tools used |
| Model Adherence | 🟡 You "hope" it follows the format | 🎯 You give it a tool schema (e.g. OpenAPI) |
| Robustness for Development | ❌ Fragile | ✅ Very reliable and scalable |
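The "parsing required" row is easiest to appreciate in code. Here is a minimal sketch of the manual parsing a ReAct-style prompt forces on you; the `Action: Search('...')` format comes from the table above, while the regex and the `parse_react_action` helper are illustrative assumptions.

```python
import re

# Matches lines like: Action: Search('capital of Japan')
ACTION_RE = re.compile(r"Action:\s*(\w+)\('([^']*)'\)")

def parse_react_action(model_text):
    """Extract (tool, argument) from free-form model output, or None."""
    match = ACTION_RE.search(model_text)
    if match is None:
        # The model drifted from the format -- the fragility the table notes
        return None
    return match.group(1), match.group(2)

output = "Thought: I should look this up.\nAction: Search('capital of Japan')"
print(parse_react_action(output))  # → ('Search', 'capital of Japan')
```

A single stray quote or rephrased "Action:" line breaks this parser, whereas function calling hands you validated JSON against a schema you declared.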
Source 1: Aishwarya Naresh’s Substack
Source 2: A Visual Guide to Reasoning LLMs
Source for the perspective "Engineering Wrappers around LLMs": Aishwarya Naresh's Substack
Before MCP:
After MCP:
Source of the amazing images: Norah Sakal Blog Post
Source of the amazing image: Hirusha Fernando Medium Article