LangChain - From Simple Prompts to Autonomous Agents

As large language models (LLMs) like OpenAI’s GPT-4 continue to evolve, so do the frameworks and techniques that make them easier to use and integrate into real-world applications. Whether you're building a chatbot, automating document analysis, or creating intelligent agents that can reason and use tools, understanding how to interact with LLMs is key. This post walks through a practical journey of using both the OpenAI API and LangChain — exploring everything from basic prompt engineering to building modular, structured, and even parallelized chains of functionality.

Sending Basic Prompts with OpenAI and LangChain

The first step in any LLM-powered app is learning how to send a prompt and receive a response.

Using OpenAI API directly:

import openai  # this example targets the pre-1.0 openai SDK

openai.api_key = "your-api-key"  # better: read from the OPENAI_API_KEY env var

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain quantum computing in simple terms."}
    ]
)

print(response['choices'][0]['message']['content'])

Using LangChain with OpenAI under the hood:

from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

chat = ChatOpenAI(model_name="gpt-4")
response = chat.invoke([HumanMessage(content="Explain quantum computing in simple terms.")])
print(response.content)

LangChain abstracts away boilerplate while enabling advanced functionality.

Streaming and Batch Processing with LangChain

LangChain simplifies both streaming and batch processing:

Streaming Responses:

from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

chat = ChatOpenAI(
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
    model_name="gpt-4"
)

chat.invoke([HumanMessage(content="Tell me a long story about a brave cat.")])

Batch Processing:

messages = [
    [HumanMessage(content="What is AI?")],
    [HumanMessage(content="Define machine learning.")],
]

responses = chat.batch(messages)
for r in responses:
    print(r.content)

Iterative Prompt Engineering

Prompt engineering is not a one-and-done task. It's an iterative process of experimentation and improvement.

Start simple:

"Summarize this article."

Then refine:

"Summarize this article in bullet points, emphasizing key technical insights and potential implications for developers."

Observe results. Adjust tone, structure, examples, or context as needed. LangChain allows quick iteration by swapping prompt templates or changing message context.
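
A rough sketch of that loop, assuming article_text holds your input text: run each prompt variant through the same model and compare the outputs side by side.

from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

llm = ChatOpenAI(model_name="gpt-4")

prompts = [
    "Summarize this article: " + article_text,
    "Summarize this article in bullet points, emphasizing key technical "
    "insights and implications for developers: " + article_text,
]

# Run each variant and compare what changes in the output
for p in prompts:
    print(llm.invoke([HumanMessage(content=p)]).content)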

Prompt Templates for Reuse and Abstraction

LangChain provides prompt templates to create reusable, parameterized prompts.

from langchain.prompts import ChatPromptTemplate

template = ChatPromptTemplate.from_template("Translate '{text}' to {language}")
prompt = template.format_messages(text="Hello", language="Spanish")
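
The formatted messages can go straight to the chat model from earlier:

response = chat.invoke(prompt)
print(response.content)  # e.g. "Hola"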

This modularity is essential as your application grows more complex.

LangChain Expression Language (LCEL)

LCEL enables you to compose reusable, declarative chains like functional pipelines.

from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser

prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
llm = ChatOpenAI(model="gpt-4")
parser = StrOutputParser()

chain = prompt | llm | parser
print(chain.invoke({"topic": "AI"}))

You can compose chains in a clean, modular way using LCEL's pipe operator.
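
Chains also compose with other chains. A small sketch, with a made-up follow-up prompt that consumes the joke chain's output:

explain_prompt = ChatPromptTemplate.from_template("Explain why this joke is funny: {joke}")

# The joke chain's string output is piped in as the {joke} variable
composed = {"joke": chain} | explain_prompt | llm | parser
print(composed.invoke({"topic": "AI"}))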

Custom Runnables for Extensibility

Sometimes, you need to insert custom logic into a chain. LangChain allows this with custom runnables.

from langchain_core.runnables import RunnableLambda

def uppercase(text: str) -> str:
    return text.upper()

uppercase_runnable = RunnableLambda(uppercase)

# Place the custom step after the parser so it receives a plain string
chain = prompt | llm | parser | uppercase_runnable
print(chain.invoke({"topic": "AI"}))

Perfect for injecting business logic or data preprocessing into a flow.

Composing Chains and Running in Parallel

Chains can be composed to run sequentially or in parallel:

Parallel example:

from langchain.schema.runnable import RunnableParallel

english_prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic} in English")
spanish_prompt = ChatPromptTemplate.from_template("Cuéntame un chiste sobre {topic} en español")

# Each branch is a complete chain; both receive the same input dict
parallel_chain = RunnableParallel({
    "english": english_prompt | llm | parser,
    "spanish": spanish_prompt | llm | parser,
})

result = parallel_chain.invoke({"topic": "cats"})
print(result)  # {"english": "...", "spanish": "..."}

This is great for multi-lingual output, comparison tasks, or speeding up multiple independent calls.

Understanding Chat Message Types

Working with system, user, and assistant roles allows for nuanced conversations.

messages = [
    {"role": "system", "content": "You are a kind tutor."},
    {"role": "user", "content": "Help me understand Newton's laws."}
]

You can experiment with few-shot examples, chain-of-thought reasoning, or tightly controlling behavior via the system message.
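
In LangChain, the same roles map to message classes. A small few-shot sketch (the worked example is invented for illustration):

from langchain.schema import SystemMessage, HumanMessage, AIMessage

messages = [
    SystemMessage(content="You are a kind tutor."),
    # One worked example steers the style of later answers
    HumanMessage(content="Help me understand gravity."),
    AIMessage(content="Imagine dropping a ball: gravity is the gentle pull that brings it down."),
    HumanMessage(content="Help me understand Newton's laws."),
]

print(chat.invoke(messages).content)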

Storing Messages: Conversation History for Chatbots

Use LangChain’s ConversationBufferMemory to track chat history:

from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationChain

memory = ConversationBufferMemory()
conversation = ConversationChain(llm=chat, memory=memory)

conversation.predict(input="Hello!")
conversation.predict(input="Can you remember what I just said?")

This enables persistent, context-aware chatbot behavior.
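
You can inspect what has been stored; ConversationBufferMemory keeps the running transcript in its buffer attribute:

print(memory.buffer)
# Human: Hello!
# AI: ...
# Human: Can you remember what I just said?
# AI: ...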

Structured Output from LLMs

LangChain helps enforce response schemas. The key is to embed the parser's format instructions in the prompt so the model knows the expected JSON shape:

from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel

class Info(BaseModel):
    topic: str
    summary: str

parser = PydanticOutputParser(pydantic_object=Info)

info_prompt = ChatPromptTemplate.from_template(
    "Give me information about {topic}.\n{format_instructions}"
).partial(format_instructions=parser.get_format_instructions())

chain = info_prompt | llm | parser
result = chain.invoke({"topic": "cloud computing"})
print(result.topic, result.summary)

You get structured, type-safe data instead of freeform text.

Analyzing and Tagging Long Documents

LangChain supports splitting and analyzing long documents:

from langchain.text_splitter import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = splitter.split_text(long_text)

# Process each chunk with a summarization chain

Apply tagging, summarization, sentiment analysis, and more at scale.
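
A minimal sketch of that processing step, reusing the LCEL pieces from earlier (the summary prompt is illustrative):

summary_prompt = ChatPromptTemplate.from_template("Summarize this passage:\n\n{chunk}")
summarize_chain = summary_prompt | llm | parser

# batch() runs the chunks through the chain concurrently
summaries = summarize_chain.batch([{"chunk": c} for c in chunks])
combined = "\n".join(summaries)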

Augmenting LLMs with Custom Tools

To overcome the limits of LLMs, you can give them access to tools like search, databases, or calculators.

from langchain.agents import load_tools, initialize_agent

# serpapi needs a SERPAPI_API_KEY; llm-math gives the agent a calculator
tools = load_tools(["serpapi", "llm-math"], llm=chat)
agent = initialize_agent(tools, chat, agent="zero-shot-react-description", verbose=True)

agent.run("What is the weather in Singapore and what is 3*7?")

LLMs can now act based on real-world data and logic.

Creating Autonomous Agents with Tool Use

Agents go a step further: they reason about when to use tools and how to combine outputs.

LangChain’s agent framework lets you build intelligent systems that think step-by-step and make decisions, improving user experience and application power.
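
A minimal sketch of handing an agent a custom tool (the order-lookup function is invented for illustration):

from langchain.agents import Tool, initialize_agent

def lookup_order(order_id: str) -> str:
    # Stand-in for a real database or API call
    return f"Order {order_id}: shipped, arriving Friday"

tools = [
    Tool(
        name="OrderLookup",
        func=lookup_order,
        description="Looks up the status of an order by its ID.",
    )
]

agent = initialize_agent(tools, chat, agent="zero-shot-react-description", verbose=True)
agent.run("Where is order 1234?")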

Final Thoughts

We started with simple prompts and ended up creating parallelized, structured, tool-augmented LLM pipelines — all thanks to the power of OpenAI's API and LangChain. Whether you're building a smart assistant, document analyzer, or fully autonomous agent, mastering these tools and patterns gives you a strong foundation to push the boundaries of what’s possible with LLMs.