Product Recommendation Engine with LangGraph
May 21, 2025

In this guide, we’ll show you how to leverage LangGraph—a powerful framework for building stateful, multi-actor applications with large language models—to create intelligent recommendation systems that truly understand customer needs. Unlike traditional recommendation engines, LangGraph helps organize complex AI workflows into clear, manageable components.

Building these assistants is usually a messy, complicated process behind the scenes; LangGraph provides a clear and organized way to structure them. Think of it as having a well-defined roadmap for creating smart recommendations that learn and adapt.

Why should you care? Because LangGraph helps us build recommendation systems that are more effective at driving sales, increasing customer satisfaction, and ultimately growing your business. We’ll show you a simple example of how this works and how it can lead to better results for you.

Understanding LangGraph: The Basics

LangGraph is a framework for building stateful, multi-actor applications with LLMs. Built on top of LangChain, it provides a way to orchestrate the flow of information between different components in an LLM application through a graph structure.

What is LangGraph?

At its core, LangGraph is a library that allows you to represent your application’s logic as a directed graph. In this graph:

  • Nodes represent discrete computational units (like LLMs, tools, or custom functions)
  • Edges define the flow of information between these nodes
  • State is maintained throughout the execution, capturing the context of the interaction

This graph-based approach provides several advantages:

1. Modularity: Each node handles a specific task, making your code more maintainable

2. Flexibility: You can easily modify the flow by adding, removing, or rewiring nodes

3. Transparency: The graph structure makes it clear how information flows through your application

4. Reusability: Nodes can be reused across different applications

Key Components of LangGraph

To understand LangGraph, you need to be familiar with a few key components:

1. State

The state represents the information that flows through your graph. It’s typically defined as a TypedDict that specifies what data your nodes will operate on. For example:

class ProductState(TypedDict):
    messages: Annotated[list, add_messages]
    tavily_search: dict
    next_node: str
2. Nodes

Nodes are functions that take the state as input, perform some computation, and return an updated state. A node might:

  • Generate text with an LLM
  • Retrieve information from a database
  • Process user input
  • Make decisions about what to do next

3. Edges

Edges define how data flows between nodes. They can be:

  • Direct edges: Always go from one specific node to another
  • Conditional edges: Use logic to determine which node to go to next

4. StateGraph

This is the main class used to build your graph. You initialize it with your state type, add nodes and edges, and then compile it into a runnable application.

When to Use LangGraph

LangGraph excels in scenarios that require:

1. Multi-step reasoning: Breaking complex tasks into smaller, more manageable steps

2. Stateful conversations: Maintaining context across multiple turns of a conversation

3. Orchestration of multiple components: Coordinating different tools, models, and data sources

4. Dynamic decision-making: Adapting the flow based on user input or intermediate results

5. Persistent memory: Storing and retrieving information across multiple interactions

Specific use cases include:

  • Conversational agents that need to remember previous interactions
  • Agents that need to use multiple tools to complete a task
  • Multi-step workflows that require different processing at each step
  • Systems that need to make decisions about what actions to take next

Common Confusions and Misconceptions

Before diving into our implementation, let’s address some common points of confusion about LangGraph:

1. LangGraph vs. LangChain

While both frameworks are developed by the same team and work well together, they serve different purposes:

  • LangChain is a framework for developing applications powered by language models, providing components for common operations like prompting, memory, and retrieval.
  • LangGraph focuses specifically on the orchestration of these components in a stateful, graph-based structure.

Think of LangChain as providing the building blocks, while LangGraph helps you organize these blocks into a coherent application flow.

2. Stateless vs. Stateful Agents

Traditional LLM applications often use a stateless approach, where each request is processed independently. LangGraph, in contrast, maintains state throughout the execution, allowing for more context-aware responses and multi-turn interactions.

3. Graph Structure vs. Linear Flow

While you might be tempted to create a linear flow of operations, LangGraph’s true power comes from its ability to create non-linear, dynamic flows. This allows your application to adapt to different scenarios and take different paths based on the input or intermediate results.

4. Node Granularity

Deciding how to divide your application into nodes can be challenging. Too many nodes can make your graph complex and hard to manage, while too few can limit flexibility. A good rule of thumb is to create nodes around logical units of work that might need to be reused or modified independently.

Now that we have a solid understanding of LangGraph’s fundamentals, let’s explore our product recommendation system implementation.

Project Overview

For this blog, we are building a product recommendation project to understand the framework in detail. We will build the workflow described below.

1. Greet the user and ask for product details

2. Process the user’s product requirements

3. Search for relevant products using an external API

4. Summarize the search results and present recommendations to the user

This workflow is implemented using LangGraph, with different nodes handling each step of the process. Let’s dive into the implementation details.

Implementation: Step-by-Step Guide

Step 1: Setting Up the Environment

First, let’s install the necessary libraries for the project and set up the environment:

pip install -U langgraph langsmith
pip install -U tavily-python langchain_community
pip install -U langchain-tavily langchain-google-genai
pip install nest-asyncio

Next, we need to import the necessary libraries into the project file:

from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_tavily import TavilySearch
import os, getpass
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage
from langgraph.graph.message import add_messages
from langgraph.graph import StateGraph, START, END
from pydantic import BaseModel, Field
from typing_extensions import TypedDict
from typing import Annotated, Literal, cast

from langgraph.types import Command, interrupt
from langgraph.checkpoint.memory import MemorySaver

import nest_asyncio
nest_asyncio.apply()

This sets up our imports, including:

  • LangChain components for working with Google’s Generative AI and Tavily Search
  • LangGraph components for building our graph
  • Type definitions for our state and response formats
  • Utilities for handling asynchronous code

We also need to set up our API keys:

def _set_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")

_set_env("TAVILY_API_KEY")
_set_env("GOOGLE_API_KEY")

This function checks if the required environment variables are set and prompts for them if they’re not.

Step 2: Defining the State

Next, we define the state that will flow through our graph:

class ProductState(TypedDict):
    messages: Annotated[list, add_messages]
    tavily_search: dict
    next_node: str

Our state consists of:

  • messages: The conversation history
  • tavily_search: Results from our search API
  • next_node: A control variable that determines the next node to execute

Step 3: Setting Up the Tools and Models

We initialize the tools and models we’ll use:

class NodeSelectionResponseFormat(BaseModel):
    node_name: Literal['greetings_node', 'search_node'] = Field(
        description="Name of the node to be activated for further processing."
    )

tavily_search_tool = TavilySearch(
    max_results=5,
    include_answer=True,
    include_raw_content=True,
    time_range="year",
)

llm = ChatGoogleGenerativeAI(
    model="gemini-2.0-flash",
    max_tokens=None,
    timeout=None,
    max_retries=0,
)

Here we:

  • Define a structured output format for our node selection
  • Set up the Tavily Search tool with specific parameters
  • Initialize the Google Generative AI model
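The structured output schema is worth dwelling on: the `Literal` type restricts `node_name` to exactly two values, so a malformed model response fails validation instead of silently routing to a nonexistent node. A quick standalone check (the invalid `'summary_node'` value is made up for illustration):

```python
from typing import Literal
from pydantic import BaseModel, Field, ValidationError

class NodeSelectionResponseFormat(BaseModel):
    node_name: Literal['greetings_node', 'search_node'] = Field(
        description="Name of the node to be activated for further processing."
    )

# A valid node name passes validation
print(NodeSelectionResponseFormat(node_name='search_node').node_name)  # search_node

# Any other value is rejected before it can reach the router
try:
    NodeSelectionResponseFormat(node_name='summary_node')
except ValidationError:
    print("rejected")
```

This is the safety net that makes the conditional routing later in the graph reliable.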

Step 4: Defining the Prompts

Now we define the prompts that will guide our LLM’s behavior in different nodes:

NODE_IDENTIFIER_PROMPT = '''We have built a graph using the LangGraph framework. \
All the graph details are listed below. Based on the user's conversation history, \
you need to identify the node best suited to process the input, and return the \
name of that node only in the response. Apart from the node name, do not return anything \
else in the response.

Node descriptions
- greetings_node: When a user starts the conversation, activate this node. This node will \
ask the user to provide the product details they are looking for.
- search_node: When a user directly provides details about the product they are looking \
for, call the search node directly; there is no need to ask the user for the \
name and product specification again.
'''

GREETINGS_NODE_PROMPT = '''You just need to ask the user to provide details of the product \
they are looking for.
'''

PREPARE_SUMMARY_PROMPT = '''You will be provided with the user's message for filtering products, \
along with details from a few web pages in the search results. You need to prepare a short description \
of the top 5-10 products combined from all the web page details.

While preparing the description of a product, prioritize the user's requirements \
and focus on those along with the other details.

Make sure to highlight the unique properties of all the top products in the description \
for the user.

You should not mention that the products you are providing come from a web search result or \
anything similar. Just say "These are the top products that I found for you" or \
something similar.
'''

These prompts define the behavior of our nodes:

  • NODE_IDENTIFIER_PROMPT: Helps determine which node to activate based on user input
  • GREETINGS_NODE_PROMPT: Guides the greeting behavior
  • PREPARE_SUMMARY_PROMPT: Instructs how to summarize search results

Step 5: Implementing the Nodes

Now let’s implement the core functions for each node in our graph:

Node Identifier

def node_identifier(state: ProductState):
    model = llm.with_structured_output(NodeSelectionResponseFormat, include_raw=True)
    system_message = SystemMessage(NODE_IDENTIFIER_PROMPT)
    conversations = [system_message] + [*state['messages']]
    # include_raw=True makes invoke() return a dict with 'raw' and 'parsed' keys
    response = model.invoke(conversations)
    return {'next_node': response['parsed'].node_name}

This node:

1. Uses the LLM to analyze the conversation history

2. Determines which node should process the user’s input

3. Returns the name of the next node to execute

Greetings Node

def greetings_node(state: ProductState):
    system_message = SystemMessage(GREETINGS_NODE_PROMPT)
    conversations = [system_message] + [*state['messages']]
    response = cast(AIMessage, llm.invoke(conversations))
    return {
        'messages': [response],
        'next_node': 'user_input_node'
    }

This node:

1. Generates a greeting message using the LLM

2. Adds the greeting to the conversation history

3. Sets the next node to process user input

User Input Node

def user_input_node(state: ProductState):
    # interrupt() pauses execution; the value supplied when the thread
    # resumes is returned here
    user_reply = interrupt("Please provide feedback:")
    return {'messages': [HumanMessage(content=user_reply)], 'next_node': 'search_node'}

This node:

1. Interrupts the flow to collect user input

2. Adds the user’s reply to the conversation history and directs the flow to the search node

Search Node

def search_node(state: ProductState):
    search_query = state['messages'][-1].content
    search_result = tavily_search_tool.invoke({'query': search_query})

    raw_content = ""
    for d in search_result['results']:
        raw_content += "Page Content\n"
        raw_content += f"URL: {d['url']}\n"
        raw_content += f"Title: {d['title']}\n"
        raw_content += f"Raw Content: {d['raw_content']}\n"

    return {'tavily_search': raw_content}

This node:

1. Extracts the search query from the last message

2. Invokes the Tavily search tool to find relevant products

3. Formats the search results for further processing

4. Adds the results to the state

Summary Node

def prepare_summary_node(state: ProductState):
    system_message = SystemMessage(PREPARE_SUMMARY_PROMPT)
    conversations = [system_message] + [*state['messages']] + [HumanMessage(content=state['tavily_search'])]
    response = cast(AIMessage, llm.invoke(conversations))
    return {'messages': [response]}

This node:

1. Combines the conversation history with search results

2. Uses the LLM to summarize the search results

3. Adds the summary to the conversation history

Conditional Router

def call_next_node(state: ProductState) -> Literal['greetings_node', 'search_node']:
    return state['next_node']

This function:

1. Extracts the next node name from the state

2. Returns it to determine the next step in the flow

Step 6: Building the Graph

Now we assemble our nodes into a complete graph:

graph = StateGraph(ProductState)

graph.add_node('node_identifier', node_identifier)
graph.add_node('greetings_node', greetings_node)
graph.add_node('user_input_node', user_input_node)
graph.add_node('search_node', search_node)
graph.add_node('prepare_summary_node', prepare_summary_node)

graph.add_edge(START, 'node_identifier')
graph.add_conditional_edges('node_identifier', call_next_node)
graph.add_edge('greetings_node', 'user_input_node')
graph.add_edge('user_input_node', 'search_node')
graph.add_edge('search_node', 'prepare_summary_node')
graph.add_edge('prepare_summary_node', END)

app = graph.compile(checkpointer=MemorySaver())

Here we:

1. Create a new StateGraph with our ProductState

2. Add each node to the graph

3. Add edges to define the flow between nodes

4. Set the finishing point of our graph

5. Compile the graph into a runnable application

Step 7: Running the Application

Finally, we create functions to run our application:

thread = {"configurable": {"thread_id": "1"}}

def stream_graph_updates(user_input: str):
    for event in app.stream({"messages": [HumanMessage(user_input)]}, thread, stream_mode="updates"):
        # Skip interrupt events; only print node updates that added messages
        if '__interrupt__' not in event:
            for value in event.values():
                if 'messages' in value:
                    print(f"AI: {value['messages'][-1].content}")
                    print("------------------------------------------------")

while True:
    user_input = input("User: ")
    if user_input.lower() in ["quit", "exit", "q"]:
        print("Goodbye!")
        break
    stream_graph_updates(user_input)

This code:

1. Sets up a thread for our conversation

2. Creates a function to stream updates from our graph

3. Implements a simple console interface for interacting with our agent

Sample Conversations

Here are some sample conversations between the user and the agent:

Message – 1

User: Hello

AI: Hello! How can I help you today? What are you looking to find or learn about?

Message – 2

User: Find me a mobile with best display quality and performance in india.

AI: Based on the information I gathered, here’s a summary of the top mobile phones in India known for their display quality and performance:

1. iQOO 13: This phone is a powerhouse, driven by the Snapdragon 8 Elite processor and an impressive AnTuTu score. It features a 6.82-inch LTPO AMOLED display with a 144Hz refresh rate, superior thermal management, and a versatile triple 50MP camera system, making it perfect for gaming enthusiasts and power users.

2. Realme GT 7 Pro: Offering exceptional value, the Realme GT 7 Pro is equipped with the Snapdragon 8 Elite processor and delivers remarkable performance. Its 6.78-inch LTPO AMOLED display with a 120Hz refresh rate ensures smooth visuals, while the 120W charging capability powers the 5,800mAh battery quickly. It’s ideal for users seeking flagship performance at a competitive price.

3. OnePlus 13: The OnePlus 13 stands out with its impressive display and overall performance. It runs on the Snapdragon 8 Elite chipset and delivers excellent AnTuTu and Geekbench scores. It also excels in gaming tests, maintaining high average FPS and achieving the best temperature control among tested phones.

4. OPPO Find X8 Pro: Showcasing MediaTek’s Dimensity 9400 chipset, the OPPO Find X8 Pro achieves a remarkable AnTuTu score. It features 16GB of RAM and an advanced cooling system for consistent performance. The device excels in photography with its advanced periscope telephoto lens and a 6.78-inch LTPO AMOLED display with Dolby Vision support.

5. Vivo X200 Pro: Representing the pinnacle of mobile photography innovation, the Vivo X200 Pro is powered by the MediaTek Dimensity 9400 chipset. Its advanced camera system, developed with Zeiss, features a revolutionary variable telephoto lens. The phone also has a 6.8-inch LTPO AMOLED display with a 144Hz refresh rate and a massive 6,000mAh battery.

6. Samsung Galaxy S25 Ultra: With its Snapdragon 8 Gen 3 processor, the S25 Ultra offers a comprehensive flagship experience. Its 6.8-inch Dynamic LTPO AMOLED display provides exceptional brightness, while the 200MP main sensor leads its versatile quad-camera setup. The S Pen integration and impressive battery life make it perfect for productivity-focused users.

7. iPhone 16 Pro Max: This flagship demonstrates exceptional performance with its A18 Pro chip and Super Retina XDR display with ProMotion technology. The upgraded 48MP main sensor provides enhanced low-light photography capabilities, making it ideal for Apple ecosystem users and photography enthusiasts.

8. Xiaomi 14: Equipped with the Snapdragon 8 Gen 3, the Xiaomi 14 achieves a high AnTuTu score. Its compact 6.36-inch AMOLED display offers a 120Hz refresh rate in a sleek design, and the Leica-tuned 50MP main camera delivers natural colors and sharp details. It’s great for users seeking flagship performance in a more compact form factor.

9. Samsung Galaxy Z Fold 6: This foldable flagship features the Snapdragon 8 Gen 3 and a 7.6-inch inner display for excellent multitasking. Enhanced durability, a refined camera system, and productivity features make it stand out in the foldable category, perfect for multitasking professionals.

10. Poco F6: Despite being a year old, the Poco F6 still holds its ground as one of the fastest phones in its segment, thanks to the Snapdragon 8s Gen 3 chipset. It consistently delivers higher average gaming FPS and runs cooler than some of the latest releases.

Message – 3

User: Find me a best laptops in india from lenovo company

AI: Here’s a summary of the top Lenovo laptops in India, designed to cater to various needs and budgets:

1. Lenovo IdeaPad Slim 3: Starting at ₹49,550, this laptop is designed for students and professionals needing a balance of performance and portability. It features up to a 13th Gen Intel Core i7 processor, 16GB RAM, and a fast 512GB SSD. The FHD IPS display ensures clear visuals for both work and entertainment.

2. Lenovo V15: Priced from ₹33,949, the V15 is a budget-friendly option that doesn’t compromise on performance. It comes with an AMD Ryzen 5 processor, 8GB RAM, and a 512GB SSD, making it suitable for everyday tasks and business applications.

3. Lenovo ThinkPad E14: At ₹85,800, this laptop is a high-performance business solution with a 12th Gen Intel Core i7 processor, 16GB RAM, and a 512GB SSD. It offers a durable build, backlit keyboard, and fingerprint reader for enhanced security, ideal for professionals on the go.

4. Lenovo Yoga 9i: This 2-in-1 convertible laptop, powered by a 13th Gen Intel Core i7 processor and featuring a stunning 4K OLED touchscreen, is perfect for creative tasks and professional use. It includes 16GB RAM and a 1TB SSD, offering both performance and ample storage.

5. Lenovo LOQ: Starting at ₹69,990, the LOQ series is aimed at gamers and content creators, equipped with a 12th Gen Intel Core i5 processor, NVIDIA RTX 3050 graphics, and a 144Hz FHD display for smooth gaming visuals.

6. Lenovo Legion Pro 5: Priced at ₹197,951, this powerhouse gaming laptop features an Intel Core i9 processor, NVIDIA RTX 4070 graphics, and a 240Hz WQXGA display for an immersive gaming experience.

These laptops cover a wide range of use cases, from everyday productivity to high-performance gaming and creative tasks, ensuring there’s a Lenovo laptop for every need and budget in India.

Message – 4

User: q

Goodbye!

Understanding the Flow

Let’s walk through how our graph functions in practice:

1. When a user sends a message, it starts at the node_identifier

2. The node_identifier determines whether this is a new conversation that needs a greeting or a product search request

3. If it’s a new conversation, the flow goes to the greetings_node, which generates a greeting and asks for product details

4. After greeting, the flow moves to the user_input_node to collect the user’s response

5. Once the user provides product details, the flow moves to the search_node

6. The search_node searches for relevant products and passes the results to the prepare_summary_node

7. The prepare_summary_node summarizes the search results and presents recommendations to the user

8. The flow then ends, ready for another cycle

This flow provides a structured, maintainable way to handle the conversation and search process.

Benefits of Our Approach

Our implementation offers several advantages:

1. Modularity: Each node handles a specific part of the process, making the code easy to understand and maintain

2. Flexibility: We can easily modify or extend the flow by adding new nodes or changing the connections between them

3. Conversational Intelligence: The system can understand when a user is starting a conversation vs. directly asking for products

4. Efficient Search Processing: By separating search and summarization, we can optimize each step independently

5. Maintainable State: The graph maintains context throughout the conversation, allowing for more coherent interactions

Conclusion

LangGraph transforms how businesses connect customers with products through intelligent recommendations. By structuring AI workflows into clear, manageable components, you can create recommendation systems that truly understand customer intent, leading to higher conversion rates and increased customer satisfaction.

The implementation we’ve explored demonstrates LangGraph’s power in creating stateful, conversational product recommendations that adapt to user needs. This approach provides several key advantages over traditional systems:

  • Enhanced personalization through maintained context across interactions
  • Greater flexibility with modular components you can easily update
  • Improved customer experience through conversational, human-like recommendations
  • Better maintainability with clearly defined node responsibilities

As AI continues evolving, structured frameworks like LangGraph will be essential for businesses looking to gain competitive advantage. Whether you’re optimizing e-commerce sales, enhancing customer support, or building any AI-powered solution, LangGraph’s organized approach helps you create robust, adaptable systems that drive business growth.

Ready to implement LangGraph for your product recommendations? Contact our team today to build your custom LangGraph solution and start delivering smarter recommendations that boost your bottom line.

Frequently Asked Questions

What is the difference between LangGraph and LangChain?

LangChain provides the building blocks for LLM applications, while LangGraph focuses on organizing these components into stateful, graph-based workflows. Think of LangChain as providing the individual parts, and LangGraph as the blueprint for connecting them effectively.

When should you use LangGraph for product recommendations?

Use LangGraph when you need personalized recommendations that require contextual understanding, multi-step reasoning, or when your recommendation system needs to maintain state across user interactions. It’s ideal for complex e-commerce scenarios where standard recommendation algorithms fall short.

How does LangGraph maintain state across interactions?

LangGraph maintains state through a structured TypedDict that persists throughout the execution flow. This allows your recommendation system to remember previous interactions, user preferences, and intermediate results as users navigate through your product catalog.

Can LangGraph integrate with existing recommendation engines?

Yes! LangGraph can complement existing recommendation engines by adding conversational capabilities and contextual understanding. You can integrate traditional collaborative filtering or content-based recommendations as nodes within your LangGraph workflow.

What should you consider when running LangGraph in production?

Consider API latency when using external LLMs, manage state size to prevent memory issues, and implement proper error handling for network failures. For production systems, use checkpointing and implement retry logic to ensure recommendation continuity even during interruptions.
