LangGraph agent tool use and visualisation

By now, I hope we know what LangGraph is. If not: it is a framework for building highly customisable AI agent software.

But here, I want to talk specifically about push_ui_message and the crazy things we can do with this deceptively simple helper.

We already know what agents are and how they use tools to achieve user goals. Tools usually return raw JSON. With push_ui_message, we can push that raw JSON to the client, and the client can use it to build a beautiful UI component. I will show how this can be achieved in this post.

The Black Box Experience

When we run the agent, the user experience looks like this:

response = graph.invoke({
    "messages": [{"role": "user", "content": "What's the weather in SanFrancisco?"}]
})

User sees: Loading… (for several seconds)

Then suddenly: “The weather in San Francisco is sunny and 72°F”.

The user has no visibility into what’s happening. They don’t know if the agent is stuck or working. We (as engineers) know the agent is working as expected, but from the user’s perspective it’s just a black box. They send a message and wait… with no feedback.

Let’s see how we can use push_ui_message to solve this.

After each tool call, we will push the raw tool response to the client over SSE:

import json
import uuid
from typing import Annotated, Sequence

from langchain_core.messages import AIMessage
from langchain_core.tools import tool
from langgraph.graph import MessagesState
from langgraph.graph.ui import AnyUIMessage, ui_message_reducer, push_ui_message
from langgraph.prebuilt import ToolNode

class State(MessagesState):
    ui: Annotated[Sequence[AnyUIMessage], ui_message_reducer]

# Define a mock tool that returns JSON
@tool
def get_weather(location: str) -> str:
    """Get the current weather for a location."""
    return json.dumps({
        "location": location,
        "temperature": 72,
        "condition": "sunny"
    })

# The tool list used by ToolNode below
tools = [get_weather]

def call_model(state: State):
    response = llm.invoke(state["messages"])
    return {"messages": [response]}

def call_tools(state: State):
    # Execute tools
    tool_node = ToolNode(tools)
    result = tool_node.invoke(state)

    # Push tool responses with AIMessage
    for msg in result["messages"]:
        tool_name = msg.name
        raw_tool_response = msg.content

        # Create AIMessage with unique ID
        uuid_message_id = str(uuid.uuid4())
        message = AIMessage(
            id=uuid_message_id,
            content=raw_tool_response,
        )

        # Push UI message with parsed JSON
        push_ui_message(
            name=tool_name,
            props={"content": json.loads(raw_tool_response)},
            message=message
        )

    # Return the tool messages so they are appended to the conversation state
    return {"messages": result["messages"]}
# ...Rest of the code

graph = builder.compile()

This means users get real-time visibility into tool outputs. The raw API response appears in their UI immediately, even while the agent is still processing that data to formulate its answer.

Here is a mental model that worked for me

Imagine a user asks to see their orders. The agent calls the get_orders tool, and the moment it gets data back from the tool, we push that raw JSON to the frontend. The UI instantly renders a clean, interactive table showing all their orders. Meanwhile, the LLM is still working on crafting its natural language response. The result? Users can interact with their data immediately through the table, while also getting a conversational summary moments later.

Now let’s see how we can use these responses to render a UI component.

We are passing the tool name, props, and message to the client, and we will use this information to render the UI component.
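
A rough sketch of the shape each UI message arrives in on the client (the field names beyond name and props are my reading of the SDK types, so treat them as an assumption):

// Approximate shape of a streamed UI message on the client.
type ToolUIMessage = {
  id: string;                          // unique id for this UI message
  name: string;                        // the tool name, e.g. "get_weather"
  props: { content: unknown };         // the parsed JSON we pushed
  metadata?: { message_id?: string };  // ties the UI message to its AIMessage
};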

The Client Side: Rendering UI Components

Now that we’re pushing structured data, let’s render it:

const ToolUIRenderer = ({ uiMessage }: { uiMessage: AnyUIMessage }) => {
  const { name, props } = uiMessage;

  const componentMap = {
    'get_weather': WeatherCard,
    'get_orders': OrdersTable,
    'search_products': ProductGrid,
  };

  const Component = componentMap[name as keyof typeof componentMap];
  return Component ? <Component data={props.content} /> : null;
};

const WeatherCard = ({ data }) => (
  <div className="weather-card">
    <h3>{data.location}</h3>
    <div>{data.temperature}°F - {data.condition}</div>
  </div>
);
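
The componentMap also references OrdersTable and ProductGrid; they follow the same pattern. Here is a sketch of OrdersTable, assuming a hypothetical get_orders payload of the form { orders: [{ id, item, status }] }:

// Assumed payload shape: { orders: [{ id, item, status }] } (hypothetical)
const OrdersTable = ({ data }) => (
  <table className="orders-table">
    <thead>
      <tr><th>Order</th><th>Item</th><th>Status</th></tr>
    </thead>
    <tbody>
      {data.orders.map((order) => (
        <tr key={order.id}>
          <td>{order.id}</td>
          <td>{order.item}</td>
          <td>{order.status}</td>
        </tr>
      ))}
    </tbody>
  </table>
);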

Streaming UI Messages

On the client, we consume the run’s event stream and render each UI message as soon as it arrives:

const stream = await client.runs.stream(thread_id, "agent", {
  input: { messages: [{ role: "user", content: userMessage }] }
});

for await (const chunk of stream) {
  if (chunk.event === "ui/partial") {
    renderToolUI(chunk.data);  // Render UI component instantly
  }
}
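
The renderToolUI helper is left abstract above. Here is a minimal sketch, assuming a React 18 root and the ToolUIRenderer from earlier (the tool-ui element id is made up for this example):

import { createRoot } from "react-dom/client";

// Collect UI messages as they stream in and re-render the whole list.
const root = createRoot(document.getElementById("tool-ui")!);
const uiMessages: AnyUIMessage[] = [];

const renderToolUI = (uiMessage: AnyUIMessage) => {
  uiMessages.push(uiMessage);
  root.render(
    <>
      {uiMessages.map((m) => (
        <ToolUIRenderer key={m.id} uiMessage={m} />
      ))}
    </>
  );
};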

Real-World Example

User: “Show me my pending orders”

What happens:

  1. Agent calls get_orders → User instantly sees interactive table
  2. Agent formulates response → User sees: “You have 3 pending orders”

Users get actionable data through UI components immediately, then context through natural language.
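
Inside the stream loop from earlier, that ordering shows up as two kinds of chunks. A sketch, assuming the run also streams message events (renderAssistantText is a hypothetical helper for the text side):

for await (const chunk of stream) {
  if (chunk.event === "ui/partial") {
    renderToolUI(chunk.data);         // step 1: the table appears instantly
  } else if (chunk.event === "messages/partial") {
    renderAssistantText(chunk.data);  // step 2: the summary streams in
  }
}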

Why This Matters

Traditional chatbots make you choose: structured data (boring) or conversational (slow). With push_ui_message, you get both:

  • Instant feedback — No more black box
  • Rich interactions — Tables, charts, cards
  • Conversational context — LLM still provides summaries

Don’t just build chatbots. Give them life.


What’s your experience with agents? Reply and let me know what you’re building.