CUGA Agent SDK

CugaAgent

The main entry point for interacting with the CUGA agent.

The CugaAgent class provides a simple, high-level interface for creating and running CUGA agents. It wraps the underlying LangGraph execution complexity, offering an easy way to manage tools, handle conversations, and integrate policies.

Model Configuration

CUGA supports multiple LLM providers (OpenAI, IBM WatsonX, Azure OpenAI, Groq, OpenRouter, and more). Configure the model provider and settings using environment variables in a .env file, or override programmatically by passing a model parameter to CugaAgent.

For detailed configuration instructions and all supported providers, see the Model Configuration documentation.

Initialization

Import

Import the CugaAgent class from cuga and the tool decorator from langchain_core.

from cuga import CugaAgent
from langchain_core.tools import tool

Define Tools

Create the tools your agent will use.

@tool
def get_weather(city: str) -> str:
    """Get weather for a city"""
    return f"Weather in {city}: Sunny, 72°F"

Create Agent

Initialize the CugaAgent with your tools and optional model configuration.

agent = CugaAgent(
    tools=[get_weather],
    # model=... (optional)
)

Parameters

  • tools: The LangChain tools available to the agent
  • model: Optional model override; if omitted, the provider and model are read from the .env configuration
  • tool_provider: Optional custom tool provider; CUGA uses CombinedToolProvider by default

Tool Provider

The tool_provider parameter allows you to customize how tools are provided to the agent. By default, CUGA uses CombinedToolProvider internally. For advanced use cases, you can implement a custom tool provider or use one of the built-in implementations.

For detailed information about tool providers, including built-in implementations and how to create custom providers, see the Tool Provider documentation.

Core Methods

invoke

Run the agent with a message. Returns an InvokeResult containing the answer, tool calls, thread ID, and any errors.

result = await agent.invoke("What is the weather in Tokyo?")
print(result.answer)  # The agent's response
print(result.thread_id)  # Thread ID used for this invocation
# Enable tool call tracking to see which tools were called
result = await agent.invoke(
    "What is 10 + 5?",
    track_tool_calls=True
)
print(result.answer)      # "15" or similar
print(result.tool_calls)  # List of tool calls with metadata

# Each tool call record contains:
# - name: Tool name
# - operation_id: Function name or OpenAPI operationId
# - arguments: Arguments passed to the tool
# - result: Return value
# - app_name: App/service name (if set)
# - duration_ms: Execution time in milliseconds
# - timestamp: ISO timestamp
# - error: Error message (if any)
from langchain_core.messages import HumanMessage, AIMessage

# Multi-turn with list of messages
result = await agent.invoke([
    HumanMessage(content="My name is Alice"),
    AIMessage(content="Hello Alice"),
    HumanMessage(content="What is my name?")
])
print(result.answer)

# Resume execution (e.g. after tool approval)
result = await agent.invoke(
    None,
    thread_id="user-123",
    action_response=approval_response
)

Signature Parameters

  • message: The input; a string, a list of messages, or None when resuming a paused execution
  • thread_id: Thread identifier for multi-turn conversations
  • track_tool_calls: When True, records each tool invocation with metadata
  • action_response: Response used to resume a paused execution (e.g. after tool approval)

InvokeResult

The invoke method returns an InvokeResult object with the following fields:

  • answer: The agent's final response
  • tool_calls: Recorded tool calls (populated when track_tool_calls=True)
  • thread_id: The thread ID used for the invocation
  • error: Error message, if any

Backward Compatibility: str(result) returns the answer, so existing code using the result as a string will continue to work.

stream

Stream the agent's execution step-by-step.

async for state in agent.stream("Analyze this dataset"):
    print(state)

Signature Parameters

  • message: The input to run the agent with

Properties

graph

Access the underlying compiled LangGraph StateGraph.

compiled_graph = agent.graph
# Use the graph directly
await compiled_graph.ainvoke(...)

policies

Access the PoliciesManager. See Policies.

Tool Call Tracking

CUGA provides built-in tool call tracking to help with debugging, observability, and auditing. When enabled, every tool invocation is recorded with detailed metadata.

Enabling Tracking

Pass track_tool_calls=True to the invoke method:

result = await agent.invoke(
    "Get the top account by revenue",
    track_tool_calls=True
)

# Access tracked tool calls
for call in result.tool_calls:
    print(f"Tool: {call['name']}")
    print(f"Arguments: {call['arguments']}")
    print(f"Result: {call['result']}")
    print(f"Duration: {call['duration_ms']}ms")
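Because each record carries fields like app_name and duration_ms, tracked calls are easy to aggregate for quick audits. A minimal sketch, assuming the dict fields documented above (the sample records here are hypothetical):

```python
from collections import defaultdict

# Hypothetical tracked records, in the shape documented above.
tool_calls = [
    {"name": "get_account", "app_name": "crm", "duration_ms": 12.5, "error": None},
    {"name": "get_revenue", "app_name": "crm", "duration_ms": 30.0, "error": None},
    {"name": "notify", "app_name": "mail", "duration_ms": 8.0, "error": "timeout"},
]

# Total execution time per app/service.
totals = defaultdict(float)
for call in tool_calls:
    totals[call["app_name"]] += call["duration_ms"]

# Calls that recorded an error.
failed = [call["name"] for call in tool_calls if call["error"]]

print(dict(totals))  # {'crm': 42.5, 'mail': 8.0}
print(failed)        # ['notify']
```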

Custom Tool Tracking with @tracked_tool

For custom tool providers or when you want to add tracking metadata to your tools, use the @tracked_tool decorator:

from cuga import CugaAgent, tracked_tool
from langchain_core.tools import tool

# Simple usage - just add the decorator
@tracked_tool
def multiply(a: int, b: int) -> int:
    return a * b

# With optional app_name for grouping
@tracked_tool(app_name="calculator")
def add(a: int, b: int) -> int:
    return a + b

# Combine with LangChain @tool decorator
@tool
@tracked_tool(app_name="math")
def divide(a: int, b: int) -> float:
    """Divide two numbers"""
    return a / b

The @tracked_tool decorator automatically captures:

  • name: Function name (also used as operation_id)
  • arguments: Arguments passed to the function
  • result: Return value
  • duration_ms: Execution time in milliseconds
  • timestamp: When the call was made
  • error: Error message if the call failed
  • app_name: Optional grouping (if specified)
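As a rough illustration of how a decorator of this shape can capture those fields, here is a self-contained sketch in plain Python; it is not CUGA's actual implementation:

```python
import functools
import time
from datetime import datetime, timezone

def tracked_tool_sketch(func=None, *, app_name=None):
    """Illustrative tracking decorator: records the metadata fields
    listed above into a `calls` list attached to the wrapper."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            record = {
                "name": fn.__name__,
                "operation_id": fn.__name__,
                "arguments": {"args": args, "kwargs": kwargs},
                "app_name": app_name,
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "result": None,
                "error": None,
            }
            start = time.perf_counter()
            try:
                record["result"] = fn(*args, **kwargs)
                return record["result"]
            except Exception as exc:
                record["error"] = str(exc)
                raise
            finally:
                record["duration_ms"] = (time.perf_counter() - start) * 1000
                wrapper.calls.append(record)
        wrapper.calls = []
        return wrapper
    # Supports both bare @tracked_tool_sketch and @tracked_tool_sketch(app_name=...).
    return decorate(func) if func is not None else decorate

@tracked_tool_sketch(app_name="calculator")
def add(a: int, b: int) -> int:
    return a + b

add(2, 3)
print(add.calls[0]["name"], add.calls[0]["result"])  # add 5
```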

For more details on custom tool providers with tracking, see the Tool Provider documentation.