AI Agents API

The ai module in @wacht/backend provides high-level APIs for orchestrating Wacht's native AI infrastructure from your backend. With it, you can programmatically provision LLM Agents, bind custom runtime Tools, manage contextual Knowledge Bases, and run conversational executions.

import { WachtClient } from "@wacht/backend";

const client = new WachtClient({ apiKey: process.env.WACHT_API_KEY });

AI Agents

Agents define the identity and behavior profiles of the LLMs deployed in your environments.

createAgent(request)

Provisions a new agent profile equipped with specific instructions.
const supportAgent = await client.ai.createAgent({
  name: "support_bot_v2",
  configuration: {
    instructions: "You are a highly capable technical support assistant who replies strictly in JSON.",
    model: "claude-3-5-sonnet",
    visibility: "private"
  }
});
request (CreateAiAgentRequest)

executeAgent(agentName, request)

Runs the agent against new input, either starting a new conversation or continuing an existing one.
const response = await client.ai.executeAgent("support_bot_v2", {
  execution_type: {
    new_message: {
      message: "How do I reset my password?"
    }
  } // to continue an existing execution, pass an execution_id (as a query param, depending on the backend signature)
});

console.log(response.status); // e.g. 'running', 'requires_action', or 'completed'
agentName (string, required): The name or ID of the agent to execute.
request (ExecuteAgentRequest)
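The status values noted above can drive a simple polling loop. A minimal sketch, assuming only the three statuses shown in the comment (the helper and its return values are illustrative, not part of the SDK):

```typescript
// Execution statuses as shown in the example above.
type ExecutionStatus = "running" | "requires_action" | "completed";

// Decide what the caller should do next for a given status.
function nextStep(status: ExecutionStatus): "poll" | "handle_tools" | "done" {
  switch (status) {
    case "running":
      return "poll"; // execution still in flight, check again later
    case "requires_action":
      return "handle_tools"; // the agent is waiting on a tool result
    case "completed":
      return "done";
  }
}
```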
  • client.ai.listAgents(options)
  • client.ai.getAgent(agentId)
  • client.ai.updateAgent(agentId, request)
  • client.ai.deleteAgent(agentId)

Execution Contexts

Execution Contexts maintain the state, history, and active variables of an Agent interacting with a specific user over time. To ensure persistence, you must bind your agent executions to these contextual records.

createExecutionContext(request)

Creates an isolated, stateful execution context for a specific user to interact within.
const ctx = await client.ai.createExecutionContext({
  title: "Support Session - usr_abc",
  system_instructions: "Priority: high. Intent: billing.",
  context_group: "billing_support"
});
request (CreateExecutionContextRequest)

executeAgentInContext(contextId, request)

Executes an agent within a defined stateful context instead of a generic execution thread.
await client.ai.executeAgentInContext(ctx.id, {
  execution_type: {
    new_message: {
      message: "Referencing our last message..."
    }
  }
});
contextId (string, required): The unique identifier of the Execution Context.
request (ExecuteAgentRequest)
  • client.ai.listExecutionContexts(options)
  • client.ai.updateExecutionContext(contextId, request)
  • client.ai.deleteExecutionContext(contextId)
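Each turn of a stateful conversation reuses the same contextId. A minimal sketch of a payload builder matching the request shape shown above (the helper name and the interface subset are illustrative):

```typescript
// Illustrative subset of the execution request shape used above.
interface ExecuteAgentRequest {
  execution_type: { new_message: { message: string } };
}

// Wrap a user message in the request shape expected by
// executeAgentInContext; every turn reuses the same context id.
function newMessage(message: string): ExecuteAgentRequest {
  return { execution_type: { new_message: { message } } };
}

// for (const turn of ["Hi", "Referencing our last message..."]) {
//   await client.ai.executeAgentInContext(ctx.id, newMessage(turn));
// }
```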

Tools

AI Tools define deterministic backend logic that LLM Agents can invoke when needed.

createTool(request)

Defines a backend capability, described by a JSON Schema, that the Agent can invoke.
await client.ai.createTool({
  name: "search_customer_receipts",
  description: "Looks up a customer's stripe receipts.",
  tool_type: "api",
  configuration: {
    input_schema: {
       type: "object",
       properties: {
          email: { type: "string", description: "The customer's email address" }
       },
       required: ["email"]
    }
  }
});
request (CreateToolRequest)
  • client.ai.listTools(options)
  • client.ai.getTool(toolId)
  • client.ai.updateTool(toolId, request)
  • client.ai.deleteTool(toolId)
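When the agent invokes a tool, your backend receives arguments that should be checked against the declared input_schema. A minimal sketch of such a check, covering only the required keys and string-typed properties used in the example above (this is an illustration of validating tool arguments, not the SDK's own logic):

```typescript
// Illustrative subset of the input_schema shape from the example above.
interface InputSchema {
  type: string;
  properties: Record<string, { type: string }>;
  required?: string[];
}

// Return a list of validation errors; an empty list means the input is valid.
function validateToolInput(schema: InputSchema, input: Record<string, unknown>): string[] {
  const errors: string[] = [];
  for (const key of schema.required ?? []) {
    if (!(key in input)) errors.push(`missing required field: ${key}`);
  }
  for (const [key, value] of Object.entries(input)) {
    const prop = schema.properties[key];
    if (prop && prop.type === "string" && typeof value !== "string") {
      errors.push(`field ${key} must be a string`);
    }
  }
  return errors;
}
```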

Agent Integrations

Agents rarely operate in isolation. Integrations bind Tools, Knowledge Bases, and external services (such as Slack) directly to an Agent.

createAgentIntegration(agentId, request)

Binds a specific capability to an Agent.
await client.ai.createAgentIntegration("agt_123", {
  integration_type: "slack",
  name: "Slack Support Channel Bot",
  config: {
    slack_channel_id: "C12345678"
  }
});
agentId (string, required): The unique identifier of the Agent to bind to.
request (CreateAgentIntegrationRequest)
  • client.ai.listAgentIntegrations(agentId, options)
  • client.ai.getAgentIntegration(agentId, integrationId)
  • client.ai.updateAgentIntegration(agentId, integrationId, request)
  • client.ai.deleteAgentIntegration(agentId, integrationId)
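The integration request shown above is plain JSON, so repeated bindings can be built with a small helper. A minimal sketch whose field names mirror the Slack example above (the interfaces and helper are illustrative, not SDK types):

```typescript
// Illustrative shapes matching the Slack integration example above.
interface SlackIntegrationConfig {
  slack_channel_id: string;
}

interface CreateAgentIntegrationRequest {
  integration_type: string;
  name: string;
  config: SlackIntegrationConfig;
}

// Build a Slack integration request for a given channel.
function slackIntegration(name: string, channelId: string): CreateAgentIntegrationRequest {
  return {
    integration_type: "slack",
    name,
    config: { slack_channel_id: channelId },
  };
}

// await client.ai.createAgentIntegration("agt_123", slackIntegration("Support Bot", "C12345678"));
```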

Knowledge Bases

Knowledge Bases allow you to assemble RAG (Retrieval-Augmented Generation) clusters, giving an LLM Agent access to your proprietary documents.

createKnowledgeBase(request)

Provisions an empty knowledge store.
const kb = await client.ai.createKnowledgeBase({
  name: "API Documentation Store",
  description: "Markdown documents covering our REST APIs."
});
request (CreateKnowledgeBaseRequest)

uploadKnowledgeBaseDocument(kbId, file, title?, description?)

Uploads a document directly into the Knowledge Base for vectorization.
// Assuming `req.file` from a traditional express multipart/form-data upload
await client.ai.uploadKnowledgeBaseDocument(
  kb.id, 
  req.file, 
  "Billing API Guide", 
  "Explains the v2 Billing API."
);
kbId (string, required): The unique identifier of the target Knowledge Base.
file (File | Blob | Buffer, required): The binary document to vectorize.
title (string, optional): Overrides the uploaded file's name.
description (string, optional): Context describing the document contents.
  • client.ai.listKnowledgeBases(options)
  • client.ai.getKnowledgeBase(kbId)
  • client.ai.updateKnowledgeBase(kbId, request)
  • client.ai.deleteKnowledgeBase(kbId)
  • client.ai.listKnowledgeBaseDocuments(kbId, options)
  • client.ai.deleteKnowledgeBaseDocument(kbId, documentId)
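Because the file parameter accepts a Buffer, a Node.js backend can upload documents built in memory rather than only multipart uploads. A minimal sketch (the helper name and file content are illustrative, not part of the SDK):

```typescript
// Build an in-memory Buffer suitable for the `file` parameter of
// uploadKnowledgeBaseDocument, e.g. from generated markdown.
function markdownBuffer(markdown: string): Buffer {
  return Buffer.from(markdown, "utf8");
}

// await client.ai.uploadKnowledgeBaseDocument(
//   kb.id,
//   markdownBuffer("# Billing API\n\nExplains the v2 Billing API."),
//   "Billing API Guide",
//   "Explains the v2 Billing API."
// );
```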