AI Agents API
The ai module in @wacht/backend provides high-level APIs to orchestrate Wacht’s native AI infrastructure from your backend environment.
Through this module, you can programmatically provision LLM Agents, bind custom runtime Tools, manage contextual Knowledge Bases, and run conversational interactions.
AI Agents
Agents represent the foundational identity and behavior profiles of the LLMs deployed in your environments.
createAgent(request)
Provisions a new agent profile equipped with specific instructions.
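As a minimal sketch, provisioning an agent might look like the following. The field names in the request body (name, instructions) are assumptions for illustration, not confirmed by the SDK; consult the @wacht/backend types for the exact shape.

```typescript
// Hypothetical request shape — field names are assumptions.
type AgentRequest = {
  name: string;
  instructions: string;
};

const agentRequest: AgentRequest = {
  name: "support-bot",
  instructions: "Answer billing questions concisely and escalate refund requests.",
};

// With an initialized client:
// const agent = await client.ai.createAgent(agentRequest);
```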
executeAgent(agentName, request)
Triggers an execution, either continuing an existing conversation or generating a new system response based on the input.
agentName: The name or ID of the agent to execute.
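A sketch of a stateless execution, under the assumption that the request carries an input field; check the SDK's request type for the real shape.

```typescript
// Hypothetical request shape — the "input" field name is an assumption.
const executeRequest = {
  input: "What is the status of my last order?",
};

// With an initialized client:
// const result = await client.ai.executeAgent("support-bot", executeRequest);
```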
client.ai.listAgents(options)
client.ai.getAgent(agentId)
client.ai.updateAgent(agentId, request)
client.ai.deleteAgent(agentId)
Execution Contexts
Execution Contexts maintain the state, history, and active variables of an Agent interacting with a specific user over time. To ensure persistence, you must bind your agent executions to these contextual records.
createExecutionContext(request)
Creates an isolated, stateful context for a specific user to interact within.
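A sketch of creating a per-user context. The request fields shown here (agentName, userId) are assumptions; a user identifier is the minimum you would expect to bind, but the real shape may differ.

```typescript
// Hypothetical request shape — field names are assumptions.
const contextRequest = {
  agentName: "support-bot",
  userId: "user_42",
};

// With an initialized client:
// const ctx = await client.ai.createExecutionContext(contextRequest);
// ctx.id can then be passed to executeAgentInContext.
```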
executeAgentInContext(contextId, request)
Executes an agent within a defined stateful context instead of a generic execution thread.
contextId: The unique identifier of the Execution Context.
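Continuing a stateful conversation might look like this. The context identifier is hypothetical, and the request's input field is assumed, mirroring the stateless execution call.

```typescript
const contextId = "ctx_abc123"; // hypothetical identifier from a created context

// Hypothetical request shape — the "input" field name is an assumption.
const followUp = { input: "And when will it ship?" };

// With an initialized client:
// const reply = await client.ai.executeAgentInContext(contextId, followUp);
```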
client.ai.listExecutionContexts(options)
client.ai.updateExecutionContext(contextId, request)
client.ai.deleteExecutionContext(contextId)
Tools
AI Tools define deterministic backend logic that LLM Agents can choose to invoke when needed.
createTool(request)
Define a JSON Schema describing a backend capability the Agent can utilize.
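A sketch of a tool definition. The outer request fields (name, description, parameters) are assumptions about the SDK's shape; the parameters value itself is standard JSON Schema describing the inputs the Agent must supply.

```typescript
// Hypothetical request shape — outer field names are assumptions;
// the "parameters" value is plain JSON Schema.
const toolRequest = {
  name: "lookup_order",
  description: "Fetch the current status of a customer order.",
  parameters: {
    type: "object",
    properties: {
      orderId: { type: "string", description: "The order identifier." },
    },
    required: ["orderId"],
  },
};

// With an initialized client:
// const tool = await client.ai.createTool(toolRequest);
```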
client.ai.listTools(options)
client.ai.getTool(toolId)
client.ai.updateTool(toolId, request)
client.ai.deleteTool(toolId)
Agent Integrations
Agents rarely operate in isolation. You can bind specific Tools and Knowledge Bases directly to an Agent via Integrations.
createAgentIntegration(agentId, request)
Bind a specific capability to an Agent.
agentId: The unique identifier of the Agent to bind to.
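A sketch of binding a capability. The identifiers are hypothetical and the request field name (toolId) is an assumption; the SDK may reference Tools and Knowledge Bases differently.

```typescript
const agentId = "agent_123"; // hypothetical identifier

// Hypothetical request shape — a knowledgeBaseId might be used instead,
// depending on the capability being bound.
const integrationRequest = {
  toolId: "tool_456",
};

// With an initialized client:
// await client.ai.createAgentIntegration(agentId, integrationRequest);
```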
client.ai.listAgentIntegrations(agentId, options)
client.ai.getAgentIntegration(agentId, integrationId)
client.ai.updateAgentIntegration(agentId, integrationId, request)
client.ai.deleteAgentIntegration(agentId, integrationId)
Knowledge Bases
Knowledge Bases allow you to assemble RAG (Retrieval-Augmented Generation) clusters, giving an LLM Agent access to your proprietary documents.
createKnowledgeBase(request)
Provision an empty knowledge store.
uploadKnowledgeBaseDocument(kbId, file)
Directly upload a document into the Knowledge Base for vectorization.
kbId: The unique identifier of the target Knowledge Base.
file: The binary representation of the document to vectorize.
An optional title to override the uploaded file’s name.
An optional description providing context on the document contents.
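A sketch of the end-to-end flow: provision a store, then upload a document for vectorization. The request field name (name) and the use of a Blob for the file are assumptions; the SDK may accept a different binary type.

```typescript
// Hypothetical request shape — the "name" field is an assumption.
const kbRequest = { name: "product-docs" };

// With an initialized client:
// const kb = await client.ai.createKnowledgeBase(kbRequest);

// In Node, a binary payload can be built from a buffer read off disk:
// const file = new Blob([await fs.readFile("manual.pdf")]);
// await client.ai.uploadKnowledgeBaseDocument(kb.id, file);
```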
client.ai.listKnowledgeBases(options)
client.ai.getKnowledgeBase(kbId)
client.ai.updateKnowledgeBase(kbId, request)
client.ai.deleteKnowledgeBase(kbId)
client.ai.listKnowledgeBaseDocuments(kbId, options)
client.ai.deleteKnowledgeBaseDocument(kbId, documentId)
