Building the Agent
The agent is your Trik’s implementation. It handles conversations (conversational mode) or executes tools (tool mode).
Conversational Mode
Conversational triks run an LLM agent that receives the conversation via handoff from the main agent. Use wrapAgent() from @trikhub/sdk to create the agent.
Basic Structure
```typescript
import { wrapAgent, transferBackTool, TrikContext } from '@trikhub/sdk';
import { ChatAnthropic } from '@langchain/anthropic';
import { createReactAgent } from '@langchain/langgraph/prebuilt';

export const agent = wrapAgent(async (context: TrikContext) => {
  const model = new ChatAnthropic({
    model: 'claude-sonnet-4-6',
    apiKey: context.config.get('ANTHROPIC_API_KEY'),
  });

  return createReactAgent({
    llm: model,
    tools: [/* your tools */, transferBackTool],
  });
});
```

wrapAgent() accepts a factory function (context: TrikContext) => InvokableAgent and returns a TrikAgent. The factory is called once on first use — the resolved agent is then reused across sessions.
The transferBackTool is a LangChain tool that signals the conversation should be handed back to the main agent. Always include it in your agent’s tool set so the LLM can decide when to transfer back.
Full Example
```typescript
import { wrapAgent, transferBackTool, TrikContext } from '@trikhub/sdk';
import { ChatAnthropic } from '@langchain/anthropic';
import { createReactAgent } from '@langchain/langgraph/prebuilt';
import { tool } from '@langchain/core/tools';
import { z } from 'zod';

// Define your tools
const searchTool = tool(async ({ topic }) => {
  const results = await searchDatabase(topic);
  return JSON.stringify({
    count: results.length,
    articles: results.map(r => ({ id: r.id, title: r.title })),
  });
}, {
  name: 'search',
  description: 'Search for articles by topic',
  schema: z.object({
    topic: z.string().describe('The topic to search for'),
  }),
});

const getDetailsTool = tool(async ({ articleId }) => {
  const article = await fetchArticle(articleId);
  if (!article) return JSON.stringify({ error: 'Article not found' });
  return JSON.stringify({
    title: article.title,
    content: article.body,
    author: article.author,
  });
}, {
  name: 'getDetails',
  description: 'Get full article details by ID',
  schema: z.object({
    articleId: z.string().describe('The article ID (e.g., art-001)'),
  }),
});

// Create the agent
export const agent = wrapAgent(async (context: TrikContext) => {
  const model = new ChatAnthropic({
    model: 'claude-sonnet-4-6',
    apiKey: context.config.get('ANTHROPIC_API_KEY'),
  });

  return createReactAgent({
    llm: model,
    tools: [searchTool, getDetailsTool, transferBackTool],
  });
});
```

How wrapAgent Works
Under the hood, wrapAgent() handles:
- Agent creation — Calls your factory function on first use and caches the result.
- Message history — Maintains per-session conversation history across turns.
- Tool call extraction — Extracts ToolCallRecord[] from LangGraph message history for log template filling.
- Transfer-back detection — Watches for the transfer_back tool call and sets the transferBack flag on the response.
You don’t need to manage any of this yourself. Just return a LangGraph-compatible agent from the factory, and wrapAgent() handles the rest.
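The caching and per-session history described above can be illustrated with a small sketch. This is not the SDK's actual implementation (the names sketchWrapAgent and send are hypothetical), but it shows the two behaviors you can rely on: the factory runs once, and each session keeps its own history.

```typescript
// Conceptual sketch only; not the real wrapAgent internals.
type InvokableAgent = { invoke: (messages: string[]) => Promise<string> };
type Factory = () => Promise<InvokableAgent>;

function sketchWrapAgent(factory: Factory) {
  let cached: Promise<InvokableAgent> | undefined; // created once, then reused
  const histories = new Map<string, string[]>();   // per-session message history

  return {
    async send(sessionId: string, userMessage: string): Promise<string> {
      cached ??= factory();                        // factory runs on first use only
      const agent = await cached;
      const history = histories.get(sessionId) ?? [];
      history.push(userMessage);
      const reply = await agent.invoke(history);   // agent sees the full history each turn
      history.push(reply);
      histories.set(sessionId, history);
      return reply;
    },
  };
}
```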
Pre-built Agent Pattern
If your agent doesn’t need config values at creation time, you can pass a pre-built agent directly:
```typescript
import { wrapAgent, transferBackTool } from '@trikhub/sdk';
import { ChatAnthropic } from '@langchain/anthropic';
import { createReactAgent } from '@langchain/langgraph/prebuilt';

const model = new ChatAnthropic({ model: 'claude-sonnet-4-6' });

const reactAgent = createReactAgent({
  llm: model,
  tools: [/* your tools */, transferBackTool],
});

export const agent = wrapAgent(reactAgent);
```

This is simpler, but the factory pattern is preferred when you need API keys from context.config.
System Prompt Loading
The systemPromptFile field in the manifest tells the gateway where to find your prompt file (resolved
relative to the manifest directory). In your code, you load the file yourself relative to your entry point.
```typescript
import { readFileSync } from 'node:fs';
import { join, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';

const __dirname = dirname(fileURLToPath(import.meta.url));
const systemPrompt = readFileSync(join(__dirname, '../src/prompts/system.md'), 'utf-8');

export default wrapAgent((context: TrikContext) => {
  return createReactAgent({
    llm: model,
    tools,
    messageModifier: systemPrompt, // <-- pass the loaded prompt here
  });
});
```

The systemPromptFile in the manifest and the readFileSync path in code should resolve to the same file. The manifest declaration is for the gateway's reference; the code loading is what your agent actually uses at runtime.
Tool Mode
Tool-mode triks export native tools that the main agent calls directly. There is no handoff, no session, and no LLM — just pure function handlers. Use wrapToolHandlers() from @trikhub/sdk.
Basic Structure
```typescript
import { wrapToolHandlers, TrikContext } from '@trikhub/sdk';

export const agent = wrapToolHandlers({
  async myTool(input: Record<string, unknown>, context: TrikContext) {
    // Process input and return structured output
    return { result: 'value' };
  },
});
```

wrapToolHandlers() accepts a map of handler functions and returns a TrikAgent. Each handler name must match a tool declared in your manifest's tools block.
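Because a mismatch between handler names and the manifest's tools block only surfaces when a tool is called, a small startup check can catch it earlier. The helper below is illustrative, not part of the SDK:

```typescript
// Illustrative check that your handlers cover every tool the manifest declares.
// checkHandlerNames is a hypothetical helper, not an SDK export.
function checkHandlerNames(
  manifestTools: string[],
  handlers: Record<string, unknown>,
): string[] {
  const handlerNames = new Set(Object.keys(handlers));
  // Return the declared tools that have no matching handler.
  return manifestTools.filter((name) => !handlerNames.has(name));
}
```

Running it once at module load (and throwing if it returns anything) turns a runtime surprise into an immediate startup failure.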
Full Example
```typescript
import { wrapToolHandlers, TrikContext } from '@trikhub/sdk';
import { createHash } from 'node:crypto';

export const agent = wrapToolHandlers({
  async computeHash(input: Record<string, unknown>, context: TrikContext) {
    const { text, algorithm } = input as { text: string; algorithm: string };
    const hash = createHash(algorithm).update(text).digest('hex');
    return { hash, algorithm, inputLength: text.length };
  },

  async compareHashes(input: Record<string, unknown>, context: TrikContext) {
    const { hash1, hash2 } = input as { hash1: string; hash2: string };
    const match = hash1 === hash2;
    return { match, hash1, hash2 };
  },
});
```

Each handler receives the validated input (matching the inputSchema from the manifest) and returns an object matching the outputSchema. The gateway fills the outputTemplate with the returned values and passes the result to the main agent.
How wrapToolHandlers Works
1. The main agent calls the tool (e.g., computeHash) with input matching the inputSchema.
2. The gateway validates the input, then calls executeTool(toolName, input, context) on your TrikAgent.
3. wrapToolHandlers() dispatches to the matching handler.
4. The handler returns structured output. The gateway validates it against outputSchema, fills outputTemplate, and returns the result to the main agent.
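The dispatch step boils down to a name-to-handler lookup. A conceptual sketch (makeExecuteTool is a hypothetical name, and the real gateway also performs schema validation around it):

```typescript
// Conceptual sketch of executeTool dispatch; not the actual SDK code.
type Handler = (input: Record<string, unknown>) => Promise<Record<string, unknown>>;

function makeExecuteTool(handlers: Record<string, Handler>) {
  return async function executeTool(toolName: string, input: Record<string, unknown>) {
    const handler = handlers[toolName];
    // The tool name must match a handler key, which is why handler names
    // must line up with the manifest's tools block.
    if (!handler) throw new Error(`Unknown tool: ${toolName}`);
    return handler(input);
  };
}
```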
Using Context
Both modes receive a TrikContext with access to configuration and storage.
Configuration
Access user-configured values (API keys, tokens):
```typescript
// In a factory (conversational)
export const agent = wrapAgent(async (context: TrikContext) => {
  const apiKey = context.config.get('ANTHROPIC_API_KEY');
  const webhookUrl = context.config.get('WEBHOOK_URL');
  // ...
});

// In a tool handler (tool mode)
async myTool(input: Record<string, unknown>, context: TrikContext) {
  const apiKey = context.config.get('SERVICE_API_KEY');
  // ...
}
```

Storage
Use persistent key-value storage:
```typescript
// Store data
await context.storage.set('last-query', { topic: 'AI', timestamp: Date.now() });

// Retrieve data
const lastQuery = await context.storage.get('last-query');

// List keys
const keys = await context.storage.list('user-');

// Delete
await context.storage.delete('last-query');

// Batch operations
await context.storage.setMany({ key1: 'value1', key2: 'value2' });
const values = await context.storage.getMany(['key1', 'key2']);
```

Storage must be enabled in the manifest (capabilities.storage.enabled: true).
Error Handling
Conversational Mode
Errors in your tools are naturally handled by the LLM agent — it sees the error output and responds accordingly. For critical failures in the factory or unexpected errors, the gateway catches them and returns an error response.
```typescript
const riskyTool = tool(async ({ query }) => {
  try {
    const result = await externalAPI.search(query);
    return JSON.stringify({ results: result.items });
  } catch (error) {
    // Return error as tool output — the LLM will handle it gracefully
    return JSON.stringify({ error: 'Search service unavailable. Please try again.' });
  }
}, {
  name: 'search',
  description: 'Search an external service',
  schema: z.object({ query: z.string() }),
});
```

Tool Mode
Throw errors or return error-indicating output. Thrown errors become error responses to the main agent.
```typescript
export const agent = wrapToolHandlers({
  async riskyOperation(input: Record<string, unknown>, context: TrikContext) {
    const { query } = input as { query: string };
    try {
      const result = await externalService.fetch(query);
      return { status: 'success', data: result.summary };
    } catch (error) {
      console.error('Tool error:', error);
      // Option 1: Return an error in the output structure
      return { status: 'error', data: 'Service unavailable' };
      // Option 2: Throw — the gateway will return an error response
      // throw new Error('Service unavailable');
    }
  },
});
```

Best Practices
- Always include transferBackTool — Conversational agents must give the LLM a way to hand back when the user's request is outside their domain.
- Use the factory pattern — Prefer wrapAgent(async (context) => ...) over pre-built agents when you need config values.
- Keep tool handlers focused — Each handler should do one thing well. Complex logic should be extracted into helper functions.
- Return structured output — Tool-mode handlers must return objects matching the declared outputSchema.
- Log errors — Use console.error() for debugging. The gateway captures stderr output.
- Validate early — Check inputs before processing, even though the gateway validates against schemas.
- Handle edge cases — Empty results, missing data, API timeouts, rate limits.
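For the validate-early point, a tool-mode handler can guard its inputs before doing any work. A sketch with plain checks (the function name and the accepted algorithm list are illustrative; in practice you might reuse your zod schemas):

```typescript
// Illustrative early-validation guard for a computeHash-style handler.
function assertComputeHashInput(
  input: Record<string, unknown>,
): { text: string; algorithm: string } {
  const allowed = ['sha256', 'sha512', 'md5']; // assumed allow-list for this example
  const { text, algorithm } = input;
  if (typeof text !== 'string' || text.length === 0) {
    throw new Error('text must be a non-empty string');
  }
  if (typeof algorithm !== 'string' || !allowed.includes(algorithm)) {
    throw new Error(`algorithm must be one of: ${allowed.join(', ')}`);
  }
  return { text, algorithm }; // now safely typed
}
```

Calling this at the top of the handler replaces the unchecked `input as { ... }` cast with a guard that fails fast on malformed input.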
Next: Learn about Testing Locally.