## Component Details
### Platform Adapter

- Handles incoming messages from Telegram (Phase 1) or X (Phase 2)
- Manages platform-specific formatting
- Implements rate limiting and queueing
- Handles media processing (images, files, etc.)
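The rate limiting and queueing behavior can be sketched as a simple capped FIFO: outgoing messages accumulate in a queue and at most a fixed number are released per interval. This is an illustrative sketch, not the adapter's actual implementation; the `SendQueue` name and the drain-per-tick model are assumptions.

```typescript
// Illustrative sketch: a FIFO send queue with a per-interval cap.
// A scheduler would call drain() once per rate-limit window.
class SendQueue {
  private queue: string[] = [];

  constructor(private maxPerInterval: number) {}

  enqueue(message: string): void {
    this.queue.push(message);
  }

  // Release up to maxPerInterval messages; the rest wait for the next window.
  drain(): string[] {
    return this.queue.splice(0, this.maxPerInterval);
  }

  get pending(): number {
    return this.queue.length;
  }
}
```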
### Agent Runtime

- Core processing engine that manages conversation flow
- Implements context window management
- Handles message formatting and tokenization
- Manages agent state and session tracking
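Context window management typically means trimming conversation history so the most recent turns fit a token budget. The sketch below shows that idea under two stated assumptions: the message shape is illustrative, and `countTokensApprox` is a crude whitespace count standing in for a real tokenizer.

```typescript
// Illustrative message shape; the runtime's real types may differ.
interface ChatMessage {
  role: "user" | "assistant" | "system";
  content: string;
}

// Crude stand-in for a real tokenizer: counts whitespace-separated words.
function countTokensApprox(text: string): number {
  return text.split(/\s+/).filter(Boolean).length;
}

// Keep the newest messages whose combined token estimate fits the budget.
function trimToBudget(history: ChatMessage[], budget: number): ChatMessage[] {
  const kept: ChatMessage[] = [];
  let used = 0;
  // Walk from newest to oldest so recent turns survive.
  for (let i = history.length - 1; i >= 0; i--) {
    const cost = countTokensApprox(history[i].content);
    if (used + cost > budget) break;
    kept.unshift(history[i]);
    used += cost;
  }
  return kept;
}
```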
### LLM Provider Integration

- Abstracts away provider-specific API differences
- Implements fallback mechanisms for API failures
- Optimizes prompt construction for different models
- Manages token usage and quota monitoring
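The fallback mechanism can be sketched as trying providers in priority order and returning the first successful response. This is a minimal sketch of the pattern, not the integration layer's actual code; the `Generate` function type is an assumption.

```typescript
// Illustrative provider signature: prompt in, completion text out.
type Generate = (prompt: string) => Promise<string>;

// Try each provider in order; surface the last error only if all fail.
async function generateWithFallback(
  providers: Generate[],
  prompt: string
): Promise<string> {
  let lastError: unknown;
  for (const generate of providers) {
    try {
      return await generate(prompt);
    } catch (err) {
      lastError = err; // Record the failure and try the next provider.
    }
  }
  throw new Error(`All providers failed: ${lastError}`);
}
```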
### State Storage

- Maintains conversation history and agent state
- Implements configurable retention policies
- Provides persistence across container restarts
- Options for local, IPFS, or Arweave storage
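A configurable retention policy can be as simple as pruning entries older than a maximum age. The sketch below shows only that pruning logic; the storage backend (local, IPFS, or Arweave) is abstracted away, and the `StoredEntry` shape is an illustrative assumption.

```typescript
// Illustrative stored record: a timestamp plus opaque payload.
interface StoredEntry {
  timestamp: number; // epoch milliseconds
  data: string;
}

// Keep only entries no older than maxAgeMs relative to `now`.
function applyRetention(
  entries: StoredEntry[],
  maxAgeMs: number,
  now: number
): StoredEntry[] {
  return entries.filter((e) => now - e.timestamp <= maxAgeMs);
}
```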
```typescript
// Agent Runtime Interface
interface AgentRuntime {
  initialize(): Promise<void>;
  handleMessage(message: IncomingMessage): Promise<OutgoingMessage[]>;
  getState(): Promise<AgentState>;
  setState(partial: Partial<AgentState>): Promise<void>;
  shutdown(): Promise<void>;
}
```
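A minimal in-memory implementation can illustrate how the runtime contract is satisfied. The `IncomingMessage`, `OutgoingMessage`, and `AgentState` shapes are not defined in this document, so the versions below are illustrative assumptions, and the interface is repeated to keep the example self-contained.

```typescript
// Assumed message and state shapes (not specified in the document).
interface IncomingMessage {
  text: string;
}
interface OutgoingMessage {
  text: string;
}
interface AgentState {
  messageCount: number;
}

// Repeated from above so this example compiles on its own.
interface AgentRuntime {
  initialize(): Promise<void>;
  handleMessage(message: IncomingMessage): Promise<OutgoingMessage[]>;
  getState(): Promise<AgentState>;
  setState(partial: Partial<AgentState>): Promise<void>;
  shutdown(): Promise<void>;
}

// Toy runtime: echoes messages and tracks how many it has handled.
class EchoRuntime implements AgentRuntime {
  private state: AgentState = { messageCount: 0 };

  async initialize(): Promise<void> {}

  async handleMessage(message: IncomingMessage): Promise<OutgoingMessage[]> {
    this.state.messageCount += 1;
    return [{ text: `echo: ${message.text}` }];
  }

  async getState(): Promise<AgentState> {
    return { ...this.state };
  }

  async setState(partial: Partial<AgentState>): Promise<void> {
    this.state = { ...this.state, ...partial };
  }

  async shutdown(): Promise<void> {}
}
```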
```typescript
// LLM Provider Interface
interface LLMProvider {
  generateResponse(
    prompt: string,
    parameters: {
      temperature: number;
      maxTokens: number;
      topP?: number;
      frequencyPenalty?: number;
      presencePenalty?: number;
    }
  ): Promise<{
    text: string;
    usage: {
      promptTokens: number;
      completionTokens: number;
      totalTokens: number;
    };
  }>;

  countTokens(text: string): number;
}
```