# Local AI is Here: Ollama Integration in OmniMon
Chat with your system using natural language. OmniMon's new AI module supports Ollama for private, local inference with tool calling capabilities.
## Why Local AI Matters
Cloud-based AI providers are powerful, but they come with trade-offs: latency, cost, and privacy. With OmniMon’s new Ollama integration, your system monitoring data never leaves your machine.
## How It Works

OmniMon connects to Ollama running locally on `localhost:11434`. No API key needed — just install Ollama and start chatting.

Default model: `llama3.2` (configurable)
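To make the connection concrete, here is a minimal sketch of talking to Ollama's local `/api/chat` endpoint. The function names are illustrative, not OmniMon's actual internals:

```python
import json
import urllib.request

# Ollama's default local endpoint; no API key is required.
OLLAMA_URL = "http://localhost:11434/api/chat"

def build_chat_request(model: str, user_message: str, system_context: str) -> dict:
    """Assemble a payload in the shape Ollama's /api/chat endpoint expects."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_context},
            {"role": "user", "content": user_message},
        ],
        "stream": False,  # request one complete reply instead of a token stream
    }

def chat(payload: dict) -> str:
    """POST the payload to the local Ollama server and return the reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]

payload = build_chat_request(
    "llama3.2",
    "Why is my RAM usage high?",
    "RAM: 12.4/16 GB (78%), CPU: 37%",
)
```

Swap the model string for any model you have pulled; everything else stays the same.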
The AI module injects real-time system context into every prompt:
- Current RAM usage (GB used, total, percentage)
- CPU utilization percentage
- Top 10 processes by resource consumption (PID, name, memory, CPU)
- Network throughput (RX/TX per second)
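A context block like the one above can be rendered as a compact string and prepended as the system message. The following sketch assumes the metrics have already been collected; `format_system_context` and the `Proc` record are hypothetical names for illustration:

```python
from dataclasses import dataclass

@dataclass
class Proc:
    pid: int
    name: str
    mem_mb: float
    cpu_pct: float

def format_system_context(ram_used_gb: float, ram_total_gb: float,
                          cpu_pct: float, procs: list,
                          rx_kbps: float, tx_kbps: float) -> str:
    """Render live metrics as a compact block the model can reason over."""
    top = sorted(procs, key=lambda p: p.mem_mb, reverse=True)[:10]  # top 10 by memory
    lines = [
        f"RAM: {ram_used_gb:.1f}/{ram_total_gb:.1f} GB ({ram_used_gb / ram_total_gb:.0%})",
        f"CPU: {cpu_pct:.0f}%",
        f"Network: RX {rx_kbps:.0f} KB/s, TX {tx_kbps:.0f} KB/s",
        "Top processes (PID, name, MB, CPU%):",
    ]
    lines += [f"  {p.pid} {p.name} {p.mem_mb:.0f} {p.cpu_pct:.1f}" for p in top]
    return "\n".join(lines)

context = format_system_context(
    12.4, 16.0, 37.0,
    [Proc(501, "chrome", 2048, 12.5), Proc(88, "omnimon", 150, 1.2)],
    rx_kbps=420, tx_kbps=95,
)
```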
## Tool Calling: AI That Acts
OmniMon’s AI doesn’t just observe — it can take action. The tool calling engine supports three operations:
| Tool | Description |
|---|---|
| `kill_process` | Terminate a process by PID |
| `kill_by_name` | Kill all processes matching a name pattern |
| `close_tabs` | Close browser tabs by URL pattern |
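Ollama's `/api/chat` accepts OpenAI-style function schemas in its `tools` field, so the three operations above can be declared roughly like this. The parameter names here are assumptions; OmniMon's exact schemas are not shown:

```python
# OpenAI-style function schemas for Ollama's `tools` field.
TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "kill_process",
            "description": "Terminate a process by PID",
            "parameters": {
                "type": "object",
                "properties": {"pid": {"type": "integer", "description": "Process ID"}},
                "required": ["pid"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "kill_by_name",
            "description": "Kill all processes matching a name pattern",
            "parameters": {
                "type": "object",
                "properties": {"pattern": {"type": "string", "description": "Process name pattern"}},
                "required": ["pattern"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "close_tabs",
            "description": "Close browser tabs by URL pattern",
            "parameters": {
                "type": "object",
                "properties": {"url_pattern": {"type": "string", "description": "URL substring to match"}},
                "required": ["url_pattern"],
            },
        },
    },
]
```

The model returns a `tool_calls` array naming one of these functions with arguments; the client decides whether to execute it.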
Every tool call is validated against OmniMon’s security system:

- Protected process enforcement — Kernel processes, launchd, and system services cannot be killed.
- Hardcoded allowlist — Only approved tools can execute.
- Audit trail — All AI actions are logged.
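The three checks compose naturally into a single gate that every tool call passes through. This is an illustrative sketch of that flow; the identifiers and protected-process list are assumptions, not OmniMon's real internals:

```python
ALLOWED_TOOLS = {"kill_process", "kill_by_name", "close_tabs"}  # hardcoded allowlist
PROTECTED_PROCESSES = {"kernel_task", "launchd", "systemd", "init"}

def validate_tool_call(tool: str, target: str, audit_log: list) -> bool:
    """Return True only if the call passes the allowlist and
    protected-process checks. Every decision, allowed or denied,
    is appended to the audit trail."""
    if tool not in ALLOWED_TOOLS:
        audit_log.append(f"DENY {tool}({target}): tool not allowlisted")
        return False
    if target.lower() in PROTECTED_PROCESSES:
        audit_log.append(f"DENY {tool}({target}): protected process")
        return False
    audit_log.append(f"ALLOW {tool}({target})")
    return True

log = []
validate_tool_call("kill_by_name", "launchd", log)   # denied: protected process
validate_tool_call("reboot", "anything", log)        # denied: not allowlisted
validate_tool_call("kill_by_name", "chrome", log)    # allowed, and logged
```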
## Supported Providers
OmniMon v4.1 supports five AI providers:
| Provider | API Key Required | Runs Locally |
|---|---|---|
| Ollama | No | Yes |
| OpenAI | Yes | No |
| Anthropic | Yes | No |
| Gemini | Yes | No |
| OpenRouter | Yes | No |
All credentials are stored in your OS’s native keychain — never in plaintext.
## Getting Started

1. Install Ollama: `brew install ollama` (macOS) or visit ollama.com
2. Pull a model: `ollama pull llama3.2`
3. Start serving: `ollama serve`
4. Open OmniMon and navigate to the AI Chat panel
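Once `ollama serve` is running, you can confirm the server is reachable before opening the chat panel. A small sketch (the function name is illustrative; Ollama's root endpoint answers plain HTTP on its default port):

```python
import urllib.error
import urllib.request

def ollama_is_up(base_url: str = "http://localhost:11434", timeout: float = 2.0) -> bool:
    """Return True if an Ollama server answers on its default port."""
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused or timed out: the server is not running.
        return False
```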
## Prompt Injection Defense

Every user message passes through OmniMon’s `detectPromptInjection()` filter before being sent to any provider. This prevents malicious prompts from exploiting the tool calling system.
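OmniMon's actual filter rules are not public; as a rough idea of the technique, a heuristic detector scans incoming text for phrases commonly used to override instructions. The patterns below are illustrative assumptions only:

```python
import re

# Hypothetical heuristics; detectPromptInjection()'s real rules are not shown here.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"system prompt",
    r"disregard .* (rules|instructions)",
]

def detect_prompt_injection(message: str) -> bool:
    """Return True if the message matches any known injection phrase."""
    lowered = message.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)
```

A pattern list alone is not a complete defense; it is one layer alongside the allowlist and protected-process checks described above.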