Tags: release, ai, network, reliability

OmniMon v5.0.1 - Smarter AI Chat, Conversation History & Real Network Data

v5.0.1 overhauls the AI chat with conversation history, clickable PIDs, and multi-provider support, and exposes real per-process network telemetry from the Rust backend.

A patch release packed with meaningful improvements. v5.0.1 transforms the AI chat from a single-turn tool into a real assistant with memory, and connects the network view to live backend telemetry.

AI Chat Overhaul

The AI chat received the most attention in this release - 7 commits focused on making it robust, reliable, and genuinely useful.

Conversation history - Messages now persist across turns. The AI remembers what you asked before and can follow up on previous context. History is capped at 10 messages to keep token usage efficient.
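As a rough sketch of how a 10-message cap can work (the function and type names here are illustrative, not OmniMon's actual code):

```typescript
type ChatMessage = { role: "user" | "assistant"; content: string };

const HISTORY_LIMIT = 10; // cap stated in the release notes

// Append a message and keep only the most recent HISTORY_LIMIT entries,
// so every request carries bounded context (and bounded token cost).
function pushMessage(history: ChatMessage[], msg: ChatMessage): ChatMessage[] {
  return [...history, msg].slice(-HISTORY_LIMIT);
}
```

Trimming from the front keeps the newest turns, which is what matters for follow-up questions.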

Multi-provider support - Connect to Ollama, OpenAI, Anthropic, OpenRouter, or Gemini. Switch providers on the fly from the AI Config bar. Default model: meta-llama/llama-3.2-3b-instruct:free via OpenRouter.
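A minimal sketch of model resolution under this setup; only the OpenRouter default model is stated in the notes, so the config shape and override rule are assumptions:

```typescript
type Provider = "ollama" | "openai" | "anthropic" | "openrouter" | "gemini";

// Default model named in the release notes (served via OpenRouter).
const DEFAULT_MODEL = "meta-llama/llama-3.2-3b-instruct:free";

interface AiConfig {
  provider: Provider;
  model?: string; // optional override, e.g. set from the AI Config bar
}

// An explicit model override wins; otherwise fall back to the default.
function resolveModel(cfg: AiConfig): string {
  return cfg.model ?? DEFAULT_MODEL;
}
```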

Clickable PIDs - When the AI mentions a process (e.g., “firefox at PID 2841 is using 3.2GB”), the PID is now a clickable link that opens the process detail modal. No more copy-pasting PIDs.
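One way to implement this kind of linkification, as a sketch (the regex, markup, and `data-pid` attribute are assumptions about the renderer):

```typescript
// Replace "PID 2841"-style mentions in AI output with clickable links
// carrying the PID, which a click handler can use to open the detail modal.
function linkifyPids(text: string): string {
  return text.replace(
    /\bPID (\d+)\b/g,
    (_match, pid) => `<a href="#" class="pid-link" data-pid="${pid}">PID ${pid}</a>`,
  );
}
```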

Dual chat surfaces - The AI Actions bar handles system actions (kill, close tabs, inspect), while the AI Config bar manages settings and alert rules. Each has its own clear tooltip explaining what it does.

Thinking animation - A Claude Code-style blinking dots animation while the AI processes your request, with a Cancel button if it takes too long.

Reliability Fixes

The infamous “error decoding response body” bug is gone. Every instance of unsafe resp.json().await in the Rust backend was replaced with a safe text() + from_str() pattern that handles malformed responses, 5xx errors, and empty bodies gracefully.
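The actual fix lives in Rust, but the same defensive shape can be sketched in TypeScript (function name and error messages are illustrative): read the raw body first, then check status and parse, so every failure mode produces a useful error instead of an opaque decode failure.

```typescript
// Parse a provider response defensively: surface 5xx bodies, reject
// empty bodies, and report malformed JSON with a snippet for debugging.
function parseAiResponse<T>(status: number, body: string): T {
  if (status >= 500) {
    throw new Error(`provider returned ${status}: ${body.slice(0, 200)}`);
  }
  if (body.trim() === "") {
    throw new Error("provider returned an empty body");
  }
  try {
    return JSON.parse(body) as T;
  } catch {
    throw new Error(`malformed JSON from provider: ${body.slice(0, 200)}`);
  }
}
```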

Timeout protection - A 45-second frontend timeout prevents the chat from hanging indefinitely. If a model is slow or unresponsive, you get a clear error message instead of an infinite spinner.
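A generic timeout wrapper in this spirit might look like the following sketch (the helper name and error text are assumptions; only the 45-second figure comes from the notes):

```typescript
const CHAT_TIMEOUT_MS = 45_000; // frontend timeout from the release notes

// Race a promise against a timer; reject with a clear error if the
// model does not respond in time, so the UI can stop the spinner.
function withTimeout<T>(p: Promise<T>, ms: number = CHAT_TIMEOUT_MS): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error(`AI request timed out after ${ms} ms`)),
      ms,
    );
    p.then(
      (value) => { clearTimeout(timer); resolve(value); },
      (err) => { clearTimeout(timer); reject(err); },
    );
  });
}
```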

Debug logging - Comprehensive logging added to every AI request: provider, model, prompt size, response status, and body length. Makes diagnosing issues straightforward.
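The record shape below is an assumption about what such a log line contains; the listed fields come from the notes:

```typescript
// Per-request debug record: provider, model, prompt size, response
// status, and body length, formatted as a single greppable line.
interface AiRequestLog {
  provider: string;
  model: string;
  promptBytes: number;
  responseStatus: number;
  responseBytes: number;
}

function formatAiLog(log: AiRequestLog): string {
  return (
    `[ai] provider=${log.provider} model=${log.model} ` +
    `prompt=${log.promptBytes}B status=${log.responseStatus} body=${log.responseBytes}B`
  );
}
```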

Real Network Telemetry

The network view now uses real per-process data from the Rust backend via Tauri IPC, instead of browser-tab-only estimates. The new get_network_data command exposes:

  • Per-process throughput (bytes in/out)
  • Recent connection events (IP, port, protocol)
  • Real-time updates via IPC bridge

Browser tab data remains available as a fallback whenever backend telemetry can't be reached.
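From the frontend, the command-plus-fallback flow can be sketched like this (the snapshot shape and fallback source are assumptions; only the `get_network_data` command name comes from the notes):

```typescript
interface ProcessNet { pid: number; name: string; bytesIn: number; bytesOut: number }
interface NetworkSnapshot { processes: ProcessNet[] }

// Prefer real backend telemetry over the IPC bridge; fall back to
// browser-tab estimates when the backend call fails.
async function loadNetworkData(
  invoke: (cmd: string) => Promise<NetworkSnapshot>,
  browserFallback: () => NetworkSnapshot,
): Promise<NetworkSnapshot> {
  try {
    return await invoke("get_network_data"); // per-process backend data
  } catch {
    return browserFallback(); // tab-only estimates
  }
}
```

In the app, `invoke` would be Tauri's IPC entry point; passing it as a parameter here keeps the sketch self-contained.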

Smart Tab Management

The close_tabs AI command gained an exclusion mode:

"Close all tabs except GitHub and YouTube"

The AI now understands except patterns, so you can clean up browser clutter without losing important tabs.
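A plausible matcher for the exclusion mode, as a sketch (the case-insensitive substring match over title and URL is an assumption):

```typescript
interface Tab { id: number; title: string; url: string }

// Return the tabs to close: every tab whose title or URL does not
// mention one of the excluded names.
function tabsToClose(tabs: Tab[], except: string[]): Tab[] {
  const keep = except.map((s) => s.toLowerCase());
  return tabs.filter((tab) => {
    const haystack = `${tab.title} ${tab.url}`.toLowerCase();
    return !keep.some((name) => haystack.includes(name));
  });
}
```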

Full Internationalization

113+ new i18n keys per language (EN/ES) covering all AI chat UI, profile descriptions, toolbar labels, and suggestion buttons. The AI even responds in your configured language.
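A minimal lookup in the spirit of those EN/ES key tables; the specific keys and the fall-back-to-English rule are assumptions:

```typescript
type Locale = "en" | "es";

const MESSAGES: Record<Locale, Record<string, string>> = {
  en: { "ai.thinking": "Thinking…", "ai.cancel": "Cancel" },
  es: { "ai.thinking": "Pensando…", "ai.cancel": "Cancelar" },
};

// Look up a key in the active locale, fall back to English, then to
// the key itself so missing translations stay visible rather than blank.
function t(locale: Locale, key: string): string {
  return MESSAGES[locale][key] ?? MESSAGES.en[key] ?? key;
}
```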

Install

# macOS
brew tap chochy2001/omnimon && brew install --cask omnimon

# Linux
curl -fsSL https://get.omnimon.com.mx | bash

# Windows
winget install chochy2001.omnimon

Full changelog on GitHub