How AI EdgeLabs defends your runtime
A single container, four detection pipelines, one unified ML engine — running entirely on the host. Network, workload, vulnerability, and AI agent runtime protection in one binary. Here's how every component fits together to detect, block, and respond to threats in real time.
Most security tools tell you what happened after the fact. EdgeLabs tells you what's happening now — before it becomes an incident.
Four runtime detection pipelines, one unified engine
Network, workload, vulnerability, and AI agent security pipelines run in parallel and feed a shared correlation layer. Each operates independently: any one can run alone or in combination. The AI agent pipeline (powered by the open-source Parallax engine) inspects every LLM tool call and message in microseconds. All detection, correlation, and response happen on the host.
Runtime Network Protection
Captures raw packet data at the kernel level before it reaches any application layer. Flows through feature extraction into parallel detection engines: deep neural network classifiers, behavioral anomaly models, and threat intelligence matching. Anomalies are detected and blocked in the same pipeline with no cloud round-trip.
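As a rough sketch of that flow (the feature set, thresholds, and detector names below are illustrative stand-ins, not EdgeLabs internals): packets aggregate into flows, features are extracted once, and the parallel engines each return a verdict that is combined in the same pass.

```python
# Illustrative sketch: flow features feeding parallel detectors whose
# verdicts are combined in one host-local pass. All names/thresholds
# are hypothetical stand-ins for the real DNN and intel engines.
from dataclasses import dataclass

@dataclass
class Flow:
    packets: int
    bytes: int
    duration_s: float
    dst_port: int

def extract_features(flow: Flow) -> dict:
    # A real pipeline extracts far more features; these three suffice here.
    return {
        "pkt_rate": flow.packets / max(flow.duration_s, 1e-6),
        "avg_pkt_size": flow.bytes / max(flow.packets, 1),
        "dst_port": flow.dst_port,
    }

def anomaly_model(f: dict) -> bool:
    # Stand-in for the neural / behavioral anomaly models.
    return f["pkt_rate"] > 10_000

def threat_intel(f: dict) -> bool:
    # Stand-in for an indicator-of-compromise feed lookup.
    return f["dst_port"] in {4444, 31337}

def classify(flow: Flow) -> str:
    f = extract_features(flow)
    # Any positive verdict blocks in-pipeline, with no cloud round-trip.
    return "block" if anomaly_model(f) or threat_intel(f) else "allow"
```

A flood-like flow (`classify(Flow(500_000, 6_000_000, 2.0, 443))`) blocks on the anomaly path; a low-rate flow to a clean port passes through.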
Runtime Workload Protection
eBPF probes instrument the kernel directly, capturing syscalls, process lifecycle events, and filesystem changes as they happen. No polling. No sampling. Continuous execution context. Events pass through normalization into signature scanning, hash verification, and behavioral APT detection algorithms.
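The normalize-then-match stage can be pictured like this (event shapes, the signature set, and the behavioral rule are assumptions for illustration; real probes emit much richer context):

```python
# Sketch of the normalization and matching stage for kernel events.
# Field names and detection rules are hypothetical, not EdgeLabs schemas.
import hashlib

# Stand-in for the malware signature database (hash verification).
KNOWN_BAD = {hashlib.sha256(b"malware-sample").hexdigest()}

def normalize(raw: dict) -> dict:
    # Collapse probe-specific fields into one canonical record.
    return {"pid": raw["pid"], "event": raw["type"], "path": raw.get("path", "")}

def verdict(event: dict, payload: bytes = b"") -> str:
    if payload and hashlib.sha256(payload).hexdigest() in KNOWN_BAD:
        return "block"  # hash-verification hit against the signature set
    if event["event"] == "exec" and event["path"].startswith("/tmp/"):
        return "alert"  # toy behavioral heuristic: execution from /tmp
    return "allow"
```

Because the probes stream events continuously (no polling, no sampling), this stage sees every exec and file change in order, which is what makes behavioral APT detection possible.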
Runtime Vulnerability Management
Automated discovery scans host filesystems and container images continuously. Pre-processing evaluates exploitability and runtime state — so you see which vulnerabilities are actually reachable in production, not just theoretically present. SBOM generation feeds into centralized risk scoring for prioritized remediation.
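One way to picture runtime-aware prioritization (the weights below are invented for illustration, not the product's scoring model): the base severity is scaled by whether the vulnerable code is actually loaded and network-reachable, so a reachable medium can outrank an unreachable critical.

```python
# Hedged sketch of runtime-aware risk scoring. The multipliers are
# arbitrary illustrative weights, not EdgeLabs' actual model.
def risk_score(cvss: float, loaded_at_runtime: bool, exposed_to_network: bool) -> float:
    score = cvss
    # Reachability dominates: dormant code is heavily discounted.
    score *= 1.5 if loaded_at_runtime else 0.5
    score *= 1.3 if exposed_to_network else 1.0
    return round(min(score, 10.0), 1)
```

Under this toy model a loaded, exposed CVSS 6.0 scores higher than a critical that never executes, which is exactly the "actually reachable in production" ordering described above.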
Cloud Coordination Layer
Agents self-register via outbound-only connection — no open ports, no firewall rules, no manual configuration. Communication is event-driven: alerts stream in real time, telemetry is batched, model updates are pulled only when available. All detection logic operates fully independently of cloud connectivity.
Runtime AI Agent Protection — powered by Parallax
An agentic-security extension for LLM workloads. A single Rust binary intercepts every agent lifecycle event (user messages, tool calls before execution, tool results after execution, and model parameters) and evaluates them in microseconds, typically under 0.2 ms. Five evaluator engines (regex, keyword pattern, Sigma, CEL expressions, SQL temporal analysis) run in cost order and short-circuit on the first block. Decisions (block, redact, detect, allow) flow into the same audit, correlation, and SIEM channels as the rest of the agent. Framework-agnostic: it works in server mode (HTTP /evaluate) or as a proxy in front of any LLM API, with first-class integrations for OpenClaw and Claude Code; LangChain, CrewAI, and OpenAI Agents SDK integrations are on the roadmap.
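The cost-ordered, short-circuiting chain is the key design point, and it can be modeled in a few lines (evaluator names and rules here are illustrative; this is not the Parallax API):

```python
# Minimal model of a cost-ordered evaluator chain that short-circuits on
# the first blocking verdict. Stages and patterns are hypothetical.
import re

def regex_eval(event):
    # Cheapest stage: compiled-pattern scan of the message text.
    if re.search(r"(?i)ignore previous instructions", event["text"]):
        return "block"

def keyword_eval(event):
    # Next stage: exact-substring indicators.
    if "rm -rf /" in event["text"]:
        return "block"

# Cheapest first; Sigma, CEL, and SQL temporal stages would follow.
PIPELINE = [regex_eval, keyword_eval]

def evaluate(event: dict) -> str:
    for stage in PIPELINE:
        if stage(event) == "block":
            return "block"  # short-circuit: costlier stages never run
    return "allow"
```

Ordering by cost means the common case (a benign message) pays only the cheapest checks, which is how microsecond-scale evaluation stays feasible on every tool call.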
Host-local processing — nothing leaves your infrastructure
Every stage from data ingestion to response execution runs on the host. Raw data never leaves your infrastructure.
Runs independently. Protects continuously.
Every capability runs at the edge without cloud dependency. The agent operates as a self-contained security appliance — the cloud provides coordination, not computation.
On-Device AI/ML
ONNX-optimized models execute locally. No external API calls. Models are pre-loaded and versioned independently of the agent binary.
Zero-Touch Registration
Agent self-registers via outbound-only connection. No open ports, no firewall rules, no manual configuration. Per-agent cryptographic attestation secures the registration flow.
Automated Blocking
High-severity threats trigger automatic IP blocking via native OS firewall. Configurable playbooks define custom response chains per threat category.
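Translating a verdict into a native-firewall action might look like this (shown dry-run with iptables as one concrete OS-firewall backend; the actual response chains are playbook-driven and configurable):

```python
# Illustrative: build the OS-firewall command for a high-severity verdict.
# Dry-run only; a real responder would execute and audit this.
import shlex

def block_command(ip: str) -> str:
    # iptables syntax used as one example backend; nftables would differ.
    return shlex.join(["iptables", "-I", "INPUT", "-s", ip, "-j", "DROP"])
```

`block_command("203.0.113.7")` yields `iptables -I INPUT -s 203.0.113.7 -j DROP`; inserting at the top of INPUT ensures the drop takes effect ahead of existing accept rules.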
Offline Operation
Full detection and prevention stack operates without cloud connectivity. Events buffer locally and sync on reconnect. No degradation in security posture during network outages.
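The buffer-and-sync behavior reduces to a simple invariant: events queue in order while offline and flush oldest-first on reconnect. A minimal in-memory sketch (a real agent would persist the queue to disk):

```python
# Sketch of offline buffering with ordered sync-on-reconnect.
# In-memory stand-in; the shipped agent's storage details are not shown here.
from collections import deque

class EventBuffer:
    def __init__(self):
        self.pending = deque()
        self.online = False

    def emit(self, event, send):
        self.pending.append(event)  # always buffer first
        if self.online:
            self.flush(send)

    def reconnect(self, send):
        self.online = True
        self.flush(send)

    def flush(self, send):
        while self.pending:
            send(self.pending.popleft())  # oldest first, order preserved
```

Detection never consults this buffer, which is why an outage changes delivery latency but not security posture.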
Kernel Instrumentation
eBPF probes provide low-overhead visibility into syscalls, process creation, network connections, and file operations. Kernel module support for broader Linux version compatibility.
OTA Model Refresh
ML models update independently via cloud-coordinated delivery. Supports generic models (all tenants) and personalized models pre-trained on environment-specific traffic patterns.
Multi-Interface Capture
Simultaneous sniffing across multiple NICs per host. Standard pcap for general use, DPDK for multi-Gbps environments requiring line-rate inspection with direct NIC mapping.
Network Asset Mapping
Protocol-level discovery via ARP, DNS, DHCP builds a continuously updated inventory of connected devices, services, and network topology without active scanning overhead.
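Conceptually, passive discovery folds each observed ARP/DHCP/DNS record into a live asset table, so the inventory stays current without a single probe packet (the record fields below are assumptions for illustration):

```python
# Passive inventory sketch: merge observations into an asset table keyed
# by MAC address. Field names are hypothetical, not EdgeLabs schemas.
def update_inventory(inventory: dict, obs: dict) -> dict:
    asset = inventory.setdefault(obs["mac"], {"ips": set(), "names": set()})
    if "ip" in obs:
        asset["ips"].add(obs["ip"])          # ARP/DHCP reveal addressing
    if "hostname" in obs:
        asset["names"].add(obs["hostname"])  # DNS/DHCP reveal names
    return inventory
```

Because every update comes from traffic the device sent anyway, the map carries zero active-scanning overhead.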
Three deployment profiles
Select the profile that matches your infrastructure constraints, security requirements, and performance targets.
Full Runtime Protection
Maximum visibility. Combined network and workload detection with automated prevention. Requires NET_ADMIN and privileged mode.
- Multi-interface network analysis
- eBPF kernel monitoring
- Automated blocking + playbooks
- Malware scanning + quarantine
- Full vulnerability discovery
Inline Accelerated
A DPDK-based agent built for high-bandwidth deployments. Optimized for multi-Gbps environments, it maps directly to NICs via a userspace driver and acts as an L2/L3 inline inspection point with prevention capabilities.
- Line-rate inspection at multi-Gbps
- Direct NIC-to-agent data path
- Inline blocking and traffic filtering
- Scalable core allocation (2n CPU cores)
- Privileged mode + NIC binding required
Passive Mirrored
Least-intrusive option. Agent receives a copy of traffic via mirrored port on a separate host. Detection-only — no inline prevention, no host instrumentation required.
- Zero modification to production hosts
- Hardware-isolated on dedicated machine
- Receives mirrored pcap stream via UDP
- Network anomaly detection only
- No special permissions required
Runs Everywhere Containers Run
Native container image supports all major orchestration platforms and Linux-based runtimes. Single image, any architecture.