Technical Workflow

How Everest Works

A step-by-step walkthrough of the Everest data pipeline — from VMS write detection through cloud archival, admin visibility, and transparent playback restoration.

Pipeline Overview

The Everest data pipeline

Seven components working in sequence to move video from local VMS storage to cloud archival and back.

VMS / NVR (Nx Witness)

File System Layer (FUSE / Minifilter)

Ring Buffer (Shared Memory)

Everest Agent (.NET Worker)

Local Spool (Retry Queue)

Cloud Upload (S3 / Wasabi)

Admin Portal (Metrics & Billing)

Step by Step

Detailed workflow walkthrough

01

VMS / NVR writes surveillance files

Nx Witness Media Server writes video recordings to the configured local storage path as it normally would. The VMS has no knowledge of cloud operations or Everest activity.

Technical Detail

Nx Witness records to standard DAS volumes (main drive or backup drive). File formats include .avi and other VMS media files. The storage path is the integration point for Everest.

02

Everest filesystem layer detects writes

The Everest FUSE driver (Linux) or minifilter driver (Windows, planned) intercepts file write and close operations at the VMS storage path. The VMS sees a standard filesystem with negligible added latency.

Technical Detail

FUSE driver mounts over the Nx Witness storage path. Supports up to 8 drive instances via systemd template units. File operations pass through transparently; only monitored file types (e.g., .avi) are tracked.
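The tracking decision above can be sketched in Python. This is a minimal illustration, not the driver itself: the extension list beyond .avi, the `on_release` hook name, and the `notify` callback are all assumptions; the real layer is a FUSE filesystem in which every operation passes through to the backing store.

```python
import os

# Extensions the filesystem layer reports to the agent; .avi is confirmed
# in the source, the others are illustrative assumptions.
MONITORED_EXTENSIONS = {".avi", ".mkv", ".mp4"}

def should_track(path: str) -> bool:
    """Return True if a file closed at the storage path should be reported."""
    _, ext = os.path.splitext(path)
    return ext.lower() in MONITORED_EXTENSIONS

def on_release(path: str, notify) -> int:
    """Hypothetical release() hook: pass-through always succeeds,
    tracking is a side effect for monitored file types only."""
    if should_track(path):
        notify(path)   # hand the event to the ring buffer producer
    return 0           # FUSE release() returns 0 on success
```

Database files, logs, and other non-media writes at the same path fall through untracked, which is what keeps the VMS-facing behavior identical to a plain filesystem.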

03

Data passes through ring buffer

File write events pass from the FUSE driver to the .NET worker through a high-throughput single-producer/single-consumer (SPSC) ring buffer in shared memory, avoiding disk I/O round-trips and minimizing CPU overhead.

Technical Detail

FUSE driver is the ring buffer producer; the .NET worker is the consumer. Notification uses eventfd/epoll. Write latency stays below 10 ms under production camera loads, with aggregate throughput up to 260 Mbps.
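The SPSC discipline is what makes the buffer lock-free: each index has exactly one writer. A minimal sketch of the index arithmetic, under the assumption of a power-of-two capacity (the real buffer lives in shared memory with atomic indices and an eventfd wakeup, which Python cannot show):

```python
class SpscRing:
    """Single-producer/single-consumer ring buffer (index bookkeeping only)."""

    def __init__(self, capacity: int = 1024):
        assert capacity & (capacity - 1) == 0, "capacity must be a power of two"
        self.buf = [None] * capacity
        self.mask = capacity - 1
        self.head = 0  # advanced only by the consumer (.NET worker)
        self.tail = 0  # advanced only by the producer (FUSE driver)

    def try_push(self, item) -> bool:
        if self.tail - self.head == len(self.buf):
            return False                      # full: producer backs off
        self.buf[self.tail & self.mask] = item
        self.tail += 1                        # publish after the slot is written
        return True

    def try_pop(self):
        if self.head == self.tail:
            return None                       # empty: consumer sleeps on eventfd
        item = self.buf[self.head & self.mask]
        self.head += 1
        return item
```

Because the producer only advances `tail` and the consumer only advances `head`, no lock is needed as long as each index update is published atomically.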

04

Everest Agent processes and spools data

The Everest .NET worker processes incoming file events, records them in the SQLite metadata database, and enqueues them for upload in the local spool queue. Files remain as full local copies during this phase.

Technical Detail

SQLite database tracks lifecycle state: is_replicated, is_stub, is_replicating, current_phase. WAL mode enabled for concurrent access. Spool queue handles retry with exponential backoff.
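The lifecycle columns named above can be pictured with a minimal schema sketch. The column names come from the source; the table name, layout, and defaults are assumptions, and the real agent uses an on-disk database rather than `:memory:`.

```python
import sqlite3

conn = sqlite3.connect(":memory:")       # on-disk in the real agent
conn.execute("PRAGMA journal_mode=WAL")  # no effect for :memory:, shown for completeness
conn.execute("""
    CREATE TABLE files (
        path            TEXT PRIMARY KEY,
        is_replicated   INTEGER NOT NULL DEFAULT 0,
        is_stub         INTEGER NOT NULL DEFAULT 0,
        is_replicating  INTEGER NOT NULL DEFAULT 0,
        current_phase   TEXT    NOT NULL DEFAULT 'L1'
    )
""")
# A newly detected write lands in L1 with no cloud state set.
conn.execute("INSERT INTO files (path) VALUES (?)", ("/srv/nx/cam1/clip.avi",))
phase = conn.execute(
    "SELECT current_phase FROM files WHERE path = ?",
    ("/srv/nx/cam1/clip.avi",),
).fetchone()[0]
```

WAL mode is what lets the replication worker and the dashboard read lifecycle state while the worker thread is writing new events.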

05

Scheduled uploader moves files to cloud

The Replication Worker checks every hour (or at the configured scheduled time) for files that have exceeded their L1 threshold and uploads them to the configured S3-compatible bucket.

Technical Detail

Files under 100MB use single PUT. Files 100MB+ use multipart upload with 64MB chunks. STANDARD storage class. MD5 checksum verification after upload. Configurable max_concurrent_uploads (default: 4).
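The size-based routing described above reduces to a small planning function. This is a sketch of the decision only; the function name and return shape are illustrative.

```python
MULTIPART_THRESHOLD = 100 * 1024 * 1024  # 100 MB: single PUT below this
CHUNK_SIZE = 64 * 1024 * 1024            # 64 MB multipart chunks

def plan_upload(size_bytes: int) -> tuple:
    """Return ('put', 1) or ('multipart', part_count) for a file of this size."""
    if size_bytes < MULTIPART_THRESHOLD:
        return ("put", 1)
    part_count = -(-size_bytes // CHUNK_SIZE)  # ceiling division
    return ("multipart", part_count)
```

For comparison, boto3's `TransferConfig` exposes the same knobs (`multipart_threshold`, `multipart_chunksize`, `max_concurrency`), so an S3 SDK can be configured to match this policy directly.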

06

Cloud storage holds the archive

Uploaded files reside in AWS S3 or Wasabi under a structured key that mirrors the Nx Witness directory hierarchy. Files remain in STANDARD storage class for predictable restore latency.

Technical Detail

S3 key: {license_key}/{drive_label}/{server_guid}/{quality}/{camera_mac}/{YYYY}/{MM}/{DD}/{HH}/(unknown). Separate metadata folder for database and encryption key backups. Region validation ensures all buckets are in the same region.
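The key template can be sketched as a prefix builder. The final filename component is not shown in the source and is omitted here; all example values (license key, quality label, MAC format) are illustrative assumptions.

```python
from datetime import datetime

def key_prefix(license_key: str, drive_label: str, server_guid: str,
               quality: str, camera_mac: str, ts: datetime) -> str:
    """Build the hour-level S3 key prefix from the documented template.

    Omits the trailing filename component, which the source does not specify.
    """
    return "/".join([
        license_key, drive_label, server_guid, quality, camera_mac,
        f"{ts.year:04d}", f"{ts.month:02d}", f"{ts.day:02d}", f"{ts.hour:02d}",
    ])
```

Listing a prefix such as `{license_key}/{drive_label}/.../2025/03/07/` then returns one recorder-hour of footage for one camera, which is why the key hierarchy mirrors the Nx Witness directory layout.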

07

Admin portal tracks usage and licenses

The locally hosted Everest dashboard provides real-time statistics, lifecycle state visibility, billing metrics, health monitoring, and configuration management across all configured drives and sites.

Technical Detail

Blazor Server dashboard at localhost:<port>. JWT authentication with RBAC. Billing tab captures peak camera channels and gross managed capacity monthly. Export billing data as CSV.

08

Hydration and reclaim on demand

When Nx Witness requests a stubbed file for playback, Everest intercepts the read, downloads the cloud copy, and restores it in-place before the VMS read completes. Local storage is reclaimed again after the grace period.

Technical Detail

Stub files are 100–150 bytes and contain the cloud key, original file size, and checksum. Detection uses a file size check (<200 bytes) followed by stub marker validation, so no database lookup is needed on every open(). getattr returns the original file size to the VMS.
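The two-stage detection above (cheap size check first, marker check only for tiny files) can be sketched as follows. The marker bytes and stub payload here are hypothetical; the real on-disk stub format is not documented in this walkthrough.

```python
import os

STUB_MAGIC = b"EVRSTUB1"  # hypothetical marker; real format not documented here
STUB_MAX_SIZE = 200       # files of 200 bytes or more can never be stubs

def is_stub(path: str) -> bool:
    """Size check first, then marker validation; no database lookup needed."""
    try:
        if os.path.getsize(path) >= STUB_MAX_SIZE:
            return False                      # fast path for real recordings
        with open(path, "rb") as f:
            return f.read(len(STUB_MAGIC)) == STUB_MAGIC
    except OSError:
        return False
```

The ordering matters: `getsize` resolves from the inode without reading file data, so the vast majority of open() calls (on full-size recordings) never pay for a read.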

Lifecycle Model

Three-phase video lifecycle

Everest manages video files through configurable lifecycle phases, balancing local performance with cloud efficiency.

L1: Local only

Active Recording

File exists on local DAS only. High-performance access for active recording and frequent review. Default: 7 days.

  • Full local access
  • No cloud operations
  • VMS performs normally

L2: Local + Cloud replica

Cloud Backup Phase

File replicated to S3/Wasabi. Full local copy remains for zero-latency access. Cloud replica provides disaster recovery.

  • Local copy retained
  • Cloud replica established
  • DR-ready state

L3: Cloud + 100-byte stub

Cloud Primary

Local file replaced with 100–150 byte stub. Cloud holds the recording. Nx Witness sees the original file size via getattr.

  • Local storage reclaimed
  • Transparent playback via hydration
  • Self-contained stub for DR

Phase thresholds are configurable in hours, days, months, or storage percentage. Real-time replication mode bypasses L1 and L2 wait periods.
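The age-based portion of the model reduces to a small mapping. This sketch assumes duration thresholds only; the real configuration also accepts storage-percentage triggers, and real-time replication mode would short-circuit the L1 and L2 waits entirely.

```python
from datetime import timedelta

def phase_for_age(age: timedelta, l1: timedelta, l2: timedelta) -> str:
    """Map file age to lifecycle phase given L1 and L2 durations (sketch)."""
    if age < l1:
        return "L1"        # local only, active recording
    if age < l1 + l2:
        return "L2"        # local copy + cloud replica
    return "L3"            # stub locally, cloud holds the recording
```

With the default 7-day L1 window and, say, a 7-day L2 window, a 10-day-old recording sits in L2: replicated to cloud but still fully readable from local disk.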

Operational Benefits

Built for production VMS environments

Everest is designed to operate continuously under real surveillance loads without impacting VMS recording or playback performance.

Zero VMS modifications required — deploy without touching Nx Witness

Write latency below 10 ms under production camera loads

Configurable lifecycle thresholds in hours, days, months, or storage %

Scheduled replication windows to avoid business-hour bandwidth contention

Real-time replication mode for zero-RPO environments

Automatic recovery from transient cloud connectivity failures

Transparent playback of archived recordings through existing VMS interface

Full site recovery from cloud after hardware failure or ransomware

Start deploying

Ready to configure Everest?

Download the agent for Ubuntu 22.04 or 24.04 and follow the integration guide to connect your Nx Witness deployment.