
Storage & Sizing

Database sizing formulas and 30-day storage estimates for common ServerBee deployments.

This page explains how ServerBee grows its SQLite database over time and gives a 30-day estimate for common 20-server deployments.

These numbers are based on the current schema and retention defaults, measured from temporary SQLite databases generated against the repository code in April 2026. They describe the main database file 30 days after a fresh deployment. In production, reserve another 10% to 20% for SQLite WAL headroom and transient growth.

Assumptions

All estimates on this page use the same baseline:

  • 20 servers
  • 30 days from a fresh deployment
  • Agents connected continuously
  • Agent report interval of 3 seconds
  • Server raw metric writer persisting one record per server every 60 seconds
  • Default retention:
    • Raw metrics: 7 days
    • Hourly aggregates: 90 days
    • GPU records: 7 days
    • Ping records: 7 days
    • Network probe raw records: 7 days
    • Network probe hourly records: 90 days
    • Service monitor records: 30 days
  • Disk I/O enabled and temperature enabled
  • No extra growth from audit log bursts, large task output, or unusual Docker event churn unless called out explicitly

What Actually Changes Database Size

Not every capability creates ongoing database growth.

| Feature | Default cadence | Retention | Main table(s) | Notes |
| --- | --- | --- | --- | --- |
| Base monitoring | Agent reports every 3s, persisted every 60s | 7d raw, 90d hourly | records, records_hourly, traffic_*, uptime_daily | Always present for connected servers |
| Ping tasks | 60s per task | 7d | ping_records | Scales with task count × server count |
| Network probes | 60s per target | 7d raw, 90d hourly | network_probe_record, network_probe_record_hourly | Scales with server-target assignments |
| Service monitors | 300s per monitor | 30d | service_monitor_record | Scales with monitor count |
| GPU monitoring | 3s per GPU device | 7d | gpu_records | One of the largest multipliers |
| Docker events | Event-driven | 7d | docker_event | Small unless containers churn frequently |
| Terminal / Exec / File Manager | User-driven | Mixed | audit_logs, task_results | Usually negligible compared with time-series data |

Turning on a capability such as Terminal, Exec, File Manager, or Docker Management does not automatically create large time-series tables. Continuous growth comes from features that are actively writing sampled results.
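To see why the sampled features dominate, it helps to turn the cadences and retention windows above into row counts. The Python sketch below is illustrative only and is not part of ServerBee; it derives steady-state raw row counts from the table above, and the actual on-disk size additionally depends on row width, indexes, and SQLite page overhead.

```python
# Rough steady-state row counts implied by the cadence/retention table above.
# Back-of-the-envelope figures only (raw tables, no hourly aggregates); the
# measured byte sizes later on this page also include indexes and page overhead.

DAY = 24 * 60 * 60  # seconds per day

def rows(interval_s: int, retention_days: int, writers: int) -> int:
    """Rows kept at steady state for one sampled feature."""
    return retention_days * DAY // interval_s * writers

servers = 20

print("raw metrics:   ", rows(60, 7, servers))   # 201,600 rows across 20 servers
print("1 ping task:   ", rows(60, 7, servers))   # 201,600 rows (one task on all servers)
print("1 probe pair:  ", rows(60, 7, 1))         # 10,080 rows per server-target assignment
print("1 monitor:     ", rows(300, 30, 1))       # 8,640 rows per service monitor
print("1 GPU device:  ", rows(3, 7, 1))          # 201,600 rows per device
```

With these defaults, one GPU device and one fleet-wide ping task each hold on the order of 200,000 raw rows at steady state, which is why they are the largest multipliers in the formula below.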

30-Day Sizing Formula

For the assumptions above, the 30-day database size can be estimated with:

S_30d =
70,541,312
+ 48,974,507 × P
+ 2,373,018 × T
+ 1,593,958 × M
+ 41,525,248 × G
+ 285 × E

Where:

  • P = number of 60-second ping tasks applied to all 20 servers
  • T = total server-target network probe assignments
    • If every server has the same number of targets, then T = 20 × targets_per_server
  • M = total 300-second service monitors
  • G = total GPU devices
  • E = total Docker events written during the 30-day window
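If you prefer to compute estimates programmatically, the following Python sketch applies the formula above. It is illustrative rather than part of ServerBee: the coefficients are copied from this page, and the helper name size_30d is invented for the example.

```python
# Minimal sketch: apply the 30-day sizing formula above to a given deployment.
# Coefficients are the measured values from this page; P, T, M, G, E follow the
# definitions listed above.

BASE  = 70_541_312   # base monitoring, 20 servers
PING  = 48_974_507   # per 60-second ping task applied to all 20 servers
PROBE = 2_373_018    # per server-target network probe assignment
MON   = 1_593_958    # per 300-second service monitor
GPU   = 41_525_248   # per GPU device
EVENT = 285          # per Docker event in the 30-day window

def size_30d(p: int = 0, t: int = 0, m: int = 0, g: int = 0, e: int = 0) -> int:
    """Estimated main database size in bytes after 30 days."""
    return BASE + PING * p + PROBE * t + MON * m + GPU * g + EVENT * e

# Example: the "medium production stack" scenario from the catalog below:
# 3 ping tasks, 5 targets per server (T = 100), 2 monitors per server (M = 40).
medium = size_30d(p=3, t=100, m=40)
print(f"{medium:,} B  ≈ {medium / 2**20:.2f} MiB")   # 518,524,953 B ≈ 494.50 MiB
```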

Feature Multipliers

These coefficients come from measured SQLite files, not rough guesswork:

| Component | Exact size contribution | Approx size |
| --- | --- | --- |
| Base monitoring for 20 servers | 70,541,312 B | 67.27 MiB |
| 1 ping task across all 20 servers | 48,974,507 B | 46.71 MiB |
| 1 network probe assignment | 2,373,018 B | 2.26 MiB |
| 1 service monitor | 1,593,958 B | 1.52 MiB |
| 1 GPU device | 41,525,248 B | 39.60 MiB |
| 1 Docker event | 285 B | 0.28 KiB |

Base Monitoring Breakdown

With no extra ping tasks, network probes, service monitors, or GPUs, most of the 30-day footprint comes from the raw metrics table and its index:

| Object | Exact size | Approx size |
| --- | --- | --- |
| records | 48,701,440 B | 46.45 MiB |
| idx_records_server_id_time | 16,482,304 B | 15.72 MiB |
| records_hourly | 3,485,696 B | 3.32 MiB |
| idx_records_hourly_server_time | 1,163,264 B | 1.11 MiB |
| traffic_hourly + index | 512,000 B | 0.49 MiB |
| traffic_daily + index | 90,112 B | 0.09 MiB |
| uptime_daily + index | 86,016 B | 0.08 MiB |
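To check these figures against a real deployment, you can ask SQLite itself how large each table and index is. The sketch below is a generic Python example, not a ServerBee tool: it relies on the dbstat virtual table, which is only present when SQLite is built with SQLITE_ENABLE_DBSTAT_VTAB, and the file name serverbee.db is a placeholder for your actual database path.

```python
# Sketch: measure per-table and per-index sizes of your own database instead of
# relying on the estimates above. Requires an SQLite build with the dbstat
# virtual table (SQLITE_ENABLE_DBSTAT_VTAB); "serverbee.db" is a placeholder.
import sqlite3

con = sqlite3.connect("serverbee.db")
rows = con.execute(
    "SELECT name, SUM(pgsize) AS bytes "
    "FROM dbstat GROUP BY name ORDER BY bytes DESC"
).fetchall()

for name, size in rows:
    print(f"{size:>14,} B  {size / 2**20:8.2f} MiB  {name}")
con.close()
```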

Scenario Catalog

The table below lists common 30-day scenarios for a 20-server deployment.

| Scenario | Assumptions | Exact size | Approx size |
| --- | --- | --- | --- |
| Base monitoring only | No ping tasks, no network probes, no service monitors, no GPU | 70,541,312 B | 67.27 MiB |
| Base + 1 ping task | 1 × 60s ping task across all 20 servers | 119,515,819 B | 113.98 MiB |
| Base + 3 ping tasks | ICMP + TCP + HTTP across all 20 servers | 217,464,833 B | 207.39 MiB |
| Base + 1 target per server | 20 total network probe assignments | 118,001,672 B | 112.54 MiB |
| Base + 5 targets per server | 100 total network probe assignments | 307,843,112 B | 293.58 MiB |
| Base + 20 targets per server | 400 total network probe assignments (the current max) | 1,019,748,512 B | 972.51 MiB |
| Base + 1 monitor per server | 20 total service monitors at 300s | 102,420,472 B | 97.68 MiB |
| Base + 5 monitors per server | 100 total service monitors at 300s | 229,937,112 B | 219.29 MiB |
| Base + 1 GPU per server | 20 total GPU devices | 901,046,272 B | 859.30 MiB |
| Small production stack | 3 ping tasks + 1 target per server + 1 monitor per server | 296,804,353 B | 283.05 MiB |
| Medium production stack | 3 ping tasks + 5 targets per server + 2 monitors per server | 518,524,953 B | 494.50 MiB |
| Heavy stack without GPU | 3 ping tasks + 20 targets per server + 1 monitor per server | 1,198,551,193 B | 1.12 GiB |
| Heavy stack with GPU | Heavy stack without GPU + 1 GPU per server | 2,029,056,153 B | 1.89 GiB |
| Maxed practical stack | 3 ping tasks + 20 targets per server + 5 monitors per server + 1 GPU per server | 2,156,572,793 B | 2.01 GiB |

Disk Budget Recommendation

If you are capacity-planning a real deployment, reserve more than the database file itself:

  • Use the table above for the base database size
  • Add 10% to 20% for WAL growth and transient write bursts
  • Add extra space if you expect:
    • large Docker event volume
    • large task output retained in task_results
    • unusually chatty service monitors with large detail_json payloads

Examples:

  • A 494.50 MiB medium stack should reserve about 550 MiB to 600 MiB
  • A 1.89 GiB heavy stack with GPU should reserve about 2.1 GiB to 2.3 GiB
  • A 2.01 GiB maxed practical stack should reserve about 2.2 GiB to 2.5 GiB
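As a quick helper, the sketch below applies the 10% to 20% headroom rule to any estimated size. It is illustrative only; note that the examples above round the resulting ranges up slightly.

```python
# Sketch: add the recommended 10%-20% WAL/burst headroom to an estimated size.
def disk_budget(db_bytes: int) -> tuple[int, int]:
    """Return (low, high) reserve in bytes for a given estimated database size."""
    return int(db_bytes * 1.10), int(db_bytes * 1.20)

low, high = disk_budget(518_524_953)                         # medium production stack
print(f"{low / 2**20:.0f} MiB to {high / 2**20:.0f} MiB")    # ≈ 544 MiB to 593 MiB
```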

Notes and Caveats

  • These estimates are for the first 30 days after a fresh deployment. Some hourly tables retain up to 90 days, so a long-running deployment will continue to grow beyond the 30-day numbers shown here.
  • The base monitoring estimate already includes traffic_hourly, traffic_daily, and uptime_daily.
  • GPU growth is per device, not per server. A server with 4 GPUs multiplies the GPU portion by 4.
  • Network probe growth is per assignment, not per target definition. A target shared by 20 servers counts as 20 assignments.
  • Docker growth is event-driven. Quiet fleets add almost nothing; container-heavy hosts can add a measurable amount.
