Benchmark Methodology
How we measure what actually matters when comparing VPS and cloud servers.
Test Environment
- All instances provisioned fresh — no shared or warmed-up state
- Operating system: Ubuntu 24.04 LTS (minimal install)
- All packages updated before benchmarking
- Tests run 3 times; median value recorded
- Benchmarks run between 02:00–04:00 UTC to minimise noisy-neighbour effects
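The median-of-three rule can be applied with standard tools; for example (the three values below are illustrative, not real results):

```shell
# Pick the median of three benchmark runs: sort numerically, take the middle line
# (812, 798, 805 are made-up scores for illustration)
printf '%s\n' 812 798 805 | sort -n | sed -n '2p'
# prints 805
```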
CPU — sysbench
We use the sysbench cpu benchmark to measure single-threaded and multi-threaded CPU performance. Higher scores indicate faster integer computation.
sysbench cpu --cpu-max-prime=20000 --threads=4 run
Score = events per second × threads. Higher is better.
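Applying the score formula above to illustrative numbers (an events-per-second figure of 1234.56 at 4 threads; these are not real measurements):

```shell
# Score = events per second × threads (numbers are made up for illustration)
awk 'BEGIN { printf "%.0f\n", 1234.56 * 4 }'
# prints 4938
```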
Disk I/O — fio
We run three fio tests: sequential read, sequential write, and random 4K IOPS. The IOPS figure is the most relevant for database workloads.
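fio reports each test as a human-readable summary line; the IOPS figure can be pulled out of that summary with grep (the sample line below is made up, not a real result):

```shell
# Extract the IOPS field from a fio summary line (sample line is illustrative)
echo 'read: IOPS=45.2k, BW=176MiB/s (185MB/s)(10.3GiB/60001msec)' \
  | grep -o 'IOPS=[^,]*'
# prints IOPS=45.2k
```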
# Sequential read
fio --name=seq-read --ioengine=libaio --direct=1 \
--rw=read --bs=1M --size=4G --numjobs=4 --runtime=60
# Random 4K IOPS
fio --name=rand-iops --ioengine=libaio --direct=1 \
--rw=randread --bs=4k --size=4G --numjobs=4 \
--iodepth=32 --runtime=60
Network — iperf3
Network throughput is tested with iperf3 against a reference server in the same datacentre. We record inbound and outbound throughput in Gbps.
iperf3 -c [reference-server] -t 30 -P 4
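With -P 4, iperf3 prints a [SUM] line aggregating the four parallel streams; the throughput figure can be pulled from it with awk (the sample line below is illustrative, not a real measurement). Note that iperf3's default direction is outbound from the client; adding the -R flag reverses the test to measure inbound throughput.

```shell
# Grab the aggregate throughput from iperf3's [SUM] line (sample output, made up)
echo '[SUM]   0.00-30.00  sec  34.9 GBytes  9.98 Gbits/sec  receiver' \
  | awk '/\[SUM\]/ { print $(NF-2), $(NF-1) }'
# prints 9.98 Gbits/sec
```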
Latency
Time to first byte is measured with wget --server-response against a static file served from the same region. This reflects real-world HTTP response latency from the instance.
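A timing sample is only valid if the file actually came back with a 200; the status code can be checked in the --server-response headers before recording the measurement (the transcript below is a made-up sample, not real wget output):

```shell
# Pull the HTTP status code from a wget --server-response transcript
# (the printf below stands in for a captured transcript, for illustration)
printf '  HTTP/1.1 200 OK\n  Content-Length: 1048576\n' \
  | awk '/HTTP\// { print $2 }'
# prints 200
```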
Data Sources
- Internal — benchmarks we run ourselves on provisioned instances
- cloudperf — community benchmark data from cloudperf.run
- vpsbenchmarks — community submissions from vpsbenchmarks.com
Internal benchmarks are weighted higher in rankings. Community data is marked with its source.
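The exact weights aren't published here; purely as an illustration, a 2:1 internal-to-community weighting of hypothetical scores could be combined like this:

```shell
# Weighted average of hypothetical scores: internal weight 2, community weight 1
# (both the weights and the scores are assumptions for illustration,
# not the site's real values)
awk 'BEGIN {
  internal = 4800; community = 4500
  printf "%.0f\n", (2 * internal + 1 * community) / 3
}'
# prints 4700
```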
Update Frequency
Internal benchmarks are re-run weekly (Sunday 02:00 UTC). Community data is synced daily. Benchmark dates are shown per result row so you know how fresh the data is.
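A weekly Sunday 02:00 UTC run maps naturally onto a cron schedule; a minimal sketch, assuming a hypothetical run-benchmarks.sh wrapper script and a system clock set to UTC (the daily sync time is also an assumption, since only "daily" is stated):

```
# Re-run internal benchmarks every Sunday at 02:00 (script path is a placeholder)
0 2 * * 0 /usr/local/bin/run-benchmarks.sh
# Sync community benchmark data daily (02:00 chosen arbitrarily for illustration)
0 2 * * * /usr/local/bin/sync-community-data.sh
```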