Comparing ESXi local disk performance with iSCSI storage on TrueNAS

Virtualization is now an integral part of IT infrastructure, and storage performance plays a key role in keeping virtual machines running smoothly. In this article we compare the local disks of an ESXi hypervisor with disks attached over iSCSI from a TrueNAS server. We will look at how the choice between local and network storage affects read and write throughput, latency, and the overall performance of virtual environments.

Home lab

VMware ESXi 6.7.0, build 15160138

  • CPU: 2x Intel(R) Xeon(R) CPU E5-2666 v3 @ 2.90GHz
  • RAM: 144 GB
  • Motherboard: X10DRi
  • Disk: NVMe Samsung SSD 970 EVO Plus 500GB

The tests were run in a virtual machine with the following parameters:

  • OS: Ubuntu 22.04
  • CPU: 8 vCPUs
  • RAM: 16 GB
  • Disk: 128 GB

Preparation

Update the package lists:

    apt update

Install fio:

    apt install fio
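Before moving on, it can be worth confirming the binary actually landed on PATH. A minimal guard, not part of the original walkthrough:

```shell
# Hypothetical sanity check: confirm fio is installed before running any jobs.
if command -v fio >/dev/null 2>&1; then
    fio --version
else
    echo "fio is not installed, run: apt install fio" >&2
fi
```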

Create a file named nvme0n1 with the test parameters:

[global]
ioengine=libaio
iodepth=64
direct=1
numjobs=4
filesize=20g
size=100g
# time_based
# runtime=10m
norandommap
group_reporting
###Output logs to draw graphs - optional###
#write_bw_log=write_bw_log
#write_lat_log=write_lat_log
#write_iops_log=write_iops_log
#log_avg_msec=10

#####################
#Test Specifications#
#####################

# Two RANDOM 4k tests, READ and WRITE

[random_read_4k]
rw=randread
blocksize=4k
filename=/mnt/bench
stonewall

[random_write_4k]
rw=randwrite
blocksize=4k
filename=/mnt/bench
stonewall
 
# Two SEQUENTIAL 512k tests, READ and WRITE

[seq_read_512k]
rw=read
blocksize=512k
filename=/mnt/bench
stonewall

[seq_write_512k]
rw=write
blocksize=512k
filename=/mnt/bench
stonewall

The disk must have at least 100 GB of free space.
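Since the job file keeps its working set under /mnt, a quick pre-flight check of free space can save a failed run. A sketch assuming GNU coreutils df; check_free_space is a hypothetical helper, not part of fio:

```shell
#!/bin/sh
# Hypothetical helper: verify a directory has enough free space
# for the benchmark working set before launching fio.
check_free_space() {
    # $1 = directory, $2 = required space in GiB
    avail_kb=$(df -k --output=avail "$1" | tail -n 1 | tr -d ' ')
    required_kb=$(( $2 * 1024 * 1024 ))
    [ "$avail_kb" -ge "$required_kb" ]
}

if check_free_space /mnt 100; then
    echo "enough free space for the benchmark"
else
    echo "need at least 100 GB free on /mnt" >&2
fi
```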

Run the benchmark:

    fio nvme0n1
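When comparing the local and iSCSI runs side by side, it helps to pull only the per-group summary lines out of a captured fio report. A small sketch; the embedded excerpt (trimmed from the run below) and the file name fio_summary.txt are illustrative:

```shell
#!/bin/sh
# Save a trimmed excerpt of a fio text report (illustrative sample data),
# then filter out just the READ/WRITE summary lines for quick comparison.
cat > fio_summary.txt <<'EOF'
Run status group 0 (all jobs):
   READ: bw=587MiB/s (616MB/s), io=80.0GiB (85.9GB), run=139499-139499msec
Run status group 1 (all jobs):
  WRITE: bw=540MiB/s (566MB/s), io=80.0GiB (85.9GB), run=151659-151659msec
EOF
grep -E '^[[:space:]]*(READ|WRITE): bw=' fio_summary.txt
```

fio can also emit machine-readable results with --output-format=json, which avoids text parsing altogether.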

Results

Samsung SSD 970 EVO Plus

Local disk

fio test results:

					random_read_4k: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
...
random_write_4k: (g=1): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
...
seq_read_512k: (g=2): rw=read, bs=(R) 512KiB-512KiB, (W) 512KiB-512KiB, (T) 512KiB-512KiB, ioengine=libaio, iodepth=64
...
seq_write_512k: (g=3): rw=write, bs=(R) 512KiB-512KiB, (W) 512KiB-512KiB, (T) 512KiB-512KiB, ioengine=libaio, iodepth=64
...
fio-3.28
Starting 16 processes
random_read_4k: Laying out IO file (1 file / 20480MiB)
Jobs: 4 (f=4): [_(12),W(4)][100.0%][w=2536MiB/s][w=5071 IOPS][eta 00m:00s]
random_read_4k: (groupid=0, jobs=4): err= 0: pid=1447: Thu Oct 23 22:43:56 2025
  read: IOPS=150k, BW=587MiB/s (616MB/s)(80.0GiB/139499msec)
    slat (usec): min=6, max=10710, avg=18.38, stdev=14.82
    clat (usec): min=71, max=17251, avg=1344.38, stdev=659.12
     lat (usec): min=85, max=17262, avg=1362.92, stdev=667.51
    clat percentiles (usec):
     |  1.00th=[ 1029],  5.00th=[ 1074], 10.00th=[ 1106], 20.00th=[ 1123],
     | 30.00th=[ 1156], 40.00th=[ 1172], 50.00th=[ 1188], 60.00th=[ 1205],
     | 70.00th=[ 1221], 80.00th=[ 1254], 90.00th=[ 1369], 95.00th=[ 3064],
     | 99.00th=[ 4228], 99.50th=[ 4424], 99.90th=[ 6783], 99.95th=[ 8455],
     | 99.99th=[10028]
   bw (  KiB/s): min=240105, max=912543, per=100.00%, avg=765282.84, stdev=46836.37, samples=888
   iops        : min=60026, max=228135, avg=191320.00, stdev=11709.10, samples=888
  lat (usec)   : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.04%, 1000=0.42%
  lat (msec)   : 2=93.84%, 4=3.61%, 10=2.08%, 20=0.01%
  cpu          : usr=9.52%, sys=89.64%, ctx=116573, majf=0, minf=306
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=20971520,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64
random_write_4k: (groupid=1, jobs=4): err= 0: pid=1451: Thu Oct 23 22:43:56 2025
  write: IOPS=138k, BW=540MiB/s (566MB/s)(80.0GiB/151659msec); 0 zone resets
    slat (usec): min=7, max=20289, avg=24.55, stdev=22.05
    clat (usec): min=28, max=93508, avg=1812.22, stdev=650.49
     lat (usec): min=50, max=93516, avg=1837.00, stdev=656.59
    clat percentiles (usec):
     |  1.00th=[ 1237],  5.00th=[ 1467], 10.00th=[ 1516], 20.00th=[ 1549],
     | 30.00th=[ 1565], 40.00th=[ 1582], 50.00th=[ 1614], 60.00th=[ 1631],
     | 70.00th=[ 1663], 80.00th=[ 1745], 90.00th=[ 2868], 95.00th=[ 3130],
     | 99.00th=[ 3818], 99.50th=[ 4424], 99.90th=[ 5669], 99.95th=[ 7177],
     | 99.99th=[13435]
   bw (  KiB/s): min=259186, max=647426, per=100.00%, avg=557541.81, stdev=21548.31, samples=1199
   iops        : min=64795, max=161855, avg=139384.70, stdev=5387.08, samples=1199
  lat (usec)   : 50=0.01%, 100=0.01%, 250=0.01%, 500=0.01%, 750=0.09%
  lat (usec)   : 1000=0.32%
  lat (msec)   : 2=83.29%, 4=15.60%, 10=0.68%, 20=0.01%, 50=0.01%
  lat (msec)   : 100=0.01%
  cpu          : usr=7.42%, sys=69.88%, ctx=16212371, majf=0, minf=53
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=0,20971520,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64
seq_read_512k: (groupid=2, jobs=4): err= 0: pid=1457: Thu Oct 23 22:43:56 2025
  read: IOPS=1744, BW=872MiB/s (915MB/s)(80.0GiB/93924msec)
    slat (usec): min=28, max=1055, avg=42.98, stdev=23.32
    clat (msec): min=18, max=549, avg=146.68, stdev=16.43
     lat (msec): min=18, max=549, avg=146.73, stdev=16.43
    clat percentiles (msec):
     |  1.00th=[  109],  5.00th=[  144], 10.00th=[  144], 20.00th=[  146],
     | 30.00th=[  146], 40.00th=[  146], 50.00th=[  146], 60.00th=[  148],
     | 70.00th=[  148], 80.00th=[  148], 90.00th=[  148], 95.00th=[  150],
     | 99.00th=[  159], 99.50th=[  163], 99.90th=[  472], 99.95th=[  550],
     | 99.99th=[  550]
   bw (  KiB/s): min=862638, max=910220, per=100.00%, avg=893827.64, stdev=1789.28, samples=748
   iops        : min= 1684, max= 1776, avg=1744.37, stdev= 3.56, samples=748
  lat (msec)   : 20=0.01%, 50=0.06%, 100=0.77%, 250=98.89%, 500=0.19%
  lat (msec)   : 750=0.08%
  cpu          : usr=0.19%, sys=2.13%, ctx=128617, majf=0, minf=32823
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=163840,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64
seq_write_512k: (groupid=3, jobs=4): err= 0: pid=1471: Thu Oct 23 22:43:56 2025
  write: IOPS=5028, BW=2514MiB/s (2636MB/s)(80.0GiB/32584msec); 0 zone resets
    slat (usec): min=29, max=79571, avg=76.00, stdev=685.01
    clat (msec): min=3, max=5018, avg=50.83, stdev=112.53
     lat (msec): min=3, max=5019, avg=50.90, stdev=112.53
    clat percentiles (msec):
     |  1.00th=[   27],  5.00th=[   31], 10.00th=[   36], 20.00th=[   45],
     | 30.00th=[   47], 40.00th=[   48], 50.00th=[   50], 60.00th=[   51],
     | 70.00th=[   51], 80.00th=[   52], 90.00th=[   54], 95.00th=[   57],
     | 99.00th=[   78], 99.50th=[   97], 99.90th=[  232], 99.95th=[ 2299],
     | 99.99th=[ 5000]
   bw (  MiB/s): min= 2278, max= 2776, per=100.00%, avg=2516.49, stdev=22.39, samples=260
   iops        : min= 4556, max= 5552, avg=5032.95, stdev=44.79, samples=260
  lat (msec)   : 4=0.01%, 10=0.07%, 20=0.09%, 50=60.19%, 100=39.21%
  lat (msec)   : 250=0.36%, >=2000=0.07%
  cpu          : usr=4.68%, sys=4.76%, ctx=53219, majf=0, minf=59
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=0,163840,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=587MiB/s (616MB/s), 587MiB/s-587MiB/s (616MB/s-616MB/s), io=80.0GiB (85.9GB), run=139499-139499msec

Run status group 1 (all jobs):
  WRITE: bw=540MiB/s (566MB/s), 540MiB/s-540MiB/s (566MB/s-566MB/s), io=80.0GiB (85.9GB), run=151659-151659msec

Run status group 2 (all jobs):
   READ: bw=872MiB/s (915MB/s), 872MiB/s-872MiB/s (915MB/s-915MB/s), io=80.0GiB (85.9GB), run=93924-93924msec

Run status group 3 (all jobs):
  WRITE: bw=2514MiB/s (2636MB/s), 2514MiB/s-2514MiB/s (2636MB/s-2636MB/s), io=80.0GiB (85.9GB), run=32584-32584msec

Disk stats (read/write):
    dm-0: ios=21135476/21135026, merge=0/0, ticks=27394132/11001792, in_queue=38395924, util=99.87%, aggrios=21066852/21061993, aggrmerge=68624/73614, aggrticks=17314123/7349875, aggrin_queue=24663997, aggrutil=99.86%
  sda: ios=21066852/21061993, merge=68624/73614, ticks=17314123/7349875, in_queue=24663997, util=99.86%


Disk subsystem load graphs in ESXi:

iSCSI storage

iSCSI test results will be published later.
