The following benchmarks were run on a freshly installed Armbian 5.75 (Debian Stretch), so no services such as web or database servers are running in the background. The kernel is mainline Linux 4.20.7.
CPU Performance
For the CPU benchmark I use sysbench version 0.4.12 from the Debian package repositories. The processor of the BPI-M2 Berry is a quad-core ARM Cortex-A7 clocked at 1.5 GHz.
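If sysbench is not installed yet, it can be pulled straight from the Debian repositories; on Stretch this should give exactly the 0.4.12 release used below:

sudo apt-get install sysbench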
Single-Core Performance
sysbench --test=cpu --cpu-max-prime=20000 run
sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Doing CPU performance benchmark

Threads started!
Done.

Maximum prime number checked in CPU test: 20000


Test execution summary:
    total time:                          540.1434s
    total number of events:              10000
    total time taken by event execution: 540.1337
    per-request statistics:
         min:                                 53.99ms
         avg:                                 54.01ms
         max:                                 59.12ms
         approx.  95 percentile:              54.03ms

Threads fairness:
    events (avg/stddev):           10000.0000/0.00
    execution time (avg/stddev):   540.1337/0.00
Multi-Core Performance (all 4 cores)
sysbench --test=cpu --cpu-max-prime=20000 --num-threads=4 run
sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 4

Doing CPU performance benchmark

Threads started!
Done.

Maximum prime number checked in CPU test: 20000


Test execution summary:
    total time:                          135.6720s
    total number of events:              10000
    total time taken by event execution: 542.5683
    per-request statistics:
         min:                                 53.99ms
         avg:                                 54.26ms
         max:                                 95.83ms
         approx.  95 percentile:              54.22ms

Threads fairness:
    events (avg/stddev):           2500.0000/13.40
    execution time (avg/stddev):   135.6421/0.02
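For comparison: the total runtime drops from 540.1 s (single thread) to 135.7 s, a speed-up of 540.1434 / 135.6720 ≈ 3.98, so the benchmark scales almost perfectly linearly across the four Cortex-A7 cores.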
Network Performance
The BPI-M2 Berry has a Gigabit Ethernet port. For testing I use iperf3. The Berry is connected through a switch to my PC, which acts as the iperf3 server.
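For reference, the PC side (192.168.178.10 in the runs below) simply has iperf3 running in server mode:

iperf3 -s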
iperf3 TX (transmit direction)
iperf3 -c 192.168.178.10
Connecting to host 192.168.178.10, port 5201
[  4] local 192.168.178.14 port 33090 connected to 192.168.178.10 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  99.4 MBytes   832 Mbits/sec    0    375 KBytes
[  4]   1.00-2.00   sec  98.8 MBytes   828 Mbits/sec    0    375 KBytes
[  4]   2.00-3.00   sec  98.7 MBytes   828 Mbits/sec    0    375 KBytes
[  4]   3.00-4.00   sec  98.6 MBytes   827 Mbits/sec    0    375 KBytes
[  4]   4.00-5.00   sec  98.6 MBytes   827 Mbits/sec    0    392 KBytes
[  4]   5.00-6.00   sec  98.4 MBytes   827 Mbits/sec    0    392 KBytes
[  4]   6.00-7.00   sec  98.4 MBytes   826 Mbits/sec    0    392 KBytes
[  4]   7.00-8.00   sec  98.6 MBytes   826 Mbits/sec    0    392 KBytes
[  4]   8.00-9.00   sec  98.6 MBytes   826 Mbits/sec    0    392 KBytes
[  4]   9.00-10.00  sec  98.2 MBytes   828 Mbits/sec    0    392 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec   986 MBytes   827 Mbits/sec    0             sender
[  4]   0.00-10.00  sec   985 MBytes   826 Mbits/sec                  receiver

iperf Done.
iperf3 RX (receive direction)
iperf3 -c 192.168.178.10 -R
Connecting to host 192.168.178.10, port 5201
Reverse mode, remote host 192.168.178.10 is sending
[  4] local 192.168.178.14 port 33100 connected to 192.168.178.10 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   109 MBytes   918 Mbits/sec
[  4]   1.00-2.00   sec   109 MBytes   919 Mbits/sec
[  4]   2.00-3.00   sec   110 MBytes   919 Mbits/sec
[  4]   3.00-4.00   sec   110 MBytes   919 Mbits/sec
[  4]   4.00-5.00   sec   110 MBytes   919 Mbits/sec
[  4]   5.00-6.00   sec   110 MBytes   919 Mbits/sec
[  4]   6.00-7.00   sec   109 MBytes   919 Mbits/sec
[  4]   7.00-8.00   sec   110 MBytes   919 Mbits/sec
[  4]   8.00-9.00   sec   110 MBytes   919 Mbits/sec
[  4]   9.00-10.00  sec   110 MBytes   919 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  1.07 GBytes   921 Mbits/sec    0             sender
[  4]   0.00-10.00  sec  1.07 GBytes   919 Mbits/sec                  receiver

iperf Done.
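In the receive direction the Berry thus sustains about 919 Mbit/s, roughly 92 % of the nominal Gigabit rate, while in the transmit direction it manages about 827 Mbit/s (roughly 83 %).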
SATA Performance
To test the speed of the SATA port I use a fairly old (2011) Kingston SSDNow V100 with 64 GB (SATA II), so please take the numbers with a grain of salt. I formatted the file system freshly with mkfs.ext4. Since I want to test the interface rather than the SSD itself, I only measure sequential read speed.
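The preparation looked roughly like this; note that the device name /dev/sda1 and the mount point /mnt/ssd are assumptions for illustration and may differ on your setup:

mkfs.ext4 /dev/sda1
mkdir -p /mnt/ssd
mount /dev/sda1 /mnt/ssd
cd /mnt/ssd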
The test uses 32 GB of data in order to avoid caching effects:
sysbench --test=fileio --file-total-size=32G prepare
sysbench --test=fileio --file-total-size=32G --file-test-mode=seqrd run
sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Extra file open flags: 0
128 files, 256Mb each
32Gb total file size
Block size 16Kb
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing sequential read test
Threads started!
Done.

Operations performed:  2097152 Read, 0 Write, 0 Other = 2097152 Total
Read 32Gb  Written 0b  Total transferred 32Gb  (132.56Mb/sec)
 8483.82 Requests/sec executed

Test execution summary:
    total time:                          247.1944s
    total number of events:              2097152
    total time taken by event execution: 243.5119
    per-request statistics:
         min:                                  0.03ms
         avg:                                  0.12ms
         max:                                 11.79ms
         approx.  95 percentile:               0.69ms

Threads fairness:
    events (avg/stddev):           2097152.0000/0.00
    execution time (avg/stddev):   243.5119/0.00
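The reported 132.56 Mb/sec is consistent with the request rate: 8483.82 requests/s × 16 KB per block ≈ 132.6 MB/s. Once the run is finished, the 32 GB of test files can be removed again with the matching cleanup step:

sysbench --test=fileio --file-total-size=32G cleanup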