2.2. Performance Guide

2.2.1. Kernel Performance Guide

Read This First

All performance numbers provided in this document are gathered using the following evaluation modules unless otherwise specified.

Name Description
J721S2 EVM J721S2 Evaluation Module rev E2 with ARM running at 2 GHz, DDR data rate 2666 MT/s, L3 cache size 3 MB

Table: Evaluation Modules


About This Manual

This document provides performance data for each of the device drivers which are part of the Processor SDK Linux package. This document should be used in conjunction with the release notes and user guides provided with the Processor SDK Linux package for information on specific issues present with drivers included in a particular release.

If You Need Assistance

For further information or to report any problems, contact TI support at http://e2e.ti.com/ or http://support.ti.com/.

2.2.1.1. System Benchmarks

2.2.1.1.1. LMBench

LMBench is a collection of microbenchmarks; the memory bandwidth and latency benchmarks in the suite are typically used to estimate processor memory system performance. More information about LMBench is available at http://lmbench.sourceforge.net/whatis_lmbench.html and http://lmbench.sourceforge.net/man/lmbench.8.html

Latency: lat_mem_rd-stride128-szN, where N is equal to or smaller than the cache size at a given level, measures the cache miss penalty at that level. An N that is at least double the size of the last-level cache measures the latency to external memory.

Bandwidth: bw_mem_bcopy-N, where N is equal to or smaller than the cache size at a given level, measures the achievable memory bandwidth from software doing a memcpy()-type operation. Typical use is for external memory bandwidth calculation. The bandwidth is calculated such that a byte that is both read and written counts once, so the result should be roughly half of the STREAM copy result.
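
The underlying microbenchmarks can also be invoked directly. The two lines below are a minimal sketch, assuming the lmbench binaries are installed on the target; the sizes and stride are illustrative:

lat_mem_rd 128 128     (memory read latency swept up to a 128 MB array with a 128-byte stride)
bw_mem 16m bcopy       (memcpy()-style copy bandwidth over a 16 MB block)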

Benchmarks j721s2-evm: perf
af_unix_sock_stream_latency (microsec) 19.75
af_unix_socket_stream_bandwidth (MBs) 3111.29
bw_file_rd-io-1mb (MB/s) 2793.30
bw_file_rd-o2c-1mb (MB/s) 1113.38
bw_mem-bcopy-16mb (MB/s) 3588.65
bw_mem-bcopy-1mb (MB/s) 4928.91
bw_mem-bcopy-2mb (MB/s) 3818.78
bw_mem-bcopy-4mb (MB/s) 3682.56
bw_mem-bcopy-8mb (MB/s) 3621.00
bw_mem-bzero-16mb (MB/s) 10705.92
bw_mem-bzero-1mb (MB/s) 8936.09 (min 4928.91, max 12943.26)
bw_mem-bzero-2mb (MB/s) 7758.45 (min 3818.78, max 11698.11)
bw_mem-bzero-4mb (MB/s) 7242.79 (min 3682.56, max 10803.02)
bw_mem-bzero-8mb (MB/s) 7142.95 (min 3621.00, max 10664.89)
bw_mem-cp-16mb (MB/s) 2147.36
bw_mem-cp-1mb (MB/s) 7877.47 (min 2481.39, max 13273.54)
bw_mem-cp-2mb (MB/s) 6896.21 (min 2087.68, max 11704.74)
bw_mem-cp-4mb (MB/s) 6481.66 (min 2148.61, max 10814.71)
bw_mem-cp-8mb (MB/s) 6450.73 (min 2143.34, max 10758.11)
bw_mem-fcp-16mb (MB/s) 3425.02
bw_mem-fcp-1mb (MB/s) 8721.77 (min 4500.28, max 12943.26)
bw_mem-fcp-2mb (MB/s) 7651.94 (min 3605.77, max 11698.11)
bw_mem-fcp-4mb (MB/s) 7157.44 (min 3511.85, max 10803.02)
bw_mem-fcp-8mb (MB/s) 7068.31 (min 3471.72, max 10664.89)
bw_mem-frd-16mb (MB/s) 4316.16
bw_mem-frd-1mb (MB/s) 5095.22 (min 4500.28, max 5690.16)
bw_mem-frd-2mb (MB/s) 4197.44 (min 3605.77, max 4789.10)
bw_mem-frd-4mb (MB/s) 3939.33 (min 3511.85, max 4366.81)
bw_mem-frd-8mb (MB/s) 3892.97 (min 3471.72, max 4314.22)
bw_mem-fwr-16mb (MB/s) 10677.34
bw_mem-fwr-1mb (MB/s) 9481.85 (min 5690.16, max 13273.54)
bw_mem-fwr-2mb (MB/s) 8246.92 (min 4789.10, max 11704.74)
bw_mem-fwr-4mb (MB/s) 7590.76 (min 4366.81, max 10814.71)
bw_mem-fwr-8mb (MB/s) 7536.17 (min 4314.22, max 10758.11)
bw_mem-rd-16mb (MB/s) 5044.14
bw_mem-rd-1mb (MB/s) 8800.30 (min 7648.52, max 9952.08)
bw_mem-rd-2mb (MB/s) 4548.67 (min 3170.83, max 5926.51)
bw_mem-rd-4mb (MB/s) 3758.33 (min 2355.37, max 5161.29)
bw_mem-rd-8mb (MB/s) 3671.54 (min 2285.39, max 5057.69)
bw_mem-rdwr-16mb (MB/s) 2237.45
bw_mem-rdwr-1mb (MB/s) 3810.95 (min 2481.39, max 5140.51)
bw_mem-rdwr-2mb (MB/s) 2421.96 (min 2087.68, max 2756.24)
bw_mem-rdwr-4mb (MB/s) 2284.59 (min 2148.61, max 2420.57)
bw_mem-rdwr-8mb (MB/s) 2212.57 (min 2143.34, max 2281.80)
bw_mem-wr-16mb (MB/s) 2253.84
bw_mem-wr-1mb (MB/s) 6394.52 (min 5140.51, max 7648.52)
bw_mem-wr-2mb (MB/s) 2963.54 (min 2756.24, max 3170.83)
bw_mem-wr-4mb (MB/s) 2387.97 (min 2355.37, max 2420.57)
bw_mem-wr-8mb (MB/s) 2283.60 (min 2281.80, max 2285.39)
bw_mmap_rd-mo-1mb (MB/s) 9132.42
bw_mmap_rd-o2c-1mb (MB/s) 1262.17
bw_pipe (MB/s) 926.24
bw_unix (MB/s) 3111.29
lat_connect (us) 40.10
lat_ctx-2-128k (us) 3.11
lat_ctx-2-256k (us) 2.99
lat_ctx-4-128k (us) 3.36
lat_ctx-4-256k (us) 2.61
lat_fs-0k (num_files) 566.00
lat_fs-10k (num_files) 218.00
lat_fs-1k (num_files) 294.00
lat_fs-4k (num_files) 336.00
lat_mem_rd-stride128-sz1000k (ns) 13.12
lat_mem_rd-stride128-sz125k (ns) 5.16
lat_mem_rd-stride128-sz250k (ns) 5.16
lat_mem_rd-stride128-sz31k (ns) 4.78
lat_mem_rd-stride128-sz50 (ns) 2.00
lat_mem_rd-stride128-sz500k (ns) 5.16
lat_mem_rd-stride128-sz62k (ns) 4.59
lat_mmap-1m (us) 21.00
lat_ops-double-add (ns) 0.32
lat_ops-double-mul (ns) 2.00
lat_ops-float-add (ns) 0.32
lat_ops-float-mul (ns) 2.00
lat_ops-int-add (ns) 0.50
lat_ops-int-bit (ns) 0.33
lat_ops-int-div (ns) 4.00
lat_ops-int-mod (ns) 4.67
lat_ops-int-mul (ns) 1.52
lat_ops-int64-add (ns) 0.50
lat_ops-int64-bit (ns) 0.33
lat_ops-int64-div (ns) 3.00
lat_ops-int64-mod (ns) 5.67
lat_pagefault (us) 0.47
lat_pipe (us) 11.28
lat_proc-exec (us) 510.20
lat_proc-fork (us) 453.00
lat_proc-proccall (us) 0.00
lat_select (us) 14.30
lat_sem (us) 1.55
lat_sig-catch (us) 2.56
lat_sig-install (us) 0.49
lat_sig-prot (us) 0.36
lat_syscall-fstat (us) 0.73
lat_syscall-null (us) 0.34
lat_syscall-open (us) 149.72
lat_syscall-read (us) 0.46
lat_syscall-stat (us) 1.56
lat_syscall-write (us) 0.45
lat_tcp (us) 0.77
lat_unix (us) 19.75
latency_for_0.50_mb_block_size (nanosec) 5.16
latency_for_1.00_mb_block_size (nanosec) 6.56 (min 0.00, max 13.12)
pipe_bandwidth (MBs) 926.24
pipe_latency (microsec) 11.28
procedure_call (microsec) 0.00
select_on_200_tcp_fds (microsec) 14.30
semaphore_latency (microsec) 1.55
signal_handler_latency (microsec) 0.49
signal_handler_overhead (microsec) 2.56
tcp_ip_connection_cost_to_localhost (microsec) 40.10
tcp_latency_using_localhost (microsec) 0.77

Table: LM Bench Metrics

2.2.1.1.2. Dhrystone

Dhrystone is a core-only benchmark that runs from warm L1 caches in all modern processors. It scales linearly with clock speed. For standard ARM cores, the DMIPS/MHz score will be identical when built with the same compiler and flags.

Execute the benchmark with the following:

runDhrystone
Benchmarks j721s2-evm: perf
cpu_clock (MHz) 2000.00
dhrystone_per_mhz (DMIPS/MHz) 5.70
dhrystone_per_second (DhrystoneP) 20000000.00

Table: Dhrystone Benchmark
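
For reference, the DMIPS/MHz figure in the table follows from dividing the raw Dhrystones-per-second result by the VAX 11/780 baseline of 1757 Dhrystones per second and by the CPU clock in MHz:

dhrystone_per_mhz = dhrystone_per_second / 1757 / cpu_clock
                  = 20000000 / 1757 / 2000 ≈ 5.7 DMIPS/MHz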

2.2.1.1.3. Whetstone

Whetstone is a benchmark primarily measuring floating-point arithmetic performance.

Execute the benchmark with the following:

runWhetstone
Benchmarks j721s2-evm: perf
whetstone (MIPS) 10000.00

Table: Whetstone Benchmark

2.2.1.1.4. Linpack

Linpack measures peak double precision (64 bit) floating point performance in solving a dense linear system.

Benchmarks j721s2-evm: perf
linpack (Kflops) 2643059.00

Table: Linpack Benchmark

2.2.1.1.5. NBench

NBench, which stands for Native Benchmark, is a set of macro benchmarks covering commonly used operations such as sorting and analysis algorithms. More information about NBench is available at https://en.wikipedia.org/wiki/NBench and https://nbench.io/articles/index.html

Benchmarks j721s2-evm: perf
assignment (Iterations) 29.68
fourier (Iterations) 56579.00
fp_emulation (Iterations) 250.01
huffman (Iterations) 2426.10
idea (Iterations) 7996.80
lu_decomposition (Iterations) 1391.00
neural_net (Iterations) 26.14
numeric_sort (Iterations) 877.24
string_sort (Iterations) 429.17

Table: NBench Benchmarks

2.2.1.1.6. CoreMarkPro

CoreMark®-Pro is a comprehensive, advanced processor benchmark that works with and enhances the market-proven industry-standard EEMBC CoreMark® benchmark. While CoreMark stresses the CPU pipeline, CoreMark-Pro tests the entire processor, adding comprehensive support for multicore technology, a combination of integer and floating-point workloads, and data sets for utilizing larger memory subsystems.

Benchmarks j721s2-evm: perf
cjpeg-rose7-preset (workloads/) 81.97
core (workloads/) 0.78
coremark-pro () 2445.75
linear_alg-mid-100x100-sp (workloads/) 81.57
loops-all-mid-10k-sp (workloads/) 2.48
nnet_test (workloads/) 3.61
parser-125k (workloads/) 11.63
radix2-big-64k (workloads/) 216.03
sha-test (workloads/) 158.73
zip-test (workloads/) 47.62

Table: CoreMarkPro

Benchmarks j721s2-evm: perf
cjpeg-rose7-preset (workloads/) 163.93
core (workloads/) 1.54
coremark-pro () 4370.99
linear_alg-mid-100x100-sp (workloads/) 161.29
loops-all-mid-10k-sp (workloads/) 3.87
nnet_test (workloads/) 7.23
parser-125k (workloads/) 21.28
radix2-big-64k (workloads/) 261.23
sha-test (workloads/) 312.50
zip-test (workloads/) 83.33

Table: CoreMarkPro for Two Cores

2.2.1.1.7. MultiBench

MultiBench™ is a suite of benchmarks that allows processor and system designers to analyze, test, and improve multicore processors. It uses three forms of concurrency:

  • Data decomposition: multiple threads cooperating on achieving a unified goal, demonstrating a processor's support for fine-grain parallelism.
  • Processing multiple data streams: uses common code running over multiple threads, demonstrating how well a processor scales over scalable data inputs.
  • Multiple workload processing: shows the scalability of general-purpose processing, demonstrating concurrency over both code and data.

MultiBench combines a wide variety of application-specific workloads with the EEMBC Multi-Instance-Test Harness (MITH), which is compatible and portable with most any multicore processor and operating system. MITH uses a thread-based, POSIX-compliant API to establish a common programming model that communicates with the benchmark through an abstraction layer and provides a flexible interface to allow a wide variety of thread-enabled workloads to be tested.

Benchmarks j721s2-evm: perf
4m-check (workloads/) 983.48
4m-check-reassembly (workloads/) 159.24
4m-check-reassembly-tcp (workloads/) 94.34
4m-check-reassembly-tcp-cmykw2-rotatew2 (workloads/) 43.10
4m-check-reassembly-tcp-x264w2 (workloads/) 2.72
4m-cmykw2 (workloads/) 311.04
4m-cmykw2-rotatew2 (workloads/) 59.00
4m-reassembly (workloads/) 129.37
4m-rotatew2 (workloads/) 71.48
4m-tcp-mixed (workloads/) 271.19
4m-x264w2 (workloads/) 2.76
idct-4m (workloads/) 34.95
idct-4mw1 (workloads/) 34.97
ippktcheck-4m (workloads/) 979.24
ippktcheck-4mw1 (workloads/) 981.16
ipres-4m (workloads/) 193.55
ipres-4mw1 (workloads/) 191.57
md5-4m (workloads/) 48.52
md5-4mw1 (workloads/) 49.26
rgbcmyk-4m (workloads/) 162.47
rgbcmyk-4mw1 (workloads/) 162.47
rotate-4ms1 (workloads/) 53.42
rotate-4ms1w1 (workloads/) 53.30
rotate-4ms64 (workloads/) 54.23
rotate-4ms64w1 (workloads/) 54.00
x264-4mq (workloads/) 1.43
x264-4mqw1 (workloads/) 1.43

Table: Multibench

2.2.1.2. Boot-time Measurement

2.2.1.2.1. Boot media: MMCSD

Boot Configuration j721s2-evm: boot time (sec)
Kernel boot time test when bootloader, kernel and sdk-rootfs are in mmc-sd 18.78 (min 18.57, max 19.08)
Kernel boot time test when init is /bin/sh and bootloader, kernel and sdk-rootfs are in mmc-sd 4.55 (min 4.53, max 4.58)

Table: Boot time MMC/SD

2.2.1.3. ALSA SoC Audio Driver

  1. Access type - RW_INTERLEAVED
  2. Channels - 2
  3. Format - S16_LE
  4. Period size - 64
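
A playback run with this configuration can be reproduced with aplay. This is a minimal sketch; the device name (hw:0,0) and the test file are assumptions and will differ per setup:

aplay -D hw:0,0 -c 2 -f S16_LE -r 48000 --period-size=64 test.wav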

Sampling Rate (Hz) j721s2-evm: Throughput (bits/sec) j721s2-evm: CPU Load (%)
11025 352937.00 0.41
16000 512200.00 0.44
22050 705873.00 0.52
24000 768292.00 0.54
32000 1024400.00 0.64
44100 1411648.00 0.72
48000 1536435.00 0.78
88200 1536452.00 0.83
96000 1536428.00 0.82

Table: Audio Playback


2.2.1.4. Graphics SGX/RGX Driver

2.2.1.4.1. GFXBench

Run GFXBench and capture the reported performance (score and display rate in fps). All display outputs (HDMI, DisplayPort, and/or LCD) are connected when running these tests.

Benchmark j721s2-evm: Score j721s2-evm: Fps
GFXBench 4.x gl_4_off 263.94 4.47
GFXBench 5.x gl_5_high_off 114.17 1.78

Table: GFXBench

2.2.1.4.2. Glmark2

Run Glmark2 and capture the reported performance (score). All display outputs (HDMI, DisplayPort, and/or LCD) are connected when running these tests.
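
A minimal invocation sketch, assuming the GLES2 variants of the glmark2 binaries are installed on the target, is:

glmark2-es2-drm
glmark2-es2-wayland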

Benchmark j721s2-evm: Score
Glmark2-DRM 9.00
Glmark2-Wayland 879.00

Table: Glmark2


2.2.1.5. Ethernet

Ethernet performance benchmarks were measured using Netperf 2.7.1 (https://hewlettpackard.github.io/netperf/doc/netperf.html). Test procedures were modeled after those defined in RFC-2544 (https://tools.ietf.org/html/rfc2544), where the DUT is the TI device and the "tester" is a Linux PC. To produce consistent results, it is recommended to carry out performance tests in a private network and to avoid running NFS on the same interface used in the test. In these results, CPU utilization was captured as the total percentage used across all cores on the device while running the performance test over one external interface.

UDP Throughput (0% loss) was measured by the procedure defined in RFC-2544 section 26.1: Throughput. In this scenario, the netperf options burst_size (-b) and wait_time (-w) are used to limit bandwidth during different trials of the test, with the goal of finding the highest rate at which no loss is seen. For example, to limit bandwidth to 500 Mbits/sec with a 1472-byte datagram:

burst_size = <bandwidth (bits/sec)> / 8 (bits -> bytes) / <UDP datagram size> / 100 (seconds -> 10 ms)
burst_size = 500000000 / 8 / 1472 / 100 = 425

wait_time = 10 milliseconds (minimum supported by Linux PC used for testing)
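
The same calculation can be scripted; this is an illustrative sketch only, with variable names chosen here for clarity:

#!/bin/sh
BW_BPS=500000000     # target bandwidth in bits/sec
DGRAM=1472           # UDP datagram size in bytes
# bits/sec -> bytes/sec, divide by datagram size, then scale from 1 s to the 10 ms wait_time
echo $(( BW_BPS / 8 / DGRAM / 100 ))     # prints 424; rounded up to 425 above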

UDP Throughput (possible loss) was measured by capturing throughput and packet loss statistics when running the netperf test with no bandwidth limit (remove -b/-w options).

In order to start a netperf client on one device, the other device must have netserver running. To start netserver:

netserver [-p <port_number>] [-4 (IPv4 addressing)] [-6 (IPv6 addressing)]

Running the following shell script from the DUT will trigger netperf clients to measure bidirectional TCP performance for 60 seconds and report CPU utilization. Parameter -k is used in client commands to summarize selected statistics on their own line and -j is used to gain additional timing measurements during the test.

#!/bin/bash
for i in 1
do
   netperf -H <tester ip> -j -c -l 60 -t TCP_STREAM -- \
      -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE &

   netperf -H <tester ip> -j -c -l 60 -t TCP_MAERTS -- \
      -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE &
done

Running the following commands will trigger netperf clients to measure UDP burst performance for 60 seconds at various burst/datagram sizes and report CPU utilization.

  • For UDP egress tests, run netperf client from DUT and start netserver on tester.
netperf -H <tester ip> -j -c -l 60 -t UDP_STREAM -b <burst_size> -w <wait_time> -- -m <UDP datagram size> \
   -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE
  • For UDP ingress tests, run netperf client from tester and start netserver on DUT.
netperf -H <DUT ip> -j -C -l 60 -t UDP_STREAM -b <burst_size> -w <wait_time> -- -m <UDP datagram size> \
   -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE

2.2.1.5.1. CPSW/CPSW2g/CPSW3g Ethernet Driver

  • CPSW2g: AM65x, J7200, J721e, J721S2, J784S4
  • CPSW3g: AM64x

TCP Bidirectional Throughput

Command Used j721s2-evm: THROUGHPUT (Mbits/sec) j721s2-evm: CPU Load % (LOCAL_CPU_UTIL)
netperf -H 192.168.0.1 -j -c -C -l 60 -t TCP_STREAM; netperf -H 192.168.0.1 -j -c -C -l 60 -t TCP_MAERTS 1857.42 72.82

Table: CPSW TCP Bidirectional Throughput

UDP Throughput

Frame Size(bytes) j721s2-evm: UDP Datagram Size(bytes) (LOCAL_SEND_SIZE) j721s2-evm: THROUGHPUT (Mbits/sec) j721s2-evm: CPU Load % (LOCAL_CPU_UTIL)
64 18.00 30.41 93.25
128 82.00 133.33 89.43
256 210.00 200.80 65.92
1024 978.00 780.54 50.43
1518 1472.00 956.89 40.32

Table: CPSW UDP Egress Throughput

Frame Size(bytes) j721s2-evm: UDP Datagram Size(bytes) (LOCAL_SEND_SIZE) j721s2-evm: THROUGHPUT (Mbits/sec) j721s2-evm: CPU Load % (LOCAL_CPU_UTIL)
64 18.00 14.96 50.79
128 82.00 91.71 71.35
256 210.00 282.75 73.30
1024 978.00 496.82 38.08
1518 1472.00 954.12 64.48

Table: CPSW UDP Ingress Throughput (0% loss)


Frame Size(bytes) j721s2-evm: UDP Datagram Size(bytes) (LOCAL_SEND_SIZE) j721s2-evm: THROUGHPUT (Mbits/sec) j721s2-evm: CPU Load % (LOCAL_CPU_UTIL) j721s2-evm: Packet Loss %
64 18.00 20.29 68.53 0.01
128 82.00 91.71 71.35 0.00
256 210.00 282.75 73.30 0.00
1024 978.00 450.35 58.93 51.87
1518 1472.00 954.12 64.48 0.00

Table: CPSW UDP Ingress Throughput (possible loss)


2.2.1.6. PCIe Driver

2.2.1.6.1. PCIe-NVMe-SSD

2.2.1.6.1.1. J721S2-EVM
Buffer size (bytes) j721s2-evm: Write EXT4 Throughput (Mbytes/sec) j721s2-evm: Write EXT4 CPU Load (%) j721s2-evm: Read EXT4 Throughput (Mbytes/sec) j721s2-evm: Read EXT4 CPU Load (%)
1m 749.00 15.65 786.00 3.61
4m 708.00 13.76 764.00 3.73
4k 202.00 50.52 290.00 50.50
256k 749.00 14.49 786.00 6.67
  • Filesize used is: 10G
  • FIO command options: --ioengine=libaio --iodepth=4 --numjobs=1 --direct=1 --runtime=60 --time_based
  • Platform: Speed 8GT/s, Width x2
  • SSD being used: PLEXTOR PX-128M8PeY
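
Putting these options together, a representative invocation looks like the sketch below; the job name, file path, and block size are illustrative (the read rows use --rw=read, and the other block sizes are obtained by changing --bs):

fio --name=seqwrite --filename=/mnt/nvme/fio_test --size=10G --rw=write --bs=1m \
    --ioengine=libaio --iodepth=4 --numjobs=1 --direct=1 --runtime=60 --time_based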

2.2.1.7. UBoot QSPI/OSPI Driver

2.2.1.7.1. J721S2-EVM

File size (bytes in hex) j721s2-evm: Write Throughput (Kbytes/sec) j721s2-evm: Read Throughput (Kbytes/sec)
400000 365.13 204800.00
800000 367.50 248242.42
1000000 366.35 277694.92
2000000 364.15 300623.85
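
These throughputs are typically gathered at the U-Boot prompt with the sf and time commands (when enabled in the build). The sequence below is a sketch; the load address, flash offset, and length are assumptions:

=> sf probe
=> time sf read ${loadaddr} 0x0 0x400000
=> sf erase 0x0 0x400000
=> time sf write ${loadaddr} 0x0 0x400000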

2.2.1.8. EMMC Driver

Warning

IMPORTANT: The performance numbers can be severely affected if the media is mounted in sync mode. Hot-plug scripts in the filesystem mount removable media in sync mode to ensure data integrity. For performance-sensitive applications, unmount the auto-mounted filesystem and re-mount it in async mode.
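
As a sketch, assuming the media was auto-mounted from /dev/mmcblk0p1 at /run/media/mmcblk0p1 (the device node and mount point will vary):

umount /run/media/mmcblk0p1
mount -o async /dev/mmcblk0p1 /mnt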


2.2.1.8.1. J721S2-EVM


Buffer size (bytes) j721s2-evm: Write EXT4 Throughput (Mbytes/sec) j721s2-evm: Write EXT4 CPU Load (%) j721s2-evm: Read EXT4 Throughput (Mbytes/sec) j721s2-evm: Read EXT4 CPU Load (%)
1m 44.90 1.50 299.00 2.02
4m 45.00 1.47 299.00 1.49
4k 5.23 2.82 36.10 14.93
256k 36.10 1.58 282.00 3.21

2.2.1.9. UBoot EMMC Driver


2.2.1.9.1. J721S2-EVM


File size (bytes in hex) j721s2-evm: Write Throughput (Kbytes/sec) j721s2-evm: Read Throughput (Kbytes/sec)
2000000 59362.32 292571.43
4000000 59578.18 309132.08
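
These numbers are typically gathered at the U-Boot prompt with the mmc and time commands (when enabled in the build). The sequence below is a sketch; the device number, load address, start block, and block count are assumptions (0x2000000 bytes corresponds to 0x10000 blocks of 512 bytes):

=> mmc dev 0
=> time mmc write ${loadaddr} 0x0 0x10000
=> time mmc read ${loadaddr} 0x0 0x10000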

2.2.1.10. MMC/SD Driver

Warning

IMPORTANT: The performance numbers can be severely affected if the media is mounted in sync mode. Hot-plug scripts in the filesystem mount removable media in sync mode to ensure data integrity. For performance-sensitive applications, unmount the auto-mounted filesystem and re-mount it in async mode.


2.2.1.10.1. J721S2-EVM


Buffer size (bytes) j721s2-evm: Write EXT4 Throughput (Mbytes/sec) j721s2-evm: Write EXT4 CPU Load (%) j721s2-evm: Read EXT4 Throughput (Mbytes/sec) j721s2-evm: Read EXT4 CPU Load (%)
1m 18.00 0.99 43.70 0.74
4m 18.20 0.77 41.70 0.74
4k 4.80 3.49 14.00 6.87
256k 18.70 1.00 43.20 1.01


The performance numbers were captured using the following:

  • SanDisk 8GB MicroSDHC Class 10 Memory Card
  • Partition was mounted with async option

2.2.1.11. UBoot MMC/SD Driver


2.2.1.11.1. J721S2-EVM

File size (bytes in hex) j721s2-evm: Write Throughput (Kbytes/sec) j721s2-evm: Read Throughput (Kbytes/sec)
400000 16000.00 39766.99
800000 19458.43 42445.60
1000000 19320.75 44043.01

The performance numbers were captured using the following:

  • SanDisk 8GB MicroSDHC Class 10 Memory Card

2.2.1.12. USB Driver

2.2.1.12.1. USB Device Controller

Number of Blocks j721s2-evm: Throughput (MB/sec)
150 32.70

Table: USBDEVICE HIGHSPEED SLAVE READ THROUGHPUT



Number of Blocks j721s2-evm: Throughput (MB/sec)
150 29.10

Table: USBDEVICE HIGHSPEED SLAVE WRITE THROUGHPUT



2.2.1.13. CRYPTO Driver

2.2.1.13.1. OpenSSL Performance

Algorithm Buffer Size (in bytes) j721s2-evm: throughput (KBytes/Sec)
aes-128-cbc 1024 46065.66
aes-128-cbc 16 897.21
aes-128-cbc 16384 183528.11
aes-128-cbc 256 14019.67
aes-128-cbc 64 3621.31
aes-128-cbc 8192 151164.25
aes-192-cbc 1024 44900.35
aes-192-cbc 16 900.88
aes-192-cbc 16384 176608.60
aes-192-cbc 256 14036.74
aes-192-cbc 64 3633.64
aes-192-cbc 8192 147461.46
aes-256-cbc 1024 45125.97
aes-256-cbc 16 905.40
aes-256-cbc 16384 164342.44
aes-256-cbc 256 14058.07
aes-256-cbc 64 3604.89
aes-256-cbc 8192 136181.08
des-cbc 1024 47080.11
des-cbc 16 9869.16
des-cbc 16384 49801.90
des-cbc 256 39686.31
des-cbc 64 24619.07
des-cbc 8192 49588.91
des3 1024 40186.88
des3 16 897.43
des3 16384 95808.17
des3 256 13582.51
des3 64 3632.64
des3 8192 87323.99
md5 1024 88586.58
md5 16 1940.30
md5 16384 260177.92
md5 256 27882.33
md5 64 7445.44
md5 8192 228330.15
sha1 1024 55045.12
sha1 16 912.47
sha1 16384 457250.13
sha1 256 14376.53
sha1 64 3639.51
sha1 8192 307027.97
sha224 1024 101911.21
sha224 16 1771.26
sha224 16384 585788.07
sha224 256 27642.62
sha224 64 7070.95
sha224 8192 446854.49
sha256 1024 54648.15
sha256 16 899.26
sha256 16384 447436.12
sha256 256 14292.57
sha256 64 3629.38
sha256 8192 299636.05
sha384 1024 68889.60
sha384 16 1795.36
sha384 16384 158460.59
sha384 256 24517.97
sha384 64 7181.93
sha384 8192 146270.89
sha512 1024 43683.50
sha512 16 909.01
sha512 16384 146800.64
sha512 256 13114.11
sha512 64 3627.65
sha512 8192 126492.67


Algorithm j721s2-evm: CPU Load
aes-128-cbc 35.00
aes-192-cbc 36.00
aes-256-cbc 36.00
des-cbc 99.00
des3 32.00
md5 99.00
sha1 99.00
sha224 99.00
sha256 99.00
sha384 99.00
sha512 99.00
The following command was used to run each benchmark test (shown here for aes-128-cbc):

time -v openssl speed -elapsed -evp aes-128-cbc
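
Other rows of the tables follow the same pattern by substituting the algorithm name; for example (an illustrative invocation, not taken from the test logs):

time -v openssl speed -elapsed -evp sha256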

2.2.1.14. DCAN Driver

Performance and Benchmarks not available in this release.