2.2. Performance Guide

2.2.1. Kernel Performance Guide

Read This First

All performance numbers provided in this document were gathered using the following evaluation modules unless otherwise specified.

Name Description
J721e EVM J721e Evaluation Module rev E2 with ARM running at 2 GHz, DDR data rate 4266 MT/s, L3 cache size 3 MB

Table: Evaluation Modules


About This Manual

This document provides performance data for each of the device drivers which are part of the Processor SDK Linux package. This document should be used in conjunction with the release notes and user guides provided with the Processor SDK Linux package for information on specific issues present with drivers included in a particular release.

If You Need Assistance

For further information or to report problems, visit http://e2e.ti.com/ or http://support.ti.com/

2.2.1.1. System Benchmarks

2.2.1.1.1. LMBench

LMBench is a collection of microbenchmarks of which the memory bandwidth and latency related ones are typically used to estimate processor memory system performance. More information about LMBench is available at http://lmbench.sourceforge.net/whatis_lmbench.html and http://lmbench.sourceforge.net/man/lmbench.8.html

Latency: lat_mem_rd-stride128-szN, where N is equal to or smaller than the cache size at a given level, measures the cache-miss penalty. When N is at least double the size of the last-level cache, the result is the latency to external memory.

Bandwidth: bw_mem-bcopy-N, where N is equal to or smaller than the cache size at a given level, measures the achievable memory bandwidth from software doing a memcpy()-type operation. Typical use is for external memory bandwidth calculation. The bandwidth is calculated counting each byte read or written as 1, so the result should be roughly half of the STREAM copy result.
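
As an illustrative sketch (the array sizes and strides are examples, not necessarily the exact parameters used to generate the table below), the underlying lmbench tools can be invoked directly:

# Read latency over arrays up to 8 MB with a 128-byte stride
lat_mem_rd 8 128
# memcpy()-style bandwidth over a 16 MB block
bw_mem 16m bcopy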

Benchmarks j721e-idk-gw: perf
af_unix_sock_stream_latency (microsec) 19.42
af_unix_socket_stream_bandwidth (MBs) 2595.97
bw_file_rd-io-1mb (MB/s) 3718.16
bw_file_rd-o2c-1mb (MB/s) 1967.80
bw_mem-bcopy-16mb (MB/s) 2613.10
bw_mem-bcopy-1mb (MB/s) 5048.68
bw_mem-bcopy-2mb (MB/s) 3282.28
bw_mem-bcopy-4mb (MB/s) 3206.41
bw_mem-bcopy-8mb (MB/s) 2763.39
bw_mem-bzero-16mb (MB/s) 10254.77
bw_mem-bzero-1mb (MB/s) 8858.24 (min 5048.68, max 12667.80)
bw_mem-bzero-2mb (MB/s) 7905.69 (min 3282.28, max 12529.09)
bw_mem-bzero-4mb (MB/s) 7840.21 (min 3206.41, max 12474.01)
bw_mem-bzero-8mb (MB/s) 7586.45 (min 2763.39, max 12409.51)
bw_mem-cp-16mb (MB/s) 1080.72
bw_mem-cp-1mb (MB/s) 6825.52 (min 981.51, max 12669.52)
bw_mem-cp-2mb (MB/s) 6782.77 (min 1040.94, max 12524.60)
bw_mem-cp-4mb (MB/s) 6846.16 (min 1222.12, max 12470.20)
bw_mem-cp-8mb (MB/s) 6727.97 (min 1067.24, max 12388.70)
bw_mem-fcp-16mb (MB/s) 2534.85
bw_mem-fcp-1mb (MB/s) 8835.63 (min 5003.45, max 12667.80)
bw_mem-fcp-2mb (MB/s) 7876.58 (min 3224.07, max 12529.09)
bw_mem-fcp-4mb (MB/s) 7824.62 (min 3175.23, max 12474.01)
bw_mem-fcp-8mb (MB/s) 7540.76 (min 2672.01, max 12409.51)
bw_mem-frd-16mb (MB/s) 4867.66
bw_mem-frd-1mb (MB/s) 5469.09 (min 5003.45, max 5934.72)
bw_mem-frd-2mb (MB/s) 4544.59 (min 3224.07, max 5865.10)
bw_mem-frd-4mb (MB/s) 4393.65 (min 3175.23, max 5612.07)
bw_mem-frd-8mb (MB/s) 4063.59 (min 2672.01, max 5455.17)
bw_mem-fwr-16mb (MB/s) 10251.48
bw_mem-fwr-1mb (MB/s) 9302.12 (min 5934.72, max 12669.52)
bw_mem-fwr-2mb (MB/s) 9194.85 (min 5865.10, max 12524.60)
bw_mem-fwr-4mb (MB/s) 9041.14 (min 5612.07, max 12470.20)
bw_mem-fwr-8mb (MB/s) 8921.94 (min 5455.17, max 12388.70)
bw_mem-rd-16mb (MB/s) 5190.59
bw_mem-rd-1mb (MB/s) 10997.91 (min 10294.64, max 11701.17)
bw_mem-rd-2mb (MB/s) 4110.54 (min 1479.51, max 6741.57)
bw_mem-rd-4mb (MB/s) 3779.09 (min 1438.85, max 6119.33)
bw_mem-rd-8mb (MB/s) 3646.12 (min 1394.70, max 5897.53)
bw_mem-rdwr-16mb (MB/s) 1247.76
bw_mem-rdwr-1mb (MB/s) 3812.40 (min 981.51, max 6643.29)
bw_mem-rdwr-2mb (MB/s) 1183.71 (min 1040.94, max 1326.48)
bw_mem-rdwr-4mb (MB/s) 1272.44 (min 1222.12, max 1322.75)
bw_mem-rdwr-8mb (MB/s) 1191.84 (min 1067.24, max 1316.44)
bw_mem-wr-16mb (MB/s) 1978.24
bw_mem-wr-1mb (MB/s) 9172.23 (min 6643.29, max 11701.17)
bw_mem-wr-2mb (MB/s) 1403.00 (min 1326.48, max 1479.51)
bw_mem-wr-4mb (MB/s) 1380.80 (min 1322.75, max 1438.85)
bw_mem-wr-8mb (MB/s) 1355.57 (min 1316.44, max 1394.70)
bw_mmap_rd-mo-1mb (MB/s) 8634.09
bw_mmap_rd-o2c-1mb (MB/s) 1896.09
bw_pipe (MB/s) 1738.40
bw_unix (MB/s) 2595.97
lat_connect (us) 27.57
lat_ctx-2-128k (us) 3.98
lat_ctx-2-256k (us) 4.24
lat_ctx-4-128k (us) 5.51
lat_ctx-4-256k (us) 6.09
lat_fs-0k (num_files) 626.00
lat_fs-10k (num_files) 211.00
lat_fs-1k (num_files) 207.00
lat_fs-4k (num_files) 203.00
lat_mem_rd-stride128-sz1000k (ns) 8.70
lat_mem_rd-stride128-sz125k (ns) 5.15
lat_mem_rd-stride128-sz250k (ns) 5.15
lat_mem_rd-stride128-sz31k (ns) 2.00
lat_mem_rd-stride128-sz50 (ns) 2.00
lat_mem_rd-stride128-sz500k (ns) 5.15
lat_mem_rd-stride128-sz62k (ns) 5.15
lat_mmap-1m (us) 9.09
lat_ops-double-add (ns) 0.32
lat_ops-double-mul (ns) 2.00
lat_ops-float-add (ns) 0.32
lat_ops-float-mul (ns) 2.00
lat_ops-int-add (ns) 0.50
lat_ops-int-bit (ns) 0.33
lat_ops-int-div (ns) 4.00
lat_ops-int-mod (ns) 4.67
lat_ops-int-mul (ns) 1.52
lat_ops-int64-add (ns) 0.50
lat_ops-int64-bit (ns) 0.33
lat_ops-int64-div (ns) 3.00
lat_ops-int64-mod (ns) 5.67
lat_pagefault (us) 1.16
lat_pipe (us) 10.93
lat_proc-exec (us) 719.25
lat_proc-fork (us) 606.90
lat_proc-proccall (us) 0.00
lat_select (us) 9.49
lat_sem (us) 1.43
lat_sig-catch (us) 2.39
lat_sig-install (us) 0.44
lat_sig-prot (us) 0.34
lat_syscall-fstat (us) 0.58
lat_syscall-null (us) 0.27
lat_syscall-open (us) 132.79
lat_syscall-read (us) 0.37
lat_syscall-stat (us) 1.43
lat_syscall-write (us) 0.33
lat_tcp (us) 0.57
lat_unix (us) 19.42
latency_for_0.50_mb_block_size (nanosec) 5.15
latency_for_1.00_mb_block_size (nanosec) 4.35 (min 0.00, max 8.70)
pipe_bandwidth (MBs) 1738.40
pipe_latency (microsec) 10.93
procedure_call (microsec) 0.00
select_on_200_tcp_fds (microsec) 9.49
semaphore_latency (microsec) 1.43
signal_handler_latency (microsec) 0.44
signal_handler_overhead (microsec) 2.39
tcp_ip_connection_cost_to_localhost (microsec) 27.57
tcp_latency_using_localhost (microsec) 0.57

Table: LM Bench Metrics

2.2.1.1.2. Dhrystone

Dhrystone is a core-only benchmark that runs from warm L1 caches in all modern processors. It scales linearly with clock speed. For standard ARM cores, the DMIPS/MHz score will be identical with the same compiler and flags.
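
As a quick sanity check (using the standard convention that 1 DMIPS corresponds to 1757 Dhrystones per second, the VAX 11/780 baseline), the DMIPS/MHz figure below can be reproduced from the raw score:

# 20,000,000 Dhrystones/sec / 1757 / 2000 MHz ~= 5.69, matching the reported 5.7 DMIPS/MHz
echo "20000000 / 1757 / 2000" | bc -l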

Benchmarks j721e-idk-gw: perf
cpu_clock (MHz) 2000.00
dhrystone_per_mhz (DMIPS/MHz) 5.7
dhrystone_per_second (DhrystoneP) 20000000.00

Table: Dhrystone Benchmark

2.2.1.1.3. Whetstone

Benchmarks j721e-idk-gw: perf
whetstone (MIPS) 10000.00

Table: Whetstone Benchmark

2.2.1.1.4. Linpack

Linpack measures peak double precision (64 bit) floating point performance in solving a dense linear system.

Benchmarks j721e-idk-gw: perf
linpack (Kflops) 2645095.00

Table: Linpack Benchmark

2.2.1.1.5. NBench

NBench, which stands for Native Benchmark, is used to measure macro benchmarks for commonly used operations such as sorting and analysis algorithms. More information about NBench is available at https://en.wikipedia.org/wiki/NBench and https://nbench.io/articles/index.html

Benchmarks j721e-idk-gw: perf
assignment (Iterations) 29.69
fourier (Iterations) 49615.00
fp_emulation (Iterations) 250.10
huffman (Iterations) 2430.70
idea (Iterations) 7997.70
lu_decomposition (Iterations) 1431.60
neural_net (Iterations) 27.34
numeric_sort (Iterations) 879.45
string_sort (Iterations) 429.64

Table: NBench Benchmarks

2.2.1.1.6. Stream

STREAM is a microbenchmark for measuring data memory system performance without any data reuse. It is designed to miss in the caches and exercise the data prefetcher and speculative accesses. It uses double precision floating point (64-bit), but in most modern processors the memory access will be the bottleneck. The four individual scores are copy, scale (multiply by a constant), add (add two numbers), and triad (multiply-accumulate). For bandwidth, a byte read counts as one and a byte written counts as one, resulting in a score that is double the bandwidth LMBench will show.
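
As a rough cross-check of that accounting (a sketch using the numbers reported in this document), doubling the LMBench bw_mem-bcopy-16mb result should land in the same range as the STREAM copy score:

# 2 x 2613.10 MB/s ~= 5226 MB/s, in the same range as the 5945.10 MB/s STREAM copy result below
echo "2 * 2613.10" | bc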

Benchmarks j721e-idk-gw: perf
add (MB/s) 5625.20
copy (MB/s) 5945.10
scale (MB/s) 5621.30
triad (MB/s) 5528.70

Table: Stream

2.2.1.1.7. CoreMarkPro

CoreMark®-Pro is a comprehensive, advanced processor benchmark that works with and enhances the market-proven industry-standard EEMBC CoreMark® benchmark. While CoreMark stresses the CPU pipeline, CoreMark-Pro tests the entire processor, adding comprehensive support for multicore technology, a combination of integer and floating-point workloads, and data sets for utilizing larger memory subsystems.

Benchmarks j721e-idk-gw: perf
cjpeg-rose7-preset (workloads/) 83.33
core (workloads/) 0.78
coremark-pro () 2543.12
linear_alg-mid-100x100-sp (workloads/) 82.64
loops-all-mid-10k-sp (workloads/) 2.50
nnet_test (workloads/) 3.67
parser-125k (workloads/) 12.20
radix2-big-64k (workloads/) 268.24
sha-test (workloads/) 156.25
zip-test (workloads/) 50.00

Table: CoreMarkPro

Benchmarks j721e-idk-gw: perf
cjpeg-rose7-preset (workloads/) 166.67
core (workloads/) 1.56
coremark-pro () 4466.88
linear_alg-mid-100x100-sp (workloads/) 164.47
loops-all-mid-10k-sp (workloads/) 3.77
nnet_test (workloads/) 7.37
parser-125k (workloads/) 23.26
radix2-big-64k (workloads/) 255.36
sha-test (workloads/) 312.50
zip-test (workloads/) 90.91

Table: CoreMarkPro for Two Cores

2.2.1.1.8. MultiBench

MultiBench™ is a suite of benchmarks that allows processor and system designers to analyze, test, and improve multicore processors. It uses three forms of concurrency:

  • Data decomposition: multiple threads cooperating on achieving a unified goal, demonstrating a processor's support for fine-grained parallelism.
  • Processing multiple data streams: common code running over multiple threads, demonstrating how well a processor scales over scalable data inputs.
  • Multiple workload processing: shows the scalability of general-purpose processing, demonstrating concurrency over both code and data.

MultiBench combines a wide variety of application-specific workloads with the EEMBC Multi-Instance-Test Harness (MITH), which is compatible and portable with most any multicore processor and operating system. MITH uses a thread-based (POSIX-compliant) API to establish a common programming model that communicates with the benchmark through an abstraction layer and provides a flexible interface to allow a wide variety of thread-enabled workloads to be tested.

Benchmarks j721e-idk-gw: perf
4m-check (workloads/) 1086.01
4m-check-reassembly (workloads/) 234.74
4m-check-reassembly-tcp (workloads/) 126.90
4m-check-reassembly-tcp-cmykw2-rotatew2 (workloads/) 46.95
4m-check-reassembly-tcp-x264w2 (workloads/) 2.76
4m-cmykw2 (workloads/) 321.54
4m-cmykw2-rotatew2 (workloads/) 65.57
4m-reassembly (workloads/) 243.31
4m-rotatew2 (workloads/) 75.19
4m-tcp-mixed (workloads/) 266.67
4m-x264w2 (workloads/) 2.82
idct-4m (workloads/) 34.98
idct-4mw1 (workloads/) 35.12
ippktcheck-4m (workloads/) 1090.27
ippktcheck-4mw1 (workloads/) 1088.38
ipres-4m (workloads/) 208.91
ipres-4mw1 (workloads/) 208.33
md5-4m (workloads/) 51.95
md5-4mw1 (workloads/) 53.05
rgbcmyk-4m (workloads/) 164.34
rgbcmyk-4mw1 (workloads/) 164.34
rotate-4ms1 (workloads/) 57.08
rotate-4ms1w1 (workloads/) 57.08
rotate-4ms64 (workloads/) 57.54
rotate-4ms64w1 (workloads/) 57.41
x264-4mq (workloads/) 1.45
x264-4mqw1 (workloads/) 1.44

Table: Multibench

2.2.1.2. Boot-time Measurement

2.2.1.2.1. Boot media: MMCSD

Boot Configuration j721e-idk-gw: boot time (sec)
Kernel boot time test when bootloader, kernel and sdk-rootfs are in mmc-sd 15.70 (min 15.57, max 15.82)
Kernel boot time test when init is only /bin/sh and bootloader, kernel and sdk-rootfs are in mmc-sd 6.99 (min 6.94, max 7.14)

Table: Boot time MMC/SD
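
The second configuration boots the kernel with init=/bin/sh as a minimal init. One way to reproduce such a run (a sketch; the exact environment handling depends on the board's boot scripts) is to append the option to the kernel command line at the U-Boot prompt:

# Append init=/bin/sh to the kernel command line, then boot
setenv bootargs ${bootargs} init=/bin/sh
boot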

2.2.1.3. ALSA SoC Audio Driver

The following ALSA configuration was used for the audio measurements below (an illustrative command-line example appears after the playback table):

  1. Access type - RW_INTERLEAVED
  2. Channels - 2
  3. Format - S16_LE
  4. Period size - 64
Sampling Rate (Hz) j721e-idk-gw: Throughput (bits/sec) j721e-idk-gw: CPU Load (%)
11025 352800.00 0.17
16000 512000.00 0.11
22050 705600.00 0.13
24000 705600.00 0.29
32000 1023999.00 0.16
44100 1411199.00 0.20
48000 1535999.00 0.22
88200 2822395.00 0.92
96000 3071994.00 0.35

Table: Audio Capture


Sampling Rate (Hz) j721e-idk-gw: Throughput (bits/sec) j721e-idk-gw: CPU Load (%)
11025 352945.00 0.08
16000 512211.00 0.09
22050 705891.00 0.27
24000 705891.00 0.28
32000 1024422.00 0.13
44100 1411781.00 0.43
48000 1536632.00 0.23
88200 2823560.00 0.91
96000 3073262.00 0.37

Table: Audio Playback
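
As a sketch of how such runs can be driven (the device name, duration, and file name are illustrative and not necessarily the exact commands used for these tables), arecord and aplay accept the settings listed above:

# Capture: 2-channel S16_LE at 48 kHz with a 64-frame period, 60 seconds
arecord -D hw:0,0 -f S16_LE -c 2 -r 48000 --period-size=64 -d 60 /dev/null
# Playback: same period size; format and rate are taken from the file header
aplay -D hw:0,0 --period-size=64 -d 60 test48k.wav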


2.2.1.4. Graphics SGX/RGX Driver

2.2.1.4.1. GLBenchmark

Run GLBenchmark and capture the reported performance: display rate (fps), fill rate, vertex throughput, etc. All display outputs (HDMI, DisplayPort and/or LCD) are connected when running these tests.

2.2.1.4.1.1. Performance (Fps)
Benchmark j721e-idk-gw: Test Number j721e-idk-gw: Fps
GLB25_EgyptTestC24Z16_ETC1_Offscreen test 2501011.00 46.00

Table: GLBenchmark 2.5 Performance

2.2.1.4.2. GFXBench

Run GFXBench and capture the reported performance (score and display rate in fps). All display outputs (HDMI, DisplayPort and/or LCD) are connected when running these tests.

Benchmark j721e-idk-gw: Score j721e-idk-gw: Fps
GFXBench 3.x gl_trex_off 1943.33 34.70
GFXBench 4.x gl_4_off 425.87 7.21
GFXBench 5.x gl_5_high_off 173.90 2.70

Table: GFXBench

2.2.1.4.3. Glmark2

Run Glmark2 and capture the reported score. All display outputs (HDMI, DisplayPort and/or LCD) are connected when running these tests.

Benchmark j721e-idk-gw: Score
Glmark2-Wayland 1298.00

Table: Glmark2


2.2.1.5. Ethernet

Ethernet performance benchmarks were measured using Netperf 2.7.1 (https://hewlettpackard.github.io/netperf/doc/netperf.html). Test procedures were modeled after those defined in RFC 2544 (https://tools.ietf.org/html/rfc2544), where the DUT is the TI device and the "tester" is a Linux PC. To produce consistent results, it is recommended to carry out performance tests on a private network and to avoid running NFS on the interface used in the test. In these results, CPU utilization was captured as the total percentage used across all cores on the device while running the performance test over one external interface.

UDP Throughput (0% loss) was measured by the procedure defined in RFC 2544 section 26.1 (Throughput). In this scenario, the netperf options burst_size (-b) and wait_time (-w) are used to limit bandwidth during different trials of the test, with the goal of finding the highest rate at which no loss is seen. For example, to limit bandwidth to 500 Mbits/sec with a 1472-byte datagram:

burst_size = <bandwidth (bits/sec)> / 8 (bits -> bytes) / <UDP datagram size (bytes)> / 100 (bursts/sec, given a 10 ms wait_time)
burst_size = 500000000 / 8 / 1472 / 100 = 425

wait_time = 10 milliseconds (minimum supported by Linux PC used for testing)
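
A small shell helper (a sketch; the variable names are illustrative) computes the burst size for any target rate using the formula above:

# burst_size = rate(bits/sec) / 8 / datagram(bytes) / bursts-per-second
rate_bps=500000000   # target bandwidth in bits/sec
datagram=1472        # UDP datagram size in bytes
wait_ms=10           # netperf -w value in milliseconds
burst=$(( rate_bps / 8 / datagram / (1000 / wait_ms) ))
echo "burst_size = ${burst}"   # integer division prints 424; the hand calculation above rounds to 425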

UDP Throughput (possible loss) was measured by capturing throughput and packet loss statistics when running the netperf test with no bandwidth limit (remove -b/-w options).

In order to start a netperf client on one device, the other device must have netserver running. To start netserver:

netserver [-p <port_number>] [-4 (IPv4 addressing)] [-6 (IPv6 addressing)]

Running the following shell script from the DUT will trigger netperf clients to measure bidirectional TCP performance for 60 seconds and report CPU utilization. The -k option is used in the client commands to summarize selected statistics on their own lines, and -j is used to gather additional timing measurements during the test.

#!/bin/bash
# Run TCP_STREAM (DUT -> tester) and TCP_MAERTS (tester -> DUT) in parallel for 60 seconds.
for i in 1
do
   netperf -H <tester ip> -j -c -l 60 -t TCP_STREAM -- \
      -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE &

   netperf -H <tester ip> -j -c -l 60 -t TCP_MAERTS -- \
      -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE &
done

Running the following commands will trigger netperf clients to measure UDP burst performance for 60 seconds at various burst/datagram sizes and report CPU utilization.

  • For UDP egress tests, run netperf client from DUT and start netserver on tester.
netperf -H <tester ip> -j -c -l 60 -t UDP_STREAM -b <burst_size> -w <wait_time> -- -m <UDP datagram size> \
   -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE
  • For UDP ingress tests, run netperf client from tester and start netserver on DUT.
netperf -H <DUT ip> -j -C -l 60 -t UDP_STREAM -b <burst_size> -w <wait_time> -- -m <UDP datagram size> \
   -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE

2.2.1.5.1. CPSW/CPSW2g/CPSW3g Ethernet Driver

  • CPSW2g: AM65x, J7200, J721e
  • CPSW3g: AM64x

TCP Bidirectional Throughput

Command Used j721e-idk-gw: THROUGHPUT (Mbits/sec) j721e-idk-gw: CPU Load % (LOCAL_CPU_UTIL)
netperf -H 192.168.0.1 -j -c -C -l 60 -t TCP_STREAM; netperf -H 192.168.0.1 -j -c -C -l 60 -t TCP_MAERTS 1843.63 61.06

Table: CPSW TCP Bidirectional Throughput


UDP Throughput

Frame Size(bytes) j721e-idk-gw: UDP Datagram Size(bytes) (LOCAL_SEND_SIZE) j721e-idk-gw: THROUGHPUT (Mbits/sec) j721e-idk-gw: CPU Load % (LOCAL_CPU_UTIL)
64 18.00 15.97 50.04
128 82.00 73.69 50.05
256 210.00 211.31 53.75
512 466.00 607.19 58.20
1024 978.00 926.24 49.95
1518 1472.00 957.08 36.36

Table: CPSW UDP Egress Throughput


Frame Size(bytes) j721e-idk-gw: UDP Datagram Size(bytes) (LOCAL_SEND_SIZE) j721e-idk-gw: THROUGHPUT (Mbits/sec) j721e-idk-gw: CPU Load % (LOCAL_CPU_UTIL)
64 18.00 1.40 4.75
256 210.00 21.17 6.54
1518 1472.00 957.04 50.18

Table: CPSW UDP Ingress Throughput (0% loss)


Frame Size(bytes) j721e-idk-gw: UDP Datagram Size(bytes) (LOCAL_SEND_SIZE) j721e-idk-gw: THROUGHPUT (Mbits/sec) j721e-idk-gw: CPU Load % (LOCAL_CPU_UTIL) j721e-idk-gw: Packet Loss %
64 18.00 45.11 70.33 0.40
256 210.00 486.15 73.93 0.59
1518 1472.00 957.04 50.18 0.00

Table: CPSW UDP Ingress Throughput (possible loss)


2.2.1.5.2. CPSW5g/CPSW9g Virtual Ethernet Driver

  • CPSW5g: J7200
  • CPSW9g: J721e

TCP Bidirectional Throughput

Command Used j721e-idk-gw: THROUGHPUT (Mbits/sec) j721e-idk-gw: CPU Load % (LOCAL_CPU_UTIL)
netperf -H 192.168.1.1 -j -c -C -l 60 -t TCP_STREAM; netperf -H 192.168.1.1 -j -c -C -l 60 -t TCP_MAERTS 1832.16 52.10

Table: CPSW9g Virtual Ethernet Driver - TCP Bidirectional Throughput


UDP Throughput

Frame Size(bytes) j721e-idk-gw: UDP Datagram Size(bytes) (LOCAL_SEND_SIZE) j721e-idk-gw: THROUGHPUT (Mbits/sec) j721e-idk-gw: CPU Load % (LOCAL_CPU_UTIL)
64 18.00 39.66 96.38
128 82.00 82.70 50.08
256 210.00 205.35 48.11
1024 978.00 936.63 50.29
1518 1472.00 957.02 37.56

Table: CPSW5g/9g Virtual Ethernet Driver - UDP Egress Throughput


Frame Size(bytes) j721e-idk-gw: UDP Datagram Size(bytes) (LOCAL_SEND_SIZE) j721e-idk-gw: THROUGHPUT (Mbits/sec) j721e-idk-gw: CPU Load % (LOCAL_CPU_UTIL)
64 18.00 1.66 5.19
128 82.00 9.25 7.02
256 210.00 29.57 10.18
1518 1472.00 957.08 49.54

Table: CPSW5g/9g Virtual Ethernet Driver - UDP Ingress Throughput (0% loss)


Frame Size(bytes) j721e-idk-gw: UDP Datagram Size(bytes) (LOCAL_SEND_SIZE) j721e-idk-gw: THROUGHPUT (Mbits/sec) j721e-idk-gw: CPU Load % (LOCAL_CPU_UTIL) j721e-idk-gw: Packet Loss %
64 18.00 52.00 73.01 0.53
128 82.00 238.79 75.99 0.12
256 210.00 635.78 80.36 4.70
1518 1472.00 957.08 49.54 0.00

Table: CPSW5g/9g Virtual Ethernet Driver - UDP Ingress Throughput (possible loss)


2.2.1.6. PCIe Driver

2.2.1.6.1. PCIe-ETH

TCP Window Size(Kbytes) j721e-idk-gw: Bandwidth (Mbits/sec)
128 0.00
256 0.00

Table: PCI Ethernet

2.2.1.6.2. PCIe-NVMe-SSD

2.2.1.6.2.1. J721E-IDK-GW
Buffer size (bytes) j721e-idk-gw: Write EXT4 Throughput (Mbytes/sec) j721e-idk-gw: Write EXT4 CPU Load (%) j721e-idk-gw: Read EXT4 Throughput (Mbytes/sec) j721e-idk-gw: Read EXT4 CPU Load (%)
1m 715.00 11.15 1526.00 4.01
4m 723.00 10.92 1532.00 3.28
4k 191.00 47.30 168.00 35.45
256k 718.00 10.21 1480.00 7.44
  • Filesize used is: 10G
  • FIO command options: --ioengine=libaio --iodepth=4 --numjobs=1 --direct=1 --runtime=60 --time_based (a representative full invocation is shown after this list)
  • Platform: Speed 8GT/s, Width x2
  • SSD being used: PLEXTOR PX-128M8PeY
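
For reference, a representative fio invocation assembled from the options above might look as follows (the job name, target path, block size, and direction are illustrative):

fio --name=seqwrite --filename=/mnt/nvme/fio-test --size=10G --bs=1m --rw=write \
    --ioengine=libaio --iodepth=4 --numjobs=1 --direct=1 --runtime=60 --time_based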

2.2.1.7. OSPI Flash Driver

2.2.1.7.1. J721E-IDK-GW

2.2.1.7.1.1. UBIFS
Buffer size (bytes) j721e-idk-gw: Write UBIFS Throughput (Mbytes/sec) j721e-idk-gw: Write UBIFS CPU Load (%) j721e-idk-gw: Read UBIFS Throughput (Mbytes/sec) j721e-idk-gw: Read UBIFS CPU Load (%)
102400 0.62 (min 0.49, max 1.11) 21.16 (min 19.54, max 22.97) 33.75 7.69
262144 0.48 (min 0.36, max 0.54) 21.10 (min 19.67, max 22.76) 33.68 0.00
524288 0.48 (min 0.36, max 0.54) 21.42 (min 20.11, max 22.93) 33.32 15.38
1048576 0.47 (min 0.36, max 0.53) 20.78 (min 17.74, max 22.15) 33.36 0.00
2.2.1.7.1.2. RAW
File size (Mbytes) j721e-idk-gw: Raw Read Throughput (Mbytes/sec)
50 38.76

2.2.1.8. UBoot QSPI/OSPI Driver

2.2.1.8.1. J721E-IDK-GW

File size (bytes in hex) j721e-idk-gw: Write Throughput (Kbytes/sec) j721e-idk-gw: Read Throughput (Kbytes/sec)
400000 1616.42 37577.98
800000 1617.69 39009.52
1000000 1619.77 39863.75
2000000 1620.01 40255.53
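
As a sketch of how such raw transfers can be timed at the U-Boot prompt (the flash device index, offset, and length are illustrative, not the exact procedure used for the table), the sf and time commands can be combined:

# Probe the flash, then time a 4 MiB read into memory at ${loadaddr}
sf probe 0
time sf read ${loadaddr} 0x0 0x400000

Timing a write is analogous with sf write, after first erasing an unused region with sf erase.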

2.2.1.9. SPI Flash Driver

2.2.1.10. UFS Driver

Warning

IMPORTANT: The performance numbers can be severely affected if the media is mounted in sync mode. Hot plug scripts in the filesystem mount removable media in sync mode to ensure data integrity. For performance sensitive applications, umount the auto-mounted filesystem and re-mount in async mode.
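
As a sketch (the device node and mount point are illustrative), an auto-mounted volume can be remounted in async mode before running performance tests:

umount /run/media/sda1
mount -o async /dev/sda1 /mnt/perf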


2.2.1.11. UBoot UFS Driver


2.2.1.11.1. J721E-IDK-GW


File size (bytes in hex) j721e-idk-gw: Write Throughput (Kbytes/sec) j721e-idk-gw: Read Throughput (Kbytes/sec)
400000 93090.91 341333.33
800000 103696.20 390095.24
1000000 99296.97 564965.52

2.2.1.12. EMMC Driver

Warning

IMPORTANT: The performance numbers can be severely affected if the media is mounted in sync mode. Hot plug scripts in the filesystem mount removable media in sync mode to ensure data integrity. For performance sensitive applications, umount the auto-mounted filesystem and re-mount in async mode.


2.2.1.12.1. J721E-IDK-GW


Buffer size (bytes) j721e-idk-gw: Write EXT4 Throughput (Mbytes/sec) j721e-idk-gw: Write EXT4 CPU Load (%) j721e-idk-gw: Read EXT4 Throughput (Mbytes/sec) j721e-idk-gw: Read EXT4 CPU Load (%)
1m 61.30 1.01 174.00 0.60
4m 61.50 1.00 174.00 0.52
4k 51.70 20.11 56.30 20.70
256k 61.30 1.25 173.00 1.28

2.2.1.13. UBoot EMMC Driver


2.2.1.13.1. J721E-IDK-GW


File size (bytes in hex) j721e-idk-gw: Write Throughput (Kbytes/sec) j721e-idk-gw: Read Throughput (Kbytes/sec)
2000000 59578.18 172463.16
4000000 62534.35 177604.34
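
As a sketch of how raw eMMC transfers can be timed at the U-Boot prompt (the device number, start block, and block count are illustrative; 0x10000 blocks of 512 bytes corresponds to the 0x2000000-byte case above), the mmc and time commands can be combined:

mmc dev 0
time mmc read ${loadaddr} 0x800000 0x10000
# mmc write over the same range times writes; choose an unused region to avoid corrupting data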

2.2.1.14. SATA Driver

  • Filesize used is: 1G
  • SATA II Harddisk used is: Seagate ST3500514NS 500G

2.2.1.14.1. mSATA Driver

  • Filesize used is: 1G
  • MSATA Harddisk used is: SMS200S3/30G Kingston mSATA SSD drive

2.2.1.15. MMC/SD Driver

Warning

IMPORTANT: The performance numbers can be severely affected if the media is mounted in sync mode. Hot plug scripts in the filesystem mount removable media in sync mode to ensure data integrity. For performance sensitive applications, umount the auto-mounted filesystem and re-mount in async mode.


2.2.1.15.1. J721E-IDK-GW


Buffer size (bytes) j721e-idk-gw: Write EXT4 Throughput (Mbytes/sec) j721e-idk-gw: Write EXT4 CPU Load (%) j721e-idk-gw: Read EXT4 Throughput (Mbytes/sec) j721e-idk-gw: Read EXT4 CPU Load (%)
1m 12.80 0.45 46.60 0.39
4m 14.60 0.39 46.90 0.34
4k 5.36 2.81 13.20 5.51
256k 13.00 0.42 47.60 0.62

The performance numbers were captured using the following:

  • SanDisk 8GB MicroSDHC Class 10 Memory Card
  • Partition was mounted with async option

2.2.1.16. UBoot MMC/SD Driver


2.2.1.16.1. J721E-IDK-GW


File size (bytes in hex) j721e-idk-gw: Write Throughput (Kbytes/sec) j721e-idk-gw: Read Throughput (Kbytes/sec)
400000 18962.96 21787.23
800000 21005.13 22692.52
1000000 18788.99 23108.60

2.2.1.17. USB Driver

2.2.1.17.1. USB Host Controller

Warning

IMPORTANT: For Mass-storage applications, the performance numbers can be severely affected if the media is mounted in sync mode. Hot plug scripts in the filesystem mount removable media in sync mode to ensure data integrity. For performance sensitive applications, umount the auto-mounted filesystem and re-mount in async mode.


Setup: An Inateck ASM1153E USB hard disk is connected to the usb0 port. File read/write performance data on the usb0 port is captured.


2.2.1.17.2. USB Device Controller

Number of Blocks j721e-idk-gw: Throughput (MB/sec)
150 38.90

Table: USBDEVICE HIGHSPEED SLAVE READ THROUGHPUT


Number of Blocks j721e-idk-gw: Throughput (MB/sec)
150 38.40

Table: USBDEVICE HIGHSPEED SLAVE WRITE THROUGHPUT


2.2.1.18. CRYPTO Driver

2.2.1.18.1. OpenSSL Performance

Algorithm Buffer Size (in bytes) j721e-idk-gw: throughput (KBytes/Sec)
aes-128-cbc 1024 52761.26
aes-128-cbc 16 1133.67
aes-128-cbc 16384 207672.66
aes-128-cbc 256 17465.51
aes-128-cbc 64 4562.69
aes-128-cbc 8192 178577.41
aes-192-cbc 1024 52353.37
aes-192-cbc 16 1071.69
aes-192-cbc 16384 198240.94
aes-192-cbc 256 17482.84
aes-192-cbc 64 4561.47
aes-192-cbc 8192 169492.48
aes-256-cbc 1024 50931.71
aes-256-cbc 16 1115.14
aes-256-cbc 16384 178804.05
aes-256-cbc 256 17335.81
aes-256-cbc 64 4417.00
aes-256-cbc 8192 155598.85
des-cbc 1024 47576.41
des-cbc 16 11180.66
des-cbc 16384 50053.12
des-cbc 256 41317.21
des-cbc 64 27204.59
des-cbc 8192 49872.90
des3 1024 46236.33
des3 16 1044.65
des3 16384 102105.09
des3 256 16380.67
des3 64 4283.56
des3 8192 94262.61
md5 1024 96179.20
md5 16 2198.31
md5 16384 271324.50
md5 256 31678.29
md5 64 8561.90
md5 8192 241175.21
sha1 1024 23866.71
sha1 16 420.67
sha1 16384 124316.33
sha1 256 6708.39
sha1 64 1691.14
sha1 8192 97288.19
sha224 1024 113622.02
sha224 16 2007.43
sha224 16384 661886.29
sha224 256 31087.53
sha224 64 8048.73
sha224 8192 497183.40
sha256 1024 23901.53
sha256 16 419.46
sha256 16384 143234.39
sha256 256 6453.76
sha256 64 1667.52
sha256 8192 109464.23
sha384 1024 74619.90
sha384 16 2027.15
sha384 16384 163179.18
sha384 256 27471.10
sha384 64 8138.39
sha384 8192 150937.60
sha512 1024 23232.17
sha512 16 421.79
sha512 16384 164014.76
sha512 256 6504.96
sha512 64 1752.55
sha512 8192 117325.82


Algorithm j721e-idk-gw: CPU Load
aes-128-cbc 33.00
aes-192-cbc 31.00
aes-256-cbc 32.00
des-cbc 99.00
des3 27.00
md5 99.00
sha1 60.00
sha224 99.00
sha256 60.00
sha384 99.00
sha512 61.00

The following command is representative of how each benchmark was run, with the algorithm under test substituted:

time -v openssl speed -elapsed -evp aes-128-cbc
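
The digest and DES rows were presumably collected with analogous invocations, for example (algorithm names follow openssl speed conventions; the exact flags used are not stated here):

time -v openssl speed -elapsed sha256
time -v openssl speed -elapsed des-ede3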

2.2.1.18.2. IPSec Hardware Performance

Note: queue_len is set to 300 and the software fallback threshold is set to 9 to enable software fallback support for optimal performance.

2.2.1.18.3. IPSec Software Performance

Algorithm j721e-idk-gw: Throughput (Mbps) j721e-idk-gw: Packets/Sec j721e-idk-gw: CPU Load
3des 183.60 16.00 29.11
aes128 758.30 67.00 56.75
aes192 750.30 66.00 56.28
aes256 757.20 67.00 56.64