2.2. Performance Guide

2.2.1. Kernel Performance Guide

2.2.1.1. Linux 08.01.00 Performance Guide

Read This First

All performance numbers provided in this document were gathered using the following Evaluation Modules unless otherwise specified.

Name | Description
J7200 EVM | J7200 Evaluation Module rev E1 with ARM running at 2 GHz, DDR data rate 2666 MT/s, L3 cache size 3 MB

Table: Evaluation Modules


About This Manual

This document provides performance data for each of the device drivers which are part of the Processor SDK Linux package. It should be used in conjunction with the release notes and user guides provided with the Processor SDK Linux package for information on specific issues present with drivers included in a particular release.

If You Need Assistance

For further information or to report any problems, visit http://e2e.ti.com/ or http://support.ti.com/.

2.2.1.1.1. System Benchmarks

2.2.1.1.1.1. LMBench

LMBench is a collection of microbenchmarks of which the memory bandwidth and latency related ones are typically used to estimate processor memory system performance. More information about LMBench is available at http://lmbench.sourceforge.net/whatis_lmbench.html and http://lmbench.sourceforge.net/man/lmbench.8.html.

Latency: lat_mem_rd-stride128-szN, where N is equal to or smaller than the cache size at a given level, measures the cache miss penalty. An N that is at least double the size of the last-level cache gives the latency to external memory.

Bandwidth: bw_mem_bcopy-N, where N is equal to or smaller than the cache size at a given level, measures the achievable memory bandwidth from software doing a memcpy() type operation. Typical use is for external memory bandwidth calculation. The bandwidth is calculated such that a byte that is both read and written counts as 1, so the result should be roughly half of the STREAM copy result.
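As a hedged illustration of how such numbers are typically gathered, the underlying lmbench binaries can be invoked directly as sketched below; the array size and stride are illustrative and not necessarily the exact parameters used for the table that follows.

lat_mem_rd 8 128     # read latency out to an 8 MB array with a 128-byte stride
bw_mem 16m bcopy     # achievable bcopy bandwidth on a 16 MB block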

Benchmarks j7200-evm: perf
af_unix_sock_stream_latency (microsec) 19.71
af_unix_socket_stream_bandwidth (MBs) 912.09
bw_file_rd-io-1mb (MB/s) 2467.73
bw_file_rd-o2c-1mb (MB/s) 1187.45
bw_mem-bcopy-16mb (MB/s) 1117.16
bw_mem-bcopy-1mb (MB/s) 1140.03
bw_mem-bcopy-2mb (MB/s) 1120.03
bw_mem-bcopy-4mb (MB/s) 1116.07
bw_mem-bcopy-8mb (MB/s) 1119.51
bw_mem-bzero-16mb (MB/s) 1140.17
bw_mem-bzero-1mb (MB/s) 1366.99 (min 1140.03, max 1593.94)
bw_mem-bzero-2mb (MB/s) 1149.55 (min 1120.03, max 1179.07)
bw_mem-bzero-4mb (MB/s) 1128.33 (min 1116.07, max 1140.58)
bw_mem-bzero-8mb (MB/s) 1130.94 (min 1119.51, max 1142.37)
bw_mem-cp-16mb (MB/s) 687.94
bw_mem-cp-1mb (MB/s) 1173.77 (min 704.60, max 1642.94)
bw_mem-cp-2mb (MB/s) 933.37 (min 688.71, max 1178.03)
bw_mem-cp-4mb (MB/s) 916.11 (min 690.01, max 1142.20)
bw_mem-cp-8mb (MB/s) 914.90 (min 687.76, max 1142.04)
bw_mem-fcp-16mb (MB/s) 1199.49
bw_mem-fcp-1mb (MB/s) 1408.86 (min 1223.78, max 1593.94)
bw_mem-fcp-2mb (MB/s) 1190.59 (min 1179.07, max 1202.10)
bw_mem-fcp-4mb (MB/s) 1169.99 (min 1140.58, max 1199.40)
bw_mem-fcp-8mb (MB/s) 1170.89 (min 1142.37, max 1199.40)
bw_mem-frd-16mb (MB/s) 5584.64
bw_mem-frd-1mb (MB/s) 3898.96 (min 1223.78, max 6574.14)
bw_mem-frd-2mb (MB/s) 3422.42 (min 1202.10, max 5642.74)
bw_mem-frd-4mb (MB/s) 3392.02 (min 1199.40, max 5584.64)
bw_mem-frd-8mb (MB/s) 3393.97 (min 1199.40, max 5588.54)
bw_mem-fwr-16mb (MB/s) 1141.15
bw_mem-fwr-1mb (MB/s) 4108.54 (min 1642.94, max 6574.14)
bw_mem-fwr-2mb (MB/s) 3410.39 (min 1178.03, max 5642.74)
bw_mem-fwr-4mb (MB/s) 3363.42 (min 1142.20, max 5584.64)
bw_mem-fwr-8mb (MB/s) 3365.29 (min 1142.04, max 5588.54)
bw_mem-rd-16mb (MB/s) 5084.21
bw_mem-rd-1mb (MB/s) 3639.50 (min 976.09, max 6302.90)
bw_mem-rd-2mb (MB/s) 2978.58 (min 760.55, max 5196.60)
bw_mem-rd-4mb (MB/s) 2919.23 (min 752.16, max 5086.29)
bw_mem-rd-8mb (MB/s) 2911.06 (min 742.74, max 5079.37)
bw_mem-rdwr-16mb (MB/s) 745.05
bw_mem-rdwr-1mb (MB/s) 841.38 (min 704.60, max 978.15)
bw_mem-rdwr-2mb (MB/s) 725.89 (min 688.71, max 763.07)
bw_mem-rdwr-4mb (MB/s) 719.19 (min 690.01, max 748.36)
bw_mem-rdwr-8mb (MB/s) 716.56 (min 687.76, max 745.36)
bw_mem-wr-16mb (MB/s) 744.71
bw_mem-wr-1mb (MB/s) 977.12 (min 976.09, max 978.15)
bw_mem-wr-2mb (MB/s) 761.81 (min 760.55, max 763.07)
bw_mem-wr-4mb (MB/s) 750.26 (min 748.36, max 752.16)
bw_mem-wr-8mb (MB/s) 744.05 (min 742.74, max 745.36)
bw_mmap_rd-mo-1mb (MB/s) 6347.95
bw_mmap_rd-o2c-1mb (MB/s) 1563.11
bw_pipe (MB/s) 3242.13
bw_unix (MB/s) 912.09
lat_connect (us) 31.87
lat_ctx-2-128k (us) 10.24
lat_ctx-2-256k (us) 22.66
lat_ctx-4-128k (us) 21.20
lat_ctx-4-256k (us) 27.20
lat_fs-0k (num_files) 528.00
lat_fs-10k (num_files) 75.00
lat_fs-1k (num_files) 63.00
lat_fs-4k (num_files) 75.00
lat_mem_rd-stride128-sz1000k (ns) 18.59
lat_mem_rd-stride128-sz125k (ns) 5.15
lat_mem_rd-stride128-sz250k (ns) 5.15
lat_mem_rd-stride128-sz31k (ns) 2.00
lat_mem_rd-stride128-sz50 (ns) 2.00
lat_mem_rd-stride128-sz500k (ns) 5.37
lat_mem_rd-stride128-sz62k (ns) 5.15
lat_mmap-1m (us) 8.62
lat_ops-double-add (ns) 0.32
lat_ops-double-mul (ns) 2.00
lat_ops-float-add (ns) 0.32
lat_ops-float-mul (ns) 2.00
lat_ops-int-add (ns) 0.50
lat_ops-int-bit (ns) 0.33
lat_ops-int-div (ns) 4.00
lat_ops-int-mod (ns) 4.67
lat_ops-int-mul (ns) 1.52
lat_ops-int64-add (ns) 0.50
lat_ops-int64-bit (ns) 0.33
lat_ops-int64-div (ns) 3.00
lat_ops-int64-mod (ns) 5.67
lat_pagefault (us) 1.27
lat_pipe (us) 11.33
lat_proc-exec (us) 1433.50
lat_proc-fork (us) 1311.75
lat_proc-proccall (us) 0.00
lat_select (us) 9.49
lat_sem (us) 1.36
lat_sig-catch (us) 2.31
lat_sig-install (us) 0.42
lat_sig-prot (us) 0.43
lat_syscall-fstat (us) 0.60
lat_syscall-null (us) 0.29
lat_syscall-open (us) 199.61
lat_syscall-read (us) 0.35
lat_syscall-stat (us) 1.35
lat_syscall-write (us) 0.33
lat_tcp (us) 0.58
lat_unix (us) 19.71
latency_for_0.50_mb_block_size (nanosec) 5.37
latency_for_1.00_mb_block_size (nanosec) 9.29 (min 0.00, max 18.59)
pipe_bandwidth (MBs) 3242.13
pipe_latency (microsec) 11.33
procedure_call (microsec) 0.00
select_on_200_tcp_fds (microsec) 9.49
semaphore_latency (microsec) 1.36
signal_handler_latency (microsec) 0.42
signal_handler_overhead (microsec) 2.31
tcp_ip_connection_cost_to_localhost (microsec) 31.87
tcp_latency_using_localhost (microsec) 0.58

Table: LM Bench Metrics

2.2.1.1.1.2. Dhrystone

Dhrystone is a core-only benchmark that runs from warm L1 caches in all modern processors. It scales linearly with clock speed. For standard ARM cores, the DMIPS/MHz score will be identical with the same compiler and flags.

Table: Dhrystone Benchmark

2.2.1.1.1.3. Whetstone
Benchmarks j7200-evm: perf
whetstone (MIPS) 10000.00

Table: Whetstone Benchmark

2.2.1.1.1.4. Linpack

Linpack measures peak double precision (64 bit) floating point performance in solving a dense linear system.

Benchmarks j7200-evm: perf
linpack (Kflops) 2634945.00

Table: Linpack Benchmark

2.2.1.1.1.5. NBench

NBench, which stands for Native Benchmark, is used to measure macro benchmarks for commonly used operations such as sorting and analysis algorithms. More information about NBench is available at https://en.wikipedia.org/wiki/NBench and https://nbench.io/articles/index.html.

Benchmarks j7200-evm: perf
assignment (Iterations) 29.68
fourier (Iterations) 59191.00
fp_emulation (Iterations) 250.04
huffman (Iterations) 2431.70
idea (Iterations) 7997.20
lu_decomposition (Iterations) 1430.00
neural_net (Iterations) 27.11
numeric_sort (Iterations) 879.25
string_sort (Iterations) 431.92

Table: NBench Benchmarks

2.2.1.1.1.6. Stream

STREAM is a microbenchmark for measuring data memory system performance without any data reuse. It is designed to miss the caches and exercise the data prefetcher and speculative accesses. It uses double precision floating point (64-bit), but in most modern processors the memory access will be the bottleneck. The four individual scores are copy, scale (multiply by a constant), add (add two numbers), and triad (multiply-accumulate). For bandwidth, a byte read counts as one and a byte written counts as one, resulting in a score that is double the bandwidth LMBench will show.

Benchmarks j7200-evm: perf
add (MB/s) 3862.80
copy (MB/s) 3081.40
scale (MB/s) 3108.20
triad (MB/s) 3868.30

Table: Stream

2.2.1.1.1.7. CoreMarkPro

CoreMark®-Pro is a comprehensive, advanced processor benchmark that works with and enhances the market-proven industry-standard EEMBC CoreMark® benchmark. While CoreMark stresses the CPU pipeline, CoreMark-Pro tests the entire processor, adding comprehensive support for multicore technology, a combination of integer and floating-point workloads, and data sets for utilizing larger memory subsystems.

Benchmarks j7200-evm: perf
cjpeg-rose7-preset (workloads/) 82.64
core (workloads/) 0.78
coremark-pro () 2282.94
linear_alg-mid-100x100-sp (workloads/) 82.51
loops-all-mid-10k-sp (workloads/) 2.23
nnet_test (workloads/) 3.67
parser-125k (workloads/) 11.49
radix2-big-64k (workloads/) 126.18
sha-test (workloads/) 158.73
zip-test (workloads/) 47.62

Table: CoreMarkPro

Benchmarks j7200-evm: perf
cjpeg-rose7-preset (workloads/) 158.73
core (workloads/) 1.55
coremark-pro () 3224.25
linear_alg-mid-100x100-sp (workloads/) 165.02
loops-all-mid-10k-sp (workloads/) 3.19
nnet_test (workloads/) 7.32
parser-125k (workloads/) 8.23
radix2-big-64k (workloads/) 56.86
sha-test (workloads/) 312.50
zip-test (workloads/) 76.92

Table: CoreMarkPro for Two Cores

2.2.1.1.1.8. MultiBench

MultiBench™ is a suite of benchmarks that allows processor and system designers to analyze, test, and improve multicore processors. It uses three forms of concurrency:

  • Data decomposition: multiple threads cooperating on achieving a unified goal, demonstrating a processor’s support for fine-grain parallelism.
  • Processing multiple data streams: uses common code running over multiple threads, demonstrating how well a processor scales over scalable data inputs.
  • Multiple workload processing: shows the scalability of general-purpose processing, demonstrating concurrency over both code and data.

MultiBench combines a wide variety of application-specific workloads with the EEMBC Multi-Instance-Test Harness (MITH), compatible and portable with most any multicore processor and operating system. MITH uses a thread-based API (POSIX-compliant) to establish a common programming model that communicates with the benchmark through an abstraction layer and provides a flexible interface to allow a wide variety of thread-enabled workloads to be tested.

Benchmarks j7200-evm: perf
4m-check (workloads/) 761.50
4m-check-reassembly (workloads/) 123.15
4m-check-reassembly-tcp (workloads/) 89.61
4m-check-reassembly-tcp-cmykw2-rotatew2 (workloads/) 33.71
4m-check-reassembly-tcp-x264w2 (workloads/) 2.57
4m-cmykw2 (workloads/) 308.17
4m-cmykw2-rotatew2 (workloads/) 53.10
4m-reassembly (workloads/) 125.95
4m-rotatew2 (workloads/) 62.89
4m-tcp-mixed (workloads/) 262.30
4m-x264w2 (workloads/) 2.62
idct-4m (workloads/) 34.81
idct-4mw1 (workloads/) 34.79
ippktcheck-4m (workloads/) 773.52
ippktcheck-4mw1 (workloads/) 762.20
ipres-4m (workloads/) 146.48
ipres-4mw1 (workloads/) 146.63
md5-4m (workloads/) 35.00
md5-4mw1 (workloads/) 34.73
rgbcmyk-4m (workloads/) 163.27
rgbcmyk-4mw1 (workloads/) 163.13
rotate-4ms1 (workloads/) 52.30
rotate-4ms1w1 (workloads/) 52.25
rotate-4ms64 (workloads/) 52.74
rotate-4ms64w1 (workloads/) 52.69
x264-4mq (workloads/) 1.40
x264-4mqw1 (workloads/) 1.40

Table: Multibench

2.2.1.1.1.9. Spec2K6

CPU2006 is a set of benchmarks designed to test the CPU performance of a modern server computer system. It is split into two components: CINT2006 (SPECint) for integer testing and CFP2006 (SPECfp) for floating point testing.

SPEC defines a base runtime for each of the 12 benchmark programs. For SPECint2006, that number ranges from 1000 to 3000 seconds. The timed test is run on the system, and the time of the test system is compared to the reference time, and a ratio is computed. That ratio becomes the SPECint score for that test. (This differs from the rating in SPECINT2000, which multiplies the ratio by 100.)

As an example for SPECint2006, consider a processor which can run 400.perlbench in 2000 seconds. The time it takes the reference machine to run the benchmark is 9770 seconds. Thus the ratio is 4.885. Each ratio is computed, and then the geometric mean of those ratios is computed to produce an overall value.
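As a hedged sketch of that arithmetic in shell (the times are the illustrative figures above; ratios.txt is a hypothetical file holding one per-benchmark ratio per line):

# per-benchmark ratio: reference time / measured time
echo "9770 2000" | awk '{printf "ratio = %.3f\n", $1/$2}'    # prints 4.885

# overall score: geometric mean of the per-benchmark ratios
awk '{ s += log($1); n++ } END { printf "score = %.2f\n", exp(s/n) }' ratios.txt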

Rate (Multiple Cores)

Table: Spec2K6

Speed (Single Core)

Table: Spec2K6 Speed

2.2.1.1.2. Boot-time Measurement

2.2.1.1.2.1. Boot media: MMCSD
Boot Configuration | j7200-evm: boot time (sec)
Kernel boot time test when bootloader, kernel and sdk-rootfs are in mmc-sd | 18.21 (min 17.50, max 19.06)
Kernel boot time test when init is /bin/sh and bootloader, kernel and sdk-rootfs are in mmc-sd | 3.80 (min 3.77, max 3.82)

Table: Boot time MMC/SD

2.2.1.1.2.2. Boot media: NAND

Table: Boot time NAND

2.2.1.1.3. ALSA SoC Audio Driver

  1. Access type - RW_INTERLEAVED
  2. Channels - 2
  3. Format - S16_LE
  4. Period size - 64
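A minimal sketch of capture and playback commands matching the settings listed above, assuming card/device hw:0,0 and a 48 kHz sample rate (neither is specified here; check with arecord -l / aplay -l):

arecord -D hw:0,0 -f S16_LE -c 2 -r 48000 --period-size=64 -d 10 /tmp/capture.wav
aplay -D hw:0,0 --period-size=64 /tmp/capture.wav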

Table: Audio Capture


Table: Audio Playback


2.2.1.1.4. Sensor Capture

Capture video frames (MMAP buffers) with v4l2-ctl and record the reported fps.
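A minimal sketch of such a capture run, assuming a hypothetical /dev/video0 node, resolution, and pixel format (substitute the sensor's actual settings); v4l2-ctl prints the measured fps while streaming:

v4l2-ctl -d /dev/video0 --set-fmt-video=width=1280,height=720,pixelformat=UYVY \
    --stream-mmap --stream-count=600 --stream-to=/dev/null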

Table: Sensor Capture


2.2.1.1.5. Display Driver





2.2.1.1.6. Graphics SGX/RGX Driver

2.2.1.1.6.1. GLBenchmark

Run GLBenchmark and capture the reported performance: display rate (fps), fill rate, vertex throughput, etc. All display outputs (HDMI, DisplayPort and/or LCD) are connected when running these tests.

2.2.1.1.6.1.1. Performance (Fps)

Table: GLBenchmark 2.5 Performance

2.2.1.1.6.1.2. Vertex Throughput

Table: GLBenchmark 2.5 Vertex Throughput

2.2.1.1.6.1.3. Pixel Throughput

Table: GLBenchmark 2.5 Pixel Throughput

2.2.1.1.6.2. GFXBench

Run GFXBench and capture the reported performance (score and display rate in fps). All display outputs (HDMI, DisplayPort and/or LCD) are connected when running these tests.

Table: GFXBench

2.2.1.1.6.3. Glmark2

Run Glmark2 and capture the reported performance (score). All display outputs (HDMI, DisplayPort and/or LCD) are connected when running these tests.

Table: Glmark2


2.2.1.1.7. Multimedia (Decode)

Run the following GStreamer pipeline and calculate performance based on the execution time reported. All display outputs (HDMI and LCD) were connected when running these tests, but playout was forced to the LCD via the connector=<connector id> option.

gst-launch-1.0 playbin uri=file://<Path to stream> \
    video-sink="kmssink sync=false connector=<connector id>" audio-sink=fakesink

2.2.1.1.7.1. H264

Table: Gstreamer H264 in AVI Container Decode Performance


2.2.1.1.7.2. MPEG4

Table: GStreamer MPEG4 in 3GP Container Decode Performance


2.2.1.1.7.3. MPEG2

Table: GStreamer MPEG2 in MP4 Container Decode Performance


2.2.1.1.8. Machine Learning

2.2.1.1.8.1. TensorFlow Lite

TensorFlow Lite (https://www.tensorflow.org/lite/) is an open-source deep learning runtime for on-device inference. Processor SDK supports TensorFlow Lite execution on Cortex-A cores on all Sitara devices.

The table below lists TensorFlow Lite performance benchmarks when running several well-known models on Sitara devices. The benchmarking data were obtained with the benchmark_model binary, which is released in the TensorFlow Lite source package and included in the Processor SDK Linux filesystem.
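A minimal sketch of invoking the benchmark tool on the target, assuming a hypothetical model path and thread count (the exact models and options used for the table are defined by the SDK test suite):

benchmark_model --graph=/usr/share/tensorflow-lite/models/mobilenet_v1_1.0_224.tflite \
    --num_threads=2 --warmup_runs=1 --num_runs=50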

Table: TensorFlow Lite Performance


2.2.1.1.8.2. TI Deep Learning

TI Deep Learning (TIDL) accelerates deep learning inference on C66x DSP cores and on Embedded Vision Engine (EVE) subsystems.

Table: TIDL Performance


2.2.1.1.9. Ethernet

Ethernet performance benchmarks were measured using Netperf 2.7.1 (https://hewlettpackard.github.io/netperf/doc/netperf.html). Test procedures were modeled after those defined in RFC-2544 (https://tools.ietf.org/html/rfc2544), where the DUT is the TI device and the “tester” used was a Linux PC. To produce consistent results, it is recommended to carry out performance tests in a private network and to avoid running NFS on the same interface used in the test. In these results, CPU utilization was captured as the total percentage used across all cores on the device while running the performance test over one external interface.

UDP Throughput (0% loss) was measured by the procedure defined in RFC-2544 section 26.1: Throughput. In this scenario, the netperf options burst_size (-b) and wait_time (-w) are used to limit bandwidth during different trials of the test, with the goal of finding the highest rate at which no loss is seen. For example, to limit bandwidth to 500 Mbits/sec with a 1472-byte datagram:

burst_size = <bandwidth (bits/sec)> / 8 (bits -> bytes) / <UDP datagram size> / 100 (seconds -> 10 ms)
burst_size = 500000000 / 8 / 1472 / 100 = 425

wait_time = 10 milliseconds (minimum supported by Linux PC used for testing)
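A minimal shell sketch of that calculation (the bandwidth target and datagram size are the example values above):

BANDWIDTH=500000000   # target offered load, bits/sec
DGRAM=1472            # UDP datagram size, bytes
WAIT_MS=10            # netperf -w value, milliseconds
BURST=$(( BANDWIDTH / 8 / DGRAM / (1000 / WAIT_MS) ))   # packets per burst
echo "netperf ... -t UDP_STREAM -b $BURST -w $WAIT_MS -- -m $DGRAM"
# prints -b 424 here; integer division truncates where the worked example above rounds up to 425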

UDP Throughput (possible loss) was measured by capturing throughput and packet loss statistics when running the netperf test with no bandwidth limit (remove -b/-w options).

In order to start a netperf client on one device, the other device must have netserver running. To start netserver:

netserver [-p <port_number>] [-4 (IPv4 addressing)] [-6 (IPv6 addressing)]

Running the following shell script from the DUT will trigger netperf clients to measure bidirectional TCP performance for 60 seconds and report CPU utilization. Parameter -k is used in client commands to summarize selected statistics on their own line and -j is used to gain additional timing measurements during the test.

#!/bin/bash
for i in 1
do
   netperf -H <tester ip> -j -c -l 60 -t TCP_STREAM -- \
      -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE &

   netperf -H <tester ip> -j -c -l 60 -t TCP_MAERTS -- \
      -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE &
done
wait   # wait for both background netperf clients to finish before the script exits

Running the following commands will trigger netperf clients to measure UDP burst performance for 60 seconds at various burst/datagram sizes and report CPU utilization.

  • For UDP egress tests, run netperf client from DUT and start netserver on tester.
netperf -H <tester ip> -j -c -l 60 -t UDP_STREAM -b <burst_size> -w <wait_time> -- -m <UDP datagram size> \
   -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE
  • For UDP ingress tests, run netperf client from tester and start netserver on DUT.
netperf -H <DUT ip> -j -C -l 60 -t UDP_STREAM -b <burst_size> -w <wait_time> -- -m <UDP datagram size> \
   -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE

2.2.1.1.9.1. CPSW/CPSW2g/CPSW3g Ethernet Driver
  • CPSW2g: AM65x, J7200, J721e
  • CPSW3g: AM64x

TCP Bidirectional Throughput

Command Used | j7200-evm: THROUGHPUT (Mbits/sec) | j7200-evm: CPU Load % (LOCAL_CPU_UTIL)
netperf -H 192.168.0.1 -j -c -C -l 60 -t TCP_STREAM; netperf -H 192.168.0.1 -j -c -C -l 60 -t TCP_MAERTS | 1592.44 | 58.43

Table: CPSW TCP Bidirectional Throughput


UDP Throughput

Frame Size (bytes) | j7200-evm: UDP Datagram Size (bytes) (LOCAL_SEND_SIZE) | j7200-evm: THROUGHPUT (Mbits/sec) | j7200-evm: CPU Load % (LOCAL_CPU_UTIL)
64 | 18.00 | 14.75 | 50.77
128 | 82.00 | 67.36 | 50.09
256 | 210.00 | 267.39 | 70.77
512 | 466.00 | 392.68 | 50.13
1024 | 978.00 | 567.68 | 35.91
1518 | 1472.00 | 952.26 | 46.25

Table: CPSW UDP Egress Throughput


Frame Size (bytes) | j7200-evm: UDP Datagram Size (bytes) (LOCAL_SEND_SIZE) | j7200-evm: THROUGHPUT (Mbits/sec) | j7200-evm: CPU Load % (LOCAL_CPU_UTIL)
64 | 18.00 | 5.44 | 24.23
128 | 82.00 | 11.02 | 10.69
256 | 210.00 | 35.45 | 12.44
1024 | 978.00 | 127.53 | 11.21
1518 | 1472.00 | 2.36 | 0.25

Table: CPSW UDP Ingress Throughput (0% loss)


Frame Size (bytes) | j7200-evm: UDP Datagram Size (bytes) (LOCAL_SEND_SIZE) | j7200-evm: THROUGHPUT (Mbits/sec) | j7200-evm: CPU Load % (LOCAL_CPU_UTIL) | j7200-evm: Packet Loss %
64 | 18.00 | 23.75 | 67.82 | 2.84
128 | 82.00 | 133.89 | 50.68 | 7.61
256 | 210.00 | 211.85 | 50.18 | 45.81
1024 | 978.00 | 685.10 | 61.54 | 0.01
1518 | 1472.00 | 545.15 | 36.77 | 0.01

Table: CPSW UDP Ingress Throughput (possible loss)


2.2.1.1.9.2. CPSW5g/CPSW9g Virtual Ethernet Driver
  • CPSW5g: J7200
  • CPSW9g: J721e

TCP Bidirectional Throughput

Command Used | j7200-evm: THROUGHPUT (Mbits/sec) | j7200-evm: CPU Load % (LOCAL_CPU_UTIL)
netperf -H 192.168.1.1 -j -c -C -l 60 -t TCP_STREAM; netperf -H 192.168.1.1 -j -c -C -l 60 -t TCP_MAERTS | 1858.54 | 70.13

Table: CPSW9g Virtual Ethernet Driver - TCP Bidirectional Throughput


UDP Throughput

Frame Size (bytes) | j7200-evm: UDP Datagram Size (bytes) (LOCAL_SEND_SIZE) | j7200-evm: THROUGHPUT (Mbits/sec) | j7200-evm: CPU Load % (LOCAL_CPU_UTIL)
64 | 18.00 | 22.63 | 64.11
128 | 82.00 | 141.07 | 84.84
256 | 210.00 | 333.94 | 78.62
1024 | 978.00 | 933.27 | 57.70
1518 | 1472.00 | 943.06 | 44.87

Table: CPSW5g/9g Virtual Ethernet Driver - UDP Egress Throughput


Frame Size (bytes) | j7200-evm: UDP Datagram Size (bytes) (LOCAL_SEND_SIZE) | j7200-evm: THROUGHPUT (Mbits/sec) | j7200-evm: CPU Load % (LOCAL_CPU_UTIL)
64 | 18.00 | 2.92 | 8.98
128 | 82.00 | 8.13 | 4.54
256 | 210.00 | 17.30 | 5.03
1024 | 978.00 | 89.97 | 8.77
1518 | 1472.00 | 194.29 | 13.72

Table: CPSW5g/9g Virtual Ethernet Driver - UDP Ingress Throughput (0% loss)


Frame Size (bytes) | j7200-evm: UDP Datagram Size (bytes) (LOCAL_SEND_SIZE) | j7200-evm: THROUGHPUT (Mbits/sec) | j7200-evm: CPU Load % (LOCAL_CPU_UTIL) | j7200-evm: Packet Loss %
64 | 18.00 | 47.10 | 73.80 | 27.58
128 | 82.00 | 209.81 | 77.14 | 22.08
256 | 210.00 | 554.44 | 79.05 | 25.01
1024 | 978.00 | 356.83 | 50.90 | 61.91
1518 | 1472.00 | 956.88 | 64.02 | 0.02

Table: CPSW5g/9g Virtual Ethernet Driver - UDP Ingress Throughput (possible loss)


2.2.1.1.9.3. CPSW Ethernet Driver

TCP Bidirectional Throughput

Table: CPSW TCP Bidirectional Throughput

UDP Throughput

Table: CPSW UDP Egress Throughput

Table: CPSW UDP Ingress Throughput (0% loss)

Table: CPSW UDP Ingress Throughput (possible loss)


2.2.1.1.10. PCIe Driver

2.2.1.1.10.1. PCIe-ETH

Table: PCI Ethernet

2.2.1.1.10.2. PCIe-EP
2.2.1.1.10.3. PCIe-NVMe-SSD
2.2.1.1.10.3.1. J7200-EVM
Buffer size (bytes) | j7200-evm: Write EXT4 Throughput (Mbytes/sec) | j7200-evm: Write EXT4 CPU Load (%) | j7200-evm: Read EXT4 Throughput (Mbytes/sec) | j7200-evm: Read EXT4 CPU Load (%)
1m | 795.00 | 16.81 | 1523.00 | 5.47
4m | 794.00 | 16.49 | 1523.00 | 4.84
4k | 157.00 | 49.45 | 150.00 | 37.89
256k | 800.00 | 20.48 | 1519.00 | 10.43
  • Filesize used is: 10G
  • FIO command options: --ioengine=libaio --iodepth=4 --numjobs=1 --direct=1 --runtime=60 --time_based (see the example invocation below)
  • Platform: Speed 8GT/s, Width x2
  • SSD being used: PLEXTOR PX-128M8PeY
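A minimal sketch of a matching FIO invocation, assuming a hypothetical job name, 256k block size, and a test file on the EXT4-mounted NVMe SSD (adjust the path for the actual mount point):

fio --name=nvme-seqread --filename=/mnt/nvme/fio-testfile --rw=read --bs=256k --size=10G \
    --ioengine=libaio --iodepth=4 --numjobs=1 --direct=1 --runtime=60 --time_based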

2.2.1.1.11. NAND Driver

2.2.1.1.12. OSPI Flash Driver

2.2.1.1.12.1. J7200-EVM
2.2.1.1.12.1.1. UBIFS
Buffer size (bytes) | j7200-evm: Write UBIFS Throughput (Mbytes/sec) | j7200-evm: Write UBIFS CPU Load (%) | j7200-evm: Read UBIFS Throughput (Mbytes/sec) | j7200-evm: Read UBIFS CPU Load (%)
102400 | 0.18 (min 0.13, max 0.30) | 40.72 (min 39.32, max 42.41) | 87.62 | 40.00
262144 | 0.16 (min 0.11, max 0.19) | 40.47 (min 39.95, max 40.90) | 86.58 | 25.00
524288 | 0.16 (min 0.11, max 0.19) | 40.68 (min 38.85, max 42.13) | 83.72 | 20.00
1048576 | 0.16 (min 0.11, max 0.19) | 39.75 (min 38.52, max 40.69) | 83.15 | 25.00
2.2.1.1.12.1.2. RAW
File size (Mbytes) | j7200-evm: Raw Read Throughput (Mbytes/sec)
50 | 208.33

2.2.1.1.13. QSPI Flash Driver

2.2.1.1.14. UBoot QSPI/OSPI Driver

2.2.1.1.14.1. J7200-EVM
File size (bytes in hex) | j7200-evm: Write Throughput (Kbytes/sec) | j7200-evm: Read Throughput (Kbytes/sec)
400000 | 393.81 | 195047.62
800000 | 395.06 | 248242.42
1000000 | 397.36 | 282482.76
2000000 | 399.25 | 300623.85

2.2.1.1.15. SPI Flash Driver

2.2.1.1.16. UFS Driver

Warning

IMPORTANT: The performance numbers can be severely affected if the media is mounted in sync mode. Hot plug scripts in the filesystem mount removable media in sync mode to ensure data integrity. For performance sensitive applications, umount the auto-mounted filesystem and re-mount in async mode.
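A hedged sketch of doing so, assuming the media was auto-mounted from /dev/sda1 (substitute the actual device node and choose any convenient mount point):

umount /run/media/sda1                                     # drop the sync mount created by the hot plug scripts
mkdir -p /mnt/perf && mount -o async /dev/sda1 /mnt/perf   # re-mount with asynchronous writes for benchmarking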


2.2.1.1.17. UBoot UFS Driver


2.2.1.1.18. EMMC Driver

Warning

IMPORTANT: The performance numbers can be severely affected if the media is mounted in sync mode. Hot plug scripts in the filesystem mount removable media in sync mode to ensure data integrity. For performance sensitive applications, umount the auto-mounted filesystem and re-mount in async mode.


2.2.1.1.19. UBoot EMMC Driver


2.2.1.1.19.1. J7200-EVM

File size (bytes in hex) | j7200-evm: Write Throughput (Kbytes/sec) | j7200-evm: Read Throughput (Kbytes/sec)
2000000 | 58306.05 | 297890.91
4000000 | 58514.29 | 319687.80

2.2.1.1.20. SATA Driver




  • Filesize used is: 1G
  • SATA II hard disk used is: Seagate ST3500514NS 500G
2.2.1.1.20.1. mSATA Driver


  • Filesize used is: 1G
  • mSATA drive used is: SMS200S3/30G Kingston mSATA SSD

2.2.1.1.21. MMC/SD Driver

Warning

IMPORTANT: The performance numbers can be severely affected if the media is mounted in sync mode. Hot plug scripts in the filesystem mount removable media in sync mode to ensure data integrity. For performance sensitive applications, umount the auto-mounted filesystem and re-mount in async mode.


2.2.1.1.21.1. J7200-EVM

Buffer size (bytes) | j7200-evm: Write EXT4 Throughput (Mbytes/sec) | j7200-evm: Write EXT4 CPU Load (%) | j7200-evm: Read EXT4 Throughput (Mbytes/sec) | j7200-evm: Read EXT4 CPU Load (%)
1m | 13.30 | 0.78 | 99.10 | 1.67
4m | 14.10 | 0.62 | 97.10 | 1.51
4k | 5.19 | 5.07 | 18.30 | 10.60
256k | 13.30 | 0.94 | 96.90 | 2.16



The performance numbers were captured using the following:

  • SanDisk 8GB MicroSDHC Class 10 Memory Card
  • Partition was mounted with async option

2.2.1.1.22. UBoot MMC/SD Driver


2.2.1.1.22.1. J7200-EVM

File size (bytes in hex) | j7200-evm: Write Throughput (Kbytes/sec) | j7200-evm: Read Throughput (Kbytes/sec)
400000 | 16995.85 | 80313.73
800000 | 20428.93 | 86231.58
1000000 | 18897.35 | 89043.48

The performance numbers were captured using the following:

  • SanDisk 8GB MicroSDHC Class 10 Memory Card






2.2.1.1.23. USB Driver

2.2.1.1.23.1. USB Host Controller

Warning

IMPORTANT: For Mass-storage applications, the performance numbers can be severely affected if the media is mounted in sync mode. Hot plug scripts in the filesystem mount removable media in sync mode to ensure data integrity. For performance sensitive applications, umount the auto-mounted filesystem and re-mount in async mode.


Setup: An Inateck ASM1153E USB hard disk is connected to the usb0 port. File read/write performance data on the usb0 port is captured.


2.2.1.1.23.1.1. J7200-EVM

Buffer size (bytes) | j7200-evm: Write EXT4 Throughput (Mbytes/sec) | j7200-evm: Write EXT4 CPU Load (%) | j7200-evm: Read EXT4 Throughput (Mbytes/sec) | j7200-evm: Read EXT4 CPU Load (%)
1m | 38.80 | 2.34 | 38.80 | 1.52
4m | 38.90 | 2.37 | 38.90 | 1.54
4k | 21.60 | 20.02 | 21.80 | 18.96
256k | 37.00 | 2.95 | 38.00 | 2.19


2.2.1.1.23.2. USB Device Controller


2.2.1.1.24. CRYPTO Driver

2.2.1.1.24.1. OpenSSL Performance



Listed for each algorithm are the code snippets used to run each benchmark test.

time -v openssl speed -elapsed -evp aes-128-cbc
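The same pattern applies to the other algorithms; as a hedged illustration (not necessarily the exact set exercised for the tables):

time -v openssl speed -elapsed -evp aes-256-cbc
time -v openssl speed -elapsed -evp sha256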
2.2.1.1.24.2. IPSec Hardware Performance

Note: queue_len is set to 300 and the software fallback threshold is set to 9 to enable software support for optimal performance.

2.2.1.1.24.3. IPSec Software Performance
Algorithm | j7200-evm: Throughput (Mbps) | j7200-evm: Packets/Sec | j7200-evm: CPU Load
3des | 186.10 | 16.00 | 37.64
aes128 | 588.10 | 52.00 | 57.00
aes192 | 570.70 | 50.00 | 56.96
aes256 | 578.70 | 51.00 | 57.00

2.2.1.1.25. DCAN Driver

Performance and Benchmarks not available in this release.

2.2.1.1.26. Power Management