2.4.1. RT-linux Performance Guide

Read This First

All performance numbers provided in this document were gathered using the following Evaluation Modules unless otherwise specified.

Name     | Description
AM62A SK | AM62A Starter Kit rev E2, with the Arm core running at 1200 MHz and an LPDDR4 data rate of 3733 MT/s

Table: Evaluation Modules


About This Manual

This document provides performance data for each of the device drivers which are part of the Processor SDK Linux package. It should be used in conjunction with the release notes and user guides provided with the Processor SDK Linux package for information on specific issues present with drivers included in a particular release.

If You Need Assistance

For further information or to report any problems, visit http://e2e.ti.com/ or http://support.ti.com/

2.4.1.1. System Benchmarks

2.4.1.1.1. LMBench

LMBench is a collection of microbenchmarks of which the memory bandwidth and latency related ones are typically used to estimate processor memory system performance. More information about LMBench is available at http://lmbench.sourceforge.net/whatis_lmbench.html and http://lmbench.sourceforge.net/man/lmbench.8.html

Latency: lat_mem_rd-stride128-szN, where N is equal to or smaller than the cache size at a given level, measures the cache miss penalty. An N that is at least double the size of the last-level cache gives the latency to external memory.

Bandwidth: bw_mem-bcopy-N, where N is equal to or smaller than the cache size at a given level, measures the achievable memory bandwidth from software doing a memcpy()-type operation. Its typical use is calculating external memory bandwidth. The bandwidth is calculated counting each byte read and each byte written as one, so the result should be roughly half of the STREAM copy result.
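For reference, a minimal sketch of how metrics of this form can be reproduced with the lmbench binaries (invocation forms follow the lmbench man pages; the sizes and strides here mirror the table below):

# latency: read an 8 MB region with a 128-byte stride; lat_mem_rd reports
# latency at each size up to the limit
lat_mem_rd 8 128

# bandwidth: memcpy()-style copy of a 16 MB block
bw_mem 16m bcopy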

Benchmarks | am62axx_sk-fs: perf
af_unix_sock_stream_latency (microsec) | 28.05
af_unix_socket_stream_bandwidth (MBs) | 1103.86
bw_file_rd-io-1mb (MB/s) | 1226.99
bw_file_rd-o2c-1mb (MB/s) | 736.24
bw_mem-bcopy-16mb (MB/s) | 1879.48
bw_mem-bcopy-1mb (MB/s) | 1945.15
bw_mem-bcopy-2mb (MB/s) | 1795.98
bw_mem-bcopy-4mb (MB/s) | 1849.85
bw_mem-bcopy-8mb (MB/s) | 1843.74
bw_mem-bzero-16mb (MB/s) | 7318.19
bw_mem-bzero-1mb (MB/s) | 4621.56 (min 1945.15, max 7297.97)
bw_mem-bzero-2mb (MB/s) | 4550.17 (min 1795.98, max 7304.35)
bw_mem-bzero-4mb (MB/s) | 4587.26 (min 1849.85, max 7324.67)
bw_mem-bzero-8mb (MB/s) | 4556.25 (min 1843.74, max 7268.76)
bw_mem-cp-16mb (MB/s) | 942.17
bw_mem-cp-1mb (MB/s) | 1232.92 (min 943.25, max 1522.59)
bw_mem-cp-2mb (MB/s) | 1172.16 (min 952.53, max 1391.79)
bw_mem-cp-4mb (MB/s) | 1172.96 (min 957.51, max 1388.41)
bw_mem-cp-8mb (MB/s) | 1191.57 (min 992.31, max 1390.82)
bw_mem-fcp-16mb (MB/s) | 1631.82
bw_mem-fcp-1mb (MB/s) | 4505.45 (min 1712.92, max 7297.97)
bw_mem-fcp-2mb (MB/s) | 4464.36 (min 1624.37, max 7304.35)
bw_mem-fcp-4mb (MB/s) | 4426.67 (min 1528.66, max 7324.67)
bw_mem-fcp-8mb (MB/s) | 4424.43 (min 1580.09, max 7268.76)
bw_mem-frd-16mb (MB/s) | 1907.49
bw_mem-frd-1mb (MB/s) | 1815.82 (min 1712.92, max 1918.72)
bw_mem-frd-2mb (MB/s) | 1703.61 (min 1624.37, max 1782.85)
bw_mem-frd-4mb (MB/s) | 1681.06 (min 1528.66, max 1833.46)
bw_mem-frd-8mb (MB/s) | 1657.54 (min 1580.09, max 1734.98)
bw_mem-fwr-16mb (MB/s) | 1391.30
bw_mem-fwr-1mb (MB/s) | 1720.66 (min 1522.59, max 1918.72)
bw_mem-fwr-2mb (MB/s) | 1587.32 (min 1391.79, max 1782.85)
bw_mem-fwr-4mb (MB/s) | 1610.94 (min 1388.41, max 1833.46)
bw_mem-fwr-8mb (MB/s) | 1562.90 (min 1390.82, max 1734.98)
bw_mem-rd-16mb (MB/s) | 1980.20
bw_mem-rd-1mb (MB/s) | 1979.04 (min 1831.17, max 2126.91)
bw_mem-rd-2mb (MB/s) | 1814.31 (min 1687.19, max 1941.43)
bw_mem-rd-4mb (MB/s) | 1838.14 (min 1694.44, max 1981.83)
bw_mem-rd-8mb (MB/s) | 1819.77 (min 1665.45, max 1974.09)
bw_mem-rdwr-16mb (MB/s) | 1666.67
bw_mem-rdwr-1mb (MB/s) | 1336.51 (min 943.25, max 1729.77)
bw_mem-rdwr-2mb (MB/s) | 1207.40 (min 952.53, max 1462.26)
bw_mem-rdwr-4mb (MB/s) | 1307.26 (min 957.51, max 1657.00)
bw_mem-rdwr-8mb (MB/s) | 1305.14 (min 992.31, max 1617.96)
bw_mem-wr-16mb (MB/s) | 1669.27
bw_mem-wr-1mb (MB/s) | 1780.47 (min 1729.77, max 1831.17)
bw_mem-wr-2mb (MB/s) | 1574.73 (min 1462.26, max 1687.19)
bw_mem-wr-4mb (MB/s) | 1675.72 (min 1657.00, max 1694.44)
bw_mem-wr-8mb (MB/s) | 1641.71 (min 1617.96, max 1665.45)
bw_mmap_rd-mo-1mb (MB/s) | 2104.52
bw_mmap_rd-o2c-1mb (MB/s) | 667.22
bw_pipe (MB/s) | 613.53
bw_unix (MB/s) | 1103.86
lat_connect (us) | 58.41
lat_ctx-2-128k (us) | 5.97
lat_ctx-2-256k (us) | 6.54
lat_ctx-4-128k (us) | 6.43
lat_ctx-4-256k (us) | 5.95
lat_fs-0k (num_files) | 258.00
lat_fs-10k (num_files) | 129.00
lat_fs-1k (num_files) | 184.00
lat_fs-4k (num_files) | 184.00
lat_mem_rd-stride128-sz1000k (ns) | 31.07
lat_mem_rd-stride128-sz125k (ns) | 6.30
lat_mem_rd-stride128-sz250k (ns) | 6.64
lat_mem_rd-stride128-sz31k (ns) | 4.10
lat_mem_rd-stride128-sz50 (ns) | 2.41
lat_mem_rd-stride128-sz500k (ns) | 13.65
lat_mem_rd-stride128-sz62k (ns) | 5.91
lat_mmap-1m (us) | 56.00
lat_ops-double-add (ns) | 3.22
lat_ops-double-div (ns) | 17.69
lat_ops-double-mul (ns) | 3.22
lat_ops-float-add (ns) | 3.22
lat_ops-float-div (ns) | 10.44
lat_ops-float-mul (ns) | 3.22
lat_ops-int-add (ns) | 0.80
lat_ops-int-bit (ns) | 0.54
lat_ops-int-div (ns) | 4.82
lat_ops-int-mod (ns) | 5.10
lat_ops-int-mul (ns) | 3.45
lat_ops-int64-add (ns) | 0.80
lat_ops-int64-bit (ns) | 0.54
lat_ops-int64-div (ns) | 7.63
lat_ops-int64-mod (ns) | 5.89
lat_ops-int64-mul (ns) | 3.98
lat_pagefault (us) | 1.47
lat_pipe (us) | 22.37
lat_proc-exec (us) | 958.33
lat_proc-fork (us) | 744.14
lat_proc-proccall (us) | 0.01
lat_select (us) | 37.06
lat_sem (us) | 3.32
lat_sig-catch (us) | 6.05
lat_sig-install (us) | 0.74
lat_sig-prot (us) | 0.45
lat_syscall-fstat (us) | 3.02
lat_syscall-null (us) | 0.53
lat_syscall-open (us) | 132.97
lat_syscall-read (us) | 0.86
lat_syscall-stat (us) | 4.32
lat_syscall-write (us) | 0.73
lat_tcp (us) | 1.03
lat_unix (us) | 28.05
latency_for_0.50_mb_block_size (nanosec) | 13.65
latency_for_1.00_mb_block_size (nanosec) | 15.53 (min 0.00, max 31.07)
pipe_bandwidth (MBs) | 613.53
pipe_latency (microsec) | 22.37
procedure_call (microsec) | 0.01
select_on_200_tcp_fds (microsec) | 37.06
semaphore_latency (microsec) | 3.32
signal_handler_latency (microsec) | 0.74
signal_handler_overhead (microsec) | 6.05
tcp_ip_connection_cost_to_localhost (microsec) | 58.41
tcp_latency_using_localhost (microsec) | 1.03

Table: LMBench Metrics

2.4.1.1.2. Dhrystone

Dhrystone is a core-only benchmark that runs from warm L1 caches in all modern processors. It scales linearly with clock speed. For standard Arm cores, the DMIPS/MHz score will be identical when built with the same compiler and flags.

Benchmarks | am62axx_sk-fs: perf
cpu_clock (MHz) | 1250.00
dhrystone_per_mhz (DMIPS/MHz) | 2.90
dhrystone_per_second (DhrystoneP) | 6451613.00

Table: Dhrystone Benchmark

2.4.1.1.3. Whetstone

Benchmarks | am62axx_sk-fs: perf
whetstone (MIPS) | 5000.00

Table: Whetstone Benchmark

2.4.1.1.4. Linpack

Linpack measures peak double precision (64-bit) floating point performance in solving a dense linear system.

Benchmarks | am62axx_sk-fs: perf
linpack (Kflops) | 514704.00

Table: Linpack Benchmark

2.4.1.1.5. Stream

STREAM is a microbenchmark for measuring data memory system performance without any data reuse. It is designed to miss on caches and exercise the data prefetcher and speculative accesses. It uses double precision floating point (64-bit), but in most modern processors the memory access will be the bottleneck. The four individual scores are copy; scale (multiply by a constant); add (add two arrays); and triad (a multiply-accumulate). For bandwidth, a byte read counts as one and a byte written counts as one, resulting in a score that is double the bandwidth LMBench will show (for example, the 3488.90 MB/s copy score below is roughly twice the 1879.48 MB/s bw_mem-bcopy-16mb result above).
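STREAM is typically built from McCalpin's reference stream.c; a minimal sketch of a common build-and-run sequence (the array-size value here is illustrative and simply needs to be large enough to exceed the caches):

# build with a working-set well beyond the last-level cache, then run
gcc -O3 -DSTREAM_ARRAY_SIZE=10000000 stream.c -o stream
./stream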

Benchmarks | am62axx_sk-fs: perf
add (MB/s) | 2444.50
copy (MB/s) | 3488.90
scale (MB/s) | 3225.80
triad (MB/s) | 2235.80

Table: Stream

2.4.1.1.6. CoreMarkPro

CoreMark®-Pro is a comprehensive, advanced processor benchmark that works with and enhances the market-proven industry-standard EEMBC CoreMark® benchmark. While CoreMark stresses the CPU pipeline, CoreMark-Pro tests the entire processor, adding comprehensive support for multicore technology, a combination of integer and floating-point workloads, and data sets for utilizing larger memory subsystems.

Benchmarks | am62axx_sk-fs: perf
cjpeg-rose7-preset (workloads/sec) | 36.50
core (workloads/sec) | 0.27
coremark-pro | 784.02
linear_alg-mid-100x100-sp (workloads/sec) | 13.03
loops-all-mid-10k-sp (workloads/sec) | 0.62
nnet_test (workloads/sec) | 0.97
parser-125k (workloads/sec) | 7.35
radix2-big-64k (workloads/sec) | 39.30
sha-test (workloads/sec) | 72.46
zip-test (workloads/sec) | 19.61

Table: CoreMarkPro

2.4.1.1.7. MultiBench

MultiBench™ is a suite of benchmarks that allows processor and system designers to analyze, test, and improve multicore processors. It uses three forms of concurrency:

  • Data decomposition: multiple threads cooperating to achieve a unified goal, demonstrating a processor's support for fine-grained parallelism.
  • Processing multiple data streams: common code running over multiple threads, demonstrating how well a processor scales over scalable data inputs.
  • Multiple workload processing: shows the scalability of general-purpose processing, demonstrating concurrency over both code and data.

MultiBench combines a wide variety of application-specific workloads with the EEMBC Multi-Instance-Test Harness (MITH), which is compatible and portable with almost any multicore processor and operating system. MITH uses a thread-based, POSIX-compliant API to establish a common programming model that communicates with the benchmark through an abstraction layer, providing a flexible interface that allows a wide variety of thread-enabled workloads to be tested.

Benchmarks | am62axx_sk-fs: perf
4m-check (workloads/sec) | 355.77
4m-check-reassembly (workloads/sec) | 112.36
4m-check-reassembly-tcp (workloads/sec) | 54.00
4m-check-reassembly-tcp-cmykw2-rotatew2 (workloads/sec) | 24.07
4m-check-reassembly-tcp-x264w2 (workloads/sec) | 1.47
4m-cmykw2 (workloads/sec) | 161.81
4m-cmykw2-rotatew2 (workloads/sec) | 36.86
4m-reassembly (workloads/sec) | 81.04
4m-rotatew2 (workloads/sec) | 42.21
4m-tcp-mixed (workloads/sec) | 106.67
4m-x264w2 (workloads/sec) | 1.49
empty-wld (workloads/sec) |
idct-4m (workloads/sec) | 17.13
idct-4mw1 (workloads/sec) | 17.13
ippktcheck-4m (workloads/sec) | 356.84
ippktcheck-4mw1 (workloads/sec) | 356.58
ipres-4m (workloads/sec) | 103.52
ipres-4mw1 (workloads/sec) | 103.73
md5-4m (workloads/sec) | 24.46
md5-4mw1 (workloads/sec) | 24.52
rgbcmyk-4m (workloads/sec) | 58.88
rgbcmyk-4mw1 (workloads/sec) | 58.77
rotate-4ms1 (workloads/sec) | 21.40
rotate-4ms1w1 (workloads/sec) | 21.21
rotate-4ms64 (workloads/sec) | 21.51
rotate-4ms64w1 (workloads/sec) | 21.41
x264-4mq (workloads/sec) | 0.51
x264-4mqw1 (workloads/sec) | 0.51

Table: Multibench


2.4.1.2. Ethernet

Ethernet performance benchmarks were measured using Netperf 2.7.1 (https://hewlettpackard.github.io/netperf/doc/netperf.html). Test procedures were modeled after those defined in RFC-2544 (https://tools.ietf.org/html/rfc2544), where the DUT (device under test) is the TI device and the "tester" is a Linux PC. To produce consistent results, it is recommended to carry out performance tests in a private network and to avoid running NFS on the same interface used in the test. In these results, CPU utilization was captured as the total percentage used across all cores on the device while running the performance test over one external interface.

UDP Throughput (0% loss) was measured by the procedure defined in RFC-2544 section 26.1: Throughput. In this scenario, the netperf options burst_size (-b) and wait_time (-w) are used to limit bandwidth during different trials of the test, with the goal of finding the highest rate at which no loss is seen. For example, to limit bandwidth to 500 Mbits/sec with 1472 B datagrams:

burst_size = <bandwidth (bits/sec)> / 8 (bits -> bytes) / <UDP datagram size> / 100 (seconds -> 10 ms)
burst_size = 500000000 / 8 / 1472 / 100 = 425

wait_time = 10 milliseconds (minimum supported by Linux PC used for testing)
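A minimal shell sketch of this calculation (the calc_burst helper name is illustrative, not part of netperf):

calc_burst() {
    # $1 = target bandwidth in bits/sec, $2 = UDP datagram size in bytes;
    # dividing by 100 converts per-second to per-10 ms wait interval
    awk -v bw="$1" -v sz="$2" 'BEGIN { printf "%d\n", bw / 8 / sz / 100 + 0.5 }'
}
calc_burst 500000000 1472    # prints 425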

UDP Throughput (possible loss) was measured by capturing throughput and packet loss statistics when running the netperf test with no bandwidth limit (remove the -b and -w options).

In order to start a netperf client on one device, the other device must have netserver running. To start netserver:

netserver [-p <port_number>] [-4 (IPv4 addressing)] [-6 (IPv6 addressing)]

Running the following shell script from the DUT will trigger netperf clients to measure bidirectional TCP performance for 60 seconds and report CPU utilization. The -k parameter is used in the client commands to print selected statistics on their own lines, and -j is used to gather additional timing measurements during the test.

#!/bin/bash
# Launch egress (TCP_STREAM) and ingress (TCP_MAERTS) tests in parallel,
# then wait for both clients to print their reports.
netperf -H <tester ip> -j -c -l 60 -t TCP_STREAM -- \
    -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE &

netperf -H <tester ip> -j -c -l 60 -t TCP_MAERTS -- \
    -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE &

wait

Running the following commands will trigger netperf clients to measure UDP burst performance for 60 seconds at various burst/datagram sizes and report CPU utilization.

  • For UDP egress tests, run netperf client from DUT and start netserver on tester.

netperf -H <tester ip> -j -c -l 60 -t UDP_STREAM -b <burst_size> -w <wait_time> -- -m <UDP datagram size> \
    -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE
  • For UDP ingress tests, run netperf client from tester and start netserver on DUT.

netperf -H <DUT ip> -j -C -l 60 -t UDP_STREAM -b <burst_size> -w <wait_time> -- -m <UDP datagram size> \
    -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE

2.4.1.2.1. CPSW/CPSW2g/CPSW3g Ethernet Driver

TCP Bidirectional Throughput

Command Used | am62axx_sk-fs: THROUGHPUT (Mbits/sec) | am62axx_sk-fs: CPU Load % (LOCAL_CPU_UTIL)
netperf -H 192.168.0.1 -j -c -C -l 60 -t TCP_STREAM; netperf -H 192.168.0.1 -j -c -C -l 60 -t TCP_MAERTS | 1855.62 | 85.03

Table: CPSW TCP Bidirectional Throughput


2.4.1.3. CRYPTO Driver

2.4.1.3.1. OpenSSL Performance

Algorithm | Buffer Size (in bytes) | am62axx_sk-fs: throughput (KBytes/Sec)
aes-128-cbc | 16 | 411.44
aes-128-cbc | 64 | 1621.76
aes-128-cbc | 256 | 6201.94
aes-128-cbc | 1024 | 21167.79
aes-128-cbc | 8192 | 69033.98
aes-128-cbc | 16384 | 83367.25
aes-192-cbc | 16 | 414.10
aes-192-cbc | 64 | 1634.03
aes-192-cbc | 256 | 6210.73
aes-192-cbc | 1024 | 20931.58
aes-192-cbc | 8192 | 63892.14
aes-192-cbc | 16384 | 75688.62
aes-256-cbc | 16 | 412.90
aes-256-cbc | 64 | 1610.69
aes-256-cbc | 256 | 6145.88
aes-256-cbc | 1024 | 20416.51
aes-256-cbc | 8192 | 59189.93
aes-256-cbc | 16384 | 69277.01
des-cbc | 16 | 4122.59
des-cbc | 64 | 11011.56
des-cbc | 256 | 18923.78
des-cbc | 1024 | 23051.95
des-cbc | 8192 | 24545.96
des-cbc | 16384 | 24663.38
des3 | 16 | 3229.89
des3 | 64 | 6573.42
des3 | 256 | 8893.61
des3 | 1024 | 9750.87
des3 | 8192 | 10005.16
des3 | 16384 | 10032.47
md5 | 16 | 886.54
md5 | 64 | 3446.57
md5 | 256 | 12976.64
md5 | 1024 | 41871.02
md5 | 8192 | 116471.13
md5 | 16384 | 133802.67
sha1 | 16 | 846.15
sha1 | 64 | 3425.77
sha1 | 256 | 13363.88
sha1 | 1024 | 49449.30
sha1 | 8192 | 225503.91
sha1 | 16384 | 304911.70
sha224 | 16 | 848.51
sha224 | 64 | 3375.25
sha224 | 256 | 13263.36
sha224 | 1024 | 49502.89
sha224 | 8192 | 228018.86
sha224 | 16384 | 311083.01
sha256 | 16 | 496.45
sha256 | 64 | 1979.52
sha256 | 256 | 7828.39
sha256 | 1024 | 29983.40
sha256 | 8192 | 166207.49
sha256 | 16384 | 248146.60
sha384 | 16 | 831.21
sha384 | 64 | 3314.47
sha384 | 256 | 11166.89
sha384 | 1024 | 30427.82
sha384 | 8192 | 60708.18
sha384 | 16384 | 65465.00
sha512 | 16 | 489.23
sha512 | 64 | 1959.25
sha512 | 256 | 7042.13
sha512 | 1024 | 21775.70
sha512 | 8192 | 55184.04
sha512 | 16384 | 62133.59

Table: OpenSSL Performance


Algorithm | am62axx_sk-fs: CPU Load
aes-128-cbc | 37.00
aes-192-cbc | 36.00
aes-256-cbc | 35.00
des-cbc | 97.00
des3 | 97.00
md5 | 98.00
sha1 | 98.00
sha224 | 98.00
sha256 | 98.00
sha384 | 88.00
sha512 | 98.00

Table: OpenSSL CPU Load


The command below shows the form used to run each benchmark test; the algorithm name passed to -evp is substituted for each row of the tables above.


time -v openssl speed -elapsed -evp aes-128-cbc
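To cover every row of the tables above, the same invocation can be looped over the algorithm names; a minimal sketch, assuming this openssl build accepts each of the alias names below via -evp and a time utility that supports -v (e.g. GNU or BusyBox time, as in the snippet above):

for alg in aes-128-cbc aes-192-cbc aes-256-cbc des-cbc des3 \
           md5 sha1 sha224 sha256 sha384 sha512; do
    # time -v captures the CPU load figures alongside the throughput report
    time -v openssl speed -elapsed -evp "$alg"
done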

2.4.1.3.2. IPSec Software Performance

Algorithm | am62axx_sk-fs: Throughput (Mbps) | am62axx_sk-fs: Packets/Sec | am62axx_sk-fs: CPU Load
3des | 63.80 | 5.00 | 24.89
aes128 | 60.90 | 5.00 | 18.42
aes192 | 56.00 | 5.00 | 17.21
aes256 | 0.00 | 0.00 | 8.57

Table: IPSec Software Performance
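Software IPSec on Linux is handled by the kernel XFRM framework. As an illustration only (the exact TI test topology, keys, and tunnel parameters are not shown here), a minimal sketch of creating one outbound ESP transport-mode state and policy with iproute2; the addresses, SPI, and key are placeholders:

# illustrative addresses/SPI/key; a mirrored state and policy are needed
# for the reverse direction and on the peer device
ip xfrm state add src 192.168.1.10 dst 192.168.1.20 proto esp spi 0x100 \
    mode transport enc 'cbc(aes)' 0x0123456789abcdef0123456789abcdef
ip xfrm policy add src 192.168.1.10 dst 192.168.1.20 dir out \
    tmpl proto esp mode transport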