2.2. Linux 09.00.00 Performance Guide

About This Manual

This document provides performance data for each of the device drivers that are part of the Processor SDK Linux package. It should be used in conjunction with the release notes and user guides provided with the Processor SDK Linux package for information on specific issues present with drivers included in a particular release.

If You Need Assistance

For further information or to report any problems, visit http://e2e.ti.com/ or http://support.ti.com/.

2.2.1. System Benchmarks

2.2.1.1. LMBench

LMBench is a collection of microbenchmarks, of which the memory bandwidth and latency tests are typically used to estimate processor memory system performance. More information about LMBench is available at http://lmbench.sourceforge.net/whatis_lmbench.html and http://lmbench.sourceforge.net/man/lmbench.8.html.

Latency: lat_mem_rd-stride128-szN, where N is equal to or smaller than the cache size at a given level, measures the cache miss penalty. An N that is at least double the size of the last-level cache gives the latency to external memory.

Bandwidth: bw_mem_bcopy-N, where N is equal to or smaller than the cache size at a given level, measures the achievable memory bandwidth from software doing a memcpy()-type operation. Typical use is for external memory bandwidth calculation. The bandwidth is calculated with each byte read or written counted as 1, so the result should be roughly half of the STREAM copy result.
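
If the standalone lmbench binaries are installed on the target, the underlying measurements can also be reproduced directly. A minimal sketch, assuming bw_mem and lat_mem_rd are on the PATH (block size and stride chosen to match the table entries below):

# memcpy-style bandwidth over an 8 MB block (corresponds to bw_mem-bcopy-8mb)
bw_mem 8m bcopy
# read latency with a 128-byte stride, sweeping array sizes up to 1 MB (lat_mem_rd-stride128-*)
lat_mem_rd 1 128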

Execute LMBench with the following commands:

cd /opt/ltp
./runltp -P j721e-idk-gw -f ddt/lmbench -s LMBENCH_L_PERF_0001
Benchmarks am68_sk-fs: perf am69_sk-fs: perf
af_unix_sock_stream_latency (microsec) 15.11 14.06
af_unix_socket_stream_bandwidth (MBs) 1989.80 3409.24
bw_file_rd-io-1mb (MB/s) 2926.12 3621.00
bw_file_rd-o2c-1mb (MB/s) 1062.13 323.78
bw_mem-bcopy-16mb (MB/s) 3387.68 3129.89
bw_mem-bcopy-1mb (MB/s) 4347.83 9372.61
bw_mem-bcopy-2mb (MB/s) 3577.82 4456.00
bw_mem-bcopy-4mb (MB/s) 3381.23 3738.32
bw_mem-bcopy-8mb (MB/s) 3407.16 3273.77
bw_mem-bzero-16mb (MB/s) 10568.03 10862.19
bw_mem-bzero-1mb (MB/s) 8584.17 (min 4347.83, max 12820.51) 11570.03 (min 9372.61, max 13767.44)
bw_mem-bzero-2mb (MB/s) 7450.36 (min 3577.82, max 11322.89) 9058.89 (min 4456.00, max 13661.77)
bw_mem-bzero-4mb (MB/s) 7023.00 (min 3381.23, max 10664.77) 8008.56 (min 3738.32, max 12278.80)
bw_mem-bzero-8mb (MB/s) 6987.60 (min 3407.16, max 10568.03) 7300.60 (min 3273.77, max 11327.43)
bw_mem-cp-16mb (MB/s) 2119.77 2406.02
bw_mem-cp-1mb (MB/s) 7549.09 (min 2478.75, max 12619.43) 8717.09 (min 3693.44, max 13740.73)
bw_mem-cp-2mb (MB/s) 6702.99 (min 2089.50, max 11316.48) 8218.82 (min 2743.95, max 13693.69)
bw_mem-cp-4mb (MB/s) 6375.94 (min 2092.78, max 10659.09) 7751.62 (min 2802.59, max 12700.65)
bw_mem-cp-8mb (MB/s) 6336.49 (min 2090.96, max 10582.01) 6910.97 (min 2450.23, max 11371.71)
bw_mem-fcp-16mb (MB/s) 3367.36 3051.11
bw_mem-fcp-1mb (MB/s) 8456.01 (min 4091.50, max 12820.51) 10197.42 (min 6627.39, max 13767.44)
bw_mem-fcp-2mb (MB/s) 7403.95 (min 3485.00, max 11322.89) 8668.11 (min 3674.44, max 13661.77)
bw_mem-fcp-4mb (MB/s) 7049.72 (min 3434.66, max 10664.77) 7877.93 (min 3477.05, max 12278.80)
bw_mem-fcp-8mb (MB/s) 6970.83 (min 3373.63, max 10568.03) 7264.14 (min 3200.85, max 11327.43)
bw_mem-frd-16mb (MB/s) 4166.67 3565.86
bw_mem-frd-1mb (MB/s) 4687.06 (min 4091.50, max 5282.62) 7284.10 (min 6627.39, max 7940.80)
bw_mem-frd-2mb (MB/s) 4066.28 (min 3485.00, max 4647.56) 4134.31 (min 3674.44, max 4594.18)
bw_mem-frd-4mb (MB/s) 3827.04 (min 3434.66, max 4219.41) 3698.35 (min 3477.05, max 3919.65)
bw_mem-frd-8mb (MB/s) 3768.71 (min 3373.63, max 4163.78) 3553.87 (min 3200.85, max 3906.89)
bw_mem-fwr-16mb (MB/s) 10575.02 10832.77
bw_mem-fwr-1mb (MB/s) 8951.03 (min 5282.62, max 12619.43) 10840.77 (min 7940.80, max 13740.73)
bw_mem-fwr-2mb (MB/s) 7982.02 (min 4647.56, max 11316.48) 9143.94 (min 4594.18, max 13693.69)
bw_mem-fwr-4mb (MB/s) 7439.25 (min 4219.41, max 10659.09) 8310.15 (min 3919.65, max 12700.65)
bw_mem-fwr-8mb (MB/s) 7372.90 (min 4163.78, max 10582.01) 7639.30 (min 3906.89, max 11371.71)
bw_mem-rd-16mb (MB/s) 4883.26 4263.26
bw_mem-rd-1mb (MB/s) 5881.98 (min 4604.05, max 7159.90) 15977.73 (min 14633.18, max 17322.27)
bw_mem-rd-2mb (MB/s) 4323.14 (min 3001.88, max 5644.40) 6988.59 (min 6321.33, max 7655.85)
bw_mem-rd-4mb (MB/s) 3661.09 (min 2335.54, max 4986.64) 4310.51 (min 3872.84, max 4748.18)
bw_mem-rd-8mb (MB/s) 3567.05 (min 2235.89, max 4898.21) 4265.04 (min 3775.96, max 4754.12)
bw_mem-rdwr-16mb (MB/s) 2128.51 2645.50
bw_mem-rdwr-1mb (MB/s) 3318.17 (min 2478.75, max 4157.59) 6653.54 (min 3693.44, max 9613.64)
bw_mem-rdwr-2mb (MB/s) 2312.18 (min 2089.50, max 2534.85) 3819.73 (min 2743.95, max 4895.50)
bw_mem-rdwr-4mb (MB/s) 2176.81 (min 2092.78, max 2260.84) 2969.19 (min 2802.59, max 3135.78)
bw_mem-rdwr-8mb (MB/s) 2111.86 (min 2090.96, max 2132.76) 2765.16 (min 2450.23, max 3080.08)
bw_mem-wr-16mb (MB/s) 2206.29 3039.51
bw_mem-wr-1mb (MB/s) 4380.82 (min 4157.59, max 4604.05) 13467.96 (min 9613.64, max 17322.27)
bw_mem-wr-2mb (MB/s) 2768.37 (min 2534.85, max 3001.88) 5608.42 (min 4895.50, max 6321.33)
bw_mem-wr-4mb (MB/s) 2298.19 (min 2260.84, max 2335.54) 3504.31 (min 3135.78, max 3872.84)
bw_mem-wr-8mb (MB/s) 2184.33 (min 2132.76, max 2235.89) 3428.02 (min 3080.08, max 3775.96)
bw_mmap_rd-mo-1mb (MB/s) 8630.61 12919.43
bw_mmap_rd-o2c-1mb (MB/s) 979.43 1195.70
bw_pipe (MB/s) 848.26 886.10
bw_unix (MB/s) 1989.80 3409.24
lat_connect (us) 34.62 34.50
lat_ctx-2-128k (us) 3.57 3.42
lat_ctx-2-256k (us) 2.80 2.57
lat_ctx-4-128k (us) 3.76 3.17
lat_ctx-4-256k (us) 2.73 2.54
lat_fs-0k (num_files) 486.00 568.00
lat_fs-10k (num_files) 189.00 220.00
lat_fs-1k (num_files) 296.00 308.00
lat_fs-4k (num_files) 311.00 284.00
lat_mem_rd-stride128-sz1000k (ns) 13.34 5.72
lat_mem_rd-stride128-sz125k (ns) 5.57 5.65
lat_mem_rd-stride128-sz250k (ns) 5.57 5.65
lat_mem_rd-stride128-sz31k (ns) 3.35 3.40
lat_mem_rd-stride128-sz50 (ns) 2.00 2.00
lat_mem_rd-stride128-sz500k (ns) 5.90 5.65
lat_mem_rd-stride128-sz62k (ns) 5.57 5.65
lat_mmap-1m (us) 35.00 34.00
lat_ops-double-add (ns) 1.96 1.96
lat_ops-double-div (ns) 9.01 9.00
lat_ops-double-mul (ns) 2.00 2.00
lat_ops-float-add (ns) 1.96 1.96
lat_ops-float-div (ns) 5.50 5.50
lat_ops-float-mul (ns) 2.00 2.00
lat_ops-int-add (ns) 0.50 0.50
lat_ops-int-bit (ns) 0.33 0.33
lat_ops-int-div (ns) 4.00 4.00
lat_ops-int-mod (ns) 4.69 4.67
lat_ops-int-mul (ns) 1.52 1.52
lat_ops-int64-add (ns) 0.50 0.50
lat_ops-int64-bit (ns) 0.33 0.33
lat_ops-int64-div (ns) 3.00 3.00
lat_ops-int64-mod (ns) 5.67 5.67
lat_ops-int64-mul (ns) 2.52 2.52
lat_pagefault (us) 0.50 0.51
lat_pipe (us) 12.34 17.15
lat_proc-exec (us) 446.25 402.92
lat_proc-fork (us) 370.00 368.06
lat_proc-proccall (us) 0.00 0.00
lat_select (us) 11.10 11.15
lat_sem (us) 1.46 1.33
lat_sig-catch (us) 2.71 2.70
lat_sig-install (us) 0.51 0.52
lat_sig-prot (us) 0.45 0.42
lat_syscall-fstat (us) 1.29 1.26
lat_syscall-null (us) 0.38 0.38
lat_syscall-open (us) 237.26 2937.00
lat_syscall-read (us) 0.48 0.49
lat_syscall-stat (us) 1.68 1.63
lat_syscall-write (us) 0.42 0.44
lat_tcp (us) 0.82 0.76
lat_unix (us) 15.11 14.06
latency_for_0.50_mb_block_size (nanosec) 5.90 5.65
latency_for_1.00_mb_block_size (nanosec) 6.67 (min 0.00, max 13.34) 2.86 (min 0.00, max 5.72)
pipe_bandwidth (MBs) 848.26 886.10
pipe_latency (microsec) 12.34 17.15
procedure_call (microsec) 0.00 0.00
select_on_200_tcp_fds (microsec) 11.10 11.15
semaphore_latency (microsec) 1.46 1.33
signal_handler_latency (microsec) 0.51 0.52
signal_handler_overhead (microsec) 2.71 2.70
tcp_ip_connection_cost_to_localhost (microsec) 34.62 34.50
tcp_latency_using_localhost (microsec) 0.82 0.76

Table: LMBench Metrics

2.2.1.2. Dhrystone

Dhrystone is a core-only benchmark that runs from warm L1 caches in all modern processors. It scales linearly with clock speed. For standard ARM cores, the DMIPS/MHz score will be identical with the same compiler and flags.

Execute the benchmark with the following:

runDhrystone
Benchmarks am68_sk-fs: perf am69_sk-fs: perf
cpu_clock (MHz) 2000.00 2000.00
dhrystone_per_mhz (DMIPS/MHz) 4.70 5.70
dhrystone_per_second (DhrystoneP) 16666667.00 20000000.00

Table: Dhrystone Benchmark
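
As a worked check of the table values, the DMIPS/MHz figure follows from the raw Dhrystones/second score using the conventional VAX 11/780 reference of 1757 Dhrystones/second per DMIPS:

DMIPS/MHz = (Dhrystones per second) / 1757 / (CPU clock in MHz)
          = 20000000 / 1757 / 2000 ≈ 5.69   (reported as 5.70 for am69_sk-fs)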

2.2.1.3. Whetstone

Whetstone is a benchmark primarily measuring floating-point arithmetic performance.

Execute the benchmark with the following:

runWhetstone
Benchmarks am68_sk-fs: perf am69_sk-fs: perf
whetstone (MIPS) 10000.00 10000.00

Table: Whetstone Benchmark

2.2.1.4. Linpack

Linpack measures peak double precision (64 bit) floating point performance in solving a dense linear system.

Benchmarks am68_sk-fs: perf am69_sk-fs: perf
linpack (Kflops) 2508830.00 2614877.00

Table: Linpack Benchmark
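
For reference, 1 GFLOPS equals 10^6 Kflops, so the scores above correspond to roughly 2.51 GFLOPS (am68_sk-fs) and 2.61 GFLOPS (am69_sk-fs).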

2.2.1.5. Stream

STREAM is a microbenchmark for measuring data memory system performance without any data reuse. It is designed to miss in the caches and to exercise the data prefetcher and speculative accesses. It uses double precision floating point (64-bit), but in most modern processors the memory accesses will be the bottleneck. The four individual scores are copy, scale (multiply by a constant), add (add two numbers), and triad (multiply-accumulate). For bandwidth, a byte read counts as one and a byte written counts as one, resulting in a score that is double the bandwidth LMBench will show.
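
For reference, the per-element operations performed by the four STREAM kernels are:

copy:  c[i] = a[i]
scale: b[i] = s * c[i]
add:   c[i] = a[i] + b[i]
triad: a[i] = b[i] + s * c[i]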

Execute the benchmark with the following:

stream_c
Benchmarks am68_sk-fs: perf am69_sk-fs: perf
add (MB/s) 6213.90 6119.00
copy (MB/s) 6958.40 6635.20
scale (MB/s) 7075.30 6660.80
triad (MB/s) 6223.60 6113.20

Table: Stream

2.2.1.6. CoreMarkPro

CoreMark®-Pro is a comprehensive, advanced processor benchmark that works with and enhances the market-proven industry-standard EEMBC CoreMark® benchmark. While CoreMark stresses the CPU pipeline, CoreMark-Pro tests the entire processor, adding comprehensive support for multicore technology, a combination of integer and floating-point workloads, and data sets for utilizing larger memory subsystems.

Benchmarks am68_sk-fs: perf am69_sk-fs: perf
cjpeg-rose7-preset (workloads/) 82.64 81.97
core (workloads/) 0.78 0.78
coremark-pro () 2472.41 2498.29
linear_alg-mid-100x100-sp (workloads/) 80.65 81.57
loops-all-mid-10k-sp (workloads/) 2.48 2.47
nnet_test (workloads/) 3.59 3.56
parser-125k (workloads/) 11.11 10.87
radix2-big-64k (workloads/) 251.51 271.37
sha-test (workloads/) 158.73 158.73
zip-test (workloads/) 47.62 50.00

Table: CoreMarkPro

2.2.1.7. MultiBench

MultiBench™ is a suite of benchmarks that allows processor and system designers to analyze, test, and improve multicore processors. It uses three forms of concurrency:

  • Data decomposition: multiple threads cooperating on achieving a unified goal, demonstrating a processor’s support for fine-grain parallelism.
  • Processing multiple data streams: uses common code running over multiple threads, demonstrating how well a processor scales over scalable data inputs.
  • Multiple workload processing: shows the scalability of general-purpose processing, demonstrating concurrency over both code and data.

MultiBench combines a wide variety of application-specific workloads with the EEMBC Multi-Instance-Test Harness (MITH), which is compatible and portable with most any multicore processor and operating system. MITH uses a thread-based API (POSIX-compliant) to establish a common programming model that communicates with the benchmark through an abstraction layer and provides a flexible interface to allow a wide variety of thread-enabled workloads to be tested.

Benchmarks am68_sk-fs: perf am69_sk-fs: perf
4m-check (workloads/) 881.52 1039.93
4m-check-reassembly (workloads/) 151.06 207.04
4m-check-reassembly-tcp (workloads/) 98.04 119.05
4m-check-reassembly-tcp-cmykw2-rotatew2 (workloads/) 41.64 38.68
4m-check-reassembly-tcp-x264w2 (workloads/) 2.68 4.79
4m-cmykw2 (workloads/) 312.01 600.60
4m-cmykw2-rotatew2 (workloads/) 59.17 46.26
4m-reassembly (workloads/) 131.06 155.28
4m-rotatew2 (workloads/) 70.47 49.07
4m-tcp-mixed (workloads/) 266.67 280.70
4m-x264w2 (workloads/) 2.73 4.96
idct-4m (workloads/) 35.00 35.15
idct-4mw1 (workloads/) 35.01 35.11
ippktcheck-4m (workloads/) 881.52 1040.37
ippktcheck-4mw1 (workloads/) 869.26 1042.10
ipres-4m (workloads/) 165.38 208.91
ipres-4mw1 (workloads/) 164.65 208.91
md5-4m (workloads/) 45.98 47.48
md5-4mw1 (workloads/) 45.72 47.26
rgbcmyk-4m (workloads/) 163.13 164.20
rgbcmyk-4mw1 (workloads/) 162.87 163.93
rotate-4ms1 (workloads/) 52.03 55.01
rotate-4ms1w1 (workloads/) 51.02 55.01
rotate-4ms64 (workloads/) 52.69 55.43
rotate-4ms64w1 (workloads/) 52.91 55.62
x264-4mq (workloads/) 1.42 1.42
x264-4mqw1 (workloads/) 1.41 1.40

Table: Multibench

2.2.2. Boot-time Measurement

2.2.2.1. Boot media: MMCSD

Boot Configuration am68_sk-fs: boot time (sec) am69_sk-fs: boot time (sec)
Kernel boot time test when bootloader, kernel and sdk-rootfs are in mmc-sd 17.15 (min 16.84, max 17.34) 15.54 (min 15.19, max 15.76)
Kernel boot time test when init is /bin/sh and bootloader, kernel and sdk-rootfs are in mmc-sd 4.38 (min 4.38, max 4.39) 4.98 (min 4.94, max 4.99)

Table: Boot time MMC/SD

2.2.3. Graphics SGX/RGX Driver

2.2.3.1. Glmark2

Run Glmark2 and capture the reported performance (Score). All display outputs (HDMI, DisplayPort and/or LCD) are connected when running these tests.
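
A minimal sketch of how these scores are typically collected, assuming the GLES2 glmark2 builds packaged with the SDK (binary names can vary between releases):

glmark2-es2-drm        # full-screen DRM/KMS backend (Glmark2-DRM row)
glmark2-es2-wayland    # Wayland backend, run from a Weston session (Glmark2-Wayland row)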

Benchmark am68_sk-fs: Score am69_sk-fs: Score
Glmark2-DRM 55.00 55.00
Glmark2-Wayland 1297.00 1436.00

Table: Glmark2


2.2.4. Ethernet

Ethernet performance benchmarks were measured using Netperf 2.7.1 (https://hewlettpackard.github.io/netperf/doc/netperf.html). Test procedures were modeled after those defined in RFC-2544 (https://tools.ietf.org/html/rfc2544), where the DUT is the TI device and the “tester” used was a Linux PC. To produce consistent results, it is recommended to carry out performance tests on a private network and to avoid running NFS on the same interface used in the test. In these results, CPU utilization was captured as the total percentage used across all cores on the device while running the performance test over one external interface.

UDP Throughput (0% loss) was measured by the procedure defined in RFC-2544 section 26.1: Throughput. In this scenario, the netperf options burst_size (-b) and wait_time (-w) are used to limit bandwidth during different trials of the test, with the goal of finding the highest rate at which no loss is seen. For example, to limit bandwidth to 500 Mbits/sec with a 1472B datagram:

burst_size = <bandwidth (bits/sec)> / 8 (bits -> bytes) / <UDP datagram size> / 100 (seconds -> 10 ms)
burst_size = 500000000 / 8 / 1472 / 100 = 425

wait_time = 10 milliseconds (minimum supported by Linux PC used for testing)
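
The same calculation as a shell sketch (hypothetical variable names; values taken from the example above):

# compute burst_size = bandwidth / 8 / datagram size / bursts per second, rounded to nearest
BANDWIDTH_BPS=500000000   # target bandwidth in bits/sec
DATAGRAM_BYTES=1472       # UDP datagram size in bytes
WAIT_MS=10                # netperf -w value; 10 ms => 100 bursts per second
BURST_SIZE=$(awk -v bw="$BANDWIDTH_BPS" -v sz="$DATAGRAM_BYTES" -v w="$WAIT_MS" \
    'BEGIN { printf "%d", bw / 8 / sz / (1000 / w) + 0.5 }')
echo "netperf options: -b ${BURST_SIZE} -w ${WAIT_MS}"   # prints: -b 425 -w 10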

UDP Throughput (possible loss) was measured by capturing throughput and packet loss statistics when running the netperf test with no bandwidth limit (remove -b/-w options).

In order to start a netperf client on one device, the other device must have netserver running. To start netserver:

netserver [-p <port_number>] [-4 (IPv4 addressing)] [-6 (IPv6 addressing)]

Running the following shell script from the DUT will trigger netperf clients to measure bidirectional TCP performance for 60 seconds and report CPU utilization. Parameter -k is used in client commands to summarize selected statistics on their own line and -j is used to gain additional timing measurements during the test.

#!/bin/bash
for i in 1
do
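   # run the DUT->tester (TCP_STREAM) and tester->DUT (TCP_MAERTS) directions in parallel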
   netperf -H <tester ip> -j -c -l 60 -t TCP_STREAM -- \
      -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE &

   netperf -H <tester ip> -j -c -l 60 -t TCP_MAERTS -- \
      -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE &
done

Running the following commands will trigger netperf clients to measure UDP burst performance for 60 seconds at various burst/datagram sizes and report CPU utilization.

  • For UDP egress tests, run the netperf client from the DUT and start netserver on the tester (a filled-in example follows this list).
netperf -H <tester ip> -j -c -l 60 -t UDP_STREAM -b <burst_size> -w <wait_time> -- -m <UDP datagram size> \
   -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE
  • For UDP ingress tests, run the netperf client from the tester and start netserver on the DUT.
netperf -H <DUT ip> -j -C -l 60 -t UDP_STREAM -b <burst_size> -w <wait_time> -- -m <UDP datagram size> \
   -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE
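
For example, combining the burst-size calculation above with the egress command (tester IP, burst size, and datagram size are illustrative):

netperf -H 192.168.0.1 -j -c -l 60 -t UDP_STREAM -b 425 -w 10 -- -m 1472 \
   -k DIRECTION,THROUGHPUT,MEAN_LATENCY,LOCAL_CPU_UTIL,REMOTE_CPU_UTIL,LOCAL_BYTES_SENT,REMOTE_BYTES_RECVD,LOCAL_SEND_SIZE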

2.2.4.1. CPSW/CPSW2g/CPSW3g Ethernet Driver

  • CPSW2g: AM65x, J7200, J721e, J721S2, J784S4
  • CPSW3g: AM64x

TCP Bidirectional Throughput

Command Used am69_sk-fs: THROUGHPUT (Mbits/sec) am69_sk-fs: CPU Load % (LOCAL_CPU_UTIL)
netperf -H 192.168.0.1 -j -c -C -l 60 -t TCP_STREAM; netperf -H 192.168.0.1 -j -c -C -l 60 -t TCP_MAERTS 181.25 3.03

Table: CPSW TCP Bidirectional Throughput


TCP Bidirectional Throughput Interrupt Pacing

Command Used am68_sk-fs: THROUGHPUT (Mbits/sec) am68_sk-fs: CPU Load % (LOCAL_CPU_UTIL)
netperf -H 192.168.0.1 -j -c -C -l 60 -t TCP_STREAM; netperf -H 192.168.0.1 -j -c -C -l 60 -t TCP_MAERTS 179.92 9.38

Table: CPSW TCP Bidirectional Throughput Interrupt Pacing


2.2.5. CRYPTO Driver

2.2.5.1. OpenSSL Performance

Algorithm Buffer Size (in bytes) am68_sk-fs: throughput (KBytes/Sec) am69_sk-fs: throughput (KBytes/Sec)
aes-128-cbc 1024 1419020.29 1401805.14
aes-128-cbc 16 312105.08 298655.59
aes-128-cbc 16384 1484745.39 1501779.29
aes-128-cbc 256 1271794.43 1206112.26
aes-128-cbc 64 863253.42 750428.14
aes-128-cbc 8192 1486826.15 1497565.87
aes-192-cbc 1024 1262488.58 1257734.83
aes-192-cbc 16 356885.48 358992.33
aes-192-cbc 16384 1324176.73 1311391.74
aes-192-cbc 256 1088377.09 1075945.22
aes-192-cbc 64 799344.66 799043.11
aes-192-cbc 8192 1322822.31 1322805.93
aes-256-cbc 1024 1088494.93 1076505.60
aes-256-cbc 16 347136.90 277460.52
aes-256-cbc 16384 1128819.37 1117454.34
aes-256-cbc 256 999211.69 956262.31
aes-256-cbc 64 726533.27 642237.78
aes-256-cbc 8192 1129141.59 1130558.81
des3 1024 16290.47 16296.28
des3 16 15234.17 15533.80
des3 16384 16302.08 16307.54
des3 256 16230.91 16246.44
des3 64 16070.38 16110.72
des3 8192 16302.08 16307.54
md5 1024 268004.01 272543.06
md5 16 23708.61 26673.35
md5 16384 321809.07 322120.36
md5 256 176228.44 182089.81
md5 64 74984.49 79500.48
md5 8192 317723.99 318521.34
sha1 1024 739966.63 702369.11
sha1 16 28852.33 28167.43
sha1 16384 1091928.06 1087094.78
sha1 256 328607.49 328636.50
sha1 64 107892.71 107398.46
sha1 8192 1058485.59 1051451.39
sha224 1024 670352.04 666634.24
sha224 16 32146.12 30489.96
sha224 16384 967469.74 967546.20
sha224 256 342687.23 339400.28
sha224 64 115086.29 113553.15
sha224 8192 936132.61 943158.61
sha256 1024 650168.66 651953.83
sha256 16 27388.27 29250.07
sha256 16384 962494.46 960075.09
sha256 256 322742.19 323623.77
sha256 64 104748.25 106680.26
sha256 8192 938912.43 939095.38
sha384 1024 217740.29 216774.66
sha384 16 15821.97 15635.32
sha384 16384 271592.11 271766.87
sha384 256 133338.11 131896.75
sha384 64 63657.26 63300.20
sha384 8192 268088.66 267785.56
sha512 1024 220648.79 219712.85
sha512 16 17191.00 16875.69
sha512 16384 271187.97 272307.54
sha512 256 137726.63 136462.25
sha512 64 68889.02 67711.59
sha512 8192 267815.59 268266.15


Algorithm am68_sk-fs: CPU Load am69_sk-fs: CPU Load
aes-128-cbc 99.00 99.00
aes-192-cbc 99.00 99.00
aes-256-cbc 99.00 99.00
des-cbc 60.00 100.00
des3 99.00 99.00
md5 99.00 99.00
sha1 99.00 99.00
sha224 99.00 99.00
sha256 99.00 99.00
sha384 99.00 99.00
sha512 99.00 99.00

Listed for each algorithm is the command used to run the benchmark test, shown here for aes-128-cbc:

time -v openssl speed -elapsed -evp aes-128-cbc
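
A minimal sketch that repeats the measurement over the other algorithms in the tables above, assuming the GNU time binary is installed at /usr/bin/time and using the EVP algorithm names accepted by openssl speed (des3 is spelled des-ede3-cbc in EVP form):

for alg in aes-128-cbc aes-192-cbc aes-256-cbc des-ede3-cbc md5 sha1 sha224 sha256 sha384 sha512
do
   # -elapsed reports throughput based on wall-clock time; /usr/bin/time -v captures CPU utilization
   /usr/bin/time -v openssl speed -elapsed -evp "$alg"
done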

2.2.6. DCAN Driver

Performance and Benchmarks not available in this release.