3.2.2.20. E5010 JPEG Encoder

3.2.2.20.1. Introduction

The E5010 JPEG encoder is a stateful, scalable-performance still image encoder. It is capable of real-time encoding of YUV420/YUV422 raw picture data into a fully compressed image (JPEG) or a sequence of compressed images (MJPEG).

3.2.2.20.2. Hardware Specification

  • Supports 8-bit 4:2:0/4:2:2 YUV as raw input video data

  • Supports memory-to-memory mode of MJPEG encoding

  • Two different Quantization tables are supported

  • Configurable compression ratio

  • Max resolution 8K x 8K

  • Supports JPEG Baseline profile

  • The core includes an MMU and tiling engine.

  • Encode rate of 4 pixels per cycle

See the technical reference manual (TRM) for the SoC in question for more detailed information.

3.2.2.20.3. Driver Architecture

  • The driver is based on the Video4Linux 2 (V4L2) API and is responsible for configuring the E5010 core and kicking off image processing.

  • It is also responsible for generating the JPEG image headers once the compressed image has been produced by the E5010 JPEG core.

  • The driver is implemented using V4L2's M2M framework, which is a framework for memory-to-memory devices that take data from memory, do some processing, and write the processed data back to memory.
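
As an illustration of how such an M2M driver plugs into the framework, below is a minimal, hypothetical kernel-side sketch (not the actual E5010 driver code) of the device_run hook that the V4L2 M2M core invokes whenever a source/destination buffer pair is ready for a context. The structure names (e5010_dev, e5010_ctx) are placeholders and the hardware programming itself is omitted.

#include <media/v4l2-fh.h>
#include <media/v4l2-mem2mem.h>
#include <media/videobuf2-v4l2.h>

/* Hypothetical per-device and per-instance state; the real driver layout may differ. */
struct e5010_dev {
        struct v4l2_m2m_dev *m2m_dev;
};

struct e5010_ctx {
        struct v4l2_fh fh;              /* fh.m2m_ctx is created at open() time */
        struct e5010_dev *dev;
};

/* Called by the M2M core once both a raw (OUTPUT) and a JPEG (CAPTURE)
 * buffer are queued for this context. */
static void e5010_device_run(void *priv)
{
        struct e5010_ctx *ctx = priv;
        struct vb2_v4l2_buffer *src, *dst;

        /* ... program the E5010 with the DMA addresses of the next source and
         * destination buffers and start the encode (hardware-specific part
         * omitted). On completion, normally from the interrupt handler: ... */

        src = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
        dst = v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx);
        v4l2_m2m_buf_done(src, VB2_BUF_STATE_DONE);
        v4l2_m2m_buf_done(dst, VB2_BUF_STATE_DONE);
        v4l2_m2m_job_finish(ctx->dev->m2m_dev, ctx->fh.m2m_ctx);
}

static const struct v4l2_m2m_ops e5010_m2m_ops = {
        .device_run = e5010_device_run,
};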

The diagram below depicts the different software layers involved:


Fig. 3.3 E5010 JPEG Driver software stack



3.2.2.20.4. Sample userspace programs for validating the hardware

3.2.2.20.4.1. v4l2-ctl

The v4l2-ctl application from the v4l-utils package can be used to encode YUV 4:2:0 and YUV 4:2:2 pixel data from a file into a sequence of JPEG images, as demonstrated in the example command below:

v4l2-ctl -d /dev/video2 --set-fmt-video-out=width=640,height=480,pixelformat=NV12 --stream-mmap --stream-out-mmap --stream-to-hdr out.jpeg --stream-from op.yuv

3.2.2.20.4.2. gst-launch-1.0

GStreamer is a plugin-based media framework for creating streaming media applications. gst-launch-1.0 is a sample GStreamer test application from the gstreamer1.0 package which, along with the GStreamer plugins (pre-installed in the filesystem), can be used to run different kinds of video encoding use-cases utilizing this driver, as demonstrated in the example commands below.

#JPEG Encoding of live stream from Test Pattern Generator
gst-launch-1.0 -v videotestsrc num-buffers=100 '!' video/x-raw,width=640,height=480,format=NV12,framerate=30/1 '!' queue '!' v4l2jpegenc extra-controls=c,compression_quality=75 '!' filesink location="op.mjpeg"

#JPEG Encoding of input YUV source file
gst-launch-1.0 filesrc location=/<path_to_file>  ! rawvideoparse width=1920 height=1080 format=nv12 framerate=30/1 ! v4l2jpegenc ! filesink location=/<path_to_file>

#JPEG Streaming of live v4l2 based RPi Camera (imx219) feed
Server (imx219 rawcamera->isp->encode->streamout) :
$gst-launch-1.0 v4l2src device=/dev/video-rpi-cam0 io-mode=5 ! video/x-bayer,width=1920,height=1080,format=bggr ! tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI dcc-isp-file=/opt/imaging/imx219/dcc_viss_1920x1080.bin sink_0::dcc-2a-file=/opt/imaging/imx219/dcc_2a_1920x1080.bin sink_0::device=/dev/v4l-rpi-subdev0 ! video/x-raw,format=NV12 ! v4l2jpegenc output-io-mode=dmabuf-import extra-controls=c,compression_quality=70 ! rtpjpegpay ! udpsink port=5000 host=<ip_address>

#Client (streamin->decode->display), assuming Ubuntu with pre-installed GStreamer as the host machine:
$gst-launch-1.0 -v udpsrc port=5000 caps = "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)JPEG, payload=(int)26" ! rtpjitterbuffer latency=50 ! rtpjpegdepay ! jpegparse ! jpegdec ! queue ! fpsdisplaysink text-overlay=false name=fpssink video-sink="autovideosink" sync=true -v

3.2.2.20.5. Building the driver

The E5010 JPEG driver is already enabled as a kernel module in Processor SDK 9.0 for AM62A in the default defconfig used for the board. If using a separate defconfig, it can be enabled explicitly by setting the corresponding Kconfig option as shown below:

CONFIG_VIDEO_E5010_JPEG_ENC=m

3.2.2.20.6. Supported driver features

The driver currently supports the following features:

  • Compression quality setting

  • V4L2 API Compliance

  • Power Management

  • Multi-instance JPEG encoding

  • DMABuf Import and Export support

  • Supported video formats

3.2.2.20.6.1. Compression quality setting

  • The driver provides userspace applications an IOCTL-based interface to select the picture quality of the encoded pixel data.

  • Applications can set the picture quality used for encoding via the V4L2_CID_JPEG_COMPRESSION_QUALITY control, passed as the control id of a VIDIOC_S_CTRL ioctl (a minimal C sketch follows the GStreamer example below).

    For more information on the above control, refer to the links below:

  • V4L2 JPEG ctrls

  • V4L2 ctrl ioctls

  • There is a trade-off between picture quality and compression ratio: a higher compression quality setting yields better picture quality in the encoded file, but it also reduces the compression ratio, leading to a larger encoded file.

  • By default, the driver sets the compression quality to 75% if userspace does not set any value.

  • The example below depicts how userspace can select a different compression quality using a GStreamer-based pipeline:

#Select compression quality as 50%
$gst-launch-1.0 -v videotestsrc num-buffers=100 '!' video/x-raw,width=640,height=480,format=NV12,framerate=30/1 '!' queue '!' v4l2jpegenc extra-controls=c,compression_quality=50 capture-io-mode=dmabuf-export output-io-mode=dmabuf-export '!' filesink location="op.mjpeg"
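
For applications using the V4L2 API directly, the same control can be set with a VIDIOC_S_CTRL ioctl, as in the minimal userspace sketch below; /dev/video2 is only an example node and should be replaced with the node enumerated for the E5010 encoder on your system.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
        int fd = open("/dev/video2", O_RDWR);   /* E5010 encoder video node (example) */

        if (fd < 0) {
                perror("open");
                return 1;
        }

        /* Request 50% compression quality for subsequent encodes */
        struct v4l2_control ctrl = {
                .id    = V4L2_CID_JPEG_COMPRESSION_QUALITY,
                .value = 50,
        };

        if (ioctl(fd, VIDIOC_S_CTRL, &ctrl) < 0)
                perror("VIDIOC_S_CTRL");

        close(fd);
        return 0;
}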

3.2.2.20.6.2. V4L2 API Compliance

The driver is fully compliant with the V4L2 API, with a 100% PASS result achieved for the v4l2-compliance test, which can be run as shown below:

$v4l2-compliance -s -d /dev/videoX (X=video node number for JPEG Encoder)

3.2.2.20.6.3. Power Management

The driver supports both runtime and system suspend hooks, although only runtime suspend was validated on the AM62A board, as the system suspend feature is not available in the device manager in the current release. Due to the runtime power management feature, the device stays in the suspended state when idle, which can be verified using the k3conf utility as shown below:

root@am62axx-evm:~# k3conf dump device 201
|------------------------------------------------------------------------------|
| VERSION INFO                                                                 |
|------------------------------------------------------------------------------|
| K3CONF | (version v0.1-90-g1dd468d built Mon Jul 10 05:34:55 PM UTC 2023)    |
| SoC    | AM62Ax SR1.0                                                        |
| SYSFW  | ABI: 3.1 (firmware version 0x0009 '9.0.5--v09.00.05 (Kool Koala))') |
|------------------------------------------------------------------------------|

|---------------------------------------------------|
| Device ID | Device Name        | Device Status    |
|---------------------------------------------------|
|   201     | AM62AX_DEV_JPGENC0 | DEVICE_STATE_OFF |
|---------------------------------------------------|
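
For reference, the kernel-side runtime PM pattern that produces this behaviour typically looks like the hypothetical sketch below (not the actual E5010 driver code): the device is resumed only while an encode job is running and autosuspends afterwards, which is why k3conf reports it as off when idle. The autosuspend delay value is just an example.

#include <linux/pm_runtime.h>

/* Enable runtime PM with autosuspend at probe time. */
static void e5010_pm_setup(struct device *dev)
{
        pm_runtime_set_autosuspend_delay(dev, 1000);    /* ms, example value */
        pm_runtime_use_autosuspend(dev);
        pm_runtime_enable(dev);
}

/* Power the core up before programming registers for a job... */
static int e5010_job_begin(struct device *dev)
{
        return pm_runtime_resume_and_get(dev);
}

/* ...and allow it to autosuspend once the job has completed. */
static void e5010_job_end(struct device *dev)
{
        pm_runtime_mark_last_busy(dev);
        pm_runtime_put_autosuspend(dev);
}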

3.2.2.20.6.4. Multi-instance JPEG encoding:

The hardware can only process one frame at a time, but multiple application instances/contexts can still run in parallel; the V4L2 M2M framework takes care of scheduling those contexts sequentially onto the E5010 JPEG driver. This can be validated by launching multiple application instances together, as shown below:

#Pipe1 with 75% compression quality
$gst-launch-1.0 -v videotestsrc num-buffers=1000 '!' video/x-raw,width=640,height=480,format=NV12,framerate=30/1 '!' queue '!' v4l2jpegenc extra-controls=c,compression_quality=75 capture-io-mode=dmabuf-export output-io-mode=dmabuf-export '!' filesink location="op1.mjpeg" &
#Pipe2 with 50% compression quality
$gst-launch-1.0 -v videotestsrc num-buffers=1000 '!' video/x-raw,width=640,height=480,format=NV12,framerate=30/1 '!' queue '!' v4l2jpegenc extra-controls=c,compression_quality=50 capture-io-mode=dmabuf-export output-io-mode=dmabuf-export '!' filesink location="op2.mjpeg" &
...
...
...
#PipeN with 30% compression quality
$gst-launch-1.0 -v videotestsrc num-buffers=1000 '!' video/x-raw,width=640,height=480,format=NV12,framerate=30/1 '!' queue '!' v4l2jpegenc extra-controls=c,compression_quality=30 capture-io-mode=dmabuf-export output-io-mode=dmabuf-export '!' filesink location="op3.mjpeg" &

3.2.2.20.6.5. DMABuf Import and Export support:

The driver supports dmabuf import and export for both the capture and output queues, which can be used for zero-copy (no CPU copy) transfer of pixel data. This feature is especially useful for the output queue, where raw pixel data of larger size needs to be transferred to the device for encoding.

The examples below demonstrate usage of this feature using GStreamer, followed by a minimal sketch using the V4L2 API directly:

#Recording camera feed by encoding it as a sequence of JPEG images using DMABUF import
$gst-launch-1.0 v4l2src device=/dev/video-rpi-cam0 io-mode=5 ! video/x-bayer,width=1920,height=1080,format=bggr ! tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI dcc-isp-file=/opt/imaging/imx219/dcc_viss_1920x1080.bin sink_0::dcc-2a-file=/opt/imaging/imx219/dcc_2a_1920x1080.bin sink_0::device=/dev/v4l-rpi-subdev0 ! video/x-raw,format=NV12 ! v4l2jpegenc output-io-mode=dmabuf-import ! filesink location="op.mjpeg"

#Sample pipeline demonstrating DMABUF export for both capture and output queues of JPEG Encoder while recording from live test pattern generator
$gst-launch-1.0 -v videotestsrc num-buffers=100 '!' video/x-raw,width=640,height=480,format=NV12,framerate=30/1 '!' queue '!' v4l2jpegenc extra-controls=c,compression_quality=75 capture-io-mode=dmabuf-export output-io-mode=dmabuf-export '!' filesink location="op.mjpeg"

#Sample pipeline demonstrating DMABUF import for output queues of JPEG Encoder while transcoding an existing .H264 file to a sequence of JPEG images
$gst-launch-1.0 filesrc location=bbb_4kp60_30s_IPPP.h264 ! h264parse ! v4l2h264dec capture-io-mode=dmabuf ! v4l2jpegenc output-io-mode=dmabuf-import ! filesink location=op.mjpeg
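
For applications that use the V4L2 API directly rather than through GStreamer, dmabuf export maps to the VIDIOC_EXPBUF ioctl and dmabuf import to queueing buffers with V4L2_MEMORY_DMABUF. The sketch below shows only these two steps under the multi-planar API with a single-plane (contiguous NV12) layout; enc_fd, the buffer indices and the plane length are placeholders, and the surrounding setup (VIDIOC_REQBUFS, VIDIOC_STREAMON, VIDIOC_DQBUF) is omitted.

#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Export an encoder CAPTURE buffer as a dmabuf fd that another device
 * can consume without any CPU copy. */
static int export_capture_buf(int enc_fd, unsigned int index)
{
        struct v4l2_exportbuffer exp = {
                .type  = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE,
                .index = index,
        };

        if (ioctl(enc_fd, VIDIOC_EXPBUF, &exp) < 0)
                return -1;
        return exp.fd;          /* dmabuf fd */
}

/* Queue a dmabuf fd received from another device (e.g. camera/ISP) on the
 * encoder OUTPUT (raw input) queue, avoiding a copy of the raw frame. */
static int queue_imported_raw_buf(int enc_fd, unsigned int index,
                                  int dmabuf_fd, unsigned int length)
{
        struct v4l2_plane plane = {
                .bytesused = length,
                .length    = length,
                .m.fd      = dmabuf_fd,
        };
        struct v4l2_buffer buf = {
                .type     = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE,
                .memory   = V4L2_MEMORY_DMABUF,
                .index    = index,
                .m.planes = &plane,
                .length   = 1,  /* single plane (contiguous NV12) */
        };

        return ioctl(enc_fd, VIDIOC_QBUF, &buf);
}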

3.2.2.20.6.6. Supported video formats:

The driver supports encoding of both contiguous and non-contiguous variants of the YUV 4:2:0 and YUV 4:2:2 semi-planar raw pixel formats. The non-contiguous formats (suffixed with M in the table below) use separate (non-contiguous) buffers for luma and chroma data. The GStreamer framework uses a single video format for both the contiguous and non-contiguous variants and dynamically maps it to either of them depending on the requirements of the upstream component sending the buffer to the driver. If both variants are supported by the driver, upstream GStreamer normally gives preference to the non-contiguous variant; however, this behaviour was changed in the GStreamer version shipped in the SDK, which prefers the contiguous variant in order to match the requirements of TI-specific GStreamer elements. (A minimal V4L2 format-selection sketch follows the table.)

|--------------------------------------------------------------------|
| V4L2 Pixel Format   | Number of buffers | GStreamer Video Format   |
|--------------------------------------------------------------------|
| V4L2_PIX_FMT_NV12   | 1                 | GST_VIDEO_FORMAT_NV12    |
| V4L2_PIX_FMT_NV12M  | 2                 | GST_VIDEO_FORMAT_NV12    |
| V4L2_PIX_FMT_NV21   | 1                 | GST_VIDEO_FORMAT_NV21    |
| V4L2_PIX_FMT_NV21M  | 2                 | GST_VIDEO_FORMAT_NV21    |
| V4L2_PIX_FMT_NV16   | 1                 | GST_VIDEO_FORMAT_NV16    |
| V4L2_PIX_FMT_NV16M  | 2                 | GST_VIDEO_FORMAT_NV16    |
| V4L2_PIX_FMT_NV61   | 1                 | GST_VIDEO_FORMAT_NV61    |
| V4L2_PIX_FMT_NV61M  | 2                 | GST_VIDEO_FORMAT_NV61    |
|--------------------------------------------------------------------|
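
For completeness, the sketch below shows how one of these formats could be selected through the V4L2 API directly, assuming the multi-planar API; enc_fd is an already-open handle to the encoder video node and the resolution is only an example.

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Request 1920x1080 NV12M (two non-contiguous planes) on the encoder's
 * OUTPUT (raw input) queue. The driver may adjust width, height and
 * bytesperline to satisfy its alignment constraints (see next section). */
static int set_raw_format(int enc_fd)
{
        struct v4l2_format fmt;

        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
        fmt.fmt.pix_mp.width       = 1920;
        fmt.fmt.pix_mp.height      = 1080;
        fmt.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_NV12M;
        fmt.fmt.pix_mp.num_planes  = 2;   /* separate luma and chroma buffers */

        return ioctl(enc_fd, VIDIOC_S_FMT, &fmt);
}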

3.2.2.20.7. Buffer alignment requirements:

  • For input raw pixel data, the driver requires the width (in pixels) to be a multiple of 64 and the height to be a multiple of 8; buffers for the output queue are allocated/negotiated accordingly.

  • For output encoded data, the driver requires the width (in pixels) to be a multiple of 16 and the height to be a multiple of 8; buffers for the capture queue are allocated/negotiated accordingly. The rounding is illustrated by the sketch below.
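
As a standalone illustration of these rules (not driver code; the values actually negotiated are reported back by VIDIOC_S_FMT/VIDIOC_G_FMT), rounding up a 1366x768 request gives:

#include <stdio.h>

#define ALIGN_UP(x, a)  (((x) + (a) - 1) / (a) * (a))

int main(void)
{
        unsigned int w = 1366, h = 768;

        /* Raw input (OUTPUT queue): width aligned to 64, height to 8 */
        printf("raw input : %ux%u\n", ALIGN_UP(w, 64), ALIGN_UP(h, 8));   /* 1408x768 */

        /* Encoded image (CAPTURE queue): width aligned to 16, height to 8 */
        printf("encoded   : %ux%u\n", ALIGN_UP(w, 16), ALIGN_UP(h, 8));   /* 1376x768 */

        return 0;
}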

3.2.2.20.8. Performance and Latency Benchmarking

The E5010 core is clocked at 250 MHz on AM62A, and the theoretical performance expectation at this clock rate is shown below:

|-----------------------------------------|
| Color subsampling | Pixel Rate          |
|-----------------------------------------|
| 4:2:0             | 666.25 Mpixels/sec  |
| 4:2:2             | 500 Mpixels/sec     |
|-----------------------------------------|

With these numbers, the E5010 core can theoretically handle a 3840x2160@60 fps equivalent load for 4:2:2 video formats and a 3840x2160@75 fps equivalent load for 4:2:0 video formats (for example, 3840 x 2160 x 60 ≈ 498 Mpixels/sec, which fits within the 500 Mpixels/sec budget for 4:2:2).

This, however, requires the upstream element (e.g., a camera) to support such rates. On the AM62A board, the fastest locally available upstream source is the Wave5 VPU decoder, which provides a maximum performance of 3840x2160@59 fps with low-bitrate files, and the same performance was achieved after passing this decoded data to the E5010 JPEG encoder, as shown in the example below:

$gst-launch-1.0 filesrc location=bbb_4kp60_30s_IPPP.h264 ! h264parse ! v4l2h264dec capture-io-mode=dmabuf ! v4l2jpegenc output-io-mode=dmabuf-import ! fpsdisplaysink text-overlay=false name=fpssink video-sink="fakesink" -v
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink/GstFakeSink:fakesink0: sync = true
Redistribute latency...
/GstPipeline:pipeline0/GstH264Parse:h264parse0.GstPad:src: caps = video/x-h264, pixel-aspect-ratio=(fraction)1/1, width=(int)3840, height=(int)2160, framerate=(fraction)60/1, chroma-format=(string)4:2:0,
bit-depth-luma=(uint)8, bit-depth-chroma=(uint)8, parsed=(boolean)true, stream-format=(string)byte-stream, alignment=(string)au, profile=(string)high, level=(string)5.2
/GstPipeline:pipeline0/v4l2h264dec:v4l2h264dec0.GstPad:sink: caps = video/x-h264, pixel-aspect-ratio=(fraction)1/1, width=(int)3840, height=(int)2160, framerate=(fraction)60/1, chroma-format=(string)4:2:0
, bit-depth-luma=(uint)8, bit-depth-chroma=(uint)8, parsed=(boolean)true, stream-format=(string)byte-stream, alignment=(string)au, profile=(string)high, level=(string)5.2
/GstPipeline:pipeline0/v4l2h264dec:v4l2h264dec0.GstPad:src: caps = video/x-raw, format=(string)NV12, width=(int)3840, height=(int)2160, interlace-mode=(string)progressive, multiview-mode=(string)mono, mul
tiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixed-mono, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)bt7
09, framerate=(fraction)60/1
/GstPipeline:pipeline0/v4l2jpegenc:v4l2jpegenc0.GstPad:src: caps = image/jpeg, width=(int)3840, height=(int)2160, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)60/1, interlace-mode=(string)progres
sive, colorimetry=(string)bt709, multiview-mode=(string)mono, multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixe
d-mono
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink.GstGhostPad:sink.GstProxyPad:proxypad0: caps = image/jpeg, width=(int)3840, height=(int)2160, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)60/1, i
nterlace-mode=(string)progressive, colorimetry=(string)bt709, multiview-mode=(string)mono, multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/r
ight-flopped/half-aspect/mixed-mono
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink/GstFakeSink:fakesink0.GstPad:sink: caps = image/jpeg, width=(int)3840, height=(int)2160, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)60/1, interl
ace-mode=(string)progressive, colorimetry=(string)bt709, multiview-mode=(string)mono, multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-
flopped/half-aspect/mixed-mono
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink.GstGhostPad:sink: caps = image/jpeg, width=(int)3840, height=(int)2160, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)60/1, interlace-mode=(string)
progressive, colorimetry=(string)bt709, multiview-mode=(string)mono, multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspe
ct/mixed-mono
Redistribute latency...
/GstPipeline:pipeline0/v4l2jpegenc:v4l2jpegenc0.GstPad:sink: caps = video/x-raw, format=(string)NV12, width=(int)3840, height=(int)2160, interlace-mode=(string)progressive, multiview-mode=(string)mono, mu
ltiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixed-mono, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)bt
709, framerate=(fraction)60/1
Redistribute latency...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
Redistribute latency...
New clock: GstSystemClock
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink/GstFakeSink:fakesink0: sync = true
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 32, dropped: 0, current: 63.03, average: 63.03
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 62, dropped: 0, current: 58.80, average: 60.91
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 92, dropped: 0, current: 59.38, average: 60.40
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 123, dropped: 0, current: 59.65, average: 60.21
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 153, dropped: 0, current: 58.22, average: 59.81
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 183, dropped: 0, current: 59.50, average: 59.76
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 213, dropped: 0, current: 60.00, average: 59.79
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 243, dropped: 0, current: 58.76, average: 59.66
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 274, dropped: 0, current: 60.36, average: 59.74
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 304, dropped: 0, current: 59.11, average: 59.68
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 334, dropped: 0, current: 59.99, average: 59.71
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 364, dropped: 0, current: 59.34, average: 59.68
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 394, dropped: 0, current: 59.46, average: 59.66
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 424, dropped: 0, current: 59.37, average: 59.64
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 455, dropped: 0, current: 59.56, average: 59.63
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 485, dropped: 0, current: 59.90, average: 59.65
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 515, dropped: 0, current: 57.59, average: 59.53
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 546, dropped: 0, current: 60.11, average: 59.56
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 576, dropped: 0, current: 59.37, average: 59.55
^Chandling interrupt. (16.4 %)
Interrupt: Stopping pipeline ...
Execution ended after 0:00:09.868853825
Setting pipeline to NULL ...
Freeing pipeline ...

The performance (per-second throughput) of a GStreamer pipeline involving the E5010 JPEG encoder can be measured using the fpsdisplaysink GStreamer element.

The total latency of the pipeline (the time taken by the whole pipeline to process one buffer), as well as the per-element latency of each GStreamer element (the time taken by a particular element to produce an output buffer after receiving an input buffer), can be measured using GStreamer latency tracers.

The example below depicts a dummy pipeline to measure the performance (throughput) and latency of a video streaming pipeline involving an imx219 RPi camera configured to provide 1080p@30 fps, the ISP block, and the JPEG encoder, which is configured to import data from the ISP block using dmabuf sharing.

$GST_TRACERS="latency(flags=pipeline+element+reported)" gst-launch-1.0 v4l2src device=/dev/video-rpi-cam0 io-mode=5 ! video/x-bayer,width=1920,height=1080,format=bggr ! tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI dcc-isp-file=/opt/imaging/imx219/dcc_viss_1920x1080.bin sink_0::dcc-2a-file=/opt/imaging/imx219/dcc_2a_1920x1080.bin sink_0::device=/dev/v4l-rpi-subdev0 ! video/x-raw,format=NV12 ! v4l2jpegenc output-io-mode=dmabuf-import ! fpsdisplaysink text-overlay=false name=fpssink video-sink="fakesink" sync=true -v
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink/GstFakeSink:fakesink0: sync = true
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
edistribute latency...
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink/GstFakeSink:fakesink0: sync = true
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 18, dropped: 0, current: 34.84, average: 34.84
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 33, dropped: 0, current: 29.54, average: 32.21
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 48, dropped: 0, current: 29.77, average: 31.41
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 64, dropped: 0, current: 30.84, average: 31.26
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 79, dropped: 0, current: 29.43, average: 30.90
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 94, dropped: 0, current: 29.73, average: 30.70
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 110, dropped: 0, current: 30.00, average: 30.60
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 126, dropped: 0, current: 30.00, average: 30.52
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 142, dropped: 0, current: 30.01, average: 30.46
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 158, dropped: 0, current: 30.02, average: 30.42
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 174, dropped: 0, current: 30.00, average: 30.38
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 190, dropped: 0, current: 30.02, average: 30.35
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 206, dropped: 0, current: 30.01, average: 30.32
^Chandling interrupt.
Interrupt: Stopping pipeline ...
Execution ended after 0:00:07.138036536
Setting pipeline to NULL ...
Freeing pipeline ...
105268.267976 s:  VX_ZONE_INIT:[tivxHostDeInitLocal:110] De-Initializ

#Instantaneous latency of the pipeline:
grep -inr fpssink /run/latency.txt
27:0:00:00.324643213  3335      0x71c3860 TRACE             GST_TRACER :0:: latency, src-element-id=(string)0x71de100, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x7224100, sin
k-element=(string)fpssink, sink=(string)sink, time=(guint64)48991362, ts=(guint64)324504863;
39:0:00:00.325933900  3335      0x716e580 TRACE             GST_TRACER :0:: element-reported-latency, element-id=(string)0x7224100, element=(string)fpssink, live=(boolean)1, min=(guint64)0, max=(guint64)0
, ts=(guint64)325911685;
40:0:00:00.325964780  3335 0xffffa4019300 TRACE             GST_TRACER :0:: element-reported-latency, element-id=(string)0x7224100, element=(string)fpssink, live=(boolean)1, min=(guint64)0, max=(guint64)0
, ts=(guint64)325942985;
43:0:00:00.334147321  3335      0x71c3860 TRACE             GST_TRACER :0:: latency, src-element-id=(string)0x71de100, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x7224100, sin
k-element=(string)fpssink, sink=(string)sink, time=(guint64)13477218, ts=(guint64)334032441;
48:0:00:00.347413308  3335      0x71c3860 TRACE             GST_TRACER :0:: latency, src-element-id=(string)0x71de100, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x7224100, sin
k-element=(string)fpssink, sink=(string)sink, time=(guint64)11798814, ts=(guint64)347290932;
53:0:00:00.383427149  3335      0x71c3860 TRACE             GST_TRACER :0:: latency, src-element-id=(string)0x71de100, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x7224100, sin
k-element=(string)fpssink, sink=(string)sink, time=(guint64)14173397, ts=(guint64)383182173;
58:0:00:00.416591946  3335      0x71c3860 TRACE             GST_TRACER :0:: latency, src-element-id=(string)0x71de100, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x7224100, sin
k-element=(string)fpssink, sink=(string)sink, time=(guint64)13566943, ts=(guint64)416325579;
63:0:00:00.452744548  3335      0x71c3860 TRACE             GST_TRACER :0:: latency, src-element-id=(string)0x71de100, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x7224100, sin
k-element=(string)fpssink, sink=(string)sink, time=(guint64)16198467, ts=(guint64)452404311;
68:0:00:00.487517297  3335      0x71c3860 TRACE             GST_TRACER :0:: latency, src-element-id=(string)0x71de100, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x7224100, sin
k-element=(string)fpssink, sink=(string)sink, time=(guint64)16488088, ts=(guint64)487244706;
73:0:00:00.526571569  3335      0x71c3860 TRACE             GST_TRACER :0:: latency, src-element-id=(string)0x71de100, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x7224100, sin
k-element=(string)fpssink, sink=(string)sink, time=(guint64)22065121, ts=(guint64)526044526;
78:0:00:00.555231273  3335      0x71c3860 TRACE             GST_TRACER :0:: latency, src-element-id=(string)0x71de100, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x7224100, sin
k-element=(string)fpssink, sink=(string)sink, time=(guint64)17143766, ts=(guint64)554536794;
83:0:00:00.592873817  3335      0x71c3860 TRACE             GST_TRACER :0:: latency, src-element-id=(string)0x71de100, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x7224100, sin
k-element=(string)fpssink, sink=(string)sink, time=(guint64)21635119, ts=(guint64)592354085;
88:0:00:00.622053359  3335      0x71c3860 TRACE             GST_TRACER :0:: latency, src-element-id=(string)0x71de100, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x7224100, sin
k-element=(string)fpssink, sink=(string)sink, time=(guint64)17565403, ts=(guint64)621521101;
93:0:00:00.659704993  3335      0x71c3860 TRACE             GST_TRACER :0:: latency, src-element-id=(string)0x71de100, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x7224100, sin
k-element=(string)fpssink, sink=(string)sink, time=(guint64)21616694, ts=(guint64)659009055;
98:0:00:00.688435863  3335      0x71c3860 TRACE             GST_TRACER :0:: latency, src-element-id=(string)0x71de100, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x7224100, sin
k-element=(string)fpssink, sink=(string)sink, time=(guint64)17212196, ts=(guint64)687916190;
103:0:00:00.726630390  3335      0x71c3860 TRACE             GST_TRACER :0:: latency, src-element-id=(string)0x71de100, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x7224100, si
nk-element=(string)fpssink, sink=(string)sink, time=(guint64)22179206, ts=(guint64)726111817;
108:0:00:00.755319574  3335      0x71c3860 TRACE             GST_TRACER :0:: latency, src-element-id=(string)0x71de100, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x7224100, si
nk-element=(string)fpssink, sink=(string)sink, time=(guint64)17565303, ts=(guint64)754797816;
113:0:00:00.792849798  3335      0x71c3860 TRACE

#"time=" depicts latency in nano seconds, average for total pipeline latency can be calculated as below
#cat /run/latency.txt | grep sink | awk -F"guint64)" '{print $2}' | awk -F"," '{total +=$1; count++} END { print total/count }'
#1.93678e+07


#Instantaneous latency of the v4l2jpegenc element:
grep -inr v4l2jpegenc /run/latency.txt
 28:0:00:00.324738409  3335      0x71c3860 TRACE             GST_TRACER :0:: element-latency, element-id=(string)0x721d6a0, element=(string)v4l2jpegenc0, src=(string)src, time=(guint64)14128701, ts=(guint6
 4)324504863;
 37:0:00:00.325884260  3335      0x716e580 TRACE             GST_TRACER :0:: element-reported-latency, element-id=(string)0x721d6a0, element=(string)v4l2jpegenc0, live=(boolean)1, min=(guint64)0, max=(guin
 t64)0, ts=(guint64)325861764;
 38:0:00:00.325916740  3335 0xffffa4019300 TRACE             GST_TRACER :0:: element-reported-latency, element-id=(string)0x721d6a0, element=(string)v4l2jpegenc0, live=(boolean)1, min=(guint64)0, max=(guin
 t64)0, ts=(guint64)325889365;
 44:0:00:00.334228867  3335      0x71c3860 TRACE             GST_TRACER :0:: element-latency, element-id=(string)0x721d6a0, element=(string)v4l2jpegenc0, src=(string)src, time=(guint64)5000146, ts=(guint64
 )334032441;
 49:0:00:00.347538003  3335      0x71c3860 TRACE             GST_TRACER :0:: element-latency, element-id=(string)0x721d6a0, element=(string)v4l2jpegenc0, src=(string)src, time=(guint64)4804714, ts=(guint64
 )347290932;
 54:0:00:00.383543410  3335      0x71c3860 TRACE             GST_TRACER :0:: element-latency, element-id=(string)0x721d6a0, element=(string)v4l2jpegenc0, src=(string)src, time=(guint64)5390277, ts=(guint64
 )383182173;
 59:0:00:00.416712951  3335      0x71c3860 TRACE             GST_TRACER :0:: element-latency, element-id=(string)0x721d6a0, element=(string)v4l2jpegenc0, src=(string)src, time=(guint64)5761004, ts=(guint64
 )416325579;
 64:0:00:00.452912183  3335      0x71c3860 TRACE             GST_TRACER :0:: element-latency, element-id=(string)0x721d6a0, element=(string)v4l2jpegenc0, src=(string)src, time=(guint64)5971575, ts=(guint64
 )452404311;
 69:0:00:00.487681793  3335      0x71c3860 TRACE             GST_TRACER :0:: element-latency, element-id=(string)0x721d6a0, element=(string)v4l2jpegenc0, src=(string)src, time=(guint64)7370972, ts=(guint64
 )487244706;
 74:0:00:00.526881890  3335      0x71c3860 TRACE             GST_TRACER :0:: element-latency, element-id=(string)0x721d6a0, element=(string)v4l2jpegenc0, src=(string)src, time=(guint64)8056735, ts=(guint64
 )526044526;
 79:0:00:00.555532864  3335      0x71c3860 TRACE             GST_TRACER :0:: element-latency, element-id=(string)0x721d6a0, element=(string)v4l2jpegenc0, src=(string)src, time=(guint64)8058910, ts=(guint64
 )554536794;
 84:0:00:00.593167984  3335      0x71c3860 TRACE             GST_TRACER :0:: element-latency, element-id=(string)0x721d6a0, element=(string)v4l2jpegenc0, src=(string)src, time=(guint64)8115236, ts=(guint64
 )592354085;
 89:0:00:00.622355105  3335      0x71c3860 TRACE             GST_TRACER :0:: element-latency, element-id=(string)0x721d6a0, element=(string)v4l2jpegenc0, src=(string)src, time=(guint64)8283526, ts=(guint64
 )621521101;
 94:0:00:00.660009310  3335      0x71c3860 TRACE             GST_TRACER :0:: element-latency, element-id=(string)0x721d6a0, element=(string)v4l2jpegenc0, src=(string)src, time=(guint64)8066001, ts=(guint64
 )659009055;
 99:0:00:00.688728319  3335      0x71c3860 TRACE             GST_TRACER :0:: element-latency, element-id=(string)0x721d6a0, element=(string)v4l2jpegenc0, src=(string)src, time=(guint64)8249936, ts=(guint64
 )687916190;
 104:0:00:00.726938761  3335      0x71c3860 TRACE             GST_TRACER :0:: element-latency, element-id=(string)0x721d6a0, element=(string)v4l2jpegenc0, src=(string)src, time=(guint64)8162631, ts=(guint6
 4)726111817;
 109:0:00:00.755620171  3335      0x71c3860 TRACE             GST_TRACER :0:: element-latency, element-id=(string)0x721d6a0, element=(string)v4l2jpegenc0, src=(string)src, time=(guint64)8141121, ts=(guint6
 4)754797816;
 114:0:00:00.793154664  3335      0x71c3860 TRACE             GST_TRACER :0:: element-latency, element-id=(string)0x721d6a0, element=(string)v4l2jpegenc0, src=(string)src, time=(guint64)8069441, ts=(guint6
 4)792292405;

#"time=" depicts latency in nano seconds, average for v4l2jpegenc can be calculated as below
$cat /run/latency.txt | grep v4l2jpegenc | awk -F"guint64)" '{print $2}' | awk -F"," '{total +=$1; count++} END { print total/count }'
8.0449e+06

The table below depicts the performance and latency numbers achieved:

|----------------------------------------------------------|
| Pipeline performance | Total Latency | E5010 latency     |
|----------------------------------------------------------|
| 1920x1080@30 fps     | ~19.5 ms      | ~8.044 ms         |
|----------------------------------------------------------|

3.2.2.20.9. Unsupported driver features

The driver currently does not support:

  • Buffers which are allocated as physically non-contiguous in memory

  • Memory tiling scheme selection per subsampling mode (to reduce memory latency and power consumption)

3.2.2.20.10. Acronyms Used in This Article

|------------------------------------------------------|
| Abbreviation | Full Form                             |
|------------------------------------------------------|
| fps          | Frames per second                     |
| NC           | Non-contiguous                        |
| UV           | Chroma interleaved with UV sequence   |
| VU           | Chroma interleaved with VU sequence   |
| V4L2         | Video for Linux 2                     |
| JPEG         | Joint Photographic Experts Group      |
| uAPI         | Userspace API                         |
| MMU          | Memory Management Unit                |
| DMAbuf       | Direct Memory Access buffer           |
| VPU          | Video Processing Unit                 |
| ISP          | Image Signal Processor                |
| MJPEG        | Motion JPEG                           |
|------------------------------------------------------|