3.2.2.19. E5010 JPEG Encoder
3.2.2.19.1. Introduction
The E5010 JPEG encoder is a stateful, scalable-performance still image encoder. It is capable of real-time encoding of YUV 4:2:0/4:2:2 raw picture data into a fully compressed image (JPEG) or a sequence of compressed images (MJPEG). The driver is supported in both the TI SDK and the upstream Linux kernel.
3.2.2.19.2. Hardware Specification
Supports 8-bit YUV 4:2:0/4:2:2 raw input video data
Supports memory-to-memory mode of MJPEG encoding
Supports two different quantization tables
Configurable compression ratio
Max resolution 8K x 8K
Supports JPEG Baseline profile
The core includes an MMU and tiling engine.
Encode rate of 4 pixels per cycle
See the technical reference manual (TRM) for the SoC in question for more detailed information.
3.2.2.19.3. Driver Architecture
The driver is based on the Video4Linux 2 (V4L2) API and is responsible for configuring the E5010 core and kicking off image processing.
It is also responsible for generating the JPEG image headers once the compressed image data has been produced by the E5010 core.
The driver is implemented using V4L2's memory-to-memory (M2M) framework, which serves devices that take data from memory, do some processing, and write the processed data back to memory.
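As a quick orientation, the sketch below shows a minimal, hypothetical userspace sequence for such an M2M encoder: the application sets the raw NV12 format on the OUTPUT queue and a compressed format on the CAPTURE queue before allocating and streaming buffers. The device node, resolution, use of V4L2_PIX_FMT_JPEG as the compressed format, and the trimmed error handling are illustrative assumptions, not taken from the driver itself.
/* Minimal sketch of M2M format negotiation for a JPEG encoder
 * (device node, resolution and error handling are placeholders). */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
        int fd = open("/dev/video2", O_RDWR);   /* example encoder node */
        struct v4l2_format fmt;

        if (fd < 0) {
                perror("open");
                return 1;
        }

        /* OUTPUT queue: raw NV12 frames fed into the encoder */
        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
        fmt.fmt.pix_mp.width = 1920;
        fmt.fmt.pix_mp.height = 1080;
        fmt.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_NV12;
        if (ioctl(fd, VIDIOC_S_FMT, &fmt))
                perror("VIDIOC_S_FMT (output)");

        /* CAPTURE queue: compressed JPEG produced by the encoder */
        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
        fmt.fmt.pix_mp.width = 1920;
        fmt.fmt.pix_mp.height = 1080;
        fmt.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_JPEG;
        if (ioctl(fd, VIDIOC_S_FMT, &fmt))
                perror("VIDIOC_S_FMT (capture)");

        /* Buffer allocation (VIDIOC_REQBUFS), queueing (VIDIOC_QBUF/DQBUF)
         * and VIDIOC_STREAMON on both queues would follow here. */
        return 0;
}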
The diagram below depicts the different software layers involved:

Fig. 3.2 E5010 JPEG Driver software stack
Linux Kernel documentation references:
3.2.2.19.4. Sample userspace programs for validating the hardware
3.2.2.19.4.1. v4l2-ctl
The v4l2-ctl application from the v4l-utils package can be used to encode YUV 4:2:0 and YUV 4:2:2 pixel data from a file into a sequence of JPEG images, as demonstrated in the example command below:
v4l2-ctl -d /dev/video2 --set-fmt-video-out=width=640,height=480,pixelformat=NV12 --stream-mmap --stream-out-mmap --stream-to-hdr out.jpeg --stream-from op.yuv
3.2.2.19.4.2. gst-launch-1.0
GStreamer is a plugin-based media framework for creating streaming media applications. gst-launch-1.0 is a test application from the gstreamer1.0 package which, together with the GStreamer plugins pre-installed in the filesystem, can be used to run different kinds of video encoding use cases on top of this driver, as demonstrated in the example commands below.
#JPEG Encoding of live stream from Test Pattern Generator
gst-launch-1.0 -v videotestsrc num-buffers=100 '!' video/x-raw,width=640,height=480,format=NV12, framerate=30/1 '!' queue '!' v4l2jpegenc extra-controls=c,compression_quality=75 '!' filesink location="op.mjpeg"
#JPEG Encoding of input YUV source file
gst-launch-1.0 filesrc location=/<path_to_file> ! rawvideoparse width=1920 height=1080 format=nv12 framerate=30/1 ! v4l2jpegenc ! filesink location=/<path_to_file>
#JPEG Encoding of a live v4l2-based RPi camera (imx219) feed:
$ gst-launch-1.0 v4l2src io-mode=dmabuf-import num-buffers=100 device=/dev/video-imx219-cam0 ! video/x-bayer,width=1920,height=1080,format=rggb ! tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI dcc-isp-file=/opt/imaging/imx219/linear/dcc_viss_1920x1080.bin sink_0::dcc-2a-file=/opt/imaging/imx219/linear/dcc_2a_1920x1080.bin sink_0::device=/dev/v4l-imx219-subdev0 ! video/x-raw,format=NV12 ! v4l2jpegenc output-io-mode=dmabuf-import extra-controls=c,compression_quality=70 ! queue ! filesink location="/run/op.mjpeg"
#JPEG Streaming of a live v4l2-based RPi camera (imx219) feed:
#Server (imx219 raw camera->isp->encode->streamout):
$ gst-launch-1.0 v4l2src io-mode=dmabuf-import num-buffers=100 device=/dev/video-imx219-cam0 ! video/x-bayer,width=1920,height=1080,format=rggb ! tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI dcc-isp-file=/opt/imaging/imx219/linear/dcc_viss_1920x1080.bin sink_0::dcc-2a-file=/opt/imaging/imx219/linear/dcc_2a_1920x1080.bin sink_0::device=/dev/v4l-imx219-subdev0 ! video/x-raw,format=NV12 ! v4l2jpegenc output-io-mode=dmabuf-import extra-controls=c,compression_quality=70 ! queue ! rtpjpegpay ! udpsink port=5000 host=<host_ip_addr>
#Client (streamin->decode->display), assuming an Ubuntu host machine with GStreamer pre-installed:
$ gst-launch-1.0 -v udpsrc port=5000 caps = "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)JPEG, payload=(int)26" ! rtpjitterbuffer latency=50 ! rtpjpegdepay ! jpegparse ! jpegdec ! queue ! fpsdisplaysink text-overlay=false name=fpssink video-sink="autovideosink" sync=true -v
3.2.2.19.5. Building the driver
The E5010 JPEG driver is already enabled as a kernel module on AM67 as part of the default defconfig used for the board. If a separate defconfig is used, the driver can be enabled explicitly by setting the corresponding Kconfig option as shown below:
CONFIG_VIDEO_E5010_JPEG_ENC=m
3.2.2.19.6. Supported driver features
The driver currently supports the following features:
Compression quality setting
V4L2 API Compliance
System and Run-time Power Management
Video cropping
Multi-instance JPEG encoding
DMABuf Import and Export support
YUV 4:2:0 & YUV 4:2:2 video formats supported
3.2.2.19.6.1. Compression quality setting
The driver provides userspace applications an IOCTL-based interface for selecting the picture quality of the encoded data.
Applications set the picture quality used for encoding through the V4L2_CID_JPEG_COMPRESSION_QUALITY control, passed as the control id of a VIDIOC_S_CTRL ioctl.
For more information on this control, refer to the V4L2 JPEG control reference in the Linux kernel documentation.
There is a trade-off between picture quality and compression ratio: a higher compression quality setting gives better picture quality in the encoded file, but it reduces the compression ratio and therefore leads to a larger encoded file.
By default, the driver sets the compression quality to 75% if userspace does not set any value.
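For applications talking to the device directly through the V4L2 uAPI rather than GStreamer, a minimal sketch of setting this control is shown below; the file descriptor and the quality value passed in are placeholders.
/* Sketch: select the JPEG compression quality on an open encoder fd. */
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static int set_jpeg_quality(int vid_fd, int quality)
{
        struct v4l2_control ctrl = {
                .id = V4L2_CID_JPEG_COMPRESSION_QUALITY,
                .value = quality,       /* e.g. 50; the driver default is 75 */
        };

        return ioctl(vid_fd, VIDIOC_S_CTRL, &ctrl);
}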
The example below shows how userspace can select a different compression quality from a GStreamer pipeline:
#Select compression quality as 50%
$gst-launch-1.0 -v videotestsrc num-buffers=100 '!' video/x-raw,width=640,height=480,format=NV12, framerate=30/1 '!' queue '!' v4l2jpegenc extra-controls=c,compression_quality=50 capture-io-mode=dmabuf-export output-io-mode=dmabuf-export '!' filesink location="op.mjpeg"
3.2.2.19.6.2. V4L2 API Compliance
The driver is fully compliant with the V4L2 API, achieving a 100% PASS result in the v4l2-compliance test, which can be run as shown below:
$v4l2-compliance -s -d /dev/videoX (X=video node number for JPEG Encoder)
3.2.2.19.6.3. Power Management
The driver supports both runtime and system suspend hooks.
3.2.2.19.6.3.1. Runtime PM
Thanks to the runtime power management support, the JPEG encoder stays in the suspended state whenever it is not being used by any application. This can be verified using the k3conf utility as shown below:
root@j722s-evm-evm:~# k3conf dump device 201
|------------------------------------------------------------------------------|
| VERSION INFO |
|------------------------------------------------------------------------------|
| K3CONF | (version 0.3-nogit built Thu Jul 25 14:13:02 UTC 2024) |
| SoC | J722S SR1.0 |
| SYSFW | ABI: 4.0 (firmware version 0x000a '10.0.8--v10.00.08 (Fiery Fox))') |
|------------------------------------------------------------------------------|
|---------------------------------------------------|
| Device ID | Device Name | Device Status |
|---------------------------------------------------|
| 201 | J722S_DEV_JPGENC0 | DEVICE_STATE_OFF |
|---------------------------------------------------|
3.2.2.19.6.4. Video Cropping
The E5010 JPEG encoder driver supports video cropping, where an application can request encoding of only a portion of the input frame (the crop rectangle) by providing the coordinates and dimensions of the crop rectangle to the driver using the VIDIOC_S_SELECTION ioctl as shown below:
/* apply cropping */
struct v4l2_selection sel = {
        .type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE,
        .target = V4L2_SEL_TGT_CROP_BOUNDS,
};
struct v4l2_rect r;

r.width = crop_width > 0 ? crop_width : width;
r.height = crop_height > 0 ? crop_height : height;
r.left = crop_left;
r.top = crop_top;
sel.r = r;
sel.target = V4L2_SEL_TGT_CROP;
sel.flags = V4L2_SEL_FLAG_LE;

ret = ioctl(vid_fd, VIDIOC_S_SELECTION, &sel);
if (ret)
        printf("raw image cropping failed\n");
else
        printf("cropped rectangle: %dx%d\n", sel.r.width, sel.r.height);
For more information on the selection API used to pass the cropping rectangle to the driver, please refer to the V4L2 selection API documentation in the Linux kernel.
3.2.2.19.6.5. Multi-instance JPEG encoding
The hardware can only process one frame at a time, but multiple application instances/contexts can still run in parallel; the V4L2 M2M framework takes care of scheduling those contexts sequentially onto the E5010 JPEG driver. This can be validated by launching multiple application instances together, as shown below:
#Pipe1 with 75% compression quality
$gst-launch-1.0 -v videotestsrc num-buffers=1000 '!' video/x-raw,width=640,height=480,format=NV12, framerate=30/1 '!' queue '!' v4l2jpegenc extra-controls=c,compression_quality=75 capture-io-mode=dmabuf-export output-io-mode=dmabuf-export '!' filesink location="op1.mjpeg" &
#Pipe2 with 50% compression quality
$gst-launch-1.0 -v videotestsrc num-buffers=1000 '!' video/x-raw,width=640,height=480,format=NV12, framerate=30/1 '!' queue '!' v4l2jpegenc extra-controls=c,compression_quality=50 capture-io-mode=dmabuf-export output-io-mode=dmabuf-export '!' filesink location="op2.mjpeg" &
...
...
...
#PipeN with 30% compression quality
$gst-launch-1.0 -v videotestsrc num-buffers=1000 '!' video/x-raw,width=640,height=480,format=NV12, framerate=30/1 '!' queue '!' v4l2jpegenc extra-controls=c,compression_quality=30 capture-io-mode=dmabuf-export output-io-mode=dmabuf-export '!' filesink location="op3.mjpeg" &
3.2.2.19.6.6. DMABuf Import and Export support
The driver supports DMABUF import and export for both the capture and output queues, which can be used for zero-copy (no CPU copy) transfer of pixel data. This feature is especially useful for the output queue, where raw pixel data of larger size needs to be transferred to the device for encoding.
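At the V4L2 uAPI level, export corresponds to the VIDIOC_EXPBUF ioctl (which hands out a DMABUF file descriptor for a previously allocated MMAP buffer) and import corresponds to queueing buffers of memory type V4L2_MEMORY_DMABUF. The snippet below is a minimal, hypothetical export sketch (the helper name, queue choice and buffer index are assumptions); the GStreamer examples that follow exercise the same mechanism through the v4l2 plugins.
/* Sketch: export buffer 'index' of the encoder's CAPTURE queue as a DMABUF fd. */
#include <sys/ioctl.h>
#include <string.h>
#include <linux/videodev2.h>

static int export_capture_buf(int vid_fd, unsigned int index)
{
        struct v4l2_exportbuffer expbuf;

        memset(&expbuf, 0, sizeof(expbuf));
        expbuf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
        expbuf.index = index;   /* buffer previously allocated with VIDIOC_REQBUFS */
        expbuf.plane = 0;

        if (ioctl(vid_fd, VIDIOC_EXPBUF, &expbuf))
                return -1;

        return expbuf.fd;       /* DMABUF fd usable by other devices/frameworks */
}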
The examples below demonstrate this feature using GStreamer:
#Recording a camera feed as a sequence of JPEG images using DMABUF import
$gst-launch-1.0 v4l2src device=/dev/video-rpi-cam0 io-mode=5 ! video/x-bayer,width=1920,height=1080,format=bggr ! tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI dcc-isp-file=/opt/imaging/imx219/dcc_viss_1920x1080.bin sink_0::dcc-2a-file=/opt/imaging/imx219/dcc_2a_1920x1080.bin sink_0::device=/dev/v4l-rpi-subdev0 ! video/x-raw,format=NV12 ! v4l2jpegenc output-io-mode=dmabuf-import ! filesink location="op.mjpeg"
#Sample pipeline demonstrating DMABUF export for both capture and output queues of JPEG Encoder while recording from live test pattern generator
$gst-launch-1.0 -v videotestsrc num-buffers=100 '!' video/x-raw,width=640,height=480,format=NV12, framerate=30/1 '!' queue '!' v4l2jpegenc extra-controls=c,compression_quality=75 capture-io-mode=dmabuf-export output-io-mode=dmabuf-export '!' filesink location="op.mjpeg"
#Sample pipeline demonstrating DMABUF import for the output queue of the JPEG Encoder while transcoding an existing H.264 file to a sequence of JPEG images
$gst-launch-1.0 filesrc location=bbb_4kp60_30s_IPPP.h264 ! h264parse ! v4l2h264dec capture-io-mode=dmabuf ! v4l2jpegenc output-io-mode=dmabuf-import ! filesink location=op.mjpeg
3.2.2.19.6.7. Supported video formats
The driver supports encoding of both the contiguous and non-contiguous variants of the YUV 4:2:0 and YUV 4:2:2 semi-planar raw pixel formats. The non-contiguous formats (suffixed with M in the table below) use separate (non-contiguous) buffers for the luma and chroma data. The GStreamer framework, however, uses a single video format for both the contiguous and non-contiguous variants and dynamically maps it to one of them depending on the requirements of the upstream component that is sending the buffer to the driver. When the driver supports both variants, upstream GStreamer prefers the non-contiguous variant; the GStreamer version shipped in the SDK changes this behaviour to prefer the contiguous variant in order to match the requirements of TI-specific GStreamer elements.
V4L2 Pixel Format   | Number of buffers | GStreamer Video Format
--------------------|-------------------|------------------------
V4L2_PIX_FMT_NV12   | 1                 | GST_VIDEO_FORMAT_NV12
V4L2_PIX_FMT_NV12M  | 2                 | GST_VIDEO_FORMAT_NV12
V4L2_PIX_FMT_NV21   | 1                 | GST_VIDEO_FORMAT_NV21
V4L2_PIX_FMT_NV21M  | 2                 | GST_VIDEO_FORMAT_NV21
V4L2_PIX_FMT_NV16   | 1                 | GST_VIDEO_FORMAT_NV16
V4L2_PIX_FMT_NV16M  | 2                 | GST_VIDEO_FORMAT_NV16
V4L2_PIX_FMT_NV61   | 1                 | GST_VIDEO_FORMAT_NV61
V4L2_PIX_FMT_NV61M  | 2                 | GST_VIDEO_FORMAT_NV61
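The formats actually advertised by the driver on a given queue can be listed at run time with VIDIOC_ENUM_FMT. The short, hypothetical sketch below enumerates the raw input formats on the OUTPUT queue; vid_fd is assumed to be an open encoder file descriptor and the helper name is illustrative.
/* Sketch: enumerate the raw input formats advertised on the OUTPUT queue. */
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static void list_output_formats(int vid_fd)
{
        struct v4l2_fmtdesc desc;
        unsigned int i;

        for (i = 0; ; i++) {
                memset(&desc, 0, sizeof(desc));
                desc.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
                desc.index = i;
                if (ioctl(vid_fd, VIDIOC_ENUM_FMT, &desc))
                        break;  /* fails with EINVAL once the list is exhausted */
                printf("format %u: %s\n", i, desc.description);
        }
}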
3.2.2.19.7. Buffer alignment requirements
For the input raw pixel data, the driver requires the width to be a multiple of 64 and the height to be a multiple of 8; buffers for the output queue are allocated/negotiated accordingly.
For the output encoded data, the driver requires the width to be a multiple of 16 and the height to be a multiple of 8; buffers for the capture queue are allocated/negotiated accordingly.
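When sizing raw-input buffers in userspace (for example before importing them from another subsystem), the dimensions can be rounded up to these multiples; a small, hypothetical helper is sketched below.
/* Sketch: round raw-input dimensions up to the encoder's alignment
 * requirements (width multiple of 64, height multiple of 8). */
#define ALIGN_UP(x, a)  (((x) + (a) - 1) & ~((a) - 1))

static void align_raw_dims(unsigned int *width, unsigned int *height)
{
        *width  = ALIGN_UP(*width, 64);
        *height = ALIGN_UP(*height, 8);
}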
3.2.2.19.8. Performance and Latency Benchmarking
The E5010 core is clocked at 250 MHz on AM67, and the theoretical performance expectation with this clocking is as follows:
Color subsampling | Pixel Rate
------------------|--------------------
4:2:0             | 666.25 Mpixels/sec
4:2:2             | 500 Mpixels/sec
With these numbers, the E5010 core can theoretically handle a 3840x2160@60fps equivalent load for 4:2:2 video formats (500 Mpixels/sec divided by 3840 x 2160 pixels is roughly 60 frames/sec) and a 3840x2160@75fps equivalent load for 4:2:0 video formats.
This, however, requires the upstream element (e.g. a camera) to support such rates. On the AM67 board, the fastest locally available upstream source is the Wave5 VPU decoder, which delivers a maximum of 3840x2160@59fps with low-bitrate files; the same performance was achieved after passing the decoded data to the E5010 JPEG encoder, as shown in the example below:
$gst-launch-1.0 filesrc location=bbb_4kp60_30s_IPPP.h264 ! h264parse ! v4l2h264dec capture-io-mode=dmabuf ! v4l2jpegenc output-io-mode=dmabuf-import ! fpsdisplaysink text-overlay=false name=fpssink video-sink="fakesink" -v
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink/GstFakeSink:fakesink0: sync = true
Redistribute latency...
/GstPipeline:pipeline0/GstH264Parse:h264parse0.GstPad:src: caps = video/x-h264, pixel-aspect-ratio=(fraction)1/1, width=(int)3840, height=(int)2160, framerate=(fraction)60/1, chroma-format=(string)4:2:0,
bit-depth-luma=(uint)8, bit-depth-chroma=(uint)8, parsed=(boolean)true, stream-format=(string)byte-stream, alignment=(string)au, profile=(string)high, level=(string)5.2
/GstPipeline:pipeline0/v4l2h264dec:v4l2h264dec0.GstPad:sink: caps = video/x-h264, pixel-aspect-ratio=(fraction)1/1, width=(int)3840, height=(int)2160, framerate=(fraction)60/1, chroma-format=(string)4:2:0
, bit-depth-luma=(uint)8, bit-depth-chroma=(uint)8, parsed=(boolean)true, stream-format=(string)byte-stream, alignment=(string)au, profile=(string)high, level=(string)5.2
/GstPipeline:pipeline0/v4l2h264dec:v4l2h264dec0.GstPad:src: caps = video/x-raw, format=(string)NV12, width=(int)3840, height=(int)2160, interlace-mode=(string)progressive, multiview-mode=(string)mono, mul
tiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixed-mono, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)bt7
09, framerate=(fraction)60/1
/GstPipeline:pipeline0/v4l2jpegenc:v4l2jpegenc0.GstPad:src: caps = image/jpeg, width=(int)3840, height=(int)2160, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)60/1, interlace-mode=(string)progres
sive, colorimetry=(string)bt709, multiview-mode=(string)mono, multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixe
d-mono
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink.GstGhostPad:sink.GstProxyPad:proxypad0: caps = image/jpeg, width=(int)3840, height=(int)2160, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)60/1, i
nterlace-mode=(string)progressive, colorimetry=(string)bt709, multiview-mode=(string)mono, multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/r
ight-flopped/half-aspect/mixed-mono
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink/GstFakeSink:fakesink0.GstPad:sink: caps = image/jpeg, width=(int)3840, height=(int)2160, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)60/1, interl
ace-mode=(string)progressive, colorimetry=(string)bt709, multiview-mode=(string)mono, multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-
flopped/half-aspect/mixed-mono
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink.GstGhostPad:sink: caps = image/jpeg, width=(int)3840, height=(int)2160, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)60/1, interlace-mode=(string)
progressive, colorimetry=(string)bt709, multiview-mode=(string)mono, multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspe
ct/mixed-mono
Redistribute latency...
/GstPipeline:pipeline0/v4l2jpegenc:v4l2jpegenc0.GstPad:sink: caps = video/x-raw, format=(string)NV12, width=(int)3840, height=(int)2160, interlace-mode=(string)progressive, multiview-mode=(string)mono, mu
ltiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixed-mono, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)bt
709, framerate=(fraction)60/1
Redistribute latency...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
Redistribute latency...
New clock: GstSystemClock
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink/GstFakeSink:fakesink0: sync = true
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 32, dropped: 0, current: 63.03, average: 63.03
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 62, dropped: 0, current: 58.80, average: 60.91
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 92, dropped: 0, current: 59.38, average: 60.40
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 123, dropped: 0, current: 59.65, average: 60.21
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 153, dropped: 0, current: 58.22, average: 59.81
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 183, dropped: 0, current: 59.50, average: 59.76
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 213, dropped: 0, current: 60.00, average: 59.79
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 243, dropped: 0, current: 58.76, average: 59.66
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 274, dropped: 0, current: 60.36, average: 59.74
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 304, dropped: 0, current: 59.11, average: 59.68
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 334, dropped: 0, current: 59.99, average: 59.71
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 364, dropped: 0, current: 59.34, average: 59.68
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 394, dropped: 0, current: 59.46, average: 59.66
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 424, dropped: 0, current: 59.37, average: 59.64
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 455, dropped: 0, current: 59.56, average: 59.63
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 485, dropped: 0, current: 59.90, average: 59.65
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 515, dropped: 0, current: 57.59, average: 59.53
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 546, dropped: 0, current: 60.11, average: 59.56
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 576, dropped: 0, current: 59.37, average: 59.55
^Chandling interrupt. (16.4 %)
Interrupt: Stopping pipeline ...
Execution ended after 0:00:09.868853825
Setting pipeline to NULL ...
Freeing pipeline ...
The performance, or per-second throughput, of a GStreamer pipeline involving the E5010 JPEG encoder can be measured using the fpsdisplaysink GStreamer element.
The total latency of the pipeline (the time taken by the whole pipeline to process one buffer) as well as the per-element latency (the time taken by a particular element to produce an output buffer after receiving an input buffer) can be measured using the GStreamer latency tracers.
The example below depicts a sample pipeline for measuring the performance (throughput) and latency of a video streaming pipeline involving the imx219 RPi camera configured for 1920x1080@30fps, the ISP block, and the JPEG encoder, which is configured to import data from the ISP block using DMABUF sharing.
$GST_TRACERS="latency(flags=pipeline+element)" GST_DEBUG=GST_TRACER:7 GST_DEBUG_FILE="/run/latency.txt" gst-launch-1.0 v4l2src io-mode=dmabuf-import num-buffers=100 device=/dev/video-imx219-cam0 ! video/x-bayer,width=1920,height=1080,format=rggb ! tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI dcc-isp-file=/opt/imaging/imx219/linear/dcc_viss_1920x1080.bin sink_0::dcc-2a-file=/opt/imaging/imx219/linear/dcc_2a_1920x1080.bin sink_0::device=/dev/v4l-imx219-subdev0 ! video/x-raw,format=NV12 ! v4l2jpegenc output-io-mode=dmabuf-import extra-controls=c,compression_quality=70 ! fpsdisplaysink text-overlay=false name=fpssink video-sink="fakesink" sync=true -v
Redistribute latency...
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink/GstFakeSink:fakesink0: sync = true
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 17, dropped: 0, current: 33.12, average: 33.12
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 32, dropped: 0, current: 29.96, average: 31.56
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 48, dropped: 0, current: 30.00, average: 31.03
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 64, dropped: 0, current: 30.01, average: 30.76
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 80, dropped: 0, current: 29.99, average: 30.61
/GstPipeline:pipeline0/GstFPSDisplaySink:fpssink: last-message = rendered: 96, dropped: 0, current: 30.01, average: 30.51
Got EOS from element "pipeline0".
Execution ended after 0:00:03.419338739
Setting pipeline to NULL ...
Freeing pipeline ...
16749.164890 s: VX_ZONE_INIT:[tivxHostDeInitLocal:120] De-Initialization Done for HOST !!!
16749.169468 s: VX_ZONE_INIT:[tivxDeInitLocal:206] De-Initialization Done !!!
APP: Deinit ... !!!
REMOTE_SERVICE: Deinit ... !!!
REMOTE_SERVICE: Deinit ... Done !!!
16749.169923 s: IPC: Deinit ... !!!
16749.170370 s: IPC: DeInit ... Done !!!
16749.170401 s: MEM: Deinit ... !!!
16749.170479 s: DDR_SHARED_MEM: Alloc's: 25 alloc's of 24308555 bytes
16749.170493 s: DDR_SHARED_MEM: Free's : 25 free's of 24308555 bytes
16749.170502 s: DDR_SHARED_MEM: Open's : 0 allocs of 0 bytes
16749.170516 s: MEM: Deinit ... Done !!!
APP: Deinit ... Done !!!
#Instantaneous latency of pipeline :
grep -inr fpssink /run/latency.txt
16:0:00:00.298788716 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: latency, src-element-id=(string)0x3e942370, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x3ec08930, s
ink-element=(string)fpssink, sink=(string)sink, time=(guint64)31337537, ts=(guint64)298700410;
21:0:00:00.313996917 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: latency, src-element-id=(string)0x3e942370, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x3ec08930, s
ink-element=(string)fpssink, sink=(string)sink, time=(guint64)13222727, ts=(guint64)313894497;
26:0:00:00.345176174 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: latency, src-element-id=(string)0x3e942370, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x3ec08930, s
ink-element=(string)fpssink, sink=(string)sink, time=(guint64)11158342, ts=(guint64)345084293;
31:0:00:00.379370220 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: latency, src-element-id=(string)0x3e942370, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x3ec08930, s
ink-element=(string)fpssink, sink=(string)sink, time=(guint64)11979311, ts=(guint64)379234094;
36:0:00:00.411468036 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: latency, src-element-id=(string)0x3e942370, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x3ec08930, s
ink-element=(string)fpssink, sink=(string)sink, time=(guint64)10772985, ts=(guint64)411379240;
41:0:00:00.445575826 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: latency, src-element-id=(string)0x3e942370, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x3ec08930, s
ink-element=(string)fpssink, sink=(string)sink, time=(guint64)11599510, ts=(guint64)445489111;
46:0:00:00.478097459 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: latency, src-element-id=(string)0x3e942370, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x3ec08930, s
ink-element=(string)fpssink, sink=(string)sink, time=(guint64)10788116, ts=(guint64)478006814;
51:0:00:00.512219750 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: latency, src-element-id=(string)0x3e942370, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x3ec08930, s
ink-element=(string)fpssink, sink=(string)sink, time=(guint64)11498979, ts=(guint64)512078704;
56:0:00:00.544931074 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: latency, src-element-id=(string)0x3e942370, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x3ec08930, s
ink-element=(string)fpssink, sink=(string)sink, time=(guint64)10938736, ts=(guint64)544831618;
61:0:00:00.578820438 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: latency, src-element-id=(string)0x3e942370, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x3ec08930, s
ink-element=(string)fpssink, sink=(string)sink, time=(guint64)11521555, ts=(guint64)578719583;
66:0:00:00.611518342 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: latency, src-element-id=(string)0x3e942370, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x3ec08930, s
ink-element=(string)fpssink, sink=(string)sink, time=(guint64)10799061, ts=(guint64)611370796;
71:0:00:00.645526317 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: latency, src-element-id=(string)0x3e942370, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x3ec08930, s
ink-element=(string)fpssink, sink=(string)sink, time=(guint64)11575424, ts=(guint64)645433236;
76:0:00:00.678096105 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: latency, src-element-id=(string)0x3e942370, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x3ec08930, s
ink-element=(string)fpssink, sink=(string)sink, time=(guint64)10763720, ts=(guint64)677948869;
81:0:00:00.712266206 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: latency, src-element-id=(string)0x3e942370, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x3ec08930, s
ink-element=(string)fpssink, sink=(string)sink, time=(guint64)11601969, ts=(guint64)712170960;
86:0:00:00.744779354 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: latency, src-element-id=(string)0x3e942370, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x3ec08930, s
ink-element=(string)fpssink, sink=(string)sink, time=(guint64)10843931, ts=(guint64)744679488;
91:0:00:00.778807454 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: latency, src-element-id=(string)0x3e942370, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x3ec08930, s
ink-element=(string)fpssink, sink=(string)sink, time=(guint64)11553809, ts=(guint64)778713438;
96:0:00:00.811437337 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: latency, src-element-id=(string)0x3e942370, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x3ec08930, s
ink-element=(string)fpssink, sink=(string)sink, time=(guint64)10798071, ts=(guint64)811345492;
101:0:00:00.845455847 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: latency, src-element-id=(string)0x3e942370, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x3ec08930,
sink-element=(string)fpssink, sink=(string)sink, time=(guint64)11543724, ts=(guint64)845361392;
106:0:00:00.878015796 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: latency, src-element-id=(string)0x3e942370, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x3ec08930,
sink-element=(string)fpssink, sink=(string)sink, time=(guint64)10773901, ts=(guint64)877923145;
111:0:00:00.912146406 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: latency, src-element-id=(string)0x3e942370, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x3ec08930,
sink-element=(string)fpssink, sink=(string)sink, time=(guint64)11501199, ts=(guint64)912003120;
116:0:00:00.944711844 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: latency, src-element-id=(string)0x3e942370, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x3ec08930,
sink-element=(string)fpssink, sink=(string)sink, time=(guint64)10814391, ts=(guint64)944610679;
121:0:00:00.978729974 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: latency, src-element-id=(string)0x3e942370, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x3ec08930,
sink-element=(string)fpssink, sink=(string)sink, time=(guint64)11512469, ts=(guint64)978635704;
126:0:00:01.011345368 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: latency, src-element-id=(string)0x3e942370, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x3ec08930,
sink-element=(string)fpssink, sink=(string)sink, time=(guint64)10761190, ts=(guint64)1011250667;
131:0:00:01.045390658 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: latency, src-element-id=(string)0x3e942370, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x3ec08930,
sink-element=(string)fpssink, sink=(string)sink, time=(guint64)11526020, ts=(guint64)1045295478;
136:0:00:01.077958881 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: latency, src-element-id=(string)0x3e942370, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x3ec08930,
sink-element=(string)fpssink, sink=(string)sink, time=(guint64)10763061, ts=(guint64)1077864151;
141:0:00:01.112132287 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: latency, src-element-id=(string)0x3e942370, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x3ec08930,
sink-element=(string)fpssink, sink=(string)sink, time=(guint64)11565580, ts=(guint64)1112036122;
146:0:00:01.144706700 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: latency, src-element-id=(string)0x3e942370, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x3ec08930,
sink-element=(string)fpssink, sink=(string)sink, time=(guint64)10844041, ts=(guint64)1144606150;
151:0:00:01.178663635 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: latency, src-element-id=(string)0x3e942370, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x3ec08930,
sink-element=(string)fpssink, sink=(string)sink, time=(guint64)11490844, ts=(guint64)1178569250;
156:0:00:01.211327294 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: latency, src-element-id=(string)0x3e942370, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x3ec08930,
sink-element=(string)fpssink, sink=(string)sink, time=(guint64)10821611, ts=(guint64)1211235083;
161:0:00:01.245347059 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: latency, src-element-id=(string)0x3e942370, src-element=(string)v4l2src0, src=(string)src, sink-element-id=(string)0x3ec08930,
sink-element=(string)fpssink, sink=(string)sink, time=(guint64)11517934, ts=(guint64)1245248708;
#"time=" depicts latency in nano seconds, average for total pipeline latency can be calculated as below
#cat /run/latency.txt | grep sink | awk -F"guint64)" '{print $2}' | awk -F"," '{total +=$1; count++} END { print total/count }'
#1.11008e+07
#Instantaneous latency of v4l2jpegenc element :
grep -inr v4l2jpegenc /run/latency.txt
17:0:00:00.298863641 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: element-latency, element-id=(string)0x3ebfc740, element=(string)v4l2jpegenc0, src=(string)src, time=(guint64)11044827, ts=(guint
64)298700410;
22:0:00:00.314066193 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: element-latency, element-id=(string)0x3ebfc740, element=(string)v4l2jpegenc0, src=(string)src, time=(guint64)5345005, ts=(guint$
4)313894497;
27:0:00:00.345305685 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: element-latency, element-id=(string)0x3ebfc740, element=(string)v4l2jpegenc0, src=(string)src, time=(guint64)4279670, ts=(guint$
4)345084293;
32:0:00:00.379433905 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: element-latency, element-id=(string)0x3ebfc740, element=(string)v4l2jpegenc0, src=(string)src, time=(guint64)4527486, ts=(guint$
4)379234094;
37:0:00:00.411572966 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: element-latency, element-id=(string)0x3ebfc740, element=(string)v4l2jpegenc0, src=(string)src, time=(guint64)4121249, ts=(guint$
4)411379240;
42:0:00:00.445629667 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: element-latency, element-id=(string)0x3ebfc740, element=(string)v4l2jpegenc0, src=(string)src, time=(guint64)4169550, ts=(guint$
4)445489111;
47:0:00:00.478150340 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: element-latency, element-id=(string)0x3ebfc740, element=(string)v4l2jpegenc0, src=(string)src, time=(guint64)4112790, ts=(guint$
4)478006814;
52:0:00:00.512276880 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: element-latency, element-id=(string)0x3ebfc740, element=(string)v4l2jpegenc0, src=(string)src, time=(guint64)4126639, ts=(guint$
4)512078704;
57:0:00:00.544997699 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: element-latency, element-id=(string)0x3ebfc740, element=(string)v4l2jpegenc0, src=(string)src, time=(guint64)4239595, ts=(guint$
4)544831618;
62:0:00:00.578880448 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: element-latency, element-id=(string)0x3ebfc740, element=(string)v4l2jpegenc0, src=(string)src, time=(guint64)4165510, ts=(guint$
4)578719583;
67:0:00:00.611577852 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: element-latency, element-id=(string)0x3ebfc740, element=(string)v4l2jpegenc0, src=(string)src, time=(guint64)4144864, ts=(guint$
4)611370796;
72:0:00:00.645584357 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: element-latency, element-id=(string)0x3ebfc740, element=(string)v4l2jpegenc0, src=(string)src, time=(guint64)4156854, ts=(guint$
4)645433236;
77:0:00:00.678151860 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: element-latency, element-id=(string)0x3ebfc740, element=(string)v4l2jpegenc0, src=(string)src, time=(guint64)4107079, ts=(guint$
4)677948869;
82:0:00:00.712324916 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: element-latency, element-id=(string)0x3ebfc740, element=(string)v4l2jpegenc0, src=(string)src, time=(guint64)4147884, ts=(guint$
4)712170960;
87:0:00:00.744863324 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: element-latency, element-id=(string)0x3ebfc740, element=(string)v4l2jpegenc0, src=(string)src, time=(guint64)4191194, ts=(guint$
4)744679488;
92:0:00:00.778864059 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: element-latency, element-id=(string)0x3ebfc740, element=(string)v4l2jpegenc0, src=(string)src, time=(guint64)4153499, ts=(guint$
4)778713438;
97:0:00:00.811494143 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: element-latency, element-id=(string)0x3ebfc740, element=(string)v4l2jpegenc0, src=(string)src, time=(guint64)4136115, ts=(guint$
4)811345492;
102:0:00:00.845533548 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: element-latency, element-id=(string)0x3ebfc740, element=(string)v4l2jpegenc0, src=(string)src, time=(guint64)4133964, ts=(guin$
64)845361392;
107:0:00:00.878070336 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: element-latency, element-id=(string)0x3ebfc740, element=(string)v4l2jpegenc0, src=(string)src, time=(guint64)4122909, ts=(guin$
64)877923145;
112:0:00:00.912202591 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: element-latency, element-id=(string)0x3ebfc740, element=(string)v4l2jpegenc0, src=(string)src, time=(guint64)4121644, ts=(guin$
64)912003120;
117:0:00:00.944768840 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: element-latency, element-id=(string)0x3ebfc740, element=(string)v4l2jpegenc0, src=(string)src, time=(guint64)4210385, ts=(guin$
64)944610679;
122:0:00:00.978784930 1869 0xffff9c000ef0 TRACE GST_TRACER :0:: element-latency, element-id=(string)0x3ebfc740, element=(string)v4l2jpegenc0, src=(string)src, time=(guint64)4132154, ts=(guin$
64)978635704;
#"time=" depicts latency in nano seconds, average for v4l2jpegenc can be calculated as below
$cat /run/latency.txt | grep v4l2jpegenc | awk -F"guint64)" '{print $2}' | awk -F"," '{total +=$1; count++} END { print total/count }'
4.14626e+06
The table below shows the performance and latency numbers achieved:
Pipeline performance | Total Latency | E5010 latency
---------------------|---------------|--------------
1920x1080@30 fps     | ~11.1 ms      | ~4.146 ms
3.2.2.19.9. Unsupported driver features
The driver currently does not support:
Buffers which are allocated as physically non-contiguous in memory
Memory tiling scheme selection per subsampling mode, which would reduce memory latency and power consumption
3.2.2.19.10. Acronyms Used in This Article
Abbreviation | Full Form
-------------|-------------------------------------
fps          | Frames per second
NC           | Non-contiguous
UV           | Chroma interleaved with UV sequence
VU           | Chroma interleaved with VU sequence
V4L2         | Video for Linux 2
JPEG         | Joint Photographic Experts Group
uAPI         | Userspace API
MMU          | Memory Management Unit
DMAbuf       | Direct Memory Access buffer
VPU          | Video Processing Unit
ISP          | Image Signal Processor
MJPEG        | Motion JPEG