DMK72BUC02 problem getting image

June 10, 2015, 11:56:07

As I wrote in the other topic, we managed to get the firmware loaded on the camera, which is connected to a Raspberry Pi running Raspbian OS.

If we run the test command from GitHub, which is

gst-launch-0.10 v4l2src ! video/x-raw-gray,width=640,height=480,framerate=15/1 ! videorate ! video/x-raw-gray,framerate=3/2 ! tis_auto_exposure ! tiscolorize ! tiswhitebalance ! bayer2rgb ! ffmpegcolorspace ! ximagesink

the frame rate is lower than 3 fps and there are randomly scattered colorized pixels in the image.

v4l2-ctl --all


Driver Info (not using libv4l2):
Driver name : uvcvideo
Card type : DMx 72BUC02
Bus info : usb-bcm2708_usb-1.2.3
Driver version: 3.18.14
Capabilities : 0x84000001
Video Capture
Device Capabilities
Device Caps : 0x04000001
Video Capture
Priority: 2
Video input : 0 (Camera 1: ok)
Format Video Capture:
Width/Height : 2592/1944
Pixel Format : 'GREY'
Field : None
Bytes per Line: 2592
Size Image : 5038848
Colorspace : Unknown (00000000)
Crop Capability Video Capture:
Bounds : Left 0, Top 0, Width 2592, Height 1944
Default : Left 0, Top 0, Width 2592, Height 1944
Pixel Aspect: 1/1
Streaming Parameters Video Capture:
Capabilities : timeperframe
Frames per second: 7.000 (7/1)
Read buffers : 0
brightness (int) : min=0 max=255 step=1 default=-8193 value=12
gain (int) : min=4 max=63 step=1 default=57343 value=36
exposure_absolute (int) : min=1 max=300000 step=1 default=127 value=127
focus_absolute (int) : min=0 max=1000 step=1 default=57343 value=0
privacy (bool) : default=0 value=0

If we use OpenCV to capture an image, we get an entirely green picture, which is normal given the lack of proper support. With uvccapture we managed to get two small, strangely colorized copies of the real picture inside one large frame of the correct resolution, similar to what we have seen on the web.

What is the right and easiest approach to get a single image with preset camera parameters into OpenCV? Are there any examples?

Stefan Geissler
June 10, 2015, 15:59:19
I attached a simple V4L2/OpenCV sample.

You can also adjust the red, green and blue amplifiers of the camera with the ioctl functions of V4L2, so you can do the white balance manually.

June 11, 2015, 10:20:18

We are currently using Python for programming, so we installed OpenCV and everything else needed for Python. As long as there is a simple way to get an image from the camera, with preset custom settings (exposure, gain, ...), into OpenCV from Python, we will stick to it. We don't need a video stream; we just need to take a picture when a software trigger occurs. If I understand it correctly, there is no direct V4L2 support in Python, but it is possible to execute terminal commands from Python. So is there any option for saving an image to RAM with GStreamer, or something along those lines, so that we can load it back into Python/OpenCV?
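For example, something along these lines is what we have in mind (only a sketch, assuming gst-launch-0.10 and the tiscamera plugins from the GitHub repository are installed; the /dev/shm path and file name are arbitrary, and we have not yet verified that jpegenc negotiates with this pipeline on our camera):

import subprocess
import cv2

DEVICE = '/dev/video0'
FRAME = '/dev/shm/frame.jpg'   # /dev/shm is a RAM-backed tmpfs, so no SD card writes

# Preset exposure and gain before grabbing (control names as shown by v4l2-ctl --all)
subprocess.check_call(['v4l2-ctl', '-d', DEVICE,
                       '--set-ctrl=exposure_absolute=127,gain=36'])

# Grab a single frame, debayer it and write a JPEG; tis_auto_exposure is left out
# because a single buffer gives it nothing to converge on.
pipeline = ('gst-launch-0.10 v4l2src device=%s num-buffers=1 '
            '! video/x-raw-gray,width=640,height=480,framerate=15/1 '
            '! tiscolorize ! tiswhitebalance ! bayer2rgb ! ffmpegcolorspace '
            '! jpegenc ! filesink location=%s' % (DEVICE, FRAME))
subprocess.check_call(pipeline, shell=True)

img = cv2.imread(FRAME)        # numpy BGR array, or None if the capture failed
if img is not None:
    print(img.shape)

Would that be a reasonable approach, or is there something better?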

If we use the terminal command from GitHub (GStreamer), the actual frame rate is really low. If we remove videorate and the second framerate cap, streaming runs very fast, but then the image is cropped.
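Regarding the manual white balance through the V4L2 ioctl functions that you mentioned: from Python we would probably try something like the untested sketch below. The VIDIOC_S_CTRL request code and the control IDs are taken from the V4L2 headers; we are not sure which IDs the camera's red/green/blue amplifiers actually use, so they would have to be looked up with v4l2-ctl --list-ctrls first.

import fcntl
import os
import struct

# VIDIOC_S_CTRL = _IOWR('V', 28, struct v4l2_control); the struct is {__u32 id; __s32 value;}
VIDIOC_S_CTRL = 0xC008561C

# Standard control IDs from linux/videodev2.h
V4L2_CID_GAIN              = 0x00980913
V4L2_CID_EXPOSURE_ABSOLUTE = 0x009A0902
# Standard red/blue balance IDs, for reference only; they do not appear in the
# v4l2-ctl listing above, so the camera's amplifiers may use different IDs.
V4L2_CID_RED_BALANCE       = 0x0098090E
V4L2_CID_BLUE_BALANCE      = 0x0098090F

def set_control(fd, control_id, value):
    # Pack a struct v4l2_control and hand it to the driver
    ctrl = struct.pack('=Ii', control_id, value)
    fcntl.ioctl(fd, VIDIOC_S_CTRL, ctrl)

fd = os.open('/dev/video0', os.O_RDWR)
try:
    set_control(fd, V4L2_CID_GAIN, 36)                 # values taken from the v4l2-ctl listing above
    set_control(fd, V4L2_CID_EXPOSURE_ABSOLUTE, 127)
finally:
    os.close(fd)

Is that roughly what you had in mind, or is there a simpler way?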

Stefan Geissler
June 12, 2015, 14:03:35

I must admit I have no experience with Python, so my help is limited here.

I am sure you can create a GStreamer pipeline that saves one image; there should be many samples on the internet. My first search turned up http://talk.maemo.org/showthread.php?t=34749

If we use the terminal command from GitHub (GStreamer), the actual frame rate is really low. If we remove videorate and the second framerate cap, streaming runs very fast, but then the image is cropped.

The low frame rate was set deliberately for use on the Raspberry Pi (1). The damaged images are the result of interrupted image data transfer, which might be caused by high CPU load. I do not know which hardware you use....