
Performance issue



pauljurczak
January 17, 2007, 19:53:15
Let's assume that we have a bus-based camera (USB or IEEE1394) which streams its data via DMA into memory buffers. Can I access that data with IC Imaging Control without copying it to another buffer? The article at http://www.imagingcontrol.com/support/documentation/class/tech_BasicConcepts.htm suggests that this is possible by using a single frame filter and NOT using the ring buffer. The post http://www.theimagingsourceforums.com/showthread.php?p=19417&highlight=performance#post19417 reads: "use the ring buffer, there is no other way to capture the incoming frames". Can you explain this contradiction?

Regards,
Paul.

Stefan Geissler
January 18, 2007, 09:33:37
Hello Paul,

This is a difficult question. The image buffers are transferred by DirectShow through the filters in the DirectShow filter graph. Two types of filters are available in DirectShow: update-in-place filters and transform filters. Update-in-place filters do not copy the image from source to destination; they work on the original image data and only pass a pointer to the image on to the next filter in the graph. Transform filters copy the source image into a new buffer. This happens, for example, when the image format is converted from YUV to RGB.
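Just to illustrate the two styles in code (these are made-up function names for illustration, not the real DirectShow interfaces):

#include <cstring>
#include <cstddef>

// Update-in-place style: the filter works directly on the original buffer;
// only the pointer travels on to the next filter in the graph.
void updateInPlaceStyle( unsigned char* buffer, std::size_t size )
{
    for( std::size_t i = 0; i < size; ++i )
        buffer[i] = static_cast<unsigned char>( 255 - buffer[i] );  // e.g. invert the pixels
}

// Transform style: the filter writes its output into a separate destination
// buffer, e.g. because the output format differs from the input (YUV -> RGB);
// this is where a copy happens.
void transformStyle( const unsigned char* src, unsigned char* dest, std::size_t size )
{
    std::memcpy( dest, src, size );  // placeholder for a real format conversion
}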
IC Imaging Control's frame filters are inserted into the filter graph after the color space transformation, if one is necessary. This results in one copy operation. However, a frame filter inserted in the device path is the earliest point at which the image buffer can be accessed.
With a frame filter you can capture or process images. The difference to the ring buffer lies in the threading: the frame filter runs in the thread of the filter graph, while access to the ring buffer happens in another thread. This means that if the image processing in the frame filter takes too long, you will get dropped frames.
If the image processing runs in a callback such as the GrabberListener, images are copied into the ring buffer while the image processing is still running, so no frames are lost. The C++ Callback sample demonstrates this: 10 images are captured into a ring buffer holding 10 images. The image processing in the GrabberListener simulates a long-running operation, so the frameReady() method is not called for every incoming frame. Nevertheless, all 10 incoming frames are saved in the ring buffer.
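Here is a rough sketch of such a callback, modelled on the C++ Callback sample. Library initialization and the surrounding application code are left out, and I wrote the frameReady() signature and the sink setup from memory, so please compare with the shipped sample before using it:

#include "tisudshl.h"

using namespace DShowLib;

// frameReady() runs in the sink's own thread. While a lengthy processing step
// executes here, the filter graph keeps copying new frames into the ring
// buffer, so no incoming frames are lost (given a large enough ring buffer).
class CListener : public GrabberListener
{
public:
    virtual void frameReady( Grabber& caller, smart_ptr<MemBuffer> pBuffer, DWORD currFrame )
    {
        const BYTE* pData = pBuffer->getPtr();   // pixel data of this ring buffer entry
        // ... lengthy processing of pData ...
    }
};

void setupCallbackGrabbing( Grabber& grabber, CListener& listener )
{
    grabber.addListener( &listener, GrabberListener::eFRAMEREADY );

    tFrameHandlerSinkPtr pSink = FrameHandlerSink::create( eRGB24, 10 );  // 10 image ring buffer
    pSink->setSnapMode( false );                  // deliver every incoming frame to frameReady()
    grabber.setSinkType( pSink );

    grabber.startLive( false );                   // stream without a live display
}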

If your image processing is fast enough, you can use a frame filter to access the incoming images at the earliest possible point in time.
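A skeleton for such an update-in-place frame filter could look roughly like this. I wrote the FrameUpdateFilterImpl overrides from memory, and additional overrides (e.g. for the supported frame types) may be required, so please check the frame filter documentation and the filter samples:

#include "tisudshl.h"

using namespace DShowLib;

// Sketch of an update-in-place frame filter. It is called in the streaming
// thread of the filter graph, i.e. at the earliest point at which the image
// buffer is accessible. If the work done here takes longer than one frame
// interval, frames will be dropped.
class CInspectFilter : public FrameUpdateFilterImpl<CInspectFilter>
{
public:
    // The filter only reads the image data and does not alter it.
    virtual bool modifiesData()
    {
        return false;
    }

    // Called for every frame travelling through the graph; the IFrame wraps
    // the buffer the graph is currently working on.
    virtual bool updateInPlace( IFrame& frame )
    {
        BYTE* pData = frame.getPtr();
        // ... fast inspection / processing of pData ...
        return true;
    }
};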

The contradiction you mention is easy to explain: we have only recently started creating more frame filters, so we are still learning too :-)

pauljurczak
January 19, 2007, 03:52:46
Hello Stefan,

let's say I take the following steps:

* Switch off the display: this will remove the display path completely from the image stream.
* Prevent automatic color space conversion by specifying the same video format for the device and the sink.
* Switch off all overlays.
* Insert one frame filter (singleFilter) derived from FrameUpdateFilterImpl in the sink path; singleFilter::modifiesData() will return false.
* Do not insert additional frame filters in the device path or in the sink path.

Will the call to singleFilter.updateInPlace() pass the pointer to the original image buffer filled via DMA by the low-level driver, or a copy of that buffer?

Regards,
Paul.

Stefan Geissler
January 19, 2007, 13:39:37
Hi Paul,

your points are correct. But the frames are NOT copied through DMA by DirectShow. I am not able to answer the last question, because this is low-level DirectShow stuff. I simply use the capabilities of DirectShow, but I have no idea where DirectShow gets its data from. Sorry.