problem synchronizing two cameras with external trigger



LH2011
July 18, 2011, 15:23:51
Hello!
I need to synchronize two cameras accurately. To do this, I am using a pair of DFK 22BUC03 cameras driven by an external trigger.

The external trigger signal is given by a device that produces a 60 Hz sync signal following the specifications of the document "Industrial CMOS Cameras - Dxx22xUC03 and Dxx72xUC02 - Using the trigger and the digital output".

The problem I have found is that, after a random amount of time, the images I receive from the two cameras no longer match: the frame received from camera 1 does not correspond to the frame captured by camera 2 at the same instant.

Each of the cameras has the following structure:
class ImgSrcCameraV32Impl
{
public:
    CListener* _pListener;
    _DSHOWLIB_NAMESPACE::Grabber _grabber;
    _DSHOWLIB_NAMESPACE::tFrameHandlerSinkPtr _pSink;
    _DSHOWLIB_NAMESPACE::tMemBufferCollectionPtr _pCollection;
    CSimplePropertyAccess _properties;

public:
    ImgSrcCameraV32Impl() : _pListener(NULL) {}
    ~ImgSrcCameraV32Impl() {}
};

and are initialized as follows:
bool ImgSrcCameraV32::iniCamera(bool bPlay, std::string& devStateFilename, std::map<std::string, bool>& mOpenDevices)
{
    closeCamera();

    if( !_DSHOWLIB_NAMESPACE::InitLibrary() )
        return false;

    if( !setupDeviceFromFile(devStateFilename, mOpenDevices) )
        return false;

    if( !_grabber.isDevValid() )
        return false;

    _properties.init( _grabber.getAvailableVCDProperties() );

    // Set the image buffer format to eRGB24 (8 bits per channel, 3 bytes per pixel).
    // Let the sink create a matching MemBufferCollection with 1 buffer;
    // it is replaced by our own 2-buffer collection below.
    _pSink = _DSHOWLIB_NAMESPACE::FrameHandlerSink::create( _DSHOWLIB_NAMESPACE::eRGB24, 1 );

    // Disable snap mode, so the sink copies every incoming frame into the
    // buffer collection continuously (grab mode).
    _pSink->setSnapMode( false );

    // Set the sink.
    _grabber.setSinkType( _pSink );

    // Prepare the live mode, to get the output size of the sink.
    if( !_grabber.prepareLive( false ) )
    {
        //std::cerr << "Could not render the VideoFormat into a eRGB24 sink.";
        return false;
    }

    // Retrieve the output type and dimension of the handler sink.
    // The dimension of the sink can differ from the VideoFormat when
    // filters are used.
    _DSHOWLIB_NAMESPACE::FrameTypeInfo info;
    _pSink->getOutputFrameType( info );

    // Allocate 2 image buffers of the buffer size calculated above.
    for( int i = 0; i < 2; ++i )
    {
        _pBuf[i] = new unsigned char[info.buffersize];
    }

    // Create a new MemBufferCollection that uses our own image buffers.
    _pCollection = _DSHOWLIB_NAMESPACE::MemBufferCollection::create( info, 2, _pBuf );
    if( _pCollection == 0 || !_pSink->setMemBufferCollection( _pCollection ) )
    {
        std::cerr << "Could not set the new MemBufferCollection, because types do not match.";
        return false;
    }

    // Create the GrabberListener object.
    // CListener is derived from GrabberListener.
    _pListener = new CListener(info.buffersize);

    if( !_grabber.addListener( _pListener, _DSHOWLIB_NAMESPACE::GrabberListener::eFRAMEREADY ) )
        return false;

    // Start live mode for fast snapping. The live video will not be displayed,
    // because false is passed to startLive().
    if(bPlay)
    {
        return _grabber.startLive(false);
    }

    return true;
}
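Note that the code above does not show where the external trigger itself is enabled; presumably this comes from the device state file loaded in setupDeviceFromFile(). Done in code, a sketch using the CSimplePropertyAccess member could look like this (VCDID_TriggerMode and setSwitch() are assumptions to be verified against the IC Imaging Control documentation):

// Hypothetical sketch: enable the external trigger via the VCD properties.
// VCDID_TriggerMode and CSimplePropertyAccess::setSwitch() should be
// checked against the IC Imaging Control class library documentation.
if( _properties.isAvailable( VCDID_TriggerMode ) )
    _properties.setSwitch( VCDID_TriggerMode, true );
else
    std::cerr << "External trigger not supported by this device.";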

And the way to capture images is:

bool ImgSrcCameraV32::QueryFrame(unsigned char** frame)
{
    if(_Enabled)
    {
        _pListener->update();
        *frame = _pListener->getPtrFrame();
        return ((*frame) != NULL);
    }
    return false;
}

The listener's update() and getPtrFrame() functions are:
void CListener::update()
{
    // Critical section
    CSimpleLock lockerModifyImage(&_mutex);
    if(_frameReady)
    {
        // Swap buffers: the last completely written frame becomes
        // the read buffer, the other buffer becomes the write buffer.
        _frameR = _frameW;
        _frameW = (_frameW ? 0 : 1);
        _frameReady = false;
    }
}

unsigned char* i3bh::cameraFlxLib::CListener::getPtrFrame()
{
    // Return the current read buffer, or NULL if no frame has arrived yet.
    return (_frameR >= 0 ? _pBuf[_frameR] : NULL);
}
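The frameReady() override that sets _frameReady is not shown; presumably it copies the delivered buffer into the current write buffer under the same lock, along these lines (a sketch only; the signature follows the GrabberListener callback documentation, and the member names are illustrative):

// Hypothetical sketch of the frameReady() callback.
void CListener::frameReady( _DSHOWLIB_NAMESPACE::Grabber& caller,
    smart_ptr<_DSHOWLIB_NAMESPACE::MemBuffer> pBuffer, DWORD currFrame )
{
    // Same critical section as update().
    CSimpleLock lockerModifyImage(&_mutex);
    // Copy the finished frame into the current write buffer and flag it,
    // so that the next update() call swaps the read/write indices.
    memcpy(_pBuf[_frameW], pBuffer->getPtr(), _buffersize);
    _frameReady = true;
}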

And in the main function:

for(;;)
{
    if( camera[0]->QueryFrame(&frame[0]) && camera[1]->QueryFrame(&frame[1]) )
    {
        [...]
    }
}

What might be happening? Why do frame[0] and frame[1] not correspond to the same instant of time?

A workaround I introduced was to monitor the FrameTimeStamp. To control the gap between the two cameras, the difference between their timestamps is evaluated; if this difference is below a certain threshold, each of the grabbers associated with the cameras executes the following sequence:

stopLive();
startLive(false);

and then continues in normal operation mode.

What could be happening? How can I improve my code? Is there another solution?

Thank you very much in advance.

Stefan Geissler
July 18, 2011, 17:21:25
Hello

The SampleStartTime of the image buffer is a time stamp (DirectShow reference time) set by the camera driver when it is notified by the USB controller that an image has been finished. This means there is a delay between the image exposure in the camera and the point in time when the driver is notified by the USB controller.

The SampleStartTimes of both images should differ by at most the time interval determined by the frame rate. At 60 fps, both images should be available within 16.66 ms. If the difference is bigger, then you may have encountered a frame drop, or Windows wasted time doing some more important stuff. Windows is not a real-time operating system.

The interesting part, where you compare the SampleStartTimes, seems to be missing from your post. However, I do not understand why you stop and start the cameras if the difference between both times is below a threshold; above sounds more logical to me. But I think a stop and start is not necessary.
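A comparison along these lines should be sufficient (a sketch; it assumes MemBuffer::getSampleStartTime() returning the DirectShow reference time, which is expressed in 100 ns units):

// Sketch: do two buffers belong to the same trigger pulse?
// REFERENCE_TIME is in 100 ns units; at 60 fps one frame
// interval is 10,000,000 / 60 = 166,666 units (16.66 ms).
bool framesArePaired( const _DSHOWLIB_NAMESPACE::tMemBufferPtr& pBuf0,
                      const _DSHOWLIB_NAMESPACE::tMemBufferPtr& pBuf1 )
{
    const REFERENCE_TIME frameInterval = 10000000 / 60;
    REFERENCE_TIME diff = pBuf0->getSampleStartTime() - pBuf1->getSampleStartTime();
    if( diff < 0 )
        diff = -diff;
    return diff <= frameInterval;
}

If the check fails, the camera whose frame is older has most likely dropped a frame; discarding that image and taking the next one should re-align the pair without a stop and start.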