Image Processing



Jonathan Jackso
March 5, 2003, 23:20:33
I am a student researcher in the field of computer vision. We are using a stereo vision system to create an augmented reality as one of our projects. My end is the computer vision side. I was going through the SDK, and I was wondering where the best place to do the actual processing is. The processing is fairly fast (just accessing a look-up table for the first part, then doing connected component labeling and a couple of other operations; all in all it is about a 4-5 pass algorithm). The goal is to have several tracked points displayed onscreen, have stereo calculated, and then send the point coordinates over the network to the computer that's handling the rendering of the VR. Obviously I'm not asking for any help on the computer vision aspect; I'm just really new to the SDK and I have some deadlines to meet. I've messed around with the frameReady() callback function, created a pointer to the data, and as a test inverted all the colors. I did this on top of your sample code for a callback function:

void CListener::frameReady( Grabber& param, smart_ptr<MemBuffer> pBuffer, DWORD currFrame )
{
    pBuffer->lock();
    //printf("Buffer %2d processed in CListener::frameReady().\n", currFrame);

    BYTE* data = pBuffer->getPtr();
    for( int i = 0; i < int( pBuffer->getBufferSize() ); i++ )
    {
        data[i] = 255 - data[i];
    }

    saveImage( pBuffer, currFrame ); // Do the buffer processing.
    //Sleep(1250); // Simulate a time expensive processing.
    pBuffer->unlock();
}

It works for the first and tenth frames, but I need to process every frame that I can. I was also wondering if I should just use the snapImages() function to get one buffer, process it, then do another.

Thank you so much for your time.
-Jonathan Jackson

Jonathan Jackso
March 6, 2003, 08:17:22
Also, I wanted to note that the faster the better. I would much rather be limited in speed by the computer vision algorithms than by the SDK's calls; I can always speed my code up. How often should I be able to get a frame for processing? Our VR people inform me that I need to get them data at a rate of at least (and I mean at a bare minimum) 5 frames per second. I definitely believe that I can deliver this on my end, but I was wondering how fast IC could deliver frames to me (we're using IC 1.4 on a P4 1.7 GHz machine).

Thanks again so much! I'm sure I'll have many more questions for you later.

-Jonathan Jackson
jojackso@uncc.edu

Stefan Geissler
March 6, 2003, 09:52:01
Jonathan,

I suggest doing the image processing in the frameReady() method of the
GrabberListener object. To grab every frame, start the grabber as follows:


m_Grabber.setSinkType( DShowLib::FrameGrabberSink(
    DShowLib::FrameGrabberSink::tFrameGrabberMode::eGRAB, DShowLib::eRGB24 ) );

m_pMemBuffColl = m_Grabber.newMemBufferCollection( 5 );

m_Grabber.setActiveMemBufferCollection( m_pMemBuffColl );
m_Grabber.startLive( false ); // Start without live video



This code starts the grabber without displaying the live video. The "eGRAB" parameter
in the sink type makes the grabber grab every frame and call the frameReady()
method of your GrabberListener instance.
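
For completeness: before these lines can deliver frames, the device has to be opened and the listener registered. Below is a minimal sketch of those two steps, done before setSinkType()/startLive(); it assumes a Grabber member m_Grabber and a CListener member m_Listener, and that Grabber::openDev() and Grabber::addListener() are available as in the class library samples. The device name is a hypothetical placeholder.

// Hypothetical device name; use the name reported by your hardware.
if( !m_Grabber.openDev( "DFG/LC1" ) )
{
    AfxMessageBox( "Could not open the video capture device." );
}

// Register the GrabberListener-derived object so that its frameReady()
// method is called for every grabbed frame.
m_Grabber.addListener( &m_Listener );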

I think your problem is that the delivered frames could be too old, i.e. no longer
current. Therefore, it is necessary to drop the frames that are captured during your
image processing.
In your GrabberListener-derived object you need to declare a member that will store
the grabber's running time. The header of this object should look like this:



class CListener : public GrabberListener
{
public:
    CListener();
    virtual ~CListener();
    virtual void frameReady( Grabber& param, smart_ptr<MemBuffer> pBuffer, DWORD FrameNumber );

protected:
    REFERENCE_TIME m_GrabberRefTime; // Running time of the Grabber
};


The frameReady() method should be implemented as follows:



void CListener::frameReady( Grabber& param, smart_ptr<MemBuffer> pBuffer, DWORD FrameNumber )
{
    tsMediaSampleDesc MediaSampleDesc;
    REFERENCE_TIME GrabberStartTime;

    pBuffer->lock();

    // Compare the sample time of the incoming MemBuffer with the previously
    // saved grabber reference time. If the sample time of the MemBuffer is
    // less (older) than the saved grabber reference time, the frame in the
    // MemBuffer will not be processed.
    MediaSampleDesc = pBuffer->getSampleDesc();

    if( MediaSampleDesc.SampleEnd >= m_GrabberRefTime )
    {
        DoImageProcessing( pBuffer ); // This may take some time.

        // Get the current grabber reference time. It is needed to check whether
        // the MemBuffers that come in later are too old.
        param.getGraphStartReferenceTime( GrabberStartTime );
        param.getCurReferenceTime( m_GrabberRefTime );
        m_GrabberRefTime -= GrabberStartTime;
    }
    pBuffer->unlock();
}
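
One small addition (not shown in the posting above): m_GrabberRefTime should start with a defined value, otherwise the very first comparison in frameReady() works on an uninitialized member. A minimal constructor sketch:

CListener::CListener()
{
    m_GrabberRefTime = 0; // accept the first incoming frame
}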

Jonathan Jackso
March 6, 2003, 20:20:51
Thanks a whole bunch for that! I have one more question for right now...
If I wanted to display the output from my image processing, how would I go about doing that? I want to mark certain positions on the original images (the points I'm tracking). I know how to do this in terms of the image itself; I've already got the code to do it. I also don't want to use the overlay callback method because 1) I want it displayed on the frame being processed and 2) I don't want to add anything to the image that would corrupt my processing. So basically I'm asking: once I've processed a frame with the frameReady() callback, how do I display it?

Thanks again for all of your help!

-Jonathan Jackson

Stefan Geissler
March 7, 2003, 07:50:21
Jonathan,

There are two ways to do this.

1. IC Imaging Control provides a bitmap overlay (OverlayBitmap) object that can be
used for drawing on the live video. The overlay bitmap can be used to draw
information directly into the video stream, such as the current time or camera
information, and it is very simple to use. The GrabberListener object provides the
method "overlayCallback()". In this method you can draw information into every
frame, e.g. the frame number or the frame reference time. However, you do not have
direct access to the frame data (pixels) in this method, so with this approach an
analysis of the image data would only be displayed in the next frame.

2. You can manage the displaying of the image data in the frameReady() event on
your own. The advantage is that you have access to the image data and can mark the
relevant things without manipulating the image data itself.

For this, you need a handle to the CWnd you want to draw in. The header file of
the CListener object should look as follows:



using namespace DShowLib;

class CListener : public GrabberListener
{
public:
    void SetViewCWnd( CWnd *pView );
    CListener();
    virtual ~CListener();
    virtual void frameReady( Grabber& param, smart_ptr<MemBuffer> pBuffer, DWORD FrameNumber );

protected:
    CWnd* m_pDrawCWnd; // Window to draw in
    void DrawBuffer( smart_ptr<MemBuffer> pBuffer );
    void DoImageProcessing( smart_ptr<MemBuffer> pBuffer );
};


You set the pointer to the window to draw in with the method SetViewCWnd(). This
method is to be called from your mainframe once.
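
A minimal sketch of SetViewCWnd() and the one-time call from the mainframe; the control ID IDC_DISPLAY used here is a hypothetical placeholder for whatever window or static control you want to draw into:

void CListener::SetViewCWnd( CWnd *pView )
{
    m_pDrawCWnd = pView; // remember the window to draw into from DrawBuffer()
}

// In the mainframe / dialog, called once, e.g. in OnInitDialog():
// m_Listener.SetViewCWnd( GetDlgItem( IDC_DISPLAY ) ); // IDC_DISPLAY is hypothetical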

The frameReady() implementation looks as follows:



void CListener::frameReady( Grabber& param, smart_ptr<MemBuffer> pBuffer, DWORD FrameNumber )
{
    pBuffer->lock();
    DoImageProcessing( pBuffer ); // Image processing to get your marking data
    DrawBuffer( pBuffer );        // Draw the image buffer and your marking data
    pBuffer->unlock();
}


The image processing is your part. In this method, you should collect the data you
want to display later on top of the image buffer in the drawing window.
The DrawBuffer() method is implemented as follows:



void CListener::DrawBuffer( smart_ptr<MemBuffer> pBuffer )
{
    if( m_pDrawCWnd != NULL )
    {
        if( pBuffer != 0 )
        {
            CDC *pDC = m_pDrawCWnd->GetDC();

            smart_ptr<BITMAPINFOHEADER> pInf = pBuffer->getBitmapInfoHeader();

            int nLines = SetDIBitsToDevice(
                pDC->GetSafeHdc(), // Handle to the device
                0,                 // X-coordinate of the destination
                0,                 // Y-coordinate of the destination
                pInf->biWidth,     // Source rectangle width
                pInf->biHeight,    // Source rectangle height
                0,                 // X-coordinate of lower-left corner of the source rect
                0,                 // Y-coordinate of lower-left corner of the source rect
                0,                 // First scan line in array
                pInf->biHeight,    // Number of scan lines
                pBuffer->getPtr(), // Address of array with DIB bits
                reinterpret_cast<LPBITMAPINFO>( &*pInf ), // Address of structure with bitmap info
                DIB_RGB_COLORS );  // RGB or palette indices

            // Here do your own drawing in the pDC.

            m_pDrawCWnd->ReleaseDC( pDC );
        }
    }
}


You can draw into the pDC with the normal GDI functions (or MFC). The result is
displayed in the window behind m_pDrawCWnd without manipulating the image buffer.
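
For example, marking a tracked point could be done with a small helper that is called at the "Here do your own drawing" comment in DrawBuffer(); the point coordinates are hypothetical results of your image processing, given in window coordinates:

// Sketch: draw a red crosshair at a tracked point. pDC is the device
// context obtained in DrawBuffer(); x and y are hypothetical coordinates
// produced by the image processing step.
void DrawMarker( CDC* pDC, int x, int y )
{
    CPen redPen( PS_SOLID, 1, RGB( 255, 0, 0 ) );
    CPen* pOldPen = pDC->SelectObject( &redPen );

    pDC->MoveTo( x - 5, y );
    pDC->LineTo( x + 5, y );
    pDC->MoveTo( x, y - 5 );
    pDC->LineTo( x, y + 5 );

    pDC->SelectObject( pOldPen );
}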

Jonathan Jackso
March 8, 2003, 00:37:26
OK, I did all of that on top of the DemoApp sample. I'm getting a run-time assertion failure message. It occurs within the CListener::DrawBuffer( smart_ptr<MemBuffer> pBuffer ) function. Running the debugger lets me know that the code gets to the line:
CDC *pDC = m_pDrawCWnd->GetDC();
and then it gives me the assertion failure as such:

Assertion failed!

Program: ...ging Control 1.4\ClassLib\Debug\DemoAppDev.exe
File: c:\csource\core\tisudshl\listenercontainer.h
Line: 67

Expression: false && "GrabberListener, one listener has thrown an unknown exception."

...
Here's how I coded the DrawBuffer() function:

void CListener::DrawBuffer( smart_ptr<MemBuffer> pBuffer )
{
    if( m_pDrawCWnd != NULL )
    {
        if( pBuffer != 0 )
        {
            CDC *pDC = m_pDrawCWnd->GetDC();

            smart_ptr<BITMAPINFOHEADER> pInf = pBuffer->getBitmapInfoHeader();

            void* pBuf = pBuffer->getPtr();

            int nLines = SetDIBitsToDevice(
                pDC->GetSafeHdc(), // Handle to the device
                0,
                0,
                pInf->biWidth,     // Source rectangle width
                pInf->biHeight,    // Source rectangle height
                0,                 // X-coordinate of lower-left corner of the source rect
                0,                 // Y-coordinate of lower-left corner of the source rect
                0,                 // First scan line in array
                pInf->biHeight,    // Number of scan lines
                pBuffer->getPtr(), // Address of array with DIB bits
                reinterpret_cast<LPBITMAPINFO>( &*pInf ), // Address of structure with bitmap info
                DIB_RGB_COLORS     // RGB or palette indices
                );

            // Here do your own drawing in the pDC.

            m_pDrawCWnd->ReleaseDC( pDC );
        }
    }
    return;
}

I was wondering if you knew what was going on?
I can give you the rest of the code and how I modified it if you like.

thanks again.
-Jonathan

Stefan Geissler
March 10, 2003, 10:33:17
Jonathan,

Make sure that the window in m_pDrawCWnd is really a valid window, not something like 0xcccccccc. In the constructor of CListener you should add the line m_pDrawCWnd = NULL;

Jonathan Jackso
March 18, 2003, 09:34:02
Thank you for all of your help. I've got it up and running right now. One question though... is there a timeout on the frameReady() method? I inserted some code which happens to be rather time intensive, and the output is the input (which it is definitely not supposed to be). I haven't finished making sure my code works properly, but I thought I'd ask. Thanks again!

-Jonathan Jackson

Stefan Geissler
March 18, 2003, 09:42:42
Jonathan,


There is no timeout for the frameReady() method. If you have locked a frame buffer, you can manipulate its data until you are finished and then unlock it. It will not be overwritten while it is locked. If all frame buffers are locked, no new frames are delivered.

Jonathan Jackso
March 18, 2003, 10:35:21
I already found that out on my own. I did have a mistake (or more) in my code... I'm still working it all out, but thanks again for your help.

-Jonathan Jackson

Jonathan Jackso
March 18, 2003, 10:44:28
I thought I'd mention that you guys are great for your prompt and very useful responses. I have another question though. (Big surprise there, huh?) I am now ready to implement a stereo vision system in real time. I've read much of the multiple cameras thread, but I still have a couple of questions. Will two cameras of the same type still show up separately when I query the available imaging devices? (I'm almost 99.9% sure they will.) Also, is there any way to guarantee which one will load first every time? (This shouldn't matter too much once I get camera calibration working properly.) And lastly, I will need to do simultaneous image capture from two devices for accurate 3D. With one computer I'm sure all I would do is cross-lock the two image buffers. The problem, though, is that we need fast output, and I was thinking about using a separate computer for each camera; would I then have to use an external trigger to ensure simultaneous capture?

Thanks again, I'm sure I'll be writing back soon...

-Jonathan Jackson

Jonathan Jackso
March 18, 2003, 17:30:38
What would give me an assertion failure like this:

Assertion failed!

Program: ...ging Control 1.4\ClassLib\Debug\DemoAppDev.exe
File: c:\csource\core\tisudshl\listenercontainer.h
Line: 67

Expression: false && "GrabberListener, one listener has thrown an unknown exception."

I am sure it's in my code, but I just wanted to make sure. My processing runs for a while (the duration varies), then it gives me the assertion failure. I was wondering if maybe you had an idea what the problem could be (especially since the asserts are in either the IC libs or in DirectX). Thank you so much again.

-Jonathan Jackson

Jonathan Jackso
March 18, 2003, 17:50:13
I was an idiot; I needed to set a higher value when dynamically allocating an array. Sometimes my code would go out of bounds... Thanks though!

-Jonathan Jackson

Stefan Geissler
March 19, 2003, 07:28:24
Jonathan,

To come back to your two-camera question: if you are not able to identify the cameras uniquely, for example by using the serial numbers of the cameras, you cannot be sure which one is camera 1 and which one is camera 2 (unless you connect them manually after booting the computer). For the Sony DCams I have no method to read out the serial number; for our DxK DCams such a method is implemented.

If you need to get the images of the two cameras at the same point in time, you should trigger them. Only the Sony DCams have an external trigger; the maximum trigger frequency of these cameras is up to 15 Hz.

A second method is to use the graph time. You can calculate the times of an image after the grabber has been started. If you have the time of the first camera's image, you can try to find the image from the second camera with nearly the same timestamp. For this it is necessary to capture more than one image from the second camera, so that you get images from a time interval. The ring buffer supports this:
Start the second camera with an eGRAB sink type and a ring buffer size of 10.
Snap an image from the first camera and immediately pause the second camera. Now correlate the times of the grabbers and find the image buffer of the second camera that has a matching frame time.

You need the difference of the grabber starting times to synchronize both grabbers.
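
A rough sketch of the matching step under these assumptions. The graph start reference times come from getGraphStartReferenceTime() as shown earlier in this thread; how the ring-buffer contents of the second camera are collected is left to the application, here they are simply handed over as a std::vector of locked buffers (findMatchingBuffer is a hypothetical helper, not part of the library):

#include <vector>

using namespace DShowLib;

// Find the buffer from camera 2 whose frame time is closest to the frame
// time of a buffer from camera 1. startTime1 and startTime2 are the graph
// start reference times of the two grabbers; they bring both timelines
// onto a common base.
smart_ptr<MemBuffer> findMatchingBuffer(
    smart_ptr<MemBuffer> pBufferCam1,
    const std::vector< smart_ptr<MemBuffer> >& cam2Buffers,
    REFERENCE_TIME startTime1,
    REFERENCE_TIME startTime2 )
{
    // Frame time of the camera 1 image, expressed on camera 2's timeline.
    REFERENCE_TIME t1 = pBufferCam1->getSampleDesc().SampleEnd
                        + ( startTime1 - startTime2 );

    smart_ptr<MemBuffer> pBest;
    REFERENCE_TIME bestDiff = 0;

    for( size_t i = 0; i < cam2Buffers.size(); ++i )
    {
        REFERENCE_TIME t2 = cam2Buffers[i]->getSampleDesc().SampleEnd;
        REFERENCE_TIME diff = ( t2 > t1 ) ? ( t2 - t1 ) : ( t1 - t2 );

        if( pBest == 0 || diff < bestDiff )
        {
            pBest = cam2Buffers[i];
            bestDiff = diff;
        }
    }
    return pBest; // buffer with the nearest timestamp, or 0 if the vector is empty
}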

bandanna2000
August 14, 2003, 11:19:26
Hi,
I have found this thread extremely useful in my development of a frame grabbing application.

1:-
Jonathan asked a question about receiving an assertion failure; I am receiving this message as well. What is failing to cause this message? :confused:

--------------------------------------
Assertion failed!

Program: ...ging Control 1.4\ClassLib\Debug\DemoAppDev.exe
File: c:\csource\core\tisudshl\listenercontainer.h
Line: 67

Expression: false && "GrabberListener, one listener has thrown an unknown exception."
--------------------------------------

2:-
I am also using your method of determining how old the frames are and discarding them if they are too old. I have a problem with this implementation when I stop the grabber from grabbing and then restart grabbing: all the frames are then out of date. Where should I be updating the time?

Thanks, David

Stefan Geissler
August 14, 2003, 12:07:54
Hello,

To the 1st question: the exception is thrown because you have an error in one of the functions you are calling from the frameReady() function. To find where the error occurs, you need to implement exception handlers in your functions.
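
For example, a try/catch block around the processing calls inside frameReady() narrows down which part throws; the OutputDebugStringA() calls are just one possible way to report it (a minimal sketch based on the frameReady() shown earlier in this thread):

void CListener::frameReady( Grabber& param, smart_ptr<MemBuffer> pBuffer, DWORD FrameNumber )
{
    pBuffer->lock();
    try
    {
        DoImageProcessing( pBuffer ); // candidate for the unknown exception
        DrawBuffer( pBuffer );
    }
    catch( const std::exception& e )
    {
        OutputDebugStringA( e.what() ); // report instead of letting it leave frameReady()
    }
    catch( ... )
    {
        OutputDebugStringA( "Unknown exception caught in frameReady()\n" );
    }
    pBuffer->unlock();
}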

To the 2nd question: you need to implement a public method in your GrabberListener class that is called before startLive() is called. In this method you can reset any counters that are used.
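
A sketch of such a reset method, following the CListener members used earlier in this thread (the method name ResetReferenceTime() is just a suggestion):

// Declared as a public member of CListener:
void CListener::ResetReferenceTime()
{
    m_GrabberRefTime = 0; // accept the next incoming frame again
}

// In the application, before restarting the stream:
// m_Listener.ResetReferenceTime();
// m_Grabber.startLive( false );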

RPaulsen
July 23, 2004, 10:52:25
Hi!

I am basically interested in doing the same thing as Jonathan:

- Process the frames in frameReady() and show the result on the frame using a custom drawing/painting function.

I have started to modify the Overlay application found in the ICTrial package with the suggestions above. However, it would be so much easier if you could provide a small framework with the above-mentioned functionality. Is that possible?

Best regards,
Rasmus

RPaulsen
July 23, 2004, 16:20:26
Hi Again,

I managed to do it myself, so please ignore the above posting.

-Rasmus