View Full Version : Capture Image Sequence



trekman
April 23, 2012, 17:37:52
I am working on an application where I want to just capture an image sequence as opposed to movie file, but still at 30fps frame rate. I would ideally like uncompressed images and have them named something like img_1, img_2, img_3, etc. I was just wondering if there was an easy way to do this?

Stefan Geissler
April 23, 2012, 17:41:56
Hi,

Yes, there is an "easy" way to do this. You only need to add a GrabberListener-derived class that does the image saving. You can find a sample listener in the "Callback" sample.

Image saving is simple using the saveToFileBMP() or saveToFileJPEG() functions.

You only need to set the snap mode of the sink to false (pSink->setSnapMode(false)), so the listener receives the continuous stream of frames.

However, I do not think your hard disk is fast enough to save 30 frames per second, especially if it is a traditional mechanical hard disk rather than an SSD.

In our last test we were only able to save about 10 frames per second.
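A minimal sketch of such a listener, assuming the IC Imaging Control C++ class library headers are available (the class name CSequenceListener and the img_%d naming scheme are illustrative, not from the samples):

```cpp
// Sketch of a GrabberListener-derived class that writes every delivered
// frame to a numbered BMP file (img_1.bmp, img_2.bmp, ...).
// CSequenceListener and the file naming are illustrative; frameReady()
// and saveToFileBMP() come from the IC Imaging Control class library.
#include <cstdio>
#include "tisudshl.h"

using namespace DShowLib;

class CSequenceListener : public GrabberListener
{
public:
    CSequenceListener() : m_frameCount( 0 ) {}

    virtual void frameReady( Grabber& caller,
                             smart_ptr<MemBuffer> pBuffer,
                             DWORD currFrame )
    {
        ++m_frameCount;
        char filename[64];
        std::sprintf( filename, "img_%d.bmp", m_frameCount );
        saveToFileBMP( *pBuffer, filename );   // uncompressed BMP
    }

private:
    int m_frameCount;
};
```

The listener would be registered with Grabber::addListener() before startLive() is called, as shown in the "Callback" sample.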

trekman
May 13, 2012, 10:54:19
I am trying to capture an image and save it to a file, but I'm having a bit of trouble. I have written the code below, but when I try to run it I get the error "An option is not available, eg you called setFlipH and the VideoCaptureDevice does not support flipping". What am I doing wrong?


tFrameHandlerSinkPtr pSink = FrameHandlerSink::create( 1 );
m_Grabber1.setSinkType( pSink );

// Snap one image and copy it into the MemBufferCollection.
Error e = pSink->snapImages( 1 );
if( e.isError() )
{
    // Display an error.
    ::MessageBox( 0, e.toString().c_str(), "Error", MB_OK | MB_ICONERROR );
}
else
{
    // Save the image to a bitmap file.
    pSink->getLastAcqMemBuffer()->save( "image.bmp" );
}

Stefan Geissler
May 14, 2012, 09:11:05
Hi,

The m_Grabber1.startLive() call is missing. In this case the image buffer is null and you receive that slightly misleading error message.

Without m_Grabber1.startLive() the camera is not started and therefore does not provide images. pSink->snapImages() will not automatically start and stop the live video stream, as MemorySnapImage does in the .NET component.

trekman
May 14, 2012, 10:54:49
I actually call this code segment on a button push, and startLive() is called earlier in the program, so when this gets executed there is a live image showing. My question is: is this the correct way to grab an image? I can't seem to get it to work.

Stefan Geissler
May 14, 2012, 11:31:37
Hi

Then this won't work at all. The sink must be created and connected to the grabber before startLive() is called.
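Putting the pieces in the required order, a sketch (error handling mostly omitted; it assumes a Grabber member m_Grabber1 with an opened device and using namespace DShowLib, and the snapImages() part belongs in the button handler):

```cpp
// Setup, done once: create and connect the sink, THEN start the stream.
tFrameHandlerSinkPtr pSink = FrameHandlerSink::create( 1 );
pSink->setSnapMode( true );        // grab single images on demand
m_Grabber1.setSinkType( pSink );   // must happen before startLive()
m_Grabber1.startLive( true );      // start the stream with live display

// Later, e.g. in the button handler:
Error e = pSink->snapImages( 1 );
if( e.isError() )
    ::MessageBox( 0, e.toString().c_str(), "Error", MB_OK | MB_ICONERROR );
else
    saveToFileBMP( *pSink->getLastAcqMemBuffer(), "image.bmp" );
```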

payamo
May 25, 2012, 20:38:10
Hi, I am a newbie to this forum, so please excuse me if my post is not in the correct form.
Also, I am new to image processing, so apologies if my questions seem silly.
I am also a newbie to Windows and .NET, although (thankfully) not to C++.
I have several questions that seem related to this post, so I am posting here. If in fact this should be a separate thread, I apologise for that too.

My setup is DMK 51BU02.H (monochrome) via usb 2.0 to windows 7. I am using Visual Studio C++ 2008 and looking at the sample applications.
In particular I have been playing around with callback sample and the icdialogtemplate project, looking at CListener::DoImageProcessing() method.

Here are my questions.
1). My camera is monochrome, and outputs 8 bits of information per pixel. The code is something like:
smart_ptr<BITMAPINFOHEADER> pInf = pBuffer->getBitmapInfoHeader();
int iImageSize = pInf->biWidth * pInf->biHeight * pInf->biBitCount / 8;

Examining this, one sees that there are 8 bits for each of red, green and blue, when in fact each of these 3 bytes has the same value with the mono camera.
This seems like we are carrying around 3x as much data as is actually needed.
Why is this? Is this to be consistent with the .bmp file format?
Is there a format I could employ that would only use 8 bits per pixel in RAM and 8 bits on disk -- to save on disk space and processing?

2). In trying to save image data to disk, I need raw data at hopefully 10 Hz. I assume that I should be using a .bmp format, and that an AVI format will lose data but have less overhead because of reduced file operations?

3). When running the code in the callback, we see that the delay of 250 msec means that the callback
CListener::frameReady() is not called for all frames that are stored.
This suggests that there are multiple threads running. Is there a description of the threading model that is employed?

4). I would like to have an accurate timestamp to assist with integration with some MEMS inertial sensors. Is there a way to sample a continuous hardware timer (or equivalent) in this framework?

5). I would like to make use of some signal processing routines that are available through OpenCV. Is there an example of integrating OpenCV software with IC Imaging Control software?

With thanks in advance for all these questions.

PaYaMo

Stefan Geissler
May 29, 2012, 12:56:24
Hi


1). My camera is monochrome, and outputs 8 bits of information per pixel. Code is something like:
smart_ptr<BITMAPINFOHEADER> pInf = pBuffer->getBitmapInfoHeader();
int iImageSize = pInf->biWidth * pInf->biHeight * pInf->biBitCount / 8

examining this, one sees that there is 8 bits for each of Red Green and Blue, when in fact each of these 3 bytes has the same value with the mono camera.
This seems like we are carrying around 3x as much data as is actually needed.
Why is this? Is this to be consistent with .bmp file format?
Is there a format I could employ that would only create 8 bits in RAM and 8 bits to disk -- to save on disk space and processing?
This is because the color format in memory (the sink) has been set to RGB24, which is the default. Set the color format in the FrameHandlerSink to eY800.
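For example, the sink can be created with the monochrome format directly (a sketch, assuming using namespace DShowLib and a Grabber member m_Grabber):

```cpp
// Request 8-bit monochrome (eY800) buffers instead of the default RGB24,
// so each pixel occupies one byte in memory and on disk.
tFrameHandlerSinkPtr pSink = FrameHandlerSink::create( eY800, 1 );
m_Grabber.setSinkType( pSink );   // connect before startLive(), as above
```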



2). In trying to save image data to disk, I need raw data at hopefully 10 Hz. I assume that I should be using a .bmp format, and that an AVI format will lose data but have less overhead because of reduced file operations?
Saving 10 single images per second needs a fast hard disk. However, writing an AVI file with "Y800" as the codec setting does not need such a fast hard disk, and the images are saved losslessly into the AVI file. You can create the sink with

pSink = MediaStreamSink::create( pCont, MEDIASUBTYPE_Y800 );

as shown in the "CreateVideoFile" sample.


3). When running the code in the callback, we see that the delay of 250 msec means that the callback
CListener::frameReady() is not called for all frames that are stored.
This suggests that there are multiple threads running. Is there a description of the threading model that is employed?


I do not understand your question. However, the frames are saved in the ring buffer: frameReady() is called on a separate worker thread, and frames that arrive while your callback is still processing are stored there.


4). I would like to have an accurate timestamp to assist with integration with some MEMs inertials. Is there a way to sample a continuous hardware timer (or equivalent) in this framework.

The time stamp in the SampleDesc structure is set by the driver from the computer's clock at the moment the driver is informed about the frame's arrival.


5). I would like to make use of some signal processing routines that are available thru OpenCV. Is there an example of integrating OpenCV software with ICimaging software?


Yes. Search for "IPLImage" in the forum and you will find e.g.
http://www.theimagingsourceforums.com/showthread.php?323456-C-Console-App-without-MFC&highlight=IPLImage

The trick is to use MemBuffer::getPtr() and either copy the image data into the IplImage or pass the pointer only.
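A sketch of the zero-copy variant for an 8-bit (Y800) buffer, using OpenCV's cv::Mat rather than the older IplImage (the function name bufferToMat is illustrative; the MemBuffer must stay valid while the wrapped data is used):

```cpp
#include <opencv2/core/core.hpp>
#include "tisudshl.h"

using namespace DShowLib;

// Wraps the pixel data of an 8-bit monochrome MemBuffer in a cv::Mat
// header without copying, then clones so the result owns its pixels.
cv::Mat bufferToMat( smart_ptr<MemBuffer> pBuffer )
{
    smart_ptr<BITMAPINFOHEADER> pInf = pBuffer->getBitmapInfoHeader();
    cv::Mat img( pInf->biHeight, pInf->biWidth, CV_8UC1, pBuffer->getPtr() );
    // Note: DIB image data is stored bottom-up; flip if your processing
    // expects top-down rows:
    // cv::flip( img, img, 0 );
    return img.clone();
}
```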