Hardware for Converting S-Video to DV



Kay
March 31, 2004, 21:41:51
Stefan,

I know your company makes devices to convert analog video to DV. My application needs at least S-Video and Composite as sources of video.

This is for a medical instrumentation device. One of the system requirements deals with the delay through the device. That is, how much time does it take for the S-Video frame to be captured, converted to DV and output from your device?

If I want to keep this delay through the capture hardware to a minimum, which one of your devices would you recommend and what would this delay be?

Thanks in advance,

Peter

Johannes Vogel
April 1, 2004, 09:38:34
Hello,

I am sorry, but we do not have video to DV converters in our product line. Our video to FireWire converter outputs UNCOMPRESSED video only. The delay for this device depends on the selected video format and the processor (type and speed).

Kay
April 1, 2004, 15:03:17
Johannes,

Uncompressed video is fine. The format should be at least 640x480. The PC will most likely be a P4 at 3.2 GHz and 1 Gig of memory.

My concern is finding out the delay from S-Video or Composite going into your converter box and the equivalent frame coming out of it as Firewire. Then, the next delay is Firewire into the PC to the equivalent frame in system memory.

I'm also assuming that your box is compatible with the Imaging Control C++ Class Library?

Any help you can provide is appreciated.

Thanks,

Peter

Johannes Vogel
April 2, 2004, 18:32:11
Hello,

The converter is compatible with IC Imaging Control.

It does not buffer the video data. Therefore, it takes a few milliseconds to digitize and transfer the data into the PC. Since the video data is interlaced, the driver has to process the incoming data and put the odd and even fields together. The time needed for this task depends on the current system load (delayed delivery of interrupts) and on the processor type and speed.
Depending on the current video format, further processing (color space conversion) is needed. The time required for this task also depends on the processor type and speed.
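To picture what the field weaving involves: the two interlaced fields arrive as separate half-height images and are interleaved line by line into a full frame. A minimal standalone sketch in plain C++ (not actual driver internals; the function and variable names are made up for illustration):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Weave two interlaced fields into one full frame.
// The top field supplies the even frame rows, the bottom field the odd
// rows. Each field holds height/2 rows of `width` bytes (e.g. one
// 8-bit channel); the resulting frame holds `height` rows.
std::vector<uint8_t> weaveFields(const std::vector<uint8_t>& topField,
                                 const std::vector<uint8_t>& bottomField,
                                 int width, int height)
{
    std::vector<uint8_t> frame(static_cast<size_t>(width) * height);
    for (int row = 0; row < height / 2; ++row) {
        // Even frame rows come from the top field...
        std::copy(topField.begin() + row * width,
                  topField.begin() + (row + 1) * width,
                  frame.begin() + (2 * row) * width);
        // ...odd frame rows come from the bottom field.
        std::copy(bottomField.begin() + row * width,
                  bottomField.begin() + (row + 1) * width,
                  frame.begin() + (2 * row + 1) * width);
    }
    return frame;
}
```

The real driver does this per color plane or packed pixel format, but the interleaving pattern is the same.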

Every image buffer contains a time stamp that is related to a global time. Using these time stamps it is easy to measure the delay on a specific system.
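As a sketch of how those time stamps can be used to measure the delay on a specific system (plain standalone C++; the struct and function names are my own for illustration, not IC Imaging Control API):

```cpp
#include <cstdint>
#include <vector>

// Hypothetical record of one delivered image buffer: the driver stamps
// each buffer with the global reference clock time at which its data
// entered the stream; we note the time our handler actually saw it.
struct BufferTiming {
    int64_t stampedMs;   // driver time stamp (global reference clock)
    int64_t deliveredMs; // time the frame-ready handler ran
};

// Average capture-to-delivery delay over a series of buffers.
int64_t averageDelayMs(const std::vector<BufferTiming>& timings)
{
    if (timings.empty()) return 0;
    int64_t sum = 0;
    for (const auto& t : timings)
        sum += t.deliveredMs - t.stampedMs;
    return sum / static_cast<int64_t>(timings.size());
}
```

Collecting a few hundred such pairs on the target machine gives a realistic delay figure for that exact system load.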

If you have tough boundary conditions as far as the delay is concerned, I suggest you order a converter for a free evaluation and measure the delay.

Kay
April 2, 2004, 19:47:40
Johannes,

I will buy one DFG/SV1 for my prototyping work. Where on the internet do I buy one?

I already requested a quote from your company for it.

I need to get one by Tuesday, 4/6; is that possible?

Thanks,

Peter

Kay
April 2, 2004, 21:45:20
Johannes,

Thanks for the reply,

My application combines video and medical sensor signals. The video and signals must be linked with an accuracy of a frame. This is the reason why it's important for me to know the number of milliseconds from S-Video frame out of the camera to the moment that the FrameReady handler runs in the PC for that frame. I already know what my delays are afterwards, that is, the delays during processing in the FrameReady and other threads.

I think that I understand your explanation, but allow me to ask you some new clarifying questions on this:

1 - Since you mentioned that the DFG/SV1 device does not buffer the data, are you saying that the first field of analog video is digitized and sent to the Firewire port, and then the next field of analog video is digitized and sent, in sequence, after the first?

2 - Are you saying that the Windows 2000/XP WDM driver takes care of putting the two fields together into a frame? I think that's what you said but I wanted to make sure that I do understand. Just so you know, if your driver provided me with the two fields as two separate buffers, that would be okay as well!

3 - My test bed is running with a P4 @ 3.2 GHz with at least 1 GByte of memory. The video format is RGB24. If, during prototyping, I need more speed, this application is allowed to use a more expensive solution, something like a dual Xeon motherboard, with each CPU at 3+ GHz. I already have the IC Imaging Control library code running in its own thread. Other threads are handling the display code, saving to disk, etc. I'll be prototyping for the delays, but I'm wondering what I should expect for the values I'm going to see, now that you know more about what I'm trying to do.

4 - What should I do in order to prevent extra time wasted doing the color space conversion that you mentioned in your reply? My application needs the display running at 1024x768, 32 bit color, but aside from that, I do have some flexibility.

5 - Please clarify the use of time stamps with my application. Where is the 'global time' number defined? If I understand this better, I may be able to develop a way to sync together the video and medical signals data!

Thanks for all your great help,

Peter

Johannes Vogel
April 5, 2004, 10:28:43
Hello,

First of all, I would like to clarify which type of device we are talking about. The DFG/SV1 is a PCI frame grabber. On the other hand, you are talking about DV and FireWire. Our video to FireWire converter is called DFG/1394-1e. I assume you are using the DFG/1394-1e.

The most accurate time stamp generation is achieved by setting the converter to use the UYVY video format. In this case, required color space conversions are done outside the driver and will therefore not affect the time stamp generation. The driver queries the current time and stores this information as the time stamp in a buffer before it is sent into the stream.
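For reference, a UYVY to RGB24 conversion of the kind done outside the driver can be sketched like this (standalone C++; the BT.601-style full-range coefficients are an assumption, and the actual conversion the library performs may differ):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

static uint8_t clamp8(int v)
{
    return static_cast<uint8_t>(std::min(255, std::max(0, v)));
}

// Convert packed UYVY data (4 bytes per 2 pixels: U Y0 V Y1) to RGB24.
// Each macropixel shares one U/V chroma pair between two luma samples.
std::vector<uint8_t> uyvyToRgb24(const std::vector<uint8_t>& uyvy)
{
    std::vector<uint8_t> rgb;
    rgb.reserve(uyvy.size() / 4 * 6);
    for (size_t i = 0; i + 3 < uyvy.size(); i += 4) {
        int u = uyvy[i] - 128;
        int v = uyvy[i + 2] - 128;
        int ys[2] = { uyvy[i + 1], uyvy[i + 3] };
        for (int y : ys) {
            rgb.push_back(clamp8(y + static_cast<int>(1.402 * v)));             // R
            rgb.push_back(clamp8(y - static_cast<int>(0.344 * u + 0.714 * v))); // G
            rgb.push_back(clamp8(y + static_cast<int>(1.772 * u)));             // B
        }
    }
    return rgb;
}
```

The point of Johannes' advice is that this per-pixel work happens in the application's time budget, after the time stamp has already been taken.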
In order to synchronize data to a buffer, you have to query the current time the very moment the data is sent to your application (interrupt or callback from the I/O hardware). If you store the time and the data together, you are able to determine the buffer that contains the field that was being processed by the converter while the data was acquired through the I/O hardware.

The current time is returned by the method Grabber::getCurReferenceTime.
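Putting the synchronization idea into a standalone sketch: store each sensor sample with the reference time it was acquired, then match it against the frame whose time stamp lies nearest on the same clock (plain C++; the struct and function names are illustrative, not library API):

```cpp
#include <cstdint>
#include <vector>

// A captured frame's reference-clock time stamp (ms) plus an index
// identifying its image buffer.
struct StampedFrame {
    int64_t timestampMs;
    int bufferIndex;
};

// Return the buffer index of the frame whose time stamp is closest to
// a sensor sample taken at `sampleTimeMs` on the same reference clock,
// or -1 if no frames are available.
int matchFrame(const std::vector<StampedFrame>& frames, int64_t sampleTimeMs)
{
    int best = -1;
    int64_t bestDiff = -1;
    for (const auto& f : frames) {
        int64_t diff = f.timestampMs > sampleTimeMs
                           ? f.timestampMs - sampleTimeMs
                           : sampleTimeMs - f.timestampMs;
        if (best == -1 || diff < bestDiff) {
            bestDiff = diff;
            best = f.bufferIndex;
        }
    }
    return best;
}
```

With frames arriving every 40 ms (PAL) or ~33 ms (NTSC), nearest-time-stamp matching gives the frame-level accuracy Peter asked about, provided both clocks derive from the same reference time.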