
Video Editors


Vurbal:
On the capture question, it's always best to capture at a higher resolution except when it isn't really a higher resolution. Yeah, it's just that simple.

Actually, here's what I mean. Pretty much every (consumer) capture device has one specific resolution at which the actual capture (sampling) happens. It may let you select half that resolution, which it achieves by simply throwing away half the samples. Obviously that's not what you want. Depending on the hardware, though, it may also offer higher resolutions, which involve processing the frames beyond what was sampled, and that's not good either.

I find the best way to look at it is to distinguish between resolution and definition. This isn't the technical meaning of definition because, in fact, there isn't one. It's just marketing speak but for lack of an alternative it's the word I use.

Any digital image is a collection of samples representing an analog image. Think of each pixel as an individual detail. I refer to that original resolution as the image's definition. In other words definition, as I use the term, refers to the level of captured detail. If you reduce the resolution you also reduce the definition. However if you increase the resolution, the definition remains the same. You haven't added any actual details. All you've done is told your computer to interpolate new details which may or (more likely) may not be accurate.
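
To make that concrete, here's a quick sketch of the round trip. This is just my own illustration (Python with the Pillow library, and a made-up frame grab filename), not anything your capture software does, but it shows why scaling down and back up can't restore the detail you threw away.

--- Code: ---
# Round-trip demo: downscaling discards samples, upscaling only interpolates.
# Python + Pillow are my own choice for illustration; "capture_frame.png" is
# a hypothetical frame grab from your capture.
from PIL import Image, ImageChops

original = Image.open("capture_frame.png").convert("RGB")
w, h = original.size

# Halving the resolution throws away samples, so definition is lost...
half = original.resize((w // 2, h // 2), Image.LANCZOS)

# ...and scaling back up only interpolates new pixels between the survivors.
upscaled = half.resize((w, h), Image.BICUBIC)

# The difference against the original is exactly the detail that never came back.
diff = ImageChops.difference(original, upscaled)
print("per-band (min, max) error after the round trip:", diff.getextrema())
--- End code ---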

In other words you should always capture at the highest resolution possible without exceeding the definition of the capture hardware. That's assuming both the hardware driver and capture software allow it, which they often don't. USB capture devices, in particular, are usually very limiting because they tend not to have hardware encoders and USB itself isn't particularly reliable for sustained throughput.

The question is whether there are other realistic options available now that most people have abandoned analog video entirely. Of course, at the end of the day the most important thing is how happy you are with the quality. If you aren't happy with it and end up deciding to get better hardware, I can only give you general advice. Back in the day Hauppauge capture cards had a good reputation, but I don't know if that's still true since they don't use hardware encoding any more. That might be a good starting point at least.

Happy Expat:
As I made all the original files at the "normal" setting for the capture device, I imagine that was its optimum. Is there a way to verify this?
Would the secondary processing, where I modified the audio and then re-saved in the same format and resolution, likely have adversely affected the actual video data, or is that likely to be negligible? ArcSoft use their own codec whereas PowerDirector (I think) use a more "industry standard" codec. Or am I talking complete rubbish?

40hz:
Any digital image is a collection of samples representing an analog image. Think of each pixel as an individual detail. I refer to that original resolution as the image's definition. In other words definition, as I use the term, refers to the level of captured detail. If you reduce the resolution you also reduce the definition. However if you increase the resolution, the definition remains the same. You haven't added any actual details. All you've done is told your computer to interpolate new details which may or (more likely) may not be accurate.
-Vurbal (March 06, 2014, 05:48 AM)
--- End quote ---

@V - thank you for that! That was the shortest and clearest illumination of the difference (and one-way interaction) between 'definition' and 'resolution' I've ever read. Next time I need to explain those terms to someone I'm going with your definition and example. :Thmbsup:

Vurbal:
As I made all the original files at the "normal" setting for the capture device, I imagine that was its optimum. Is there a way to verify this?
Would the secondary processing, where I modified the audio and then re-saved in the same format and resolution, likely have adversely affected the actual video data, or is that likely to be negligible? ArcSoft use their own codec whereas PowerDirector (I think) use a more "industry standard" codec. Or am I talking complete rubbish?
-Happy Expat (March 06, 2014, 06:03 AM)
--- End quote ---

I did some digging and came up with a reasonable amount of information on your card, plus a little information (and some educated guesses) about the company that sells it. First off, I would make sure never to buy anything from them again. Between the questionable, if not outright false, statements about their products, the supposed knock-offs, and their requirement to sign up for their forum before you can even read any posts, they seem slimy and untrustworthy.

The good news, though, is you seem to have a pretty much industry-standard USB capture device. It's one of at least 3 made by different Chinese OEMs, and possibly the best of the 3 since it looks like Hauppauge sells (or sold) essentially the same unit under their brand. The key components, in terms of capture software compatibility and video capture specs, are as follows:

Video capture chip: Empia EM2861 (WDM capture hardware)
Video processing chip: Philips SAA7113 (samples full-frame NTSC/PAL SD video to uncompressed YUV 4:2:2)
Audio processing chip: possibly an Empia EMP202 2-channel AC'97 (Dolby Digital) codec, or perhaps another chip supporting just 8000 Hz mono
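
Just to put numbers on why uncompressed YUV 4:2:2 over USB is a squeeze, here's a back-of-the-envelope calculation. The frame sizes and frame rates below are simply the standard PAL/NTSC SD values, assumed here for illustration:

--- Code: ---
# Rough data rate for uncompressed YUV 4:2:2 SD capture. 4:2:2 averages
# 2 bytes per pixel (Y for every pixel, U/V shared by each horizontal pair).
# Standard PAL/NTSC SD frame sizes are assumed here for illustration.

def yuv422_mb_per_s(width, height, fps):
    bytes_per_frame = width * height * 2        # 2 bytes/pixel for 4:2:2
    return bytes_per_frame * fps / 1_000_000    # decimal megabytes per second

print("PAL  720x576 @ 25 fps   :", round(yuv422_mb_per_s(720, 576, 25), 1), "MB/s")
print("NTSC 720x480 @ 29.97 fps:", round(yuv422_mb_per_s(720, 480, 29.97), 1), "MB/s")
# Both come out around 20 MB/s sustained, which doesn't leave much headroom
# on real-world USB 2.0 - one reason these devices hand you a filtered stream.
--- End code ---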

The captured video will be filtered automatically in the hardware, but that's pretty much unavoidable unless you're ready to shell out $200-$300 for a prosumer-level capture device from BlackMagic Design. On the bad side, the manufacturer's quality control is about what you'd expect from a low-end commodity electronics product. If the sound is bad, the only solution may be bypassing the device's audio entirely and capturing through your sound card.

That option would also mean going with a different capture program, but it doesn't look like ArcSoft's product is really suited to proper capturing anyway. Although it appears to stick to industry-standard formats, I question the quality of their encoders, particularly for realtime encoding. Also, the standard method for high-quality capturing is to capture to an intermediate lossless codec first and then encode to your final format as a separate step. ShowBiz 3.5 (or 5) may or may not be able to do that; it mostly depends on whether it gives you access to any VfW or ACM encoders you have installed.
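
If you do end up with a lossless intermediate (out of VirtualDub, say, with something like HuffYUV or Lagarith), the second step can be done offline with whatever encoder you like. Here's a rough sketch of that step driving ffmpeg from Python; the filenames and quality settings are placeholders rather than ShowBiz settings, and it assumes ffmpeg is installed and on your PATH:

--- Code: ---
# Second half of the capture-then-encode workflow: take the lossless capture
# and encode the delivery file as a separate, non-realtime step.
# Assumes ffmpeg is installed; the filenames here are hypothetical.
import subprocess

capture = "capture_lossless.avi"   # lossless intermediate from the capture app
final   = "capture_final.mp4"      # delivery file

subprocess.run([
    "ffmpeg", "-y",      # -y: overwrite the output file if it already exists
    "-i", capture,
    "-c:v", "libx264",   # H.264 video encode, done at leisure, not in realtime
    "-crf", "18",        # quality-based rate control; lower = better / bigger
    "-preset", "slow",   # slower preset = better compression, no realtime pressure
    "-c:a", "aac",
    final,
], check=True)
--- End code ---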

Back to the good news: it looks like VirtualDub should have no problem capturing from it, and it also supports capturing audio through a sound card at the same time. I've never used it for capturing myself, but I do know some people have horrible problems with audio sync, at least until they spend time tweaking some settings. It would definitely work with some great free lossless codecs that are designed for capture.

For maximum quality given your hardware, that's where I would start. If that's what you want, I'll do what I can to help, but you need to understand up front that it could involve quite a bit of frustration in the beginning. Or it might be a walk in the park. There just isn't any way to tell ahead of time.

Curt:
Curt: I hope I'm the "new generation youngster" you're referring to.
-Happy Expat (March 06, 2014, 03:44 AM)
--- End quote ---

^hahaha! Tyvärr! ("Unfortunately not!" in Swedish)
