Topic: What's better: modern built-in motherboard sound chip or old sound card?  (Read 26252 times)

superticker

  • Supporting Member
  • Joined in 2006
  • Posts: 143
OK, I've reread your post a couple dozen times ...
My explanation isn't complete if you're not familiar with OS internals, and that discussion is really a separate thread.  Briefly, every OS has two schedulers: one for the application layer and another for the driver layer.  In Windows, the application-layer scheduler time-slices around every 200 ms, which is far too coarse for real-time scheduling, so--in contrast to a regular real-time OS--nothing real-time can be scheduled by the Windows or Unix application scheduler.  So we turn to the driver scheduler instead.
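
If you want to see that coarseness for yourself, here's a minimal sketch (assuming a Windows build environment): it requests a 1 ms sleep and prints how long the application scheduler actually takes to hand the thread back.  The exact figures vary by system and timer configuration.

```cpp
// Minimal sketch: ask the Windows application scheduler for a 1 ms sleep
// and print how long it actually blocks.  The wakeup is governed by the
// scheduler's time-slicing granularity, not by the 1 ms you requested.
#include <windows.h>
#include <cstdio>

int main() {
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);
    for (int i = 0; i < 5; ++i) {
        QueryPerformanceCounter(&t0);
        Sleep(1);                        // request a 1 ms sleep
        QueryPerformanceCounter(&t1);
        double ms = 1000.0 * (t1.QuadPart - t0.QuadPart) / freq.QuadPart;
        std::printf("requested 1 ms, slept %.2f ms\n", ms);
    }
    return 0;
}
```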

The driver scheduler in Windows is really stupid (not resource-aware), and it has only one priority.  (Regular real-time OSes have 64 at the driver level.)  We're pretty limited there too.  But Microsoft makes do with a single driver process that performs real-time decompression of audio and video when you provide the codec plug-ins.

However, some codec plug-ins are more powerful (and slower) than others.  What you would like to do is select the simplest, fastest one that provides the sound/video quality you need.  I would schedule the fastest one first, but if your audio player wants more sound quality than that codec can deliver, then the decompressor engine needs to pick the next slower one down the priority chain.

So at the top of the codec priority chain are the high-speed, quality-limited codecs, and at the bottom are the slow, high-fidelity ones.  You want the decompressor engine to pick the fastest, simplest one--but not too simple, otherwise your sound quality will suffer.
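
To make the selection rule concrete, here's a minimal sketch--the Codec struct and both of its fields are invented for illustration, not anything Windows actually exposes:

```cpp
// Hypothetical sketch of the selection rule above: walk the priority chain
// (fastest first) and take the first codec whose quality meets the request.
#include <vector>

struct Codec {
    int priority;    // 1 = fastest/simplest; higher = slower, higher fidelity
    int maxQuality;  // best sound quality this codec can deliver
};

// Returns the first codec in priority order that satisfies the request --
// the fastest one that is "not too simple" for the quality you asked for.
const Codec* pickCodec(const std::vector<Codec>& chain, int wantedQuality) {
    for (const Codec& c : chain)         // chain assumed sorted fastest-first
        if (c.maxQuality >= wantedQuality)
            return &c;
    return nullptr;                      // nothing can deliver that quality
}

int main() {
    std::vector<Codec> chain = {{1, 3}, {2, 5}, {3, 9}};
    const Codec* pick = pickCodec(chain, 6);   // falls through to priority 3
    return pick ? 0 : 1;
}
```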

Thanx superticker.  Do you know how to set these priorities?
That's the easy question.  The hard question is how you decide on these selection priorities so that the trade-off between performance and sound/video quality comes out best--and that's well outside the scope of this thread.  What sound-card and multimedia-player vendors do is provide the codecs as a "kit" with their priorities set up for you.  What end-users do is install multiple kits for WMA, WinAmp, RealPlayer, etc., so that the kits and their priorities conflict with each other.  Now you've got a mess, because a codec that WinAmp might rate as a 4 in its kit might be rated a 6 in ATI's multimedia player.

Also, some of the better (more expensive) codecs are smarter and faster with higher sound quality.  Sorting through this incompatibility mess is also beyond this thread.

Anyway, from an Administrator account, open the Sounds control panel and click the Hardware tab.  That should show you the codec kits.  Open a given kit; for example, open Audio Codecs.  Inside that kit is a listing of the individual codecs.  Double-click one to show its properties, including its priority, which you can change or disable.  Now I've given you just enough information to be dangerous.  I'm not responsible for what happens after this.
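
If you'd rather inspect the same list programmatically, the Win32 Audio Compression Manager API exposes it.  Here's a minimal sketch; I believe acmDriverEnum() walks the codecs in priority order and acmDriverPriority() can re-rank or disable one, but verify that against the MSDN docs before relying on it:

```cpp
// Minimal sketch: enumerate the installed ACM audio codecs and print their
// long names, flagging disabled ones.  Build as ANSI (non-Unicode) and link
// with msacm32.lib.
#include <windows.h>
#include <mmsystem.h>
#include <msacm.h>
#include <cstdio>

BOOL CALLBACK driverCallback(HACMDRIVERID hadid, DWORD_PTR /*inst*/, DWORD fdwSupport) {
    ACMDRIVERDETAILS details = {};
    details.cbStruct = sizeof(details);
    if (acmDriverDetails(hadid, &details, 0) == MMSYSERR_NOERROR)
        std::printf("%s%s\n", details.szLongName,
                    (fdwSupport & ACMDRIVERDETAILS_SUPPORTF_DISABLED) ? "  [disabled]" : "");
    return TRUE;   // keep enumerating
}

int main() {
    acmDriverEnum(driverCallback, 0, 0);   // acmDriverPriority() would re-rank/disable
    return 0;
}
```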

I would suggest you find a website of audiophiles who have spent hours playing with different codecs, to work out how to balance the priorities within your own codec kits.

For me, I just disable everything except what my system needs.  That makes my system very deterministic as far as codec selection goes.
« Last Edit: November 09, 2006, 09:30 PM by superticker »

superticker

  • Supporting Member
  • Joined in 2006
  • Posts: 143
superticker: I'd say that hardware has a good deal to do with it, once we deal with stuff more advanced than dumb raw audio playback
I agree, but if I'm understanding the original post correctly, this user is getting stutters with single-stream playback.  That happens because the input FIFO buffer into the sound chip ran dry before the driver could refill it.

As far as fancy effects go, on Windows they must all be done in hardware--even the driver layer isn't fast enough to assemble real-time effects on the fly.  But even though the hardware is doing the effects, the driver is still responsible for keeping the chip's FIFO buffers full.  You'll get stuttering if those buffers run dry, regardless of the hardware you're using.
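
This is also why playback software queues several buffers ahead.  A minimal waveOut sketch of the pattern (it just queues silence; the buffer count and sizes are arbitrary choices for illustration):

```cpp
// With only one buffer, any hiccup in driver service lets the sound chip's
// FIFO run dry (a stutter); keeping several buffers queued gives the driver
// that much slack.  Link with winmm.lib.
#include <windows.h>
#include <mmsystem.h>
#include <vector>

int main() {
    WAVEFORMATEX fmt = {};
    fmt.wFormatTag      = WAVE_FORMAT_PCM;
    fmt.nChannels       = 2;
    fmt.nSamplesPerSec  = 44100;
    fmt.wBitsPerSample  = 16;
    fmt.nBlockAlign     = fmt.nChannels * fmt.wBitsPerSample / 8;
    fmt.nAvgBytesPerSec = fmt.nSamplesPerSec * fmt.nBlockAlign;

    HWAVEOUT hwo;
    if (waveOutOpen(&hwo, WAVE_MAPPER, &fmt, 0, 0, CALLBACK_NULL) != MMSYSERR_NOERROR)
        return 1;

    // Four 100 ms buffers queued ahead: ~400 ms of slack before an underrun.
    const int bufBytes = fmt.nAvgBytesPerSec / 10;
    std::vector<std::vector<char>> bufs(4, std::vector<char>(bufBytes, 0));
    std::vector<WAVEHDR> hdrs(4);
    for (int i = 0; i < 4; ++i) {
        hdrs[i] = WAVEHDR{};
        hdrs[i].lpData         = bufs[i].data();
        hdrs[i].dwBufferLength = bufBytes;
        waveOutPrepareHeader(hwo, &hdrs[i], sizeof(WAVEHDR));
        waveOutWrite(hwo, &hdrs[i], sizeof(WAVEHDR));   // keep the FIFO fed
    }
    // A real player would refill each buffer as its WHDR_DONE flag comes back.
    Sleep(500);
    for (int i = 0; i < 4; ++i)
        waveOutUnprepareHeader(hwo, &hdrs[i], sizeof(WAVEHDR));
    waveOutClose(hwo);
    return 0;
}
```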

A software interrupt in the Windows driver layer can take as much as 50 ms to get service from the OS.  An application running at the highest priority can take as long as 250 ms to get service.  Some might find this funny, but counting the mouse pulses actually gets more priority than servicing software interrupts. :-)   I think that's pretty sad.

Don't move your mouse if you're recording something really important on Windows.  :D    Edit: BTW, I'm joking--the A/D converters (on the sound chip) have big enough output buffers to handle the 50 ms delay for driver service.  But if you really moved that mouse ... that 50 ms might get stretched some.  (And that's not a joke.)
« Last Edit: November 09, 2006, 06:23 PM by superticker »

f0dder

  • Charter Honorary Member
  • Joined in 2005
  • Posts: 9,153
  • [Well, THAT escalated quickly!]
Hrm, are "mS" referring to milli- or microseconds?

Your above post on scheduling etc. doesn't really sound like the Windows NT I'm familiar with, as described in "Inside Windows 2000" (I think it's been renamed "Windows Internals" for the XP+ version).
- carpe noctem

superticker

  • Supporting Member
  • Joined in 2006
  • Posts: 143
Real-time OS drivers and their scheduling
« Reply #28 on: November 10, 2006, 10:51 AM »
Hrm, are "mS" referring to milli- or microseconds?
That would be milliseconds; microseconds would be µs.  To clarify, what I'm saying here is that you may have to wait up to 50 ms for someone else's driver to finish service before Windows gets to yours.  If your driver were the only driver in the system, you wouldn't have to wait this long.  In other words, you're not waiting on the OS (as you're thinking), but rather on another driver to complete service.  The point is: Microsoft cannot guarantee how long someone else's driver will take to complete, whereas a real-time driver does carry that guarantee, because of the tiered driver design.  What does this mean?

A conventional OS is only able to service about 30 interrupts a second before context-switch overhead brings it to its knees.  In contrast, a real-time OS (RTOS) commonly does 3000 interrupts a second, and the service latency can be as short as 50 µs with RTOSes such as ThreadX.  RTOSes can achieve this because their driver model is designed differently.  For example, the first tier of interrupt service is done strictly in hardware: the hardware interrupt is thrown, and the processor grabs the status register of the device that threw it and pushes it onto the stack (this occurs automatically in some modern processors).  It may also stash (and set a pointer to) the data, if there is any.  The only software operation is scheduling a software interrupt, with its priority, to service the incoming data later.
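
In pseudocode terms, the two tiers look roughly like this--every name here is invented and stubbed so the sketch compiles; no particular RTOS spells it exactly this way:

```cpp
// Generic sketch of the two-tier interrupt service pattern described above.
#include <cstdint>
#include <functional>
#include <queue>

struct Device {
    uint32_t read_status()     { return 0; }        // stub: device status register
    const void* pending_data() { return nullptr; }  // stub: incoming data
};

// A toy "software interrupt" queue standing in for the RTOS scheduler.
static std::queue<std::function<void()>> deferred_work;

// Tier 1 -- the hardware ISR: do the bare minimum with interrupts masked,
// then schedule the real work at a software priority and get out.
void hardware_isr(Device& dev) {
    uint32_t status  = dev.read_status();       // often pushed by hardware itself
    const void* data = dev.pending_data();
    deferred_work.push([=] {                    // tier 2 runs later, unmasked
        (void)status; (void)data;               // ... the actual (slow) service
    });
    // return immediately -- daisy-chained devices can now get service too
}

int main() {
    Device dev;
    hardware_isr(dev);                          // a hardware interrupt "fires"
    while (!deferred_work.empty()) {            // the scheduler drains tier 2
        deferred_work.front()();
        deferred_work.pop();
    }
    return 0;
}
```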

In contrast, a conventional OS (like Windows) would now try to service the data right on the hardware interrupt.  That means daisy-chained devices (devices sharing that interrupt) will not have any service requests honored until that driver finishes servicing the data--and who knows how long that could take?  If that happened in a real-time application, your cruise missile would hit a tree.

What we try to do with OSes like Windows is nest the system design so that an RTOS sits in front of the host OS.  The RAID disk controller on your system is an example of this.  When you flash the firmware on your RAID controller, you're updating its RTOS and the application layer that handles controlling the disk arrays.  The host OS only directs disk activity, whereas the RAID RTOS is in charge of the actual real-time disk operations.  SCSI disks also work this way.

Your above post on scheduling etc. doesn't really sound like the Windows NT I'm familiar with, as described in "Inside Windows 2000" (I think it's been renamed "Windows Internals" for the XP+ version).
Microsoft sells a product called Real-time Windows NT, which is a "scalable" version of their Windows product, but the term "real-time" is a misnomer in this context.  For an OS to be real-time in the computer-science sense, it has to guarantee service times--which includes deadline scheduling--and the Windows scheduler doesn't do any of that.  Moreover, there's way too much context-switch overhead to ever make Windows a real-time OS, and you wouldn't want to anyway.

Remember, we design the scheduling of a conventional OS to maximize application-layer efficiency (average service time)--which is the correct goal there.  In contrast, we design an RTOS for deterministic scheduling, where a hardware servicing operation must complete within, say, 3 ms or the disk head will miss the targeted sector.  One design goal is mutually exclusive of the other.

We can talk more about RTOSes, but we should start a different thread.  You can also find more about them if you Google "embedded systems design".  Warning, this is a popular topic.  :)

f0dder

  • Charter Honorary Member
  • Joined in 2005
  • Posts: 9,153
  • [Well, THAT escalated quickly!]
I know NT isn't real-time, but 50 ms for an IRQ to be handled sounds ludicrous. And AFAIK, data processing isn't done directly in the IRQ handler; instead some state information is saved and passed down as an IRP, and the IRQ handler itself finishes quickly. IIRC Linux does somewhat the same by having "top" and "bottom" halves for its IRQ handlers.

Hadn't heard about real-time NT--are you sure you're not thinking of NT Embedded? Just because something is embedded doesn't mean it has to be hard real-time :)

IIRC there's also just one scheduler in the whole of NT, used for both usermode and kernelmode stuff--although there's a distinction between usermode and kernelmode threads. The "scheduler" also isn't a separate modular part; it's interwoven with most of the NT kernel because of its particular design.

As for priority levels, there are 32 of them, with one band being REALTIME. While that priority isn't strictly "realtime" in computer-science terms, it's good enough that you can lock up a single-CPU system if you don't manually relinquish control...

Btw., it might be a good idea for a moderator to cut these last few posts off into a separate thread so as not to pollute the rest of this one :)
- carpe noctem

superticker

  • Supporting Member
  • Joined in 2006
  • Posts: 143
Real-time OS drivers and their scheduling
« Reply #30 on: November 11, 2006, 03:46 AM »
This thread has now been moved to the programmer's area since it's about driver design.  I have a reply there.  https://www.donation...31.msg43263#msg43263

I know NT isn't real-time, but 50 ms for an IRQ to be handled sounds ludicrous. And AFAIK, data processing isn't done directly in the IRQ handler; instead some state information is saved and passed down as an IRP, and the IRQ handler itself finishes quickly. IIRC Linux does somewhat the same by having "top" and "bottom" halves for its IRQ handlers....
Continued on the new thread.

gjehle

  • Member
  • Joined in 2006
  • Posts: 286
  • lonesome linux warrior
well,
when it comes to audio you will have, at some point, an analog signal,
and unless you're using digital outputs, you'll have it on your soundcard (aka the classic speaker/line out).

while digital electronics operate within well-defined limits and are fairly easy to master, what it all comes down to is your digital/analog converter.

THAT's actually the part where a lot of (lower- to middle-end) cards have a poor (cheap) design.
a lot of cards use a simple R/C combo to smooth the signal; that's cheap (2 parts) and easy.
but if you want good quality, it sucks.
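
if you want to see how lazy a single R/C pole is, here's a quick sketch that simulates one digitally as a one-pole IIR (the cutoff and sample rate are just example numbers):

```cpp
// quick sketch: a first-order R/C low-pass simulated as a one-pole IIR.
// its roll-off is only 6 dB/octave, which is why a lone R/C smoother after
// the DAC lets so much ultrasonic junk through.
#include <cmath>
#include <cstdio>

int main() {
    const double PI = 3.14159265358979323846;
    const double fs = 44100.0;                 // sample rate, Hz
    const double fc = 20000.0;                 // R/C cutoff, Hz
    const double alpha = 1.0 - std::exp(-2.0 * PI * fc / fs);

    double y = 0.0;                            // the capacitor "voltage"
    for (int n = 0; n < 8; ++n) {              // feed in a single impulse
        double x = (n == 0) ? 1.0 : 0.0;
        y += alpha * (x - y);                  // y[n] = y[n-1] + a*(x[n]-y[n-1])
        std::printf("y[%d] = %.4f\n", n, y);
    }
    return 0;
}
```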

also (sorry if i'm repeating something that has already been said, i haven't read _all_ of it), on-board cards are more prone to noise; that's one reason why a lot of middle- to high-end cards are not even pci(e) but firewire or usb, just to get away from all the nasty noise-producing electronics inside the computer case.

if you want sound = get onboard
if you want decent sound = get a card
if you want good sound = get an external soundcard

and if your DAC sucks (simple R/C smoother), the digital part can be as good as it gets; it'll be ruined just before it reaches your speakers.

and why do a lot of manufacturers use a simple R/C?
it's easy to build cheap digital electronics (you really don't have to care about noise that much if it's digital anyway),
but it's hard and expensive to build low-noise analog devices.
they also tend to use up quite a bit of PCB real estate.

ok, that's all for my 2ct
« Last Edit: November 11, 2006, 09:06 PM by gjehle »

superticker

  • Supporting Member
  • Joined in 2006
  • Posts: 143
Digital filtering for PC sound reproduction
« Reply #32 on: November 12, 2006, 02:15 AM »
... why do a lot of manufacturers use a simple R/C?
It's easy to build cheap digital electronics (you really don't have to care about noise that much if it's digital anyway).
But it's hard and expensive to build low-noise analog devices.
They also tend to use up quite a bit of PCB real estate.

All the above is true, but there's a more important--theoretical--reason why a cheap first-order RC filter is used, and that has to do with distortion.  Any analog filter is a "causal filter" in that it can't know the future.  In contrast, digital FIR filters can be non-causal, in that the values of points both before and after time zero are known.  If we balance the filter coefficients across time zero, then we have a "zero-phase" filter across all frequencies.

Having zero-phase delay across all frequencies creates a distortionless filter for our application, so we really favor digital filtering over analog filtering when it's feasible.

For audio playback, the strategy is to oversample the signal by 4x (or better), using a first-order anti-aliasing analog filter (to minimize phase distortion) with its cutoff at ultrasonic frequencies--say 45 or 90 kHz.  Then we run it through a zero-phase digital filter to achieve a "perfect" cutoff at whatever frequency we want, say 45/2 kHz.

Remember, for a non-causal filter the cutoff can be perfectly sharp, whereas for a causal filter (which includes any analog filter) there will always be response roll-off.  The higher the order, the sharper the roll-off--and the greater the phase delay (leading to more distortion).  So our cheap first-order RC filter produces less ultrasonic distortion than a higher-order analog one.  The assumption is that standard audio equipment won't reproduce those ultrasonic frequencies (say, at 90 kHz) anyway.
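
For anyone who wants to see the non-causal trick in code, here's a minimal sketch of a symmetric (linear-phase) FIR low-pass, indexed against its center tap so it comes out zero-phase--the tap count and cutoff are arbitrary example values:

```cpp
// Because the taps are mirror-symmetric about the center (h[k] == h[N-1-k]),
// every frequency is delayed by the same (N-1)/2 samples; centering the tap
// window on the output sample makes the filter effectively zero-phase.
#include <cmath>
#include <vector>
#include <cstdio>

int main() {
    const double PI = 3.14159265358979323846;
    const int N = 31;                       // odd tap count; center at (N-1)/2
    const double fc = 0.25;                 // cutoff as a fraction of fs

    // Windowed-sinc taps, symmetric about the center.
    std::vector<double> h(N);
    for (int k = 0; k < N; ++k) {
        double m    = k - (N - 1) / 2.0;
        double sinc = (m == 0.0) ? 2 * fc : std::sin(2 * PI * fc * m) / (PI * m);
        double hann = 0.5 - 0.5 * std::cos(2 * PI * k / (N - 1));
        h[k] = sinc * hann;
    }

    // Zero-phase output: look (N-1)/2 samples into the "future" so the tap
    // window is centered on sample n instead of ending at it.
    std::vector<double> x(200, 0.0); x[100] = 1.0;   // test impulse
    std::vector<double> y(200, 0.0);
    for (int n = 0; n < 200; ++n)
        for (int k = 0; k < N; ++k) {
            int j = n + (N - 1) / 2 - k;             // non-causal indexing
            if (j >= 0 && j < 200) y[n] += h[k] * x[j];
        }
    std::printf("peak stays put at n=100: y[100] = %.4f\n", y[100]);
    return 0;
}
```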

I should add that my area is scientific instrumentation design, not sound-card design.  Frankly, I couldn't tell you whether the PC sound chips are being designed right or not.  But it wouldn't cost any more to design them with 4x oversampling in mind and zero-phase digital filtering at their outputs.  I do know that some audio-component CD players employ 4x and 8x oversampling and digital comb filtering, but I really can't speak for the PC sound-chip world.  Perhaps someone else knows for sure or has a link to a sound-chip spec sheet.
« Last Edit: November 12, 2006, 02:17 AM by superticker »

superboyac

  • Charter Member
  • Joined in 2005
  • Posts: 6,347
All the above is true, but there's a more important--theoretical--reason why a cheap first-order RC filter is used, and that has to do with distortion. ...
yeah.......what he said. ;)

mouser

  • First Author
  • Administrator
  • Joined in 2005
  • Posts: 40,896
my motherboard decided to help me solve this dilemma by not working..
so now i'm off to buy a sound card..  :tellme:

superticker

  • Supporting Member
  • Joined in 2006
  • Posts: 143
my motherboard decided to help me solve this dilemma by not working..
so now I'm off to buy a sound card..  :tellme:
You never said whether it's a hardware or a software problem; 95% of the time it's software.  Open the Sounds control panel and click the Hardware tab.  Be sure all the codec kits that should be there are there.  If you're in doubt, reinstall the kits from the CD that came with your motherboard.  (You can also check the registry if you know what you're looking for.)

Moreover, open up each kit and verify all its driver components are running right.  Double-click Driver Details to see their properties.  See if all the drivers (*.sys) are signed; although they may not have been signed originally, there are probably signed versions available by now.  If you think there's driver corruption and you haven't run chkdsk recently, do it now.  Any disk with more than 7% bad sectors should be discarded; it has mechanical problems.

If it really is a hardware problem, chances are you'll get a little something out of it--maybe a crackle.