Is CD audio quality good enough for the final delivery of music?


According to Wikipedia, the audio contained in a CD consists of two-channel signed 16-bit Linear PCM sampled at 44,100 Hz.



Of course, both the sample rate and the bit depth could be increased to improve the quality; e.g., according to Wikipedia, Blu-ray audio uses 24-bit/96 kHz or 24-bit/192 kHz linear PCM.



But can anyone hear the improvement? I am fairly sure that I cannot. For a start, I cannot hear up to 22 kHz (the Nyquist frequency). A web search finds plenty of opinions, but many are clearly nonsense and it is hard to determine which, if any, are the result of scientific testing, e.g. double-blind testing.
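
As a rough illustration of the numbers involved (simple arithmetic from the figures above, nothing more):

    import math

    sample_rate = 44_100   # Hz, per channel
    bit_depth = 16         # bits per sample
    channels = 2

    nyquist = sample_rate / 2                            # highest representable frequency
    dynamic_range_db = 20 * math.log10(2 ** bit_depth)   # about 6.02 dB per bit
    raw_bitrate = sample_rate * bit_depth * channels     # uncompressed data rate

    print(f"Nyquist:        {nyquist:.0f} Hz")           # 22050 Hz
    print(f"Dynamic range: ~{dynamic_range_db:.1f} dB")  # ~96.3 dB
    print(f"Raw bitrate:    {raw_bitrate} bit/s")        # 1411200 bit/s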



I have some Blu-rays of music (with and without video) and I find them better in some ways, but I think that factors other than the bit depth or sample rate are the explanation.



The bass is often better, which might just be because they were produced with the expectation of being played on a system with a subwoofer.



The rear channels add some atmosphere. This is subtle but can enhance the impression of really being present at a performance.



Are there any good quality studies of whether enhancing the sample rate or bit depth could be detected by humans?



Clarifications:



I am only asking about the final delivery to the consumer. The merits of higher quality in the original capture or editing are an interesting but separate question.



I am not considering cases in which further processing is expected.



I am only asking whether the CD standard is good enough, not whether it is more than good enough, e.g. whether a lower quality would also suffice. Again, an interesting but separate question.



I am not asking about the value of additional channels. I mention Blu-ray audio because it is an example of greater bit depth and higher sample rate. However, that comparison is complicated by the extra channels.



Finally, of course, poor recordings exist. However good your tools are, they can be badly used; but the existence of poorly made recordings does not, by itself, invalidate the standard.

– badjohn · asked May 25 at 12:52 · edited May 26 at 15:16 · tagged: recording

  • The close vote may be because some of the answers are beside the point (audio quality while recording and mixing, quality of audio compression, anecdotal evidence and personal preferences ...) and for some people this indicates that the question was unclear or too broad or asking for opinions. They should just downvote the answers they didn't like, but there seems to be a no-downvotes tradition on this site.

    – Your Uncle Bob, May 26 at 17:33

  • The one existing close vote is because the question is primarily opinion based. There's no objective measure of "good enough". @YourUncleBob I would never vote to close based on answers; that seems... inexplicable. I have also downvoted the question. I do not have a personal "no downvotes" policy. Also, to me this is off-topic because it's more about consumer audio than it is about audio production. And the number of digital audio quality conversations/arguments that exist online is manifold and, IMHO, completely unhelpful. No need for yet another here.

    – Todd Wilcox, May 28 at 14:11

  • @ToddWilcox Indeed, there is plenty, too much, on the net about the subject. Note that I requested "Are there any good quality studies". My hope was to get a good quality answer.

    – badjohn, May 28 at 14:23

  • If there were good quality answers available, they would have been found in a cursory web search. The other side of this is, nobody is going to be delivering CD audio anymore. CDs are dead. Authoring today is for streaming services, iTunes, YouTube, and video of various formats. The producer and mastering engineers don't choose the delivery resolution and bit depth; they deliver what the services request/require. Delivering in a format different from the service's format means it will be re-encoded after delivery, and most producers want to avoid that. IMHO this question is pointless.

    – Todd Wilcox, May 28 at 14:26

7 Answers

Answer 1 (score 31)

Tentatively: Yes. As a medium for final delivery, I've not yet seen any strong evidence that a well-designed 16-bit, 44,100 Hz system can be significantly improved on as a vehicle for listening to the final mix (and therefore, from a musician's perspective, as a vehicle for presenting the final mix).



When I looked into this a few years back, I was disappointed to find a relative lack of seemingly well-conducted tests compared to the level of interest in the subject. I certainly didn't find anything that seemed to strongly suggest that there was any major listener benefit in 'higher-definition' audio. (I'm writing this answer partly because I'd be very grateful if anyone knows different!)



Just to provide a little further reading - here are some anecdotes of tests that deal with bit depth and sample rate.



Of course a particular 16-bit listening experience might be devalued by recording levels being too low (resulting in a perceptible noise floor), OR by over-aggressive limiting of peaks to stay within the headroom. That's an example of where recording initially to a higher bit depth would have been valuable. Likewise, a 44.1 kHz DAC with a badly-designed anti-aliasing filter might sound bad - but this doesn't seem to be inevitable with the current state of technology.



Edit: I have just found this paper, published since I last explored this, that concludes that "there was a small but statistically significant ability to discriminate between standard quality audio (44.1 or 48 kHz, 16 bit) and high resolution audio (beyond standard quality)", based on a review of a number of experiments in this area. However, it also states that this ability to discriminate is far more significant when subjects were trained, and it still concludes that "the ability to discriminate high bit depth, e.g., 24 bit versus 16 bit, remains an open question". So it still seems reasonable to call CD quality 'good enough', even if 'very slightly better' may be possible.






– topo morto · answered May 25 at 16:20 · edited May 26 at 11:32

Answer 2 (score 23)

The short answer: 16-bit 44.1 kHz PCM encoding, when properly sampled
and played back, is close enough to perfect reproduction for human
hearing in virtually all situations that it's unequivocally "good
enough."



The main caveats:



  1. The material must be recorded and reproduced with properly
    engineered sampling and playback systems. While this is not
    particularly difficult or expensive for a competent engineer using
    modern technology, there are a number of mistakes the engineer
    (both equipment designer and recording engineer) could make that
    could be mitigated by a higher sampling rate and/or more bit depth.

  2. There are situations where 16-bit depth will have an audible noise
    floor. These do not occur "naturally", and the vast majority of
    listeners and even audio engineers would have neither the
    inclination nor care to spend the money to produce an environment
    where this would occur. (The noise floor is below audible in places
    such as a soundproofed cinema in a quiet neighbourhood.)

  3. This applies to the storage format only: intermediate processing
    uses appropriately higher bit depths and sampling rates as
    necessary. As a simplistic example for bit depth, when mixing one
    would normally mix multiple 16-bit input signals to a 24-bit output
    signal and then scale that output signal back to 16 bits (see the
    sketch after this list). A simplistic example for sampling rate is
    that one might sample at 8x or more the 44.1 kHz final frequency in
    order to use analogue filters that distort the signal less when
    filtering out signals above the 22.05 kHz Nyquist frequency.
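
Here is a minimal sketch of the bit-depth point in item 3 (my own illustration, not from any particular DAW: the random "tracks" and the simple peak normalization stand in for a real mix bus, and a 64-bit accumulator is used for convenience):

    import numpy as np

    rng = np.random.default_rng(0)
    # Stand-ins for four 16-bit tracks; in practice these come from recordings.
    tracks = [rng.integers(-2**15, 2**15, size=44_100, dtype=np.int16)
              for _ in range(4)]

    # Sum in a wider accumulator so intermediate values cannot clip or wrap.
    mix = np.zeros(44_100, dtype=np.int64)
    for t in tracks:
        mix += t

    # Scale the wide mix back into the 16-bit range for the delivery format.
    # (A real pipeline would also apply dither before this final truncation.)
    peak = np.abs(mix).max()
    if peak > 2**15 - 1:
        mix = mix * (2**15 - 1) // peak
    out = mix.astype(np.int16)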

Now to the details.



A seemingly little-known fact of digital sampling of analogue signals
is that, so long as the sampled signal has no frequency components
above the Nyquist frequency of 1/2 the sampling rate, a properly
reproduced playback of that sample will be an exact copy of the
analogue input waveform. All those stair steps you see in pictures of
sampling? They're nonsense; that's a made-up waveform that cannot be
generated by a proper reproduction system, because such a signal would
have the "steps" removed by the output filter. I'm not going to go
into more details on this here, but if you are not convinced or just
want to learn more, see Monty Montgomery's "D/A and A/D | Digital Show
and Tell", available in video (also on YouTube) or in text form.
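
To make the reconstruction claim concrete, here is a small sketch (my own illustration, not Montgomery's; the tone frequency and window length are arbitrary) that rebuilds the continuous waveform from the bare samples using the Whittaker-Shannon interpolation formula. The residual error comes only from truncating the infinite sum to a finite window of samples:

    import numpy as np

    fs = 44_100.0                  # sample rate (Hz)
    f = 9_800.0                    # tone safely below the 22,050 Hz Nyquist limit
    n = np.arange(256)             # sample indices
    samples = np.sin(2 * np.pi * f * n / fs)

    # Whittaker-Shannon: x(t) = sum_n x[n] * sinc((t - n*T) / T), with T = 1/fs
    t = np.linspace(64 / fs, 192 / fs, 500)   # stay away from the window edges
    recon = np.array([np.sum(samples * np.sinc(fs * ti - n)) for ti in t])

    exact = np.sin(2 * np.pi * f * t)
    print(np.max(np.abs(recon - exact)))  # small; shrinks as the window grows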



Note that other answers here get this wrong, and it does seem to be
very difficult for some people to believe. As this post puts it quite
eloquently:




The concept of the perfect measurement or of recreating a waveform
perfectly may seem like marketing hype. However, in this case it is
not. It is in fact the fundamental tenet of the Nyquist-Shannon
Sampling Theorem on which the very existence and invention of
digital audio is based. From WIKI: “In essence the theorem shows
that an analog signal that has been sampled can be perfectly
reconstructed from the samples”. I know there will be some who will
disagree with this idea, unfortunately, disagreement is NOT an
option. This theorem hasn't been invented to explain how digital
audio works, it's the other way around. Digital Audio was invented
from the theorem, if you don't believe the theorem then you can't
believe in digital audio either!




This tells us that in theory, with what we know about human hearing
limits and the noise floors of professionally designed low-noise
listening environments (such as a recording studio or good cinema), the
frequency response and noise floor of 44.1 kHz 16-bit digital audio
recordings will be essentially perfect. (There's a lot more detail on
this in 24/192 Music Downloads...and why they make no sense. As an
interesting aside, it also mentions that providing wider spectra may
actually make things worse: playback of ultrasonic signals of any
significant amplitude into standard analogue audio amplifiers may well
create intermodulation distortion products in the audio frequencies.)



So the question now becomes, can we do the reproduction well enough in practice?



Well, the way to do this is to test it, of course.



These sorts of tests have been rife with major problems, some as bad as comparing different recordings of the "same" material, such as an SACD remaster of an album against its original master mix from the CD. Even very skeptical experts on testing can accept badly-advised shortcuts such as not double-blinding the test. And of course the listening environment has a massive and difficult-to-correct-for influence on the audio. Even just small movements of your head can result in massive spectrum changes due to comb filtering.



That said, amongst the enormous number of bad tests, a few good ones have been done, and they have invariably shown that nobody, not even professional recording engineers or people with "golden ears," can tell the difference between 44.1 kHz 16-bit and higher rate/depth source recordings.



The canonical paper on this dates from 2006 or so: Audibility of a CD-Standard A/D/A Loop Inserted into High-Resolution Audio Playback. The abstract:




Claims both published and anecdotal are regularly made for audibly
superior sound quality for two-channel audio encoded with longer word
lengths and/or at higher sampling rates than the 16-bit/44.1-kHz CD
standard. The authors report on a series of double-blind tests
comparing the analog output of high-resolution players playing
high-resolution recordings with the same signal passed through a
16-bit/44.1-kHz “bottleneck.” The tests were conducted for over a year
using different systems and a variety of subjects. The systems
included expensive professional monitors and one high-end system with
electrostatic loudspeakers and expensive components and cables. The
subjects included professional recording engineers, students in a
university recording program, and dedicated audiophiles. The test
results show that the CD-quality A/D/A loop was undetectable at
normal-to-loud listening levels, by any of the subjects, on any of the
playback systems. The noise of the CD-quality loop was audible only at
very elevated levels.




I'd like to point out particularly section 4 of the paper, because I think it may give some insight into how this whole "high-definition audio" mess happened:




Though our tests failed to substantiate the claimed advantages of
high-resolution encoding for two-channel audio, one trend became
obvious very quickly and held up throughout our testing: virtually
all of the SACD and DVD-A recordings sounded better than most CDs—
sometimes much better. Had we not “degraded” the sound to CD quality
and blind-tested for audible differences, we would have been tempted
to ascribe this sonic superiority to the recording processes used to
make them. Plausible reasons for the remarkable sound quality of
these recordings emerged in discussions with some of the engineers
currently working on such projects. This portion of the business is
a niche market in which the end users are preselected, both for
their aural acuity and for their willingness to buy expensive
equipment, set it up correctly, and listen carefully in a low-noise
environment. Partly because these recordings have not captured a
large portion of the consumer market for music, engineers and
producers are being given the freedom to produce recordings that
sound as good as they can make them, without having to compress or
equalize the signal to suit lesser systems and casual listening
conditions. These recordings seem to have been made with great care
and manifest affection, by engineers trying to please themselves and
their peers. They sound like it, label after label. High-resolution
audio discs do not have the overwhelming majority of the program
material crammed into the top 20 (or even 10) dB of the available
dynamic range, as so many CDs today do. Our test results indicate
that all of these recordings could be released on conventional CDs
with no audible difference. They would not, however, find such a
reliable conduit to the homes of those with the systems and
listening habits to appreciate them. The secret, for two-channel
recordings at least, seems to lie not in the high-bit recording but
in the high-bit market.




Here are my references and some more reading if you want to get more deeply into this.




  • Audibility of a CD-Standard A/D/A Loop Inserted into
    High-Resolution Audio Playback.
    The best study I know of on this, though there are probably others.

  • Paul D. Lehrman, The Emperor's New Sampling Rate,
    Mix magazine. This is what led me to the article above, and it
    serves as a higher-level summary, along with some further
    information.

  • Monty Montgomery, "D/A and A/D | Digital Show and Tell"
    video (also on
    YouTube) or
    text form. If
    you don't instinctively think "rubbish" when you see a stair-step
    waveform associated with digital sampling, you really need to see
    this. Even if you prefer reading things, the video is well worth
    watching as the demonstrations of what's going on are very clear.

  • Monty Montgomery, 24/192 Music Downloads...and why they make no
    sense. The science behind hearing and why you can't hear "better"
    than 44.1 kHz/16-bit, and some information on sampling. Includes
    16-bit WAV files with 0 dB and -105 dB tones if you want to try to
    hear the full dynamic range of 16 bits. Also a long list of what
    listening tests may be testing instead of the source recording
    frequency and depth.

  • image-line.com, Audio Myths & DAW Wars.
    A quick recapitulation of various things that usually cause changes
    in audio quality outside of source rate/depth. Oriented towards
    people who do music production.

  • Ethan Winer, High Definition Audio Comparison. Do your own personal
    test of "high-definition" vs. 44.1 kHz/16-bit!

  • Ethan Winer, Ethan's Magazine Articles and Videos. Lots of other good
    information on audio, listening tests, gear, and so on.






Answer 3 (score 13)

There are two separate issues here - resolution and frequency. And we also need to separate out recording and playback.



16-bit resolution is plenty good enough for playback. However, when recording you want to allow extra headroom, because the worst thing you can do to a sampled signal is to clip it at the limits of its range. It's normal to record at -10 dB or so to give that headroom. With 16-bit recordings we would lose substantial recording fidelity that way, but with 24-bit we're fine.
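
To put rough numbers on that trade-off (a back-of-envelope sketch of my own, using the usual ~6.02 dB-per-bit rule; the figures are not from the answer):

    import math

    def effective_bits(bit_depth, headroom_db):
        # Each bit of linear PCM is worth 20*log10(2) ~= 6.02 dB of dynamic range.
        return bit_depth - headroom_db / (20 * math.log10(2))

    # Recording 10 dB below full scale "spends" about 1.7 bits on headroom.
    print(round(effective_bits(16, 10.0), 1))  # ~14.3 bits left for the signal
    print(round(effective_bits(24, 10.0), 1))  # ~22.3 bits - still well past 16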



For playback, it's maybe possible to hear the difference, but you'd need good ears. More significantly, you'd also need good equipment. You won't notice the difference on anything short of decent studio kit.



44.1 kHz is in theory good enough to reproduce up to 22 kHz. The problem, though, is aliasing. If you don't cut everything above 22 kHz when you record, then those inaudible higher frequencies reflect back on the opposite side of the Nyquist frequency and become audible. When 20 kHz is your threshold for hearing, that means your filter needs to pass 20 kHz but have cut hard by 22 kHz, which is really hard to do. We have filters now which can do it, but certainly older hardware (especially back in the early days of CDs) couldn't do it well at all. Recording at 96 kHz, though, gives you a Nyquist frequency at 48 kHz, and it's relatively easy to build a filter which passes 20 kHz and cuts hard by 48 kHz.
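
A tiny sketch of that fold-back (my own illustration with made-up frequencies): sampled at 44.1 kHz with no anti-aliasing filter, an inaudible 26 kHz tone produces exactly the same samples as an audible 18.1 kHz tone.

    import numpy as np

    fs = 44_100.0
    n = np.arange(64)

    ultrasonic = np.sin(2 * np.pi * 26_000.0 * n / fs)    # above Nyquist (22,050 Hz)
    alias = np.sin(2 * np.pi * (fs - 26_000.0) * n / fs)  # 18,100 Hz, audible

    # Sampled without filtering, the two are indistinguishable
    # (identical up to a sign flip, i.e. a 180-degree phase inversion).
    print(np.allclose(ultrasonic, -alias))  # True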



Again, this is for recording. Unless your ears can hear above 22 kHz, you won't get any benefit from playback at 96 kHz.



For playback though... All the above does assume that playback is done competently. It's not unknown for software (and hardware) to handle one sample rate better than another. I remember a few interesting articles about this in Sound On Sound back in the mid-2000s. I doubt these issues still apply today, but it's worth mentioning.







Answer 4 (score 3)

Most "try it yourself" experiments on this are meaningless, because you have no way to know what your complete audio reproduction chain is doing to the digital data before you hear it. That doesn't just include the most obvious distortion source of the speakers or headphones, but also the digital-to-analog converter circuits in your CD player!



Also, there have been many psychoacoustic experiments, dating back long before the digital recording era, comparing live performers, live performers with an acoustic filter between performers and listeners, and recorded or broadcast-quality music. Many of those found that the general public preferred the limited frequency range of recorded music to the sound from live performance. One explanation is that this is simply an example of the general principle of "I never listen to X, therefore I don't like it" - most of the subjects in those early tests would have heard much more music over low-quality AM radio (with a frequency cut-off at only 8 kHz!) than live performance, and they preferred what they were accustomed to hearing.



A second reason why tests like Rick Beato's are meaningless is that the "uncompressed wav file" may already have had the high frequency content removed from the original recording. The upper frequency limit for FM radio broadcasting is 16 kHz, so for commercial recordings there is no point producing a final mix that wastes bandwidth which can't be broadcast, when that bandwidth could be used to raise the apparent "volume level" of the mix by another fraction. In Beato's test, the classical piano recording might not have been filtered in that way, but all the other recordings certainly would have been. You can't hear the presence or absence of silence!



There is a fundamental theoretical issue here which is usually ignored. Most of the "basic" theory of digital signal processing is only applicable when the digital data has infinitely fine amplitude resolution. That includes statements like "you can reproduce the audio exactly up to the Nyquist frequency of half the sampling rate" which are bandied around as if they were incontrovertibly true.



To see the problem, consider the sampling rate of 44100 samples per second and a signal of 9800 Hz. Each cycle of the 9.8 kHz signal takes 44100/9800 = 4.5 samples of the digital data. Therefore, the digital data does not repeat exactly with a frequency of 9.8 kHz, but every 9 samples, i.e. at a frequency of 4.9 kHz.



The original 9.8 kHz signal (periodic, but not necessarily a sine wave) has only two harmonics in the typical human audio range, i.e. 9.8 and 19.6 kHz. However, the digital audio signal has four. There are two more, at 4.9 kHz and 14.7 kHz.



Of course the amplitudes of those two additional frequencies are "small", since they are only caused by the amplitude quantization of the original analog audio signal. But human hearing does not have a flat frequency response. It has a peak in its response curve around 3 kHz to 4 kHz (which most likely evolved to optimize the ability to process human speech). A human brain's audio processing functions have evolved to detect quiet sounds at 3-4 kHz mixed with louder sounds in the rest of the frequency band - i.e. it is optimized to detect this sort of digital audio artefact!



These "ghost tones" are audible in controlled conditions and there is no way to remove them when converting the digital data back to analog. Dithering the digital signal (which is often done as the final step in processing) doesn't remove them, it simply smears them out across a range of frequencies.



Increasing the bit resolution from 16 to 24 does reduce them, by a factor of 256. Increasing the sampling rate from 44.1k samples/sec to, say, 96k can also reduce them, since a dithering algorithm can now "dump" all the "noise" into the inaudible frequency range above 22 kHz.
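
A sketch of the effect being described (my own illustration; the tone level, FFT length, and "peakiness" measure are arbitrary choices): undithered 16-bit quantization of a 9.8 kHz tone concentrates the error into discrete spurious tones, while TPDF dither spreads the same energy into a smooth noise floor.

    import numpy as np

    fs, f, n = 44_100, 9_800, 1 << 16
    rng = np.random.default_rng(0)
    t = np.arange(n) / fs
    x = 0.5 * np.sin(2 * np.pi * f * t)

    q = 1 / 2**15                                  # 16-bit quantization step
    plain = np.round(x / q) * q                    # undithered quantization
    tpdf = np.round((x + (rng.random(n) - rng.random(n)) * q) / q) * q

    def peakiness_db(err):
        # Strongest error-spectrum bin relative to the median bin: large when
        # the error sits in discrete "ghost tones", small for a flat noise floor.
        s = np.abs(np.fft.rfft(err * np.hanning(n)))
        return 20 * np.log10(s.max() / np.median(s))

    print(round(peakiness_db(plain - x), 1))  # large: discrete quantization tones
    print(round(peakiness_db(tpdf - x), 1))   # much smaller: spread-out noise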







Answer 5 (score 0)

The German "Audio" magazine published an article some time 25-30 years ago. The found a high-end CD player that for some reason allowed to turn individual bits of the 16 bit signal on or off - why you would do that is beyond me, but that is what this CD player did.



What they found: Turning bit #16 off (with a top-quality amplifier and top-quality speakers) made no audible difference. Turning bit #15 off made an audible difference, but there was no agreement in blind tests about which version was better or more accurate, just that there was a difference. Turning off bit #14 was a definite loss of quality.



Not in any way peer reviewed, just the published opinion of reporters who made their living reviewing and comparing high-end audio equipment. So according to them, 15 and 16 bits were indistinguishable.






  • This isn't actually useful, because they were misusing the system. Digitally recording an audio signal without using dither and noise shaping (which is effectively what they were doing) is comparable to doing analogue tape recording with an incorrect or missing bias signal: you are reducing the quality of the signal through incorrect use of your equipment. Had they properly downsampled 16-bit to 14-bit, it's unlikely anybody would have noticed the difference.

    – Curt J. Sampson, May 28 at 1:46

Answer 6 (score 0)

No. On some cell phones, audio recorded with HD video is captured at a higher resolution, and there is a noticeable difference between the 16-bit default recording of the audio app and the 24-bit HD audio in the HD video recording. My family has a weird ear thing: one hears lower pitches, one hears higher pitches. Both my brother and I have this, and we can both hear clear data loss when comparing those two files. The closer you are to recording in the best native format for a live feed, the closer you are to perfection.



Just as 24-bit is better than 16, 32-bit is better than 24. However, frequency content beyond 48 kHz is multiplied as a sampling of 44.1 or 48 kHz, so you may not hear the difference through frequency changes. Look at this through a speakers analogy on the receiving end: if one sampling is 2 speakers, then for each next sampling it might be like the recipient is inside a circle of two more speakers. At what point does it all just become noise?



32-bit 48 kHz is a great recording setting for Audacity, and with a clean recording mixer, like a Cerwin-Vega with the USB interface, and just the right oxygen-free copper or silver wire leads, I enjoy the 32-bit 48 kHz recordings far more than the lower settings.







Answer 7 (score -1)

I've read that the ear cannot distinguish anything higher than 16-bit, 44.1 kHz. Some studies show that as the sampling frequency increases above that level, the listener experiences a reduction in quality and loudness, or no improvement.



When you're recording, or working in a DAW, having higher bit depths and sample rates can increase the quality of your final mix (according to my research). This is probably because higher sampling rates help the DAW do the calculations necessary for mixing at higher resolution.



The post from @graham is very interesting, and also adds information about why working with higher bit depths in a DAW is ideal. I've read that they're helpful, but I don't know exactly why. However, I don't always cut my highs with a low-pass. I've noticed that it changes the way the track sounds. I definitely reduce anything above 8 kHz, but a low-pass even at 20 kHz seems to change the way the audio sounds; for me, it's not always ideal. He's definitely right that cutting frequencies in this range will give your mix more headroom. And if recording with a higher sampling rate allows for a more precise and wide-ranging ability to cut frequencies, that would justify using higher sampling rates.



Blind test where they couldn't tell the difference:
https://www.tomshardware.com/reviews/high-end-pc-audio,3733-18.html



Another anecdotal example:
https://www.avsforum.com/forum/91-audio-theory-setup-chat/2909658-192khz-24bit-vs-44-1khz-16bit-audio-no-quality-difference.html



There are endless cases of people claiming they can't tell the difference. I know I can't. I actually like 16-bit 44.1 kHz more than 24-bit or 32-bit when it comes to the final rendered product.



However, it's worth mentioning that if you're uploading to a website like soundcloud.com, your track is going to get compressed into MP3 format for streaming. This goes for a lot of streaming services; they will compress your music. So basically, your track goes through another processing step after the final render. If you're uploading to a website that will compress your music, it may be beneficial to render in the highest possible quality so that the track has maximum input resolution. I've uploaded tracks in 24-bit and 16-bit and can't tell the difference. I've listened to tracks rendered in 16/24/32 bit and I can't tell the difference. Note that in FL Studio, dithering is only available in 16-bit WAV renders, and I've read that dithering makes the final sound more palatable to the ear.



The quality of your recording setup, equipment, and personal ability is far more important to how the final sound will be perceived than anything above CD quality on your rendered product.






share|improve this answer

























  • Comments are not for extended discussion; this conversation has been moved to chat.

    – Doktor Mayhem
    May 29 at 22:27











Your Answer








StackExchange.ready(function()
var channelOptions =
tags: "".split(" "),
id: "240"
;
initTagRenderer("".split(" "), "".split(" "), channelOptions);

StackExchange.using("externalEditor", function()
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled)
StackExchange.using("snippets", function()
createEditor();
);

else
createEditor();

);

function createEditor()
StackExchange.prepareEditor(
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: false,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: null,
bindNavPrevention: true,
postfix: "",
imageUploader:
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
,
noCode: true, onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
);



);













draft saved

draft discarded


















StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmusic.stackexchange.com%2fquestions%2f85212%2fis-cd-audio-quality-good-enough-for-the-final-delivery-of-music%23new-answer', 'question_page');

);

Post as a guest















Required, but never shown

























7 Answers
7






active

oldest

votes








7 Answers
7






active

oldest

votes









active

oldest

votes






active

oldest

votes









31














Tentatively: Yes. As medium for final delivery, I've not yet seen any strong evidence that a well-designed 16-bit, 44,100 Hz system can be significantly improved on as a vehicle for listening to the final mix (and therefore, from a musician's perspective, as a vehicle for presenting the final mix.)



When I looked into this a few years back, I was disappointed to find a relative lack of seemingly well-conducted tests compared to the level of interest in the subject. I certainly didn't find anything that seemed to strongly suggest that there was any major listener benefit in 'higher-definition' audio. (I'm writing this answer partly because I'd be very grateful if anyone knows different!)



Just to provide a little further reading - here are some anecdotes of tests that deal with bit depth and sample rate.



Of course a particular 16-bit listening experience might be devalued by recording levels being too low (resulting in a perceptible noise floor), OR by over-aggressive limiting of peaks to stay within the headroom. That's an example of where recording initially to a higher bit-depth initially would have been valuable. Likewise, a 44.1K DAC with a badly-designed anti-aliasing filter might sound bad - but this doesn't seem to be inevitable with the current state of technology.



Edit: I have just found this paper, published since I last explored this, that concludes that "there was a small but statistically significant ability to discriminate between standard quality audio(44.1or48kHz,16bit) and high resolution audio (beyond standard quality)", based on review of a number of experiments in this area. However, it also states that this ability to discriminate is something that is far more significant when subjects were trained, and still concludes that "the ability to discriminate high bit depth, e.g., 24 bit versus 16 bit, remains an open question". So it still seems reasonable to call CD quality 'good enough', even if 'very slightly better' may be possible.






share|improve this answer

























  • Comments are not for extended discussion; this conversation has been moved to chat.

    – Doktor Mayhem
    May 28 at 10:42















31














Tentatively: Yes. As medium for final delivery, I've not yet seen any strong evidence that a well-designed 16-bit, 44,100 Hz system can be significantly improved on as a vehicle for listening to the final mix (and therefore, from a musician's perspective, as a vehicle for presenting the final mix.)



When I looked into this a few years back, I was disappointed to find a relative lack of seemingly well-conducted tests compared to the level of interest in the subject. I certainly didn't find anything that seemed to strongly suggest that there was any major listener benefit in 'higher-definition' audio. (I'm writing this answer partly because I'd be very grateful if anyone knows different!)



Just to provide a little further reading - here are some anecdotes of tests that deal with bit depth and sample rate.



Of course a particular 16-bit listening experience might be devalued by recording levels being too low (resulting in a perceptible noise floor), OR by over-aggressive limiting of peaks to stay within the headroom. That's an example of where recording initially to a higher bit-depth initially would have been valuable. Likewise, a 44.1K DAC with a badly-designed anti-aliasing filter might sound bad - but this doesn't seem to be inevitable with the current state of technology.



Edit: I have just found this paper, published since I last explored this, that concludes that "there was a small but statistically significant ability to discriminate between standard quality audio(44.1or48kHz,16bit) and high resolution audio (beyond standard quality)", based on review of a number of experiments in this area. However, it also states that this ability to discriminate is something that is far more significant when subjects were trained, and still concludes that "the ability to discriminate high bit depth, e.g., 24 bit versus 16 bit, remains an open question". So it still seems reasonable to call CD quality 'good enough', even if 'very slightly better' may be possible.






share|improve this answer

























  • Comments are not for extended discussion; this conversation has been moved to chat.

    – Doktor Mayhem
    May 28 at 10:42













31












31








31







Tentatively: Yes. As medium for final delivery, I've not yet seen any strong evidence that a well-designed 16-bit, 44,100 Hz system can be significantly improved on as a vehicle for listening to the final mix (and therefore, from a musician's perspective, as a vehicle for presenting the final mix.)



When I looked into this a few years back, I was disappointed to find a relative lack of seemingly well-conducted tests compared to the level of interest in the subject. I certainly didn't find anything that seemed to strongly suggest that there was any major listener benefit in 'higher-definition' audio. (I'm writing this answer partly because I'd be very grateful if anyone knows different!)



Just to provide a little further reading - here are some anecdotes of tests that deal with bit depth and sample rate.



Of course a particular 16-bit listening experience might be devalued by recording levels being too low (resulting in a perceptible noise floor), OR by over-aggressive limiting of peaks to stay within the headroom. That's an example of where recording initially to a higher bit-depth initially would have been valuable. Likewise, a 44.1K DAC with a badly-designed anti-aliasing filter might sound bad - but this doesn't seem to be inevitable with the current state of technology.



Edit: I have just found this paper, published since I last explored this, that concludes that "there was a small but statistically significant ability to discriminate between standard quality audio(44.1or48kHz,16bit) and high resolution audio (beyond standard quality)", based on review of a number of experiments in this area. However, it also states that this ability to discriminate is something that is far more significant when subjects were trained, and still concludes that "the ability to discriminate high bit depth, e.g., 24 bit versus 16 bit, remains an open question". So it still seems reasonable to call CD quality 'good enough', even if 'very slightly better' may be possible.






– topo morto (answered May 25 at 16:20, edited May 26 at 11:32)

23














The short answer: 16-bit, 44.1 kHz PCM encoding, when properly sampled
and played back, is close enough to perfect reproduction for human
hearing in virtually all situations that it's unequivocally "good
enough."



The main caveats:



  1. The material must be recorded and reproduced with properly
    engineered sampling and playback systems. While this is not
    particularly difficult or expensive for a competent engineer using
    modern technology, there are a number of mistakes the engineer
    (both equipment designer and recording engineer) could make that
    could be mitigated by a higher sampling rate and/or more bit depth.

  2. There are situations where 16-bit depth will have an audible noise
    floor. These do not occur "naturally", and the vast majority of
    listeners - and even audio engineers - would have neither the
    inclination nor the money to produce an environment where this
    would occur. (The noise floor is below the threshold of audibility
    in places such as a soundproofed cinema in a quiet neighbourhood.)

  3. This applies to the storage format only: intermediate processing
    uses appropriately higher bit depths and sampling rates as
    necessary. As a simplistic example for bit depth, when mixing one
    would normally sum multiple 16-bit input signals into a 24-bit (or
    wider) output signal and then scale that output signal back to 16
    bits (see the sketch just after this list). A simplistic example
    for sampling rate is that one might sample at 8x or more the final
    44.1 kHz rate in order to use analogue filters that distort the
    signal less when filtering out content above the 22.05 kHz Nyquist
    frequency.
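
As a concrete illustration of the bit-depth half of caveat 3, here is a
minimal numpy sketch (my own toy example with random stand-in tracks,
not anyone's production mixing code):

    import numpy as np

    rng = np.random.default_rng(0)

    # Three hypothetical 16-bit tracks, one second at 48 kHz each.
    tracks = [rng.integers(-32768, 32768, size=48000, dtype=np.int16)
              for _ in range(3)]

    # Sum at a wider width so the mix bus cannot clip or wrap around...
    mix32 = np.zeros(48000, dtype=np.int32)
    for t in tracks:
        mix32 += t   # int32 accumulator: headroom for many int16 tracks

    # ...then scale back to 16 bits for delivery, with TPDF dither so
    # the truncation error becomes benign, signal-independent noise.
    gain = 1.0 / len(tracks)
    dither = rng.uniform(-0.5, 0.5, 48000) + rng.uniform(-0.5, 0.5, 48000)
    mix16 = np.clip(np.round(mix32 * gain + dither),
                    -32768, 32767).astype(np.int16)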

Now to the details.



A seemingly little-known fact about the digital sampling of analogue signals
is that, so long as the sampled signal has no frequency components
above the Nyquist frequency of 1/2 the sampling
rate, a properly reproduced playback of that sample will be an exact
copy of the analogue input waveform. All those stair steps you see in
pictures of sampling? They're nonsense; that's a made-up waveform that
cannot be generated by a proper reproduction system because such a
signal would have the "steps" removed by the output filter. I'm not
going to go into more details on this here, but if you are not
convinced or just want to learn more, see Monty Montgomery's "D/A and
A/D | Digital Show and Tell" in
video (also on
YouTube) or
text form.
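
If you want to see that claim numerically rather than take it on faith,
here is a toy numpy version of the textbook Whittaker-Shannon
reconstruction (my illustration, not Montgomery's code):

    import numpy as np

    fs = 44100.0              # sample rate
    f = 9000.0                # tone well below the 22050 Hz Nyquist limit
    n = np.arange(2048)
    samples = np.sin(2 * np.pi * f * n / fs)    # the stored sample values

    # Reconstruct the waveform at instants *between* the samples:
    t = (np.arange(200) / 200.0 + 800.0) / fs   # off-grid times near sample 800
    recon = np.array([np.sum(samples * np.sinc(fs * ti - n)) for ti in t])
    truth = np.sin(2 * np.pi * f * t)
    print(np.max(np.abs(recon - truth)))  # small, and shrinks as the window grows

No stair steps appear: the finite error here comes only from truncating
the (ideally infinite) sinc sum to a 2048-sample window.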



Note that other answers here get this wrong, and it does seem to be
very difficult for some people to believe. As this post
puts it quite eloquently:




The concept of the perfect measurement or of recreating a waveform
perfectly may seem like marketing hype. However, in this case it is
not. It is in fact the fundamental tenet of the Nyquist-Shannon
Sampling Theorem on which the very existence and invention of
digital audio is based. From WIKI: “In essence the theorem shows
that an analog signal that has been sampled can be perfectly
reconstructed from the samples”. I know there will be some who will
disagree with this idea, unfortunately, disagreement is NOT an
option. This theorem hasn't been invented to explain how digital
audio works, it's the other way around. Digital Audio was invented
from the theorem, if you don't believe the theorem then you can't
believe in digital audio either!




This tells us that in theory, with what we know about human hearing
limits and the noise floors of professionally-designed low-noise
listening environments (such as recording studio or good cinema), the
frequency response and noise floor of 44.1 kHz, 16-bit digital audio
recordings will be essentially perfect. (There's a lot more detail on
this in 24/192 Music Downloads...and why they make no
sense. As an
interesting aside, it also mentions that providing wider spectra may
actually make things worse: playback of ultrasonic signals of any
significant amplitude into standard analogue audio amplifiers may well
create intermodulation distortion products in the audio frequencies.)
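
That intermodulation aside is easy to demonstrate with a toy model (my
own sketch of a generic, slightly nonlinear amplifier - not a
measurement of any real device):

    import numpy as np

    fs = 192000
    t = np.arange(fs) / fs
    ultra = np.sin(2*np.pi*26000*t) + np.sin(2*np.pi*30000*t)  # both inaudible

    # A mildly nonlinear "amplifier": a small second-order distortion term.
    out = ultra + 0.1 * ultra**2

    spectrum = np.abs(np.fft.rfft(out)) / len(out)
    freqs = np.fft.rfftfreq(len(out), 1/fs)
    audible = (freqs > 20) & (freqs < 20000)
    print(freqs[audible][np.argmax(spectrum[audible])])
    # ~4000 Hz: the 30 kHz - 26 kHz difference tone has landed in-band.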



So the question now becomes, can we do the reproduction well enough in practice?



Well, the way to do this is to test it, of course.



These sorts of tests have been rife with major problems, some as bad as comparing different recordings of the "same" material, such as an SACD remaster of an album against its original CD master. Even very skeptical testing experts have accepted ill-advised shortcuts such as not double-blinding the test. And of course the listening environment has a massive and difficult-to-correct-for influence on the audio: even small movements of your head can produce large spectral changes due to comb filtering.



That said, amongst the enormous number of bad tests, a few good ones have been done, and they have invariably shown that nobody, not even professional recording engineers or people with "golden ears," can tell the difference between 44.1 kHz, 16-bit and higher rate/depth source recordings.



The canonical paper on this dates from 2006 or so: Audibility of a CD-Standard A/D/A Loop Inserted into High-Resolution Audio Playback. The abstract:




Claims both published and anecdotal are regularly made for audibly
superior sound quality for two-channel audio encoded with longer word
lengths and/or at higher sampling rates than the 16-bit/44.1-kHz CD
standard. The authors report on a series of double-blind tests
comparing the analog output of high-resolution players playing
high-resolution recordings with the same signal passed through a
16-bit/44.1-kHz “bottleneck.” The tests were conducted for over a year
using different systems and a variety of subjects. The systems
included expensive professional monitors and one high-end system with
electrostatic loudspeakers and expensive components and cables. The
subjects included professional recording engineers, students in a
university recording program, and dedicated audiophiles. The test
results show that the CD-quality A/D/A loop was undetectable at
normal-to-loud listening levels, by any of the subjects, on any of the
playback systems. The noise of the CD-quality loop was audible only at
very elevated levels.




I'd particularly like to point out section 4 of the paper, because I think it may give some insight into how this whole "high-definition audio" mess happened:




Though our tests failed to substantiate the claimed advantages of
high-resolution encoding for two-channel audio, one trend became
obvious very quickly and held up throughout our testing: virtually
all of the SACD and DVD-A recordings sounded better than most CDs—
sometimes much better. Had we not “degraded” the sound to CD quality
and blind-tested for audible differences, we would have been tempted
to ascribe this sonic superiority to the recording processes used to
make them. Plausible reasons for the remarkable sound quality of
these recordings emerged in discussions with some of the engineers
currently working on such projects. This portion of the business is
a niche market in which the end users are preselected, both for
their aural acuity and for their willingness to buy expensive
equipment, set it up correctly, and listen carefully in a low-noise
environment. Partly because these recordings have not captured a
large portion of the consumer market for music, engineers and
producers are being given the freedom to produce recordings that
sound as good as they can make them, without having to compress or
equalize the signal to suit lesser systems and casual listening
conditions. These recordings seem to have been made with great care
and manifest affection, by engineers trying to please themselves and
their peers. They sound like it, label after label. High-resolution
audio discs do not have the overwhelming majority of the program
material crammed into the top 20 (or even 10) dB of the available
dynamic range, as so many CDs today do. Our test results indicate
that all of these recordings could be released on conventional CDs
with no audible difference. They would not, however, find such a
reliable conduit to the homes of those with the systems and
listening habits to appreciate them. The secret, for two-channel
recordings at least, seems to lie not in the high-bit recording but
in the high-bit market.




Here are my references and some more reading if you want to get more deeply into this.




  • Audibility of a CD-Standard A/D/A Loop Inserted into
    High-Resolution Audio Playback.
    The best study I know of on this, though there are probably others.

  • Paul D. Lehrman, The Emperor's New Sampling Rate,
    Mix magazine. This is what led me to the article above, and it
    serves as a higher-level summary, along with some further
    information.

  • Monty Montgomery, "D/A and A/D | Digital Show and Tell"
    video (also on
    YouTube) or
    text form. If
    you don't instinctively think "rubbish" when you see a stair-step
    waveform associated with digital sampling, you really need to see
    this. Even if you prefer reading things, the video is well worth
    watching as the demonstrations of what's going on are very clear.

  • Monty Montgomery, 24/192 Music Downloads...and why they make no
    sense. The science behind hearing and why you can't hear "better"
    than 44.1 kHz/16-bit, and some information on sampling. Includes
    16-bit WAV files with 0 dB and -105 dB tones if you want to try to
    hear the full dynamic range of 16 bits. Also a long list of what
    listening tests may be testing instead of the source recording
    frequency and depth.

  • image-line.com, Audio Myths & DAW Wars.
    A quick recapitulation of various things that usually cause changes
    in audio quality outside of source rate/depth. Oriented towards
    people who do music production.

  • Ethan Winer, High Definition Audio Comparison. Do your own personal
    test of "high-definition" vs. 44.1 kHz/16-bit!

  • Ethan Winer, Ethan's Magazine Articles and Videos. Lots of other good
    information on audio, listening tests, gear, and so on.





– Curt J. Sampson (answered May 26 at 8:53, edited May 26 at 16:09)

  • Maybe this is the moment to point out that there are Sound Design and a Signal Processing sites on SE...

    – Mr Lister
    May 27 at 14:57

13














There are two separate issues here - resolution and frequency. And we also need to separate out recording and playback.



16-bit resolution is plenty good enough for playback. However, when recording you want to allow extra headroom, because the worst thing you can do to a sampled signal is to clip it at the limits of its range. It's normal to record at -10 dB or so to give that headroom. With 16-bit recordings we would lose substantial fidelity that way - but with 24-bit we're fine.
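
To put figures on that (my arithmetic using the ~6.02 dB-per-bit rule of thumb, not numbers from the answer):

    # Each bit of an ideal converter is worth ~6.02 dB of SNR, so h dB of
    # recording headroom costs roughly h / 6.02 bits of effective resolution.
    for bits in (16, 24):
        effective = bits - 10 / 6.02      # tracking 10 dB below full scale
        print(f"{bits}-bit: ~{effective:.1f} effective bits")
    # 16-bit: ~14.3 effective bits; 24-bit: ~22.3 - still far more than
    # the 16 bits needed for delivery.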



For playback, it may be possible to hear the difference, but you'd need good ears. More significantly, you'd also need good equipment: you won't notice the difference on anything short of decent studio kit.



44.1 kHz is in theory good enough to reproduce up to 22 kHz. The problem though is aliasing. If you don't cut everything above 22 kHz when you record, those inaudible higher frequencies reflect back on the opposite side of the Nyquist frequency and become audible. When 20 kHz is your threshold of hearing, that means your filter needs to pass 20 kHz but cut hard by 22 kHz, which is really hard to do. We have filters now which can do it, but older hardware (especially back in the early days of CDs) certainly couldn't do it well at all. Recording at 96 kHz though gives you a Nyquist frequency of 48 kHz, and it's relatively easy to build a filter which passes 20 kHz and cuts hard by 48 kHz.
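
A short numpy check makes the reflection concrete (my toy example, not from the answer):

    import numpy as np

    fs = 44100
    n = np.arange(1024)
    ultrasonic = np.sin(2 * np.pi * 30000 * n / fs)  # 30 kHz: above Nyquist
    alias      = np.sin(2 * np.pi * 14100 * n / fs)  # 44100 - 30000 = 14100 Hz

    # Unfiltered, the samples of the 30 kHz tone are *identical* to those of
    # a phase-inverted 14.1 kHz tone - once sampled, nothing can tell them apart.
    print(np.allclose(ultrasonic, -alias))   # True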



Again, this is for recording. Unless your ears can hear above 22 kHz, you won't get any benefit from playback at 96 kHz.



For playback, though... all the above assumes that playback is done competently. It's not unknown for software (and hardware) to handle one sample rate better than another. I remember a few interesting articles about this in Sound On Sound back in the mid-2000s. I doubt these issues still apply today, but it's worth mentioning.






– Graham (answered May 25 at 23:20)

3














Most "try it yourself" experiments on this are meaningless, because you have no way to know what your complete audio reproduction chain is doing to the digital data before you hear it. That doesn't just include the most obvious distortion source of the speakers or headphones, but also the digital-to-analog converter circuits in your CD player!



Also, there have been many psychoacoustic experiments, dating back long before the digital recording era, comparing live performers, live performers heard through an acoustic filter, and recorded or broadcast-quality music. Many of those found that the general public preferred the limited frequency range of recorded music to the sound of live performance. One explanation is that this is simply an example of the general principle of "I never listen to X, therefore I don't like it": most of the subjects in those early tests would have heard much more music over low-quality AM radio (with a frequency cut-off at only 8 kHz!) than live performance, and they preferred what they were accustomed to hearing.



A second reason why tests like Rick Beato's are meaningless is that the "uncompressed wav file" may already have had the high-frequency content removed from the original recording. The upper frequency limit for FM radio broadcasting is 16 kHz, so for commercial recordings there is no point producing a final mix that wastes bandwidth which can't be broadcast, when that bandwidth could be used to raise the apparent "volume level" of the mix by another fraction. In Beato's test the classical piano recording might not have been filtered in that way, but all the other recordings certainly would have been. You can't hear the presence or absence of silence!
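
Before trusting any such comparison file, one can simply measure it; a short sketch (the file name is hypothetical, and numpy and scipy are assumed to be available):

    import numpy as np
    from scipy.io import wavfile

    fs, data = wavfile.read("comparison_mix.wav")     # hypothetical file
    mono = data.mean(axis=1) if data.ndim == 2 else data.astype(float)
    spec = np.abs(np.fft.rfft(mono))
    freqs = np.fft.rfftfreq(len(mono), 1 / fs)
    hf = spec[freqs > 16000].sum() / spec.sum()
    print(f"fraction of spectral magnitude above 16 kHz: {hf:.2%}")

If that fraction is effectively zero, the "high-resolution" and CD-rate versions cannot differ audibly in the treble, whatever the format says.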



There is a fundamental theoretical issue here which is usually ignored. Most of the "basic" theory of digital signal processing is only applicable when the digital data has infinitely fine amplitude resolution. That includes statements like "you can reproduce the audio exactly up to the Nyquist frequency of half the sampling rate" which are bandied around as if they were incontrovertibly true.



To see the problem, consider a sampling rate of 44100 samples per second and a signal at 9800 Hz. Each cycle of the 9.8 kHz signal takes 44100/9800 = 4.5 samples of the digital data. Therefore the digital data does not repeat exactly at the signal frequency of 9.8 kHz, but only every 9 samples, i.e. at 44100/9 = 4.9 kHz.



The original 9.8 kHz signal (periodic, but not necessarily a sine wave) has only two harmonics in the typical human audio range, i.e. 9.8 and 19.6 kHz. The digital audio signal, however, has four: there are two more, at 4.9 kHz and 14.7 kHz.



Of course the amplitudes of those two additional frequencies are "small", since they are only caused by the amplitude quantization of the original analog audio signal. But human hearing does not have a flat frequency response: it has a peak in its response curve around 3-4 kHz (which most likely evolved to optimize the processing of human speech). The brain's audio processing has evolved to detect quiet sounds at 3-4 kHz mixed with louder sounds in the rest of the frequency band - i.e. it is optimized to detect exactly this sort of digital audio artefact!



These "ghost tones" are audible in controlled conditions and there is no way to remove them when converting the digital data back to analog. Dithering the digital signal (which is often done as the final step in processing) doesn't remove them, it simply smears them out across a range of frequencies.



Increasing the bit resolution from 16 to 24 does reduce them, by a factor of 256. Increasing the sampling rate from 44.1 kHz to, say, 96 kHz can also reduce them, since a noise-shaping dither algorithm can then "dump" the "noise" into the inaudible frequency range above 22 kHz.
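
The factor of 256 is just the change in quantization step size; in decibels it is the familiar rule of thumb of roughly 6 dB per bit:

    \frac{\Delta_{16}}{\Delta_{24}} = 2^{24-16} = 256,
    \qquad
    \mathrm{SNR}(N) \approx 6.02\,N + 1.76~\mathrm{dB}
    \quad\Rightarrow\quad
    \mathrm{SNR}(24) - \mathrm{SNR}(16) \approx 6.02 \times 8 \approx 48~\mathrm{dB}.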






answered May 26 at 9:26 (edited May 26 at 9:35) – guest (472)

  • Comments are not for extended discussion; this conversation has been moved to chat.

    – Doktor Mayhem
    May 29 at 22:27

The German "Audio" magazine published an article some time 25-30 years ago. The found a high-end CD player that for some reason allowed to turn individual bits of the 16 bit signal on or off - why you would do that is beyond me, but that is what this CD player did.



What they found: turning bit #16 (the least significant bit) off - with a top-quality amplifier and top-quality speakers - made no audible difference. Turning bit #15 off made an audible difference, but there was no agreement in blind tests about which version was better or more accurate, just that there was a difference. Turning off bit #14 was a definite loss of quality.



This was not in any way peer reviewed, just the published opinion of reporters who made their living reviewing and comparing high-end audio equipment. So, according to them, 15-bit and 16-bit were indistinguishable.






answered May 27 at 20:16 – gnasher729 (1,471)

  • This isn't actually useful because they were misusing the system. Digitally recording an audio signal without using dither and noise shaping (which is effectively what they were doing) is comparable to doing analogue tape recording with an incorrect or missing bias signal: you are reducing the quality of the signal through incorrect use of your equipment. Had they properly downsampled 16-bit to 14-bit, it's unlikely anybody would have noticed the difference.

    – Curt J. Sampson
    May 28 at 1:46
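
To make the distinction in that comment concrete, here is a small sketch (my illustration, not the magazine's method): zeroing the two least significant bits - "turning off" bits #15 and #16 - is plain truncation, which concentrates the error into distortion tones, while dithered requantization to 14 bits turns the same error budget into benign broadband noise:

    # Compare zeroing the two LSBs of 16-bit samples (truncation) with a
    # proper TPDF-dithered requantization onto the same 14-bit grid.
    import numpy as np

    fs = 44100
    n = np.arange(fs)
    x16 = np.round(8192 * np.sin(2 * np.pi * 1000 * n / fs)).astype(np.int64)

    masked = x16 & -4                                  # zero the two LSBs

    rng = np.random.default_rng(1)
    tpdf = rng.uniform(-0.5, 0.5, fs) + rng.uniform(-0.5, 0.5, fs)
    dithered = (np.round(x16 / 4 + tpdf) * 4).astype(np.int64)

    def level(sig, f):                                 # bin level in dBFS
        amp = np.abs(np.fft.rfft(sig / 32768.0)) * 2 / fs
        return 20 * np.log10(amp[f] + 1e-20)

    for name, y in (("masked", masked), ("dithered", dithered)):
        print(f"{name:9s} level at 3 kHz: {level(y, 3000):7.1f} dBFS")

On a typical run the masked version shows a discrete distortion spur at the 3 kHz harmonic, well above the dithered version's noise floor in the same bin - exactly the misuse the comment describes.
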
















No - on some cell phones, the audio recorded with HD video is captured at a higher quality, and there is a noticeable difference between the 16-bit default recording of the audio app and the 24-bit HD audio in the HD video recording. My family has an unusual hearing trait: one of us hears lower pitches better, another higher pitches. Both my brother and I have this, and we can both hear clear data loss when comparing those two files. The closer you record to the best native format of the live feed, the closer you are to perfection.



Just as 24-bit is better than 16-bit, 32-bit is better than 24-bit. However, sample rates beyond 48 kHz are typically just multiples of 44.1 or 48 kHz, so you may not hear the difference as a change in frequency response. Look at this through a speaker analogy on the receiving end: if one sample rate is two speakers, then each higher rate is like putting the listener inside a circle of two more speakers. At what point does it all just become noise?



32-bit 48 kHz is a great recording setting for Audacity, and with a clean recording mixer - like a Cerwin-Vega with the USB interface - and just the right oxygen-free copper or silver wire leads, I enjoy 32-bit 48 kHz recordings far more than the lower settings.






answered May 28 at 12:27 (edited May 28 at 12:50) – Joseph Poirier (173)

  • Comments are not for extended discussion; this conversation has been moved to chat.

    – Doktor Mayhem
    May 31 at 8:41

I've read that the ear cannot distinguish anything higher than 16-bit 44.1 kHz. Some studies show that as the sampling frequency increases above that level, the listener experiences a reduction in perceived quality and loudness, or no improvement.



When you're recording or working in a DAW, higher bit depths and sample rates can increase the quality of your final mix (according to my research). This is probably because the higher resolution helps the DAW do the calculations necessary for mixing more precisely.



The post from @graham is very interesting, and also adds information about why working with higher bit depths in a DAW is ideal. I've read that they're helpful, but didn't know exactly why. However, I don't always cut my highs with a low-pass filter: I've noticed that it changes the way the track sounds. I definitely reduce anything above 8 kHz, but a low-pass even at 20 kHz seems to change the way the audio sounds, and for me that's not always ideal. He's definitely right that cutting frequencies in this range will give your mix more headroom. And if recording at a higher sampling rate allows more precise, wide-ranging frequency cuts, that would justify using higher sampling rates.



Blind test where they couldn't tell the difference:
https://www.tomshardware.com/reviews/high-end-pc-audio,3733-18.html



Another anecdotal example:
https://www.avsforum.com/forum/91-audio-theory-setup-chat/2909658-192khz-24bit-vs-44-1khz-16bit-audio-no-quality-difference.html



There are endless cases of people claiming they can't tell the difference. I know I can't. I actually like 16-bit 44.1 kHz more than 24-bit or 32-bit when it comes to the final rendered product.



However, it's worth mentioning that if you're uploading to a website like soundcloud.com, your track is going to get compressed into mp3 format for streaming. This goes for a lot of streaming services: they will compress your music, so your track effectively goes through another processing stage after the final render. If you're uploading to a website that will compress your music, it may be beneficial to render at the highest possible quality so that the encoder has maximum input resolution. I've uploaded tracks in 24-bit and 16-bit and can't tell the difference. I've listened to tracks rendered in 16/24/32-bit and I can't tell the difference. Note that in FL Studio, dithering is only available for 16-bit WAV renders, and I've read that dithering makes the final sound more palatable to the ear.



The quality of your recording setup, equipment, and personal ability is far more important to how the final sound will be perceived than anything above CD quality on your rendered product.






answered May 26 at 2:18 (edited May 26 at 15:28) – hexagod (454)

  • Comments are not for extended discussion; this conversation has been moved to chat.

    – Doktor Mayhem
    May 29 at 22:27






