Why do humans perceive sound loudness differently based on frequency and sound pressure?

Sound pressure is a physical quantity, while loudness is the magnitude of sound as humans perceive it. How perceived loudness depends on both frequency and sound pressure is a matter of auditory physiology, and understanding that relationship helps us create better acoustic environments.


From a physical perspective, a sound is characterized by its sound pressure and its frequency. Sound pressure is the pressure fluctuation (force per unit area) that a sound wave exerts, and it can be regarded as the physical magnitude of the sound. Loudness, in contrast, is the magnitude of the sound as humans perceive it; when we describe a sound as soft or loud, we are talking about loudness. Crucially, loudness does not depend on sound pressure alone but on the relationship between sound pressure and frequency.
If two sound sources at the same distance are perceived as differing in loudness, people typically assume the louder-sounding source produces greater sound pressure. But this is not always the case. When we hear a sound, the hair cells in the cochlea respond, and their response is transmitted to the brain, where the sound is recognized. These hair cells do not react uniformly across frequencies: they are highly sensitive to some frequencies and less sensitive to others. As a result, humans perceive loudness differently depending not only on sound pressure but also on frequency.
Furthermore, human hearing changes with age. Young children can hear a wider range of frequencies, but as people age, hearing deteriorates in the high-frequency range. This is related to the degeneration of hair cells, with high-frequency hearing loss occurring progressively, especially after the age of 20. These changes in hearing affect daily life, including music appreciation and communication.
The human ear's response to frequency is uneven. For instance, it is relatively more sensitive to sounds in the 1,000 to 5,000 Hz range than to other frequency bands, and sensitivity falls off for frequencies below and above that range. Sounds below approximately 16 Hz or above 20,000 Hz are generally considered inaudible to humans. The equal loudness contour is a prime illustration of these auditory characteristics.
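The bands just described can be sketched as a small classifier. This is purely illustrative: the boundary values (16 Hz, 20,000 Hz, and the 1,000 to 5,000 Hz band) are the rough figures stated above, not clinical thresholds, and real sensitivity varies continuously rather than in sharp buckets.

```python
def ear_sensitivity(freq_hz: float) -> str:
    """Classify a frequency by the rough sensitivity bands described
    in the text: inaudible outside ~16 Hz to 20,000 Hz, and most
    sensitive around 1,000 to 5,000 Hz (illustrative boundaries only)."""
    if freq_hz < 16 or freq_hz > 20_000:
        return "inaudible"
    if 1_000 <= freq_hz <= 5_000:
        return "most sensitive"
    return "less sensitive"

print(ear_sensitivity(3_000))   # most sensitive
print(ear_sensitivity(10))      # inaudible
print(ear_sensitivity(100))     # less sensitive
```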
The equal loudness contour shows that sounds with the same sound pressure level are perceived as differing in loudness depending on their frequency. Acoustically speaking, it plots the sound pressure level required at each frequency for a pure tone to sound as loud as a 1,000 Hz pure tone. For example, according to this curve, a 1,000 Hz pure tone at 30 dB is perceived as being as loud as a 125 Hz pure tone at 40 dB or a 4,000 Hz pure tone at 25 dB. If all three pure tones were instead played at 30 dB, the 4,000 Hz tone would be perceived as the loudest.
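The worked example above can be checked with a few lines of arithmetic. The lookup table below contains only the three data points quoted in the text; the "margin" (how far a tone sits above the contour at its frequency) is a simple stand-in for relative perceived loudness, not a real psychoacoustic model.

```python
# Required SPL (dB) at each frequency to sound as loud as a
# 1,000 Hz pure tone at 30 dB -- values taken from the example above.
REQUIRED_SPL_30_PHON = {125: 40, 1000: 30, 4000: 25}

def loudness_margin(freq_hz: int, spl_db: float) -> float:
    """How far a tone sits above the equal-loudness contour at its
    frequency. A larger margin means the tone is perceived as louder."""
    return spl_db - REQUIRED_SPL_30_PHON[freq_hz]

# Play all three tones at the same 30 dB sound pressure level:
margins = {f: loudness_margin(f, 30) for f in REQUIRED_SPL_30_PHON}
loudest = max(margins, key=margins.get)
print(loudest)  # 4000 -- the 4,000 Hz tone, as the text states
```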
Because of this auditory characteristic, listening to one sound can make other sounds hard to hear. You may have struggled to hear a conversation partner's voice amid loud surrounding noise; in that case, one sound (the partner's voice) is said to be masked by another (the surrounding noise). The simplest example of masking is a pure tone A rendering a pure tone B inaudible within a certain frequency range. The range of frequencies that are masked varies with the sound pressure levels and frequencies of the two tones: raising the masking tone's sound pressure level generally broadens the masked range, and masking occurs more readily when the two pure tones are close in frequency.
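The two trends just described can be sketched in a deliberately simplified toy model. The scaling constant below is an assumption chosen only to make the band widen with masker level; real masking curves are asymmetric (spreading more toward higher frequencies) and far more complex.

```python
def masked_band(masker_freq: float, masker_spl_db: float) -> tuple[float, float]:
    """Toy model of the frequency band masked around a pure tone.
    The band's width grows with the masker's SPL; the 0.01 scaling
    factor is a hypothetical constant for illustration only."""
    half_width = masker_freq * 0.01 * masker_spl_db
    return (masker_freq - half_width, masker_freq + half_width)

def is_masked(probe_freq: float, masker_freq: float, masker_spl_db: float) -> bool:
    """True if the probe tone falls inside the masker's masked band."""
    lo, hi = masked_band(masker_freq, masker_spl_db)
    return lo <= probe_freq <= hi

# A 1,500 Hz probe near a 1,000 Hz masker: masked at 60 dB, not at 30 dB.
print(is_masked(1_500, 1_000, 60))  # True
print(is_masked(1_500, 1_000, 30))  # False
```

Raising the masker from 30 dB to 60 dB doubles the band's width in this sketch, mirroring the text's point that a stronger masking tone hides a wider range of frequencies.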
This auditory phenomenon also plays a significant role in various real-world applications. For instance, technology amplifying the high-frequency range is sometimes used to deliver speech more clearly in noisy public places. Furthermore, this masking effect can be utilized in music production to emphasize the sound of specific instruments or reduce background noise. Moreover, hearing aids designed to compensate for hearing loss are engineered considering the user’s sensitivity to specific frequency bands.
Acoustic understanding also influences the design and placement of audio equipment. For instance, speaker placement is determined by considering the room’s acoustic characteristics and the listener’s position. This optimizes sound reflection and absorption at specific frequency bands, delivering clearer and more balanced sound.
Furthermore, acoustic design is critically important in venues like movie theaters and concert halls. In these spaces, the placement of speakers and acoustic absorbers, along with structural design, is engineered to ensure all audience members experience optimal sound. Acoustic designers strive to minimize sound variations based on audience location and ensure sound is delivered evenly across all frequency bands.
In summary, the complex interaction between the physical properties of sound and human hearing significantly impacts the quality and comprehension of the sounds we experience daily. Based on this knowledge, we can create better acoustic environments and advance audio technology. The development of acoustics will bring innovative changes not only in sound transmission but also in the realms of music and art. Research and understanding of acoustics will continue to grow in importance.


About the author

Writer

I'm a "Cat Detective." I help reunite lost cats with their families.
I recharge over a cup of café latte, enjoy walking and traveling, and expand my thoughts through writing. By observing the world closely and following my intellectual curiosity as a blog writer, I hope my words can offer help and comfort to others.