Drums, bones, and nerves: how we lose hearing and how technology can help

More than a billion people are affected by hearing loss. Part of this problem can be solved with neural-interfaced implants. But first, let’s learn how our ears work.

We are very fortunate to have ears. They warn us of danger, tell us which direction a sound is coming from, and let us recognize speech, music, and noise. No doubt, hearing is essential for one’s health and comfort.

Unfortunately, there are many ways hearing can fail at any age. According to the 2019 Global Burden of Disease study, about 20% of people experience hearing loss, affecting their quality of life. 

In recent decades, it has become easier to help people with hearing impairments, thanks to great progress in auditory prosthetics: hearing aids and implants.

Now, let’s recap the nature of sound and the anatomy of the ear, see how our bodies process sound, what can go wrong, and how technology can help.

Air to receptor: how our ears hear

How do we hear? Our ears convert the vibrations of sound waves in the air into signals that our brains perceive as sound.

Like our natural hearing system, most hearing aids pick up vibrations from the environment, process them, and deliver the sound in a form the user can perceive. Capturing and processing sound are therefore essential steps for a hearing aid to work correctly.
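To make that “pick up, process, deliver” pipeline concrete, here is a deliberately simplified sketch in Python, not any real hearing-aid algorithm: it takes a recorded waveform, boosts the frequency band a user might struggle with, and returns a waveform ready to be played back. The 2,000 Hz cutoff and the gain factor are made-up illustrative values.

```python
import numpy as np

# A highly simplified sketch of the "pick up, process, deliver" idea
# (not a real hearing-aid algorithm): boost the frequencies a user
# hears poorly, here assumed to be everything above 2,000 Hz.
def amplify_high_frequencies(samples, sample_rate, cutoff_hz=2_000, gain=4.0):
    spectrum = np.fft.rfft(samples)                      # frequency content of the sound
    freqs = np.fft.rfftfreq(len(samples), 1 / sample_rate)
    spectrum[freqs >= cutoff_hz] *= gain                 # boost the hard-to-hear band
    return np.fft.irfft(spectrum, n=len(samples))        # back to a playable waveform

# Example: a quiet 3,000 Hz tone becomes noticeably louder after processing.
sample_rate = 44_100
t = np.arange(sample_rate) / sample_rate
quiet_tone = 0.05 * np.sin(2 * np.pi * 3_000 * t)
boosted = amplify_high_frequencies(quiet_tone, sample_rate)
print(round(np.max(np.abs(quiet_tone)), 3), round(np.max(np.abs(boosted)), 3))  # 0.05 -> ~0.2
```

Real devices are far more sophisticated (they compress loud sounds, suppress noise, and adapt to the listener), but the basic idea of reshaping the sound before delivering it is the same.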

Two key properties of sound are frequency and loudness. Frequency is measured in hertz (Hz) and loudness in decibels (dB). One hertz is one cycle per second; for sound, that is the number of times the wave reaches its peak each second. Lower frequencies correspond to lower musical pitches, and higher frequencies to higher ones.
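To see what “cycles per second” means in practice, here is a minimal Python sketch (the sample rate and the choice of note are just illustrative assumptions): it generates one second of a 440 Hz tone and counts how many times the wave starts a new cycle.

```python
import numpy as np

# A minimal sketch: one second of a pure tone, sampled 44,100 times per
# second, to show that "440 Hz" really means about 440 cycles per second.
sample_rate = 44_100          # samples per second (CD-quality audio)
frequency = 440               # Hz: the musical note A4
duration = 1.0                # seconds

t = np.arange(int(sample_rate * duration)) / sample_rate
wave = np.sin(2 * np.pi * frequency * t)

# Count how many times the wave crosses zero going upward;
# each upward crossing marks the start of a new cycle.
upward_crossings = np.sum((wave[:-1] < 0) & (wave[1:] >= 0))
print(f"{frequency} Hz tone completed ~{upward_crossings} cycles in {duration} s")
```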

The human hearing range spans roughly 20 to 20,000 Hz, and our hearing is most sensitive to sounds between 2,000 and 5,000 Hz. As for loudness, 0 dB is the threshold of hearing: the quietest sound a person can detect. Anything above 85 dB can harm your hearing, especially with prolonged exposure.
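Turning those figures into a couple of toy checks makes them easy to remember; the function names below are illustrative, not a standard API.

```python
# The figures from the text above, expressed as simple rules of thumb.
HEARING_RANGE_HZ = (20, 20_000)     # approximate limits of human hearing
HEARING_THRESHOLD_DB = 0            # quietest audible sound
HARMFUL_LEVEL_DB = 85               # prolonged exposure above this can damage hearing

def is_audible(frequency_hz: float, loudness_db: float) -> bool:
    """Rough check: within the human frequency range and above the hearing threshold."""
    low, high = HEARING_RANGE_HZ
    return low <= frequency_hz <= high and loudness_db >= HEARING_THRESHOLD_DB

def is_potentially_harmful(loudness_db: float) -> bool:
    """Rough check: sustained exposure above ~85 dB risks damaging hearing."""
    return loudness_db >= HARMFUL_LEVEL_DB

print(is_audible(440, 60))            # True: a conversation-level tone
print(is_audible(30_000, 60))         # False: ultrasound, above the human range
print(is_potentially_harmful(95))     # True: roughly the level of a power tool
```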

Hearing aids are designed to transmit sounds that humans can perceive. Therefore, knowledge of the human ear and its inner workings helps to create better systems. 

The human ear has three parts: the outer, middle, and inner ear.

The ear’s outermost (and visible) part is called the pinna (or auricle). Sound captured by the pinna travels down the ear canal to the eardrum. From there, the vibrations pass to three tiny bones called the ossicles. The innermost ossicle, the stapes, then pushes on the oval window, the entrance to the inner ear.

Finally, in the inner ear, the vibrations of the oval window disturb the fluid inside the cochlea. The moving fluid bends the cochlear hair cells, and that is the moment when the movement of the fluid translates into nerve impulses.

Let’s pause on the cochlea for a moment. This cone-shaped organ coils into a spiral of about two and a half turns. Inside, it is divided into three fluid-filled channels, one of which contains the hair cells mentioned above.

Which signal a vibration produces depends on where along the cochlea the hair cells pick it up. The hair cells at the base of the cochlea respond to high frequencies, while those at the apex (the top of the spiral) respond to low frequencies. When hair cells are deflected, they generate electrical impulses that are passed to the auditory nerve.
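This place-to-frequency arrangement is called tonotopy, and one widely cited approximation of it is the Greenwood function. The sketch below uses the commonly quoted human constants (A = 165.4, a = 2.1, k = 0.88); treat the exact numbers as illustrative rather than definitive.

```python
# A sketch of the place-to-frequency map ("tonotopy") described above,
# using the Greenwood function with its commonly quoted human constants.
def greenwood_frequency(position_from_apex: float) -> float:
    """Best frequency (Hz) at a point along the cochlea.

    position_from_apex: 0.0 = apex (the top of the spiral), 1.0 = base.
    """
    return 165.4 * (10 ** (2.1 * position_from_apex) - 0.88)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"position {x:.2f} from apex -> ~{greenwood_frequency(x):,.0f} Hz")
# The apex comes out near 20 Hz and the base near 20,000 Hz,
# matching the frequency limits of human hearing.
```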

Hair cells are incredibly vulnerable and can be affected by diseases, aging, and over-exposure to loud noise. Moreover, they can’t regenerate once destroyed or damaged. 

Impulses to thoughts: how our brains process sound data

The ears only pick up sound; the brain is responsible for making sense of it.

The pathways of the central auditory system transmit neural impulses to the brain’s temporal lobes, where the signals are recognized and processed. 

The brain not only converts the impulses into perceived sounds but also extracts additional information from them. For instance, it notices subtle differences in the pitch, loudness, and arrival time of the sound waves reaching the left and right ears. This lets the brain determine the direction of the sound, and the brain stem uses that information to judge whether the sound represents a threat.
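To get a feel for how small these differences are, here is a back-of-the-envelope sketch of the timing cue alone, assuming an ear-to-ear distance of about 0.2 m and a speed of sound in air of roughly 343 m/s (both rounded, illustrative values).

```python
import math

# A back-of-the-envelope sketch of one cue the brain uses: the tiny
# difference in when a sound reaches the left and the right ear.
EAR_DISTANCE_M = 0.2          # assumed ear-to-ear distance
SPEED_OF_SOUND_M_S = 343.0    # speed of sound in air

def interaural_time_difference(angle_degrees: float) -> float:
    """Approximate arrival-time difference (seconds) for a sound source
    at the given angle (0 = straight ahead, 90 = directly to one side)."""
    return (EAR_DISTANCE_M / SPEED_OF_SOUND_M_S) * math.sin(math.radians(angle_degrees))

for angle in (0, 30, 60, 90):
    delta_us = interaural_time_difference(angle) * 1e6
    print(f"source at {angle:>2} degrees: ears differ by ~{delta_us:.0f} microseconds")
# Even the largest difference is well under a millisecond, yet the brain
# resolves it finely enough to tell left from right.
```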
