Neuroscience Notes

Some Topics


Sounds are used ubiquitously in the animal world to communicate. Animals listen for sounds that inform them about events happening at a distance, and they make sounds to send messages to each other. If you sit anywhere in nature and listen, you begin to hear a symphony. Some sounds mark and defend territory, some are mating signals, some are emotional expressions, and others are warnings. Birds sing to declare ownership of territory, to attract mates, and to send messages to family members. Wolves howl on clear moonlit nights to speak to each other and to express a deep feeling, and sensitive humans who hear them sing share that feeling.

Sounds link animals in social groups. The continuous uttering of repetitive sounds is a common method of parent and infant communication. Infant Canada Geese, for example, emit a peeping every second or so, and their parents emit a low-pitched, short "honk" every four or five seconds. This auditory link is more important than a visual link in keeping the family unit together. When the flock flies together, continuous honking links the group. Since the geese cannot maintain visual contact in a V-flight formation, sound communication allows the group to stay together.

As I write, the constant calling of sea gulls outside reminds me of the chatter in human groups. I can hear the persistent high-pitched peeping of a young gull demanding attention and food from the adults. I know without looking that the supplicant is crouched low with his head down, bobbing up and down with each cry. The sound and the behavior are always linked. This is an example of a fixed action pattern. The young gull is persistent because the adults are ignoring him. He hopes that his fixed action pattern will act as an innate releaser and an adult will feed him. However, the adults have turned off their automatic response. He is now about five months old and should be getting his own food. Adolescents of many species have difficulty making the transition to adult self-sufficiency.

A meaningful sound can be compared with a molecular message that locks into a receptor and activates a response. Sound receptors are built into every brain to detect distant features of the environment that require identification and response. The reception of sight and sound signals together is important to identify and localize events out there. A novel sound triggers an orienting response: stop, look and listen carefully to identify the source of the sound. An unexpected loud sound triggers a startle response that is composed of orienting and flight movements, associated with fear.

The brain extracts several kinds of information from the components of sounds, such as pitch, loudness, timbre, location and direction of movement. The direction of the origin of a sound is determined in the brain by differences in volume, pitch and time of arrival in the right and left ears. If a sound is louder in the left ear than the right, the source lies to the left; arrival-time differences of a few thousandths of a second are enough for the brain to compute the direction of origin of a sound. Changing arrival times inform about movement of the sound source. The ear is shaped to collect sounds from in front, so sounds from the rear are lower in volume. The Doppler effect is the drop in pitch of a travelling sound as it passes: an approaching train's horn sounds higher in pitch, and as the train passes you, the pitch drops. When you are alerted by a sound to the presence of danger, you have to stop and listen carefully to identify and localize the sound. As with vision, scanning the environment provides more information. You may turn and tilt your head slowly to get different samples of the sound.
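The arrival-time cue can be sketched numerically. The following is a minimal sketch, not a model of real auditory processing: it assumes a simple path-length geometry, an ear separation of about 0.2 m (an assumed round number), and the speed of sound in air of about 343 m/s.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s, air at 20 °C
EAR_SEPARATION = 0.2     # m, rough adult head width (assumption)

def interaural_time_difference(angle_deg):
    """Approximate arrival-time difference (seconds) for a source at
    angle_deg from straight ahead, using the simple path-length model
    ITD = d * sin(theta) / c. Real heads add diffraction effects that
    this sketch ignores.
    """
    return EAR_SEPARATION * math.sin(math.radians(angle_deg)) / SPEED_OF_SOUND

def angle_from_itd(itd_seconds):
    """Invert the model: recover the source angle from a measured ITD."""
    return math.degrees(math.asin(itd_seconds * SPEED_OF_SOUND / EAR_SEPARATION))

# A source 90 degrees to one side gives the maximum ITD -- well under
# a millisecond, consistent with "a few thousandths of a second".
max_itd = interaural_time_difference(90)
```

Even the maximum difference is only about 0.6 ms, which is why the brain's timing comparison has to be so precise.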

Animal communication begins with sounds that declare specific meanings, such as the alarm cries of squirrels and monkeys, bird songs that regulate mating and social activity, and human grunts, shouts and cries that attract attention, signal danger and express emotion. Rhesus monkeys, for example, make 15 sounds that are associated with facial expressions. Monkeys in danger make short, sharp threatening calls with eyes wide, ears flat, and mouth wide open. Relaxed monkeys 'coo', with lips pouting and open. Monkeys match threat calls with facial expressions, just as human infants match voice to face, starting at two months old. All animals share fundamental strategies of sound communication.

As a rule, the meaning of sound communications is species specific, although there are some basic and old sounds that are shared among diverse animal species. Ehret and Riecke, for example, suggested that the squeaks made by baby mice in the nest are similar to human infant sounds. They found that mouse mothers react to calls that contain word-like groups of three tones at 3.8, 7.6 and 11.4 kilohertz. The lowest frequency was the most critical, as it is in human communication. It is difficult to understand a voice from which the bass frequencies have been removed.
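It is worth noticing that the three tones are a harmonic series: 7.6 and 11.4 kHz are integer multiples of the 3.8 kHz fundamental. A one-function sketch makes the relation explicit:

```python
def harmonic_series(fundamental_khz, n):
    """Return the first n harmonics (integer multiples) of a fundamental
    frequency, rounded to one decimal place in kHz."""
    return [round(fundamental_khz * k, 1) for k in range(1, n + 1)]

# The three tones Ehret and Riecke report are the first three harmonics
# of a 3.8 kHz fundamental:
tones = harmonic_series(3.8, 3)   # [3.8, 7.6, 11.4]
```

This also fits the observation that the lowest frequency is the most critical: removing the fundamental removes the anchor of the whole series.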

Andalman and Fee studied the basal ganglia–forebrain interaction in birds learning to sing. They stated: "Birdsong is a complex motor behavior that, like many human motor skills, improves with practice. Songbirds learn to sing by imitation, using auditory feedback to compare their own vocalizations with the memorized song of a tutor. Learning birds initially produce a highly variable juvenile song that, after thousands of repetitions, converges to a stable adult song, often a remarkably precise imitation of the tutor song. Like many learning tasks in mammals, this goal-directed behavior requires a basal ganglia-thalamocortical circuit known as the anterior forebrain pathway… evidence suggests that the basal ganglia are necessary to express recently learned behavior and that changes in neural activity in response to learning appear first in basal ganglia circuits."


Sound is created when waves in the air interact with our brain. Waves are a fundamental feature of the universe. Waves are oscillations or vibrations. A vibrating string is fixed at both ends and transfers its motion to surrounding air molecules. My study of waves has taken many forms, but none more important than observing waves on water surfaces. When water surfaces move, they naturally create waves, often in complex patterns. Waves on water have the characteristics of sound waves, light waves, and the waves we create in our electronic circuits. We have brain waves that can be recorded with scalp electrodes. When we are resting and relaxing, low-frequency sine waves dominate in the range of 10 cycles per second (10 Hz). A sine wave is a pure wave with a smooth contour and a steady rhythm.
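A pure sine wave like the 10 Hz rhythm described above can be generated in a few lines. This sketch is illustrative; the 250 samples-per-second rate is an arbitrary choice of mine, not something from the text:

```python
import math

def sine_wave(freq_hz, duration_s, sample_rate=250):
    """Sample a pure sine wave -- a smooth, steadily repeating oscillation,
    like an idealized 10 Hz resting rhythm."""
    n_samples = int(duration_s * sample_rate)
    return [math.sin(2 * math.pi * freq_hz * t / sample_rate)
            for t in range(n_samples)]

# One second of a 10 Hz wave: ten full cycles, each 25 samples long.
alpha = sine_wave(10, 1.0)
```

Because the wave is strictly periodic, the value at the start of each 25-sample cycle returns to zero, which is exactly the "steady rhythm" of a pure tone.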

Waves can be measured in terms of their amplitude (height) and frequency (rate of oscillation). A wavelength is the distance from one peak to the next. Water-wave models can demonstrate the properties of sound waves, which include propagation, reflection, refraction, absorption and interference patterns. When sound waves reflect off surfaces, we hear resonances and echoes.
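Frequency and wavelength are linked through the wave's propagation speed by the standard relation λ = v / f. A small sketch using the speed of sound in air at 20 °C; the 440 Hz example frequency is my addition:

```python
SPEED_OF_SOUND = 343.0  # m/s, air at 20 °C

def wavelength_m(frequency_hz, speed=SPEED_OF_SOUND):
    """Wavelength (peak-to-peak distance in meters) from wave speed and
    frequency, using lambda = v / f."""
    return speed / frequency_hz

# Concert A (440 Hz) in air spans roughly 0.78 m from one peak to the next.
lam = wavelength_m(440)
```

The same relation holds for water waves and light, only with very different propagation speeds.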

When you are considering a tsunami wave, you want to know the speed of propagation of the wave so that you can estimate the time of arrival on a distant shore. Water waves involve water molecules in an up and down motion (in an elliptical path). In a most intriguing fashion, a wave is an epiphenomenon of water that moves forward even though water molecules move up and down. If you are sitting in a boat in the middle of the ocean, a tsunami wave moves your boat up and down so gently that you hardly notice. When the wave comes ashore, everything changes: the wave height increases as the ocean floor rises until the wave breaks and water lunges forward, crashing into the land with a powerful destructive force.
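The propagation speed can be estimated with the standard shallow-water approximation v = √(g·h), which holds when the wavelength is much larger than the ocean depth (true for tsunamis). The formula is not given in the text, and the depth and distance below are illustrative values:

```python
import math

G = 9.81  # m/s^2, gravitational acceleration

def tsunami_speed(depth_m):
    """Shallow-water wave speed sqrt(g * h): valid when the wavelength is
    much greater than the depth, as it is for tsunamis."""
    return math.sqrt(G * depth_m)

def hours_to_shore(distance_km, depth_m):
    """Rough arrival-time estimate over a path of constant depth
    (a simplifying assumption; real ocean floors vary)."""
    return (distance_km * 1000.0) / tsunami_speed(depth_m) / 3600.0

# Over a 4000 m deep ocean the wave travels at ~198 m/s (about 713 km/h),
# so a shore 1000 km away has roughly 1.4 hours of warning.
```

The same formula also explains the behavior at the shore: as the depth h shrinks, the wave slows, and its energy piles up into the towering breaker described above.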

When we describe musical sounds, the pitch (frequency) and loudness (amplitude) are the two most important measurements. Sound waves in air at 20° C travel at about 343 meters/second. Waves in air enter the ear canal and strike the ear drum, causing it to deflect and move the three tiny bones in the middle ear, which in turn push on the oval window of the cochlea.

Sound waves traverse the ear canal and deflect the tympanic membrane, which causes the small bones of the middle ear to produce a fluid wave in the cochlea. The organ of Corti in the cochlea converts the fluid waves into nerve impulses. The organ of Corti is suspended on a membrane that supports hair cells, which convert membrane fluctuations into nerve impulses. Hair cells are tuned to different frequencies, so that the output from the cochlea is already divided into frequency bands. Outer hair cells adjust the gain of the cochlea. According to Corey: "Hair cells respond to the vibration of the basilar membrane by pushing back on it, exerting force with just the right amplitude and phase to amplify the vibration, especially for faint sounds, by 100-fold or more. The movement of the basilar membrane is amplified from hundredths of nanometers to around a nanometer for the quietest perceptible sounds, from a nanometer to several nanometers at conversational level, but not much at all for loud sounds. In a healthy ear, movement is therefore not linearly proportional to sound level, but shows a compressive nonlinearity… outer hair cells in each region of the long organ of Corti only amplify sound of a particular frequency so that each region is exquisitely tuned to a characteristic frequency (CF) and not to other frequencies. Inner hair cells then sense the amplified vibration, and send a frequency-specific signal to spiral ganglion neurons of the eighth cranial nerve." (Corey, D. Sound Amplification in the Inner Ear: It Takes TM to Tango. Neuron, Vol. 28, 7–9, October 2000)
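The compressive nonlinearity Corey describes can be caricatured with a toy gain function. This is purely illustrative: the functional form and its parameters are my assumptions, chosen only to reproduce the rough numbers in the quote (about 100-fold amplification for faint sounds, little or none for loud ones), not a model of real cochlear mechanics.

```python
def basilar_displacement_nm(passive_nm, max_gain=100.0, knee_nm=0.1):
    """Toy compressive amplifier: faint inputs receive close to max_gain,
    and the gain falls off as the passive vibration grows, mimicking the
    cochlea's compressive nonlinearity. Illustrative only -- the formula
    and the knee_nm parameter are assumptions, not measured values.
    """
    gain = max_gain / (1.0 + passive_nm / knee_nm)
    return passive_nm * max(gain, 1.0)   # never attenuate below unity

# Faint sound: a 0.01 nm passive vibration is boosted roughly 90x,
# to around a nanometer, as in the quote.
# Loud sound: a 10 nm vibration passes through essentially unamplified.
```

The key property is that output grows much more slowly than input, compressing a huge range of sound levels into the narrow mechanical range the hair cells can sense.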