Communication by means of sound is not always easy. Sound suffers considerable attenuation and degradation close to the ground. Crickets have adapted to this by exploiting sharply tuned mechanical systems: coevolution ensures that the calling song of males and the directional hearing of females are tuned to the same frequency.
Most studies of physiological mechanisms are, for obvious reasons, carried out in laboratories and on domesticated or captive animals. It may therefore be difficult for the physiologist to imagine how a particular mechanism can be adapted for the success of the animal in its natural environment. The present study illustrates this problem. Most people know that field crickets emit pure-tone calling songs, whereas most other insects, frogs, birds, and mammals emit calls with several or many frequency components. Is this a coincidence? Or are the pure tones an adaptation to the cricket’s special way of life? Before trying to answer these questions, we should become familiar with a few aspects of the biology of field crickets, which are the most-studied crickets (see Ref. 2 for review).
Field crickets dig burrows in the ground. The sexually mature male stands next to the entrance of his burrow and sings the calling song, ready to disappear into the burrow if a predator approaches. Sexually mature females, when properly motivated, walk toward the singing males. Both singing and walking are risky occupations, so the sexes have divided the risks associated with mating between them. When the female has found a male, the male switches to a courtship song, and the preliminary ceremonies of mating can begin.
Sound communication close to the ground
Observations in the field have shown that the females march toward singing males that are up to several meters away. In other words, at this distance, they can both hear the songs and determine the direction of sound incidence. Humans can hear the calling songs at distances of several tens of meters. Our hearing is more sensitive, however, and we keep our heads almost 2 m above ground and experience only little excess attenuation of sound (that is, the attenuation in addition to that due to distance). However, the crickets are close to the ground, where the acoustic conditions are very different. Measurements with microphones and small loudspeakers placed a few centimeters above ground and some meters apart reveal a much larger excess attenuation than in free air. The excess attenuation increases greatly with sound frequency.
One other factor affects the choice of carrier frequency for the “long-distance” calling songs (relative to the body length of a cricket, a few meters is really a long distance!). Loudspeakers are only efficient sound emitters when they are not much smaller than the wavelength of the emitted sound. For the size of a cricket, one would expect the efficiency of sound emission to drop dramatically below a few kilohertz. On the other hand, as noted above, excess attenuation of sound increases with frequency in the communication channel a few centimeters above ground. These factors leave only a narrow frequency range that can be exploited for the calling songs: above 3 kHz and below 6 kHz! Therefore, crickets could not use broad-band songs like those of many grasshoppers or bushcrickets for their calling songs.
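As a rough check on this frequency window, the wavelengths at the edges of the band can be compared with the size of the animal. A minimal sketch (the ~2 cm body length and the round frequencies are illustrative assumptions, not measurements from this study):

```python
# Wavelengths at the edges of the usable calling-song band, compared with
# an assumed cricket body length of ~2 cm (illustrative figure only).
C_AIR = 343.0        # speed of sound in air at ~20 degrees C, m/s
BODY_LENGTH = 0.02   # m, assumed for comparison

def wavelength(freq_hz: float) -> float:
    """Acoustic wavelength in air at the given frequency."""
    return C_AIR / freq_hz

for f in (3_000.0, 4_500.0, 6_000.0):
    lam = wavelength(f)
    print(f"{f / 1000:.1f} kHz: wavelength {lam * 100:.1f} cm, "
          f"{lam / BODY_LENGTH:.1f}x body length")
```

Even at 6 kHz the wavelength is nearly three times the assumed body length, which illustrates why a cricket-sized emitter rapidly loses efficiency toward lower frequencies.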
But why do crickets use pure tones? At a distance of, for example, 1 m from the ground, a pure-tone signal would be a bad choice because the interference of directly transmitted sound and sound reflected from the ground would produce “silent spots” (points in space where the sounds arrive totally out of phase and are thus cancelled). This does not happen close to the ground, where pure tones are “permissible.” Two possible explanations for the use of pure tones emerge: first, a pure tone emitted by a highly tuned resonator can have a high intensity, and, second, a pure-tone carrier makes it possible to obtain a large signal-to-noise ratio for the transmission. The evidence for the latter explanation is not quite convincing. The sound emitter is highly tuned, but the hearing threshold curve shows a much smaller preference for the carrier frequency of the calling song (~4.5 kHz in the two most-studied Gryllus species).
How crickets sing
The evidence for the first explanation is more convincing. Sound is produced in a two-step process. The first step is a frequency multiplication (known as stridulation), which is necessary because muscles are too slow to cause a 4- to 5-kHz vibration. Instead, fairly slow muscular contractions are used for rubbing the wings against each other. During the movements leading to a closure of the wings, a scraper on one wing hits a series of cuticular teeth on the other. The result is a wing vibration with a spectrum in which the tooth-impact rate (the number of teeth hit per unit time) corresponds to the lowest component in a harmonic series.
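The claim that a periodic series of tooth impacts yields a spectrum of harmonics of the impact rate can be illustrated numerically. A minimal sketch, assuming an idealized impulse train at a 4.5-kHz impact rate (the sample rate is chosen for convenient integer sample spacing; the probe is a plain single-frequency DFT):

```python
import cmath

SAMPLE_RATE = 90_000   # Hz, chosen so one impact period is exactly 20 samples
TOOTH_RATE = 4_500     # impacts per second (illustrative)
N_SAMPLES = 1_800      # 20 ms of signal

# Idealized stridulation: a unit impulse at every tooth impact.
period = SAMPLE_RATE // TOOTH_RATE
signal = [1.0 if k % period == 0 else 0.0 for k in range(N_SAMPLES)]

def dft_magnitude(sig, sample_rate, freq_hz):
    """Normalized DFT magnitude of `sig` at a single probe frequency."""
    acc = sum(s * cmath.exp(-2j * cmath.pi * freq_hz * k / sample_rate)
              for k, s in enumerate(sig))
    return abs(acc) / len(sig)

# Energy sits at harmonics of the tooth-impact rate, not in between.
mags = {f: dft_magnitude(signal, SAMPLE_RATE, f) for f in (4_500, 6_750, 9_000)}
```

For this idealized train, the magnitudes at 4.5 and 9 kHz (the first two harmonics) are equal, while a probe midway between them vanishes; a real wing vibration would of course weight the harmonics unevenly.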
The second step is to obtain an efficient sound emission. This requires a reasonably large surface, which vibrates with a large amplitude. In the male cricket, this is obtained in a triangular part of the wing known as the harp. The harp is a very lightly damped resonating system, and its resonance frequency determines the carrier frequency of the calling song (12). The tuning to the resonance frequency has an average Q3 dB [see Fig. 1 for an explanation of Q3 dB] of 25, which means that the vibration amplitude is much larger at the resonance frequency than at most other frequencies (Fig. 1). This in turn means that the lowest harmonic in the series produced during the stridulation will dominate the emitted sound. The cricket emits an almost pure tone.
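The practical meaning of a Q3 dB of 25 can be made concrete with the standard amplitude response of a driven, lightly damped resonator (a sketch under textbook second-order-system assumptions; only the 4.5-kHz resonance frequency and the Q value are taken from the text):

```python
import math

F0 = 4_500.0   # Hz, harp resonance (calling-song carrier)
Q = 25.0       # quality factor reported for the harp

def amplitude(freq_hz: float) -> float:
    """Relative amplitude of a driven second-order resonator
    (normalized so the response far below resonance is 1)."""
    r = freq_hz / F0
    return 1.0 / math.sqrt((1.0 - r * r) ** 2 + (r / Q) ** 2)

peak = amplitude(F0)             # equals Q with this normalization
detuned = amplitude(F0 + 500.0)  # response 500 Hz above resonance
drop_db = 20.0 * math.log10(peak / detuned)

# The half-power (-3 dB) points lie roughly F0 / (2 * Q) on either side
# of resonance, i.e., a bandwidth of about 180 Hz.
edge = amplitude(F0 * (1.0 + 1.0 / (2.0 * Q)))
```

Driving such a resonator only 500 Hz off its peak costs roughly 15 dB of amplitude, which illustrates why a male is effectively locked to the frequency of its own harp.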
With such a highly tuned resonator, it is very important that the tooth-impact frequency fits the resonance frequency of the harp. The harp is a dead mechanical structure, the properties of which cannot be affected by the cricket. In contrast, the stridulatory movements involve nervous and muscular elements. The tooth-impact rate may thus be expected to vary with temperature; yet the carrier frequency of the song remains constant over a wide temperature range. The solution proposed for this mystery is that the system works like the release mechanism in a grandfather clock (3). In this analogy, the muscular tension corresponds to the weight, the harp to the pendulum, and the scraper and the row of teeth to the release mechanism. The idea is that the large vibration from the harp helps the scraper to come free from a tooth, so that it can hit the next tooth a little later. This mechanical feedback between the oscillating harp and the wing movement exerts a very strict control over the movement of the wings. As a result, the tooth-impact rate is always equal to the harp resonance frequency, despite variations in tooth spacing and muscular tension. The hypothesis has been subjected to several tests (adding masses, removing the harp, removing some teeth, and so on), the results of which correspond to the predictions.
The need for a uniform carrier frequency
So far, we have seen that it is to the cricket’s advantage to produce an intense pure tone and that it has “invented” an ingenious mechanical system to achieve this. However, the story is not quite logical. Because of the dead nature of the harp, each cricket has to stick to its resonance frequency to be as loud as possible. However, why do all males in a population sing with almost the same frequency? In other words, is it a disadvantage for a cricket to be born with a harp that is tuned 500 Hz above or below the normal carrier frequency? As explained above, the clock mechanism adjusts the other components to follow the resonance frequency of the harp, so emitting pure tones at other frequencies than the normal one should not be a problem. Also, the hearing threshold of the females is only moderately tuned to the normal song frequency, so singers using slightly different frequencies should have almost identical chances of being heard. However, another aspect of hearing is sharply tuned: the ability to determine the direction of sound incidence. This has been found by several investigators, both in behavioral studies (e.g., Ref. 1) and in investigations of neuronal responses, but the reason for this frequency selectivity has only recently been discovered.
The physics of directional hearing
Humans use two strategies for determining the direction of a sound source. We can compare the times of arrival of sound at each ear, and we can also exploit the difference in sound pressure at the ears caused by the interference (diffraction) of the sound with our body. Neither of these mechanisms is available to crickets or most other small animals (see Refs. 4 and 6 for reviews). Instead, in small animals, the sound reaches both the inner and outer surfaces of their eardrums, transforming each ear into a directional sound receiver.
The ears of crickets are located in the front legs, just below the “knee.” The eardrum is freely exposed to sounds reaching its external surface, but it also receives sound at its inner surface. An air-filled tube known as the acoustic trachea connects the eardrum with a spiracular opening at the lateral surface of the thorax. The acoustic tracheae at the two sides of the body are connected by a transverse trachea. In addition to the sound acting at its outer surface, the eardrum may, at its inner surface, receive sounds that have entered the tracheal (respiratory) system at three different positions, i.e., at the ipsilateral and the contralateral spiracles and at the other ear, respectively. The cricket ear is thus, potentially, an acoustic four-input device (Fig. 2).
For many years, the magnitudes and time relationships of the sounds arriving at the eardrum from these inputs were disputed. Some investigators, including myself, believed that the ear had to receive sound from both sides of the body to obtain a useful directionality. In contrast, the observation that disrupting the transverse trachea did not hinder sound localization in very homogeneous sound fields was, by other investigators, claimed to “toll the death of all such cross-body theories” (2).
A physical understanding of this complicated acoustical system requires a method for determining what happens to the sound propagating inside the narrow tracheal tubes. These tubes are just a few hundred micrometers in diameter, and microphones are much too large to fit into such tiny spaces. Almost 20 years after I first became interested in such systems, it finally occurred to me that I could use the animal’s own eardrum as a microphone (in hindsight, it is difficult to understand why such a simple idea can take so long to appear!). The vibrations of the eardrum are measured with laser vibrometry, a thin (1 mm in diameter) microphone probe is used to measure the local sound pressure, and a small sound source and some shielding ensure that sound can be applied to only one input at a time. First, the eardrum “microphone” is calibrated with sound at its external surface. Then, the transmission gain, i.e., the change in amplitude and phase, can be determined for sound propagating from an input to the inner surface of the eardrum.
For a physical analysis of the directional hearing (9), one also needs to know how the amplitude and phase of sound at the ears and spiracles vary with the direction of sound incidence. These data are obtained by placing the animal at the center of a carousel carrying a loudspeaker. The tip of the probe microphone is positioned at each sound input in turn, and sound is sent from 12 evenly spaced directions. From these data and the transmission gains, one can calculate the amplitude and phase of the sounds arriving at the eardrum. In Fig. 3, three sounds [i.e., sounds at the outer surface and arriving at the inner surface from the ipsilateral and contralateral spiracles (IT, IS, and CS, respectively)] acting on the eardrum of the cricket Gryllus bimaculatus at the calling-song frequency (4.5 kHz) have been drawn as vectors. For this drawing, 180° have been added to the sounds acting on the inner surface, thus “moving” these sounds to the outer surface. The pressure proportional to the force driving the eardrum (the dotted-line vector) can then be found simply by adding the three vectors. The directional pattern thus calculated is very close to that observed when the eardrum vibrations are measured with laser vibrometry at the same directions of sound incidence. Thus it is reasonable to assume that only these three sounds make a significant contribution to the directional properties of the ear (sound arriving from the other ear is weak and can be ignored).
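The vector (phasor) addition described here is easy to reproduce. A minimal sketch with invented amplitudes and phases (the numbers below are illustrative only, not the measured values behind Fig. 3); only the phase of CS differs between the two sound directions:

```python
import cmath
import math

def phasor(amplitude_db: float, phase_deg: float) -> complex:
    """Complex phasor from a relative amplitude in dB and a phase in degrees."""
    return 10.0 ** (amplitude_db / 20.0) * cmath.exp(1j * math.radians(phase_deg))

# Illustrative inputs (180 deg already added to the inner-surface sounds):
IT = phasor(0.0, 0.0)       # sound at the outer surface of the eardrum
IS = phasor(-2.0, 150.0)    # arriving via the ipsilateral spiracle

# The phase of CS varies with sound direction; compare two incidences.
cs_ipsilateral = phasor(-4.0, 120.0)
cs_contralateral = phasor(-4.0, 300.0)

drive_ipsi = abs(IT + IS + cs_ipsilateral)
drive_contra = abs(IT + IS + cs_contralateral)
difference_db = 20.0 * math.log10(drive_ipsi / drive_contra)
```

With these made-up numbers, the same three components drive the eardrum a few decibels harder for ipsilateral sound, purely because the phase of CS has rotated; this is the principle behind the directivity pattern, even though the real measured phasors differ.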
The importance of the phase relationships
The sigmoid directivity pattern in Fig. 3 combines good sensitivity in the forward (0°) and ipsilateral (90°) directions with a much lower sensitivity to contralateral (270°) sounds. This is an ideal pattern for animals walking toward a sound source. The main contributor to this pattern is the changing phase of CS, which reflects the different times of arrival of sound at the two sides of the body. The point is, however, that the phase variation of CS would not lead to such a pattern if the coarse phase relationships of the three sounds had not been right. The crucial importance of the phase relationships can be confirmed by simple mathematical modeling.
The phase relationship between IT and IS is determined partly by the direction of sound incidence but mainly by the time needed for the sound to travel in the narrow tube from the spiracle to the ear. In contrast, the distance from the contralateral spiracle to the eardrum contributes only little to the relative phase of CS. Between 0 and 12 kHz, the phase part of the transmission gain for IS changes by ~200°, whereas for CS the change is no less than 980°, despite the fact that the physical distances are fairly similar.
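These phase changes can be translated into equivalent acoustic path lengths, assuming nothing more than plane-wave propagation at the free-field speed of sound (the in-tube sound speed is actually somewhat lower, so this is only a rough sketch): a pure delay line of length L shifts phase by 360° · f · L / c.

```python
C_AIR = 343.0  # m/s, free-field speed of sound (in-tube speed is lower)

def equivalent_path_length_m(phase_change_deg: float, bandwidth_hz: float) -> float:
    """Length of a pure delay line producing the given phase change
    across the given bandwidth: L = (dphi / 360) * c / df."""
    return (phase_change_deg / 360.0) * C_AIR / bandwidth_hz

is_path = equivalent_path_length_m(200.0, 12_000.0)  # ipsilateral spiracle -> ear
cs_path = equivalent_path_length_m(980.0, 12_000.0)  # contralateral spiracle -> ear
print(f"IS: {is_path * 100:.1f} cm, CS: {cs_path * 100:.1f} cm")
```

The ~200° change corresponds to a delay line of about 1.6 cm, on the order of the physical distance, whereas the ~980° change would require nearly 8 cm, several times the actual path. Something in the tracheal system must therefore delay the contralateral sound far beyond simple propagation.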
The dramatic change of phase with frequency means that a proper phase relationship between CS, IS, and IT exists only within narrow ranges of frequency. The calling-song frequency lies at an optimum, where the difference between the eardrum vibrations at sound directions of 30° and 330° is ~10 dB (Fig. 4C); 30° and 330° are typical extreme positions during phonotactic meandering and thus provide a useful measure of the gradient of auditory sensitivity in the forward direction. This gradient is narrowly tuned, with a Q3 dB of 14.
The tuning of the directional hearing is a direct consequence of the dramatic phase development in the sound path from the CS to the eardrum. The physical mechanism responsible for the phase shifting is not known, but the central membrane in the transverse trachea is involved (5, 7, 15). A hole of 10–25% of its area (made with the tip of a human hair) is sufficient to change the phase to values in reasonable agreement with the physical distance of the sound path. The amplitude of the transmitted sound also decreases, and the sigmoid directivity pattern with a large forward gradient disappears (Fig. 4, A and B).
Why does the cricket have to be tuned?
Thus the sound communication of the cricket is narrowly tuned. The Q3 dB values for the harp and for the directional hearing are ~25 and 14, respectively. But why does the cricket need such a large gradient of auditory sensitivity in the forward direction at the frequency of the calling song? In the laboratory, the minimum angle for discriminating the side of the sound source is 12–15° (13, 14), and the limited directionality left after disruption of the transverse trachea or other operations does not hinder successful phonotaxis.
Recently, we examined the degradation of directional sound cues close to the ground in grassland (11) and found that the amplitude and phase of sound at the ears and spiracles vary in a much less predictable manner with sound direction than in the homogeneous sound fields of the laboratory. The degradation of the directional cues is especially prominent very close to the ground, where it also increases greatly with frequency. The communication channel of field crickets is thus more difficult to use than previously expected. Apparently, both the excess attenuation and the degradation of directional cues increase with frequency, and this forces the crickets to use fairly low frequencies for the calling song. However, even at low frequencies, the directional cues are so degraded that the cricket needs several decibels more directionality than the 1–2 dB that suffices in homogeneous sound fields.
When we try to understand the situation of the cricket, it is useful to consider other small insects. In grasshoppers, the two ears are connected with tracheal air sacs, and these animals also exploit a pressure difference mechanism in their directional hearing. In a study of a large and a small grasshopper (10), we found the directional hearing of the small grasshopper to be poor at frequencies below 8–10 kHz. The limiting factor for directional hearing was not the small magnitude of the directional cues but, rather, the short propagation time for sounds arriving at the inner surface of the eardrum from the other ear. Calculations showed that the directivity would have improved considerably if the grasshoppers had also “invented” a phase shifter for creating a larger phase delay.
The cricket thus stands out as an animal that has solved a major problem in auditory biophysics. The solution is not perfect, since it only operates within a narrow frequency range. However, combined with the highly tuned sound generator, this solution allows crickets to engage in long-distance sound communication through a channel with much excess attenuation and sound degradation. The prerequisite for successful communication is for the sound generator and directional hearing to be tuned to the same narrow frequency band. It is now easy to see that individuals deviating just 500 Hz from the normal frequency, either in their calling song or directional hearing, would have a serious problem.
Students of evolution may ask: Which came first, the tuning of the calling song or the tuning of the directional hearing? This “egg and hen” problem is open to speculation. Perhaps we may be able to guess at the evolution if we take an interest in the wide variety of crickets with different habitats and communication strategies. It should also be interesting to investigate whether, and how, the phase-shifting problem has been solved by other animals. In my opinion, comparative biophysics is a very interesting scientific field.
I am most grateful to Rohini Balakrishnan, Ole Næsbye Larsen, Bertel Møhl, and Roy Weber for comments on the manuscript. The Centre for Sound Communication is financed by the Danish National Research Foundation.
© 1998 Int. Union Physiol. Sci./Am. Physiol. Soc.