Spoken digits, words, CV syllables, and Morse code are recognized and recalled better by the right ear than the left; nonverbal sounds, such as melodies, tones, sonar beeps, and toilet flushes, are processed better by the left ear than the right. In general, however, ear differences have been obtained with dichotic, but not monaural, stimulation. To account for these findings, Kimura proposed that under dichotic stimulation the ipsilateral auditory connections are inhibited, thus giving the right ear “privileged access” to the language‐dominant left hemisphere. Several findings, involving monaural stimulation and verbal and nonverbal interference tasks among others, are presented that indicate that Kimura's theory is inadequate. An alternative model based on interhemispheric transfer of information is proposed.