Electronic models that process signals to simulate omissions and distortions in perception by the deaf are useful analytical tools. These models can help in determining the degree to which the represented loss and/or distortion of acoustic speech cues, isolated from any central pathology, affects speech intelligibility. They can also help in evaluating the potential effectiveness of different modes of compensatory signal processing. Three examples of such models, designed on the basis of measured characteristics of the deaf subjects’ residual hearing, are described. A model of recruitment combined with accentuated high‐frequency loss is used to study speech perception in noise. It suggests an explanation for the previously reported ability of sensorineurals to tolerate the masking of speech by white noise at least as well as normals do, and for the special vulnerability of the speech perception of such deaf subjects to masking signals that have the spectral distribution of speech. A second model, which simulates the severely restricted dynamic range of hearing in profound deafness, predicts that this one factor alone can destroy the intelligibility of amplified speech by making the perceived sound drop out at frequent intervals. Finally, a model that simulates reduced frequency‐discriminating ability is described, as an example of a model that deals with aspects of perception other than loudness.
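The second model's prediction can be illustrated with a minimal level-domain sketch. The code below is not the authors' implementation; it assumes a simple mapping in which speech-envelope levels below the elevated hearing threshold become inaudible (dropouts) and levels above the discomfort ceiling are clipped, with all function names, the toy envelope, and the 10-dB residual range chosen purely for illustration.

```python
import numpy as np

def simulate_restricted_dynamic_range(envelope_db, threshold_db, ceiling_db):
    """Map input levels (dB) through a hypothetical narrow-range ear:
    levels below threshold_db are inaudible (perceived sound drops out);
    levels above ceiling_db are clipped at the discomfort level."""
    audible = envelope_db >= threshold_db
    perceived = np.clip(envelope_db, threshold_db, ceiling_db)
    perceived[~audible] = np.nan  # inaudible segments: dropouts
    return perceived, audible

# Toy speech-envelope trace (dB) fluctuating over roughly 35 dB
envelope = np.array([70, 55, 80, 62, 48, 75, 58, 85, 50, 66], float)

# Profound loss: only a 10-dB range between threshold and discomfort
perceived, audible = simulate_restricted_dynamic_range(envelope, 72, 82)
dropout_fraction = 1 - audible.mean()
```

With these illustrative numbers, seven of the ten envelope samples fall below threshold, so 70% of the trace drops out even though the signal is amplified, consistent with the abstract's claim that a restricted dynamic range by itself can make perceived speech drop out at frequent intervals.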