David Kreiner. 21st Century Psychology: A Reference Handbook. Editors: Stephen F. Davis & William Buskist. Volume 1. Thousand Oaks, CA: Sage Publications, 2008.
It seems easy for most people to say something that comes to mind, to understand the words coming out of another person’s mouth, or to read the words in a sentence like this one. We hardly even think about what our brains have to do in order to understand and produce language. However, our language abilities are so sophisticated that even the most powerful computers cannot match the language abilities of a five-year-old child. What makes it so easy for children to learn language? What is language?
The study of how we understand and produce language is an important part of modern psychology. In the years after World War II, there was a renewed interest in studying the mind. The previous decades, especially in American psychology, were dominated by the behaviorist perspective, which excluded the study of the mind per se. One spark that helped lay the groundwork for a new perspective was the argument that language abilities cannot be explained without studying the mind (Chomsky, 1959). This argument helped convince many psychologists that it was both acceptable and worthwhile to study mental processes in addition to overt behavior.
The questions of what language is and how it works have captured the imaginations of scholars in diverse fields including psychology, linguistics, philosophy, computer science, and anthropology. Many people feel that language is at the core of what it means to be human. This chapter begins by discussing how language can be defined. We will then review some of the methods used to study language abilities. Three central issues in the psychology of language will then be summarized: how language evolved, how language is acquired, and the relationship between language and thought. We will conclude by pointing out how the psychology of language can be applied to our lives.
What Is Language?
Language is a way of communicating, but it is more than that. There are many ways of communicating other than using language. We can communicate with one another through art, music, and nonverbal behaviors such as facial expressions. What makes language different from these other communication systems? George Miller (1990) pointed out that language is the only communication system that is also a representational system. In other words, language allows us both to represent the world in our minds and to talk about those representations of the world with one another.
Properties of Language
Three defining properties of language are that it is hierarchical, rule-governed, and generative. Hierarchical means that language is composed of small units that can be combined into larger units, which can themselves be combined into larger units, and so on. Speech sounds can be combined into words, words can be combined into phrases, and phrases can be combined into sentences.
The way that the units are combined is not arbitrary, but is regular and rule-governed. The speakers of a language follow these rules even if they cannot explain exactly what the rules are. Any competent speaker of English knows that “The big cat ate the pancake” is a sentence, and that “Cat big the the pancake ate” is not a sentence. But most of us cannot state exactly what rules were followed in the first example and broken in the second example. We know the rules of language tacitly and follow them without having to think about them.
Language is generative in the sense that a relatively small number of units can be combined into a much larger number of possible messages. In fact, it is easy to show that there is an infinite number of possible sentences in any language. A new sentence can be generated from an existing sentence by adding another phrase, as in the following:
“The cat ate the pancake.”
“The cat ate the pancake and the waffle.”
“The cat ate the pancake and the waffle and the egg.”
We could continue adding phrases indefinitely, which demonstrates that there is an infinite number of possible sentences that can be produced by any speaker of the language. The generative property of language allows us to produce new sentences, including ones that have never been heard before, and still be understood. Therefore, it is impossible to learn a language by simply memorizing all the sentences. A language user must know the units that make up the language (such as speech sounds and words) as well as the rules for combining the units.
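The phrase-appending argument above can be sketched in a few lines of code. This is a toy illustration only; the word lists and function name are invented for the example, not part of the chapter.

```python
# Toy illustration of the generative property: a fixed vocabulary plus a
# recursive conjunction rule ("... and X") yields an unbounded set of
# sentences. All names here are illustrative.

nouns = ["the pancake", "the waffle", "the egg"]

def sentence(depth):
    """Build 'The cat ate X and Y and ...' with `depth` conjoined nouns."""
    objects = " and ".join(nouns[i % len(nouns)] for i in range(depth))
    return f"The cat ate {objects}."

for d in range(1, 4):
    print(sentence(d))
# The cat ate the pancake.
# The cat ate the pancake and the waffle.
# The cat ate the pancake and the waffle and the egg.
```

Because `depth` can grow without bound, no finite list of memorized sentences could cover the language; a speaker must know the units and the combination rule.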
To determine whether a system for communicating is truly a language, we need to determine whether the system is hierarchical, rule-governed, and generative. Consider the use of gestures such as moving one’s hands to communicate. We often use gestures while we are talking to convey additional information to listeners. However, gestures do not have the properties of language when they are used in combination with spoken language. Gestures that are used in combination with speech are not made up of units that can be combined into larger units. Interestingly, though, when gestures are used on their own, without speech, to communicate, as in sign language used by hearing-impaired individuals, the gestures do have the properties of language (Goldin-Meadow, 2006).
Language is not a single process. This chapter will focus mostly on spoken language, but it is important to keep in mind that language can make use of other modalities, too. For example, language can be communicated in gestural form (such as American Sign Language). As you read this chapter, you are processing language in its written form. There are important differences among these modalities (speech, sign language, and written language).
For each of these modalities, scientists who study language investigate both the production and comprehension of language. When studying spoken language, we are interested in both speech production and in how listeners perceive and understand speech. When studying written language, we are interested in how writers translate their ideas to a series of ink marks on a page as well as how readers interpret those ink marks.
The Structure of Language
Our ability to produce and perceive language is remarkable when we consider the problems that confront our brains. The hierarchical structure of language means that language users must be able to understand and use several types of information just to understand a single sentence. In order to comprehend the spoken sentence “The cat ate the pancake,” the listener must be able to process information about the speech sounds, the meanings of the words, the grammatical structure of the sentence, and the goal of the speaker in uttering the sentence.
The speech sounds that make up words are called phonemes. The word “cat” comprises three phonemes. (Phonemes should not be confused with the letters used to spell the word, as we are considering a spoken sentence in this example.) The three phonemes roughly correspond to the sounds kuh, ah, and tuh. Each phoneme is actually a category of sounds because we perceive a variety of acoustically different sounds as the same phoneme. The kuh in “cat” is physically different from the kuh in “pancake,” even though most English speakers perceive them as the same. The acoustic information in a phoneme can vary dramatically depending on the word it is in, the voice of the speaker, and the rate of speech. Despite these differences, listeners perceive all the examples as one speech sound. The ability to accomplish this feat of classification is one reason why computer speech recognition software is inferior to humans’ ability for speech recognition.
Correctly perceiving phonemes is important for understanding speech, but it is not enough. The listener must be able to combine the phonemes into words and understand what those words mean. The meanings of words, phrases, and sentences represent the semantic level of language. Many words are made up of multiple parts, or morphemes, which carry meaning. For example, the word “cooked” is made up of two morphemes: “cook” and “ed,” with the suffix “ed” indicating that the cooking took place in the past.
To understand what someone is saying, we need to do more than just understand the individual morphemes and words. We need to understand the syntax, or grammatical structure, of the phrases and sentences. Consider the following sentence:
“His face was flushed but his broad shoulders saved him.”
The word “flushed” can be understood two different ways, depending on how the listener processes the syntax. If the listener understands “flushed” to be an adjective that describes the color of the person’s face, then the rest of the sentence is difficult to understand. The listener is left trying to understand why the person’s broad shoulders might have saved him from being embarrassed. But the word “flushed” can also be understood as the past tense of the verb “flush.” Interpreting the syntax in this way completely changes the meaning of the sentence, as it is now obvious that the person’s shoulders saved him from going down the toilet. Note that the phonemes, morphemes, and words are identical, but the meaning is different depending on the listener’s understanding of the syntax.
Even with an understanding of the syntax, the listener or reader can sometimes misunderstand what the speaker or writer intended. Language users must make use of pragmatics, or understanding the purpose of the communication. The question “Can you sit down?” may be interpreted as a request for information about the abilities of the listener. If a doctor asks this question of a patient visiting her office with concerns about joint pain, the patient might conclude that the purpose of the question is to determine whether the patient is capable of sitting. An appropriate response might then be, “Yes, but my knees hurt when I sit for a long time.” Suppose instead that the question “Can you sit down?” is asked by a spectator at a baseball game, directed at an individual standing in front of her. In this context, the listener might interpret the question to mean, “I can’t see the ball game because you’re in my way. Please sit down so I can see.” The listener’s response might be to sit down. Note that in both examples the phonemes, morphemes, words, and syntax are the same, but the meaning of the utterance still differs.
Methods for Studying Language Processing
The study of language is multidisciplinary, with scientists from many different fields investigating language using different methods. The diversity of techniques that have been used to investigate language has resulted in a rich research literature on language processing.
Psychologists often use different types of experimental designs to determine what factors influence particular behaviors. This approach has been helpful in testing theories and models of language processing, which often yield predictions about factors that should influence how quickly or accurately individuals respond in language tasks. For example, some theories of reading predict that high-frequency words (those that are very common, such as “dollar”) should be recognized more quickly than low-frequency words (those that are rarely encountered, such as “vestal”). Numerous experiments have supported this prediction (e.g., Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001). Such evidence has been helpful in evaluating theories that explain how we recognize words.
A more recent tool for testing theories of language is computational modeling, or the use of computer simulations. The idea is to implement the major claims of a theory in computer software, and then test the software to see if it produces results similar to the results found in experimental studies with human participants. Coltheart et al. (2001) demonstrated that their theory of reading, when implemented as a computer model, produced results similar to a wide range of experimental studies with humans. For example, the computer model has difficulty with the same types of words that human readers find more difficult to recognize. The similarity in the pattern of performance between the computer model and the human data shows that the model is a plausible one. A particular advantage of computer modeling is that it forces the theorists to implement their ideas in very specific ways and allows for clear tests of what the model claims. In scientific psychology, a model that yields clear, testable predictions is always preferable to a model that is too vague to be tested decisively.
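The modeling logic described above can be sketched very simply: implement a theory's claim as code, then check that the simulated behavior matches the human pattern. The sketch below is not Coltheart et al.'s model; the frequency counts and timing constants are invented purely to illustrate the method.

```python
import math

# Minimal sketch of computational modeling: encode a theory's claim
# (recognition time falls as word frequency rises) and verify that the
# simulation reproduces the experimentally observed pattern.
# Frequencies (per million) and constants are hypothetical.

word_freq = {"dollar": 800, "table": 500, "vestal": 2}

def recognition_time_ms(word):
    """Simulated recognition time: more frequent words are recognized faster."""
    return 700 - 60 * math.log10(word_freq[word])

# The model should reproduce the word frequency effect found with humans:
assert recognition_time_ms("dollar") < recognition_time_ms("vestal")
```

A real model would be far more detailed, but even this toy version shows the advantage the text describes: the theory's claim must be stated precisely enough to run, so it can be tested decisively.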
Studies of Brain Activity
Advances in technology in recent decades have allowed scientists to study patterns of brain activity related to language processing. It is now possible to learn how the brain processes language by measuring brain activity in individuals as they engage in various language tasks. Brain-imaging technology such as the PET scan has demonstrated, for example, that different brain areas are active when individuals read words and when they listen to words. Researchers have also used electrodes to stimulate portions of the cortex as a method of determining which brain areas are important for language; this technique is called Cortical Stimulation Mapping (J. G. Ojemann, G. Ojemann, Lettich, & Berger, 1989). A variety of brain-imaging technologies can be used to compare individual differences in brain structure to performance in language tasks.
Studies of Language Deficits
One way to find out how a complicated ability such as language works is to study what goes wrong when individuals suffer from impaired abilities. The strategy is to determine what particular abilities have been impaired by damage to the brain and to match those impairments to specific brain areas. When this matching can be done, it provides good evidence that a particular brain area is important for a particular language ability.
Two well-known examples are Broca’s aphasia and Wernicke’s aphasia (Caplan, 1994). Broca’s aphasia is characterized by difficulty producing speech and is associated with damage to an area in the frontal lobe that is now called Broca’s area. An individual suffering from Broca’s aphasia typically has no difficulty understanding language. The individual knows what he or she wants to say, but has great difficulty in producing fluent speech. Broca’s area appears to play an important role in the production of speech.
Individuals who have Wernicke’s aphasia can speak fluently but have difficulty understanding speech. As a result, these individuals may produce sentences that are grammatical but that do not seem to mean anything. Wernicke’s aphasia results from damage to a particular area in the temporal lobe that is now called Wernicke’s area. We can infer that Wernicke’s area plays an important role in helping us comprehend speech.
The location of these areas was first proposed by physicians who performed autopsies on individuals who had demonstrated these impairments. More recently, the locations of Broca’s area and Wernicke’s area have been documented by recording brain activity in living patients, with techniques such as fMRI (functional magnetic resonance imaging).
Analysis of Language Samples
Some of the simplest but most informative evidence about language comes from the analysis of patterns in language samples. Patterns of errors in speech can be revealing about the underlying mental processes that must have produced them. Saying “the nipper is zarrow” instead of “the zipper is narrow” indicates that individual phonemes can be switched between words (Fromkin, 1973). Professor Spooner of Oxford University is reputed to have made a habit of switching phonemes, resulting in amusing sentences such as “You have tasted the whole worm” instead of “You have wasted the whole term.” The linguistic structure of the sentence must be planned before the first word comes out of the speaker’s mouth. If this planning does not take place, it is difficult to explain where the “n” sound came from in “the nipper is zarrow.” It seems obvious that it was borrowed from the beginning of “narrow,” a word that occurs later in the sentence.
A second example of the use of language samples to provide evidence about language processing is the analysis of writing samples. Pennebaker and his colleagues (e.g., Pennebaker & Graybeal, 2001) reported that patterns of word use in language samples can predict differences in health, academic performance, and a variety of behaviors. For example, Pennebaker and colleagues found that greater use of cognitive words (e.g., “know” or “understand”) in writing samples is a good predictor of health improvement.
Biological and Anthropological Research
Language is a biological phenomenon, and valuable lessons can be learned when it is approached from this perspective. Biological and anthropological evidence can be particularly informative about how language abilities evolved. In many respects, we are similar to our primate cousins, but there are noticeable differences in language use. Although other primates sometimes communicate through vocalizations, they do not talk, at least not in anything resembling human language.
Why is it that primates such as chimpanzees cannot be taught to speak? Part of the answer is that their vocal tracts do not allow them to produce the wide range of speech sounds found in human languages. A key biological difference between humans and other primates is the position of the larynx (or “voicebox”): in humans, the larynx sits lower in the throat, permitting a much wider range of sounds. The cost of this anatomical difference is that humans are at much higher risk of choking to death on their food, so the evolutionary advantages of human language must have been powerful enough to outweigh that risk. Lieberman (2000) pointed out that our ancestors must already have had language abilities before this anatomical change occurred; a lowered larynx could confer no benefit on a species that had no language to improve.
The Evolution of Language
An important and controversial question about language is how language abilities could have evolved. The ability to use the same system both to represent the world and to communicate about past, present, and future situations clearly gave our species enormous advantages in surviving and reproducing.
We can get some clues about how language evolved by studying living species that are similar to us, such as chimpanzees. A common misconception is that modern humans evolved from modern chimpanzees or monkeys. This progression is clearly not the case; instead, we share common ancestors with other living species. We can learn what physical and behavioral characteristics our common ancestors had by comparing humans to other species. If two genetically similar living species share a particular characteristic, it is likely that their common ancestor also shared that characteristic. A species does not simply develop a behavior such as language out of thin air; instead, it must evolve from precursor abilities that were present in ancestral species.
One of the differences between humans and other primates is that humans have much greater control over the face and mouth, allowing us to produce a wide variety of sounds and to combine these sounds into rapid sequences. These differences can be traced to a specific gene called FOXP2 (Marcus & Fisher, 2003). Although many other animals have the FOXP2 gene, the human version differs from that in other animals, and this difference is only about 200,000 years old. It is certainly not the case that FOXP2 is the language gene in the sense that it is solely responsible for our language capability. It is, however, a useful clue about how language abilities may have evolved. The change in this particular gene may have allowed our ancestors to develop their existing vocal repertoire into considerably more complex and useful forms of communication.
Other living primate species also communicate with vocal signals. For example, vervet monkeys use alarm calls that alert other vervet monkeys to the presence of a predator. However, vervet monkeys give these alarm calls equally to companions who are already aware of the predator and to those who are not. In contrast, humans take into account the mental states of the individuals with whom they are speaking. Even so, some primates show an apparent precursor to this pragmatic use of language; they change their vocalizations depending on their own social status (Zuberbuhler, 2005).
In terms of speech perception, there are also interesting similarities to and differences from other species. Recall that a phoneme is really a category of speech sounds: The kuh sound in “cat” is physically different from the kuh sound in “pancake,” but humans readily perceive the two sounds as the same phoneme. This ability is referred to as categorical perception. Other species (including chinchillas and quail) can learn to perceive phonemes in specific categories. An interesting difference is that other species require large amounts of systematic training and feedback in order to show categorical perception of phonemes, while human infants require little or no training. This difference suggests that, although other species may be able to acquire some language abilities, the mechanisms for acquiring them are not the same.
We have focused mainly on the evolution of spoken language. It is worth noting that written language abilities are very different in terms of evolutionary history. Writing systems appear to date back only about 6,000 years. For most of history, then, very few individuals have been able to read and write. These facts indicate that the human brain could not have evolved specific abilities for written language. Instead, we must rely on other cognitive abilities, which may explain why humans are so much more successful at acquiring spoken language than at learning to read and write.
Is Language Learned or Acquired?
A central issue in the psychology of language is the question of how we acquire our language abilities. The mystery is that children become proficient in producing and understanding speech very early in life and without formal education. Children do not have to be taught how to speak; they show up for the first day of kindergarten with language abilities approaching those of adults.
Early language development proceeds in a very similar way for different children. Development of speech ability follows a regular sequence, from babbling beginning at about six months of age, to single words by about one year, and then longer utterances. The precise age of each development does vary across children, but the sequence is similar. This developmental sequence is also similar across different languages.
Speech perception abilities also develop in a regular sequence. Infants are born with general speech perception abilities that apply to all languages; they can perceive distinctions between phonemes that occur in any language. However, by about age one, infants have tuned their speech perception abilities to the language spoken in their environment, losing the ability to perceive distinctions in other languages but retaining the ability to distinguish phonemes in their own language (Jusczyk, 1997).
The speed with which most children acquire language is remarkable. Children learn an average of about eight words per day between the ages of 18 months and 6 years. How do children learn words so quickly? The answer is that there is more than one way to learn words. Children make use of several types of information. Younger infants rely mainly on perceptual salience: They associate a word with the most interesting object that they can see. By age two, children rely much more on social cues such as associating a word with an object at which a caregiver’s gaze is directed (Golinkoff & Hirsh-Pasek, 2006).
One of the key problems we face in learning words is being able to segment speech into individual words. As anyone can attest who has listened to people speaking an unfamiliar foreign language, it is very difficult to determine where one word ends and another begins. How do children learn where the word boundaries are? Children appear to take advantage of statistical patterns in speech. For example, infants can take advantage of the fact that the syllable ty is reasonably likely to follow the syllable pre, but ba is very unlikely to follow ty (Saffran, 2003). Thus, when hearing a sentence such as “What a pretty baby,” the infant is more likely to perceive a word boundary between “pretty” and “baby” than in the middle of “pretty.”
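The statistical-learning idea described above (after Saffran, 2003) can be sketched as computing transitional probabilities between syllables and positing word boundaries where those probabilities are low. The syllabified corpus below is invented for illustration.

```python
from collections import Counter

# Sketch of statistical word segmentation: estimate how often syllable b
# follows syllable a in the input; a low transitional probability suggests
# a word boundary. The tiny "corpus" here is hypothetical.

utterances = [
    ["pre", "ty", "ba", "by"],   # "pretty baby"
    ["pre", "ty", "dog", "gy"],  # "pretty doggy"
    ["nice", "ba", "by"],        # "nice baby"
]

pair_counts = Counter()
first_counts = Counter()
for utt in utterances:
    for a, b in zip(utt, utt[1:]):
        pair_counts[(a, b)] += 1
        first_counts[a] += 1

def transition_prob(a, b):
    """P(b follows a), estimated from the corpus."""
    return pair_counts[(a, b)] / first_counts[a]

# "ty" reliably follows "pre" (word-internal), but "ba" follows "ty"
# less reliably (word boundary):
assert transition_prob("pre", "ty") > transition_prob("ty", "ba")
```

An infant tracking such statistics would thus tend to hear "pretty" as a unit and place a boundary before "baby," just as the text describes.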
The ability to produce and understand grammatically correct sentences develops as vocabulary is acquired. As children acquire language, they do not do so by simply memorizing lots of sentences. Children often produce sentences that their parents and caregivers never utter, and they can understand sentences that they have never heard before. The generative, hierarchical, and rule-governed properties of language come naturally to children.
We know that children learn a language by learning its rules because they sometimes overgeneralize those rules. In English, the general rule for making a verb past tense is to add the morpheme “-ed” to the end of the verb. The past tense of “walk” is “walked” and the past tense of “ask” is “asked.” But some verbs in English are exceptions; they do not follow the rules. The past tense of “run” is “ran” and the past tense of “eat” is “ate.” As children begin to use these irregular verbs, they typically use the verb correctly: “I ran away.” The same child who earlier used “ran” correctly will later learn the past-tense rule but will begin to overgeneralize it, producing sentences such as “I runned away.” Eventually, with experience, the child will learn which words are exceptions to the rule and will go back to using “ran.”
It is more accurate to say that children acquire language than that they learn it. Children spontaneously acquire language as long as they are neurologically normal and in an environment with language. They acquire language rapidly and show evidence of learning the underlying structure of the language, not just memorizing sentences. They follow a similar developmental sequence, which suggests that we come into the world biologically prepared to acquire language quickly.
These facts do not imply that we are born with specific knowledge about our language or even about human languages as a group. Saffran (2003) suggested that common patterns of language development may instead be due to constraints on learning. As noted previously, children can make use of statistical properties in speech as they acquire the language. Saffran argued that some statistical properties are easier to learn than others, and those statistical properties that are easiest for our brains to learn are more likely to be included in the world’s languages. Languages containing patterns that were more difficult to learn would have been less likely to survive and spread. In other words, it may be that human languages have evolved so that they are easy for children to learn.
Language and Thought
Our language abilities clearly depend on a variety of cognitive processes such as attention, pattern recognition, memory, and reasoning. In order to understand a spoken sentence, the listener must pay attention to what the speaker is saying, recognize the phonemes and words, relate those words to pre-existing knowledge, and make reasonable inferences about what the speaker is trying to communicate.
Language abilities rely on these cognitive processes, but to what extent are these cognitive processes influenced by language? This question is a central issue for the psychology of language as well as for cognitive psychology.
Benjamin Whorf was an inspector for an insurance company as well as an amateur linguist. Whorf wondered whether some accidents that he investigated might have been a result of language limiting the way that individuals could think about a situation. In one case, a fire had occurred in a warehouse after an employee had discarded a match into a gasoline drum. A sign near the drum had indicated EMPTY. Whorf thought that the employee’s understanding of the word “empty” might have limited the way the employee could think about the situation. If the drum is empty, then nothing is in it, and it is safe to discard a match. The reality, of course, was that the “empty” drum still contained gasoline vapors, which ignited on contact with the match. Based on observations such as this one, in addition to his study of various languages, Whorf proposed that language can limit or determine thought (Carroll, 1956). This idea, referred to as the linguistic relativity hypothesis, has been the subject of considerable research.
One way to test the linguistic relativity hypothesis is to find a way in which languages differ from one another and then find out whether the speakers of those languages differ in how they think. For example, languages differ in the number of words for different colors. In the Dani language, spoken by natives of New Guinea, there are only two color words: mola for bright colors and mili for dark colors. Dani speakers do not have separate words for blue, green, red, yellow, and so on. If Whorf was correct that language determines thought, then we should expect Dani speakers to differ in how they categorize colors when compared to English speakers. The evidence on this point is mixed. Early studies indicated no differences. Rosch (1973) reported that both Dani and English speakers have better memory for focal colors, or the “best” examples of a color category. Most people agree on which particular shade of red is the “best” red. That shade is a focal color, and Rosch’s study suggested that it was easier to recall having seen that shade than other, nonfocal shades. The crucial point is that this finding was also true for Dani speakers, whose language does not have a specific word for the color red.
More recent studies have reported differences in color categorization depending on language. Roberson, Davies, and Davidoff (2000) studied speakers of the New Guinea language Berinmo, which has five color words that differ from the color words in English. Both English speakers and Berinmo speakers were better at perceiving differences between colors that cross color boundaries in their own language. For example, blue and green are represented by the same basic color word in Berinmo. English speakers were better at discriminating shades of blue from shades of green than shades within either blue or green, while Berinmo speakers did not show this advantage. Berinmo speakers, in turn, were better at telling apart colors that are represented by different words in their language. Neither group was better overall at discriminating colors; rather, which colors are easiest to discriminate depends on the language the person speaks. This type of finding is consistent with the linguistic relativity hypothesis.
The linguistic relativity hypothesis can be characterized as having a strong form and a weak form. The weak form of the hypothesis is that language can influence the way that we think. In some ways, this point is obvious. After all, there is rarely a reason to use language except to influence the way that other individuals think.
The strong form of the hypothesis is that language limits or determines what we think. Are speakers of Dani even able to think about the difference between colors such as blue and green, given that they do not have separate color words for them? Rosch’s research suggests that they can think about these colors even without having words for them. We often think about concepts for which we do not have particular words. Almost everyone is familiar with the awkward situation in which two people are walking down a hallway from opposite directions, notice that they are on a collision path, and take a step to one side. Both individuals happen to step the same way, leaving them on a collision path again. This dance sequence may be repeated several times, and finally the two individuals can pass each other, perhaps both smiling. We can understand this situation without having a particular word to describe it. Thus, the strongest form of Whorf’s hypothesis is untenable. We can, however, reasonably conclude that language affects cognitive processes.
Applications of the Psychology of Language
Nearly everything we do in our daily lives has something to do with how we use language. An understanding of the psychology of language can be useful in helping us think about many aspects of modern life.
Teaching of Language
What we know about acquisition of language skills can inform our decisions about teaching language to children. Children’s abilities to speak and to understand speech develop naturally, without the need for formal instruction. Learning spoken language is more like learning how to walk than learning how to ride a bicycle. Children do not really learn how to walk (or how to speak); instead, they acquire these abilities as they develop biologically. The regular sequence of development across children and across cultures and the development of these abilities in children who do not receive any formal instruction demonstrate that it is not necessary to teach children how to walk or how to speak. Biologically normal children, in reasonably normal environments, just do it. Of course, this naturally occurring developmental process does not prevent many parents and caregivers from attempting to teach spoken language to children. Parents may believe that, because they have made special efforts to teach their children how to speak, and because the children did learn how to speak, their instructional efforts were necessary for proper language development. This relationship is illusory, as children whose parents do not make such efforts still acquire spoken language abilities. Children are notoriously resistant to correction when it comes to language. Recall that children typically learn language rules and overgeneralize them, for example saying “runned” instead of “ran.” Caregivers often find it exceedingly difficult to stop children from making these sorts of mistakes.
It is useful for parents and caregivers to understand that children are typically capable of understanding more than they can say. In the normal developmental sequence, children are able to understand sentences before they are able to produce sentences of the same grammatical sophistication. A sentence such as “The pancake that I saw in the kitchen was eaten by the cat” may be easily understood by a two-year-old, but that same child is unlikely to produce sentences with that level of grammatical complexity.
Another interesting application of the research literature on language acquisition is what it can tell us about learning a second language. We are born with the capacity to acquire any human language. The same abilities that make it easy for us to acquire a native language can make it more difficult to learn a second language later in life. Consider the development of our speech perception abilities. Newborn infants are capable of discriminating among phonemes in any human language. By age one, we lose the ability to perceive distinctions between phonemes that are not in our language, which makes it challenging to learn a second language that has a different set of phonemes. In English, there is a distinction between the sounds luh and ruh. In Japanese, there is no such distinction, making it difficult for a native speaker of Japanese to tell the difference between words such as “rip” and “lip,” which differ in only the initial phoneme. It is possible for an adult to learn such distinctions, but it appears to require substantial amounts of practice, which presents a challenge for individuals who wish to learn to speak a second language without an “accent.”
Learning to Read
Although we have not addressed in any depth the processes underlying reading skill, we can make several points about learning to read. First, learning to read is a qualitatively different process from acquiring spoken language. Written language is so recent in our history as a species that we could not have evolved special abilities that would help us learn to read. In some ways, learning to read is unnatural. There is no particular reason why a pattern of ink marks on a page should correspond to particular words and sentences. Many children—and adults—fail to learn to read at a fluent level. Those who do learn to read fluently almost always do so as a result of a considerable amount of formal instruction and practice.
Languages differ substantially in how they use written characters to represent spoken language. English is written using a system that is primarily alphabetic, meaning that written symbols represent speech sounds. Other languages, such as Chinese, rely on ideographic writing systems in which written symbols can represent words or concepts. Still other writing systems, such as the Japanese kana scripts, are syllabic: written symbols represent syllables. Depending on the language, the beginning reader must not only figure out the type of writing system involved but also master the correspondences between written symbols and phonemes, syllables, or words.
Simply being able to tell the difference between different letters in an alphabet presents a problem for which our brains are not well prepared. The letters d and b are mirror images of each other. Physically, they are the same shape in different orientations. For almost everything else in our environments, an object is the same whether it is pointed to the left or to the right. A bird flying to the south does not become a different bird merely by turning around and flying north. But a d becomes a different letter, representing a different sound, simply by facing a different direction from a b.
We should not be surprised that learning to read is difficult and time-consuming compared to the acquisition of speech, which is rapid and almost effortless. This fact has important consequences for how we design our educational systems and for how we interpret difficulties in learning language.
What Constitutes a Language?
We have addressed the question of what makes a language different from any other system of communication. This knowledge can be helpful in addressing misconceptions about what is and is not a “proper” language.
Does sign language count as a language, or is it simply a spoken language that has been translated from sounds into gestures? Many people are familiar with finger-spelling, in which letters are translated to specific configurations of the fingers so that words can be spelled out. Finger-spelling is a slow and inefficient way of communicating, and it does not constitute a separate language. However, finger-spelling is not the primary way that individuals communicate through sign language. True sign languages are actually distinct languages, in the same way that French is a different language from English. One example is American Sign Language (ASL). In ASL, the same levels of language structure exist as in any spoken language, except that the units are gestural rather than vocal: handshapes, movements, and locations play a role analogous to that of phonemes in speech. ASL has syntax, semantics, and pragmatics. The same properties that define spoken language also apply to ASL and other sign language systems: They are generative, hierarchical, and rule-governed. ASL signs are not simply translations of English words. The developmental sequence in which children acquire sign language parallels the developmental sequence for spoken language. Children who are acquiring sign language even “babble” with their hands in the same way that children acquiring spoken language babble by repeating syllables such as ba ba ba ba.
One difference between spoken language and sign language is that signs often bear a resemblance to the words or concepts that they represent. In spoken language, the sound of a word has nothing to do with what the word means, with a few exceptions known as onomatopoeia (e.g., the word “meow” sounds something like a cat meowing). There is nothing about the sound of the word “cat” indicating that it represents a cat; speakers of English simply must learn that association. In contrast, the ASL sign for “cat” is a gesture indicating the cat’s whiskers.
Despite differences between spoken and signed language, it is misleading to suggest that sign language is not a “real” language. Children who master both a spoken language such as English and a sign language such as ASL are bilingual. Individuals who communicate primarily through sign language should not be considered somehow deficient in their language abilities, or lacking a real language, any more than an English speaker who is not proficient in French should be considered deficient.
Similar points can be made about speakers of dialects within a language. An interesting and controversial example is how many people perceive and react to black English vernacular (BEV), sometimes referred to as Ebonics. Black English is sometimes assumed to be merely a lazy version of English in which certain sounds and grammatical endings are routinely dropped. A similar argument is that black English is a degraded version of standard English and that the use of it portends the decline of the English language in general.
Such beliefs were the basis of a controversy in the Oakland, California, school district in the 1990s. The school district was criticized for a plan to train teachers in how to understand Ebonics so that they could help their students learn standard American English (SAE). However, arguments that BEV is a degraded version of English are not accurate and ignore what we know about the structure and function of language. Pinker (1994) described how BEV is, in fact, just as grammatical, rule-governed, and regular as SAE. One misconception is that speakers of black English simply leave out more words and word endings in their speech; this is not correct. In any language, the rules change over time, often becoming more efficient rather than “lazier.” A good example in SAE is the use of contractions such as “I’ve” instead of “I have.” Although there are some patterns in BEV in which a word may be regularly omitted, there are also patterns in SAE in which a word is regularly omitted that is left intact in BEV. Pinker pointed out that, in BEV, the sentence “He be working” is regular and grammatical, with the word “be” making a meaningful distinction. The inclusion of “be” in this sentence indicates that the person generally does work, that is, has regular employment. In contrast, “He working” in BEV indicates that the person is working right at the moment. This distinction is not made in SAE, where “He’s working” could be interpreted in either of these two ways. Does this example indicate that SAE is lazier, or a more degraded form of English, than BEV? No, nor is it reasonable to conclude that BEV is less regular or meaningful than SAE.
Artificial Language Systems
Much of the modern interest in the psychology of language grew out of attempts to create artificial systems for understanding or producing language. For example, researchers trying to build reading machines for the blind found that it was very difficult to get the machines to produce fluent, comprehensible speech. This difficulty led to an explosion of research on speech perception and speech production. The difficulties of constructing such systems have helped us understand just how sophisticated and difficult language processing is, and how remarkable it is that our brains do it so well.
Clearly there have been advances in recent years. Software that takes dictation and translates it into typed words is used widely. Voice-activated customer service systems are no longer a rarity, and they function effectively in most cases. Two aspects of these recent technological advances are illuminating. First, it took many decades and the investment of huge amounts of resources to make such systems function effectively. Second, these systems still do not approach the language abilities of humans. These artificial language systems are successful only because they are tailored to specific situations and types of language. The software that allows you to dictate a paper to your computer is not capable of understanding those sentences and summarizing their meaning. Voice-activated customer service systems are at a loss if the customer says something that is unexpected, but a human customer service agent can understand virtually anything that the customer might say. These two points highlight the amazing language capabilities of the human brain.
We have learned that language is a fascinating ability that is unlike any other type of communication. Language has been studied from a variety of perspectives with a variety of methods, yielding interesting conclusions about the nature of language, how language might have evolved, how language abilities are acquired, and how language might affect thought.
A key theme is the extent to which language relies on mechanisms in the brain that are specific to language, as opposed to relying on mechanisms that are shared with other cognitive abilities. On one hand, it is apparent that our brains are exceptionally prepared to master a complex communication system that is, to date, beyond the reach of even the most advanced technology. Spoken language (but not written language) is learned with ease and without the need for any formal instruction. On the other hand, many of the problems that the brain confronts in using language are problems that also occur in other aspects of cognition. For example, consider the problem of categorical perception: We perceive physically different sounds as belonging to the same phonemic category. We must solve a similar problem when we learn to categorize a variety of dogs as belonging to the same concept of dog, or when we perceive physically different sounds as being examples of the same musical note. The fact that language presents difficult problems for the brain to solve, and that the brain in fact solves these problems, does not by itself indicate that language is a special ability, distinct from other human abilities. Nor does the existence of similar problems for other abilities demonstrate that the same underlying cognitive skill is shared; it is possible that the same problem, such as categorical perception, could be solved with different mechanisms.
It is clear that much remains to be learned about our language abilities. Scientists have made substantial progress over the past half-century in understanding how language works. Still, what George Miller noted in 1990 remains true today: “[W]e already know far more about language than we understand” (p. 7).