Great Depression: Coming of Age in the 1930s

Great Depression: People and Perspectives. Editor: Hamilton Cravens. Santa Barbara, CA: ABC-CLIO, 2009.

A New Stage in the Life Cycle

The 1930s was a significant decade in the economic and political history of the United States. It also brought important social and cultural changes, especially for the generation that came of age during such difficult years. For adolescents growing up during the 1930s, the question, “How old are you?” held a much greater significance than it had for earlier generations of young Americans. By 1940, attending and graduating from an age-graded high school had become not only the prescribed normative experience for adolescents, but also a practice adopted by the majority of young Americans for the first time in U.S. history. School, rather than wage labor, became the work of adolescence. In addition, although going to college was still not a common experience, it was no longer viewed as a luxury attainable only by young people from wealthy families. As the prescription for school-based and age-graded education expanded to eighteen year olds, the 1935 Social Security Act recognized a federal responsibility to protect the welfare of adolescents through high school graduation. In return, Americans gave up some of the autonomy and independence possessed by most individuals in earlier generations.

The important shift of the 1930s that extended dependency through adolescence emerged from a perfect storm of public policy, popular culture, economics, and shifting public opinion about teens in an America that seemed increasingly removed from its past. The economic crisis of the Great Depression resulted in changes that universalized ideas about American adolescence that had been stirring for several decades. By the early 1940s, the term teenager was a part of American culture and defined a distinct period of life separated by chronological age from childhood and adulthood. Being identified as a teenager assumed that a young person was in a transitional time of experimentation, attended full-time school-based education, enjoyed a social life centered on peers, and remained financially dependent on adults.

Until the 1930s, most adolescents spent more time working for wages, laboring as an apprentice, or putting in long hours on the family farm than in school. Youth was a fluid term used to describe the period of life between childhood dependence and the independence of adulthood, but it was not specifically defined by age. A young American’s transition from childhood to youth was linked to physical capacity. For boys, the ability to do men’s work moved them into the transitional stage most commonly known as youth. For girls, the ability to have children signaled their new status. Marriage generally marked the transition from youth to full adulthood, with females reaching that stage earlier than males.

The idea that adolescence was a special period of life that needed special protections from adult responsibilities grew in importance first among middle-class urban parents in the mid-nineteenth century. The urban middle class grew in size and influence during the balance of the nineteenth century, but only a minority of teens went to high school. Secondary education seemed an unnecessary and impractical luxury for many young Americans.

By the early twentieth century, child welfare experts began to push for a more universal definition of adolescence built on the middle-class ideal. Psychologist G. Stanley Hall published an influential two-volume work in 1904, Adolescence: Its Psychology and Its Relations to Physiology, Anthropology, Sociology, Sex, Crime, Religion, and Education. According to Hall, adolescence was a distinct period of life defined by biology. He argued that this transitional period required special protections and restrictions, because ignoring these needs would hinder an individual young person’s successful transition to healthy adulthood. Hall criticized adults for celebrating adolescent precociousness, a practice that had been common in the past. He maintained that the teen years were a period of awkwardness, confusion, vulnerability, and eccentricities exaggerated by dramatic physical change. Not all experts agreed with Hall, but his theories soon dominated the twentieth century’s cultural definition of adolescence.

Public policies that influenced the lives of young Americans were slower to reflect the new definition. However, the economic upheaval of the 1930s called many public policies into question and encouraged the extension of dependency in adolescence. For example, as noted earlier, for most of American history, the majority of teens spent more time working than in the classroom. This changed for the majority of teenagers in the 1930s. More education seemed essential for getting a job and a secure future as an adult in a world in which jobs were scarce. In addition, most people agreed that the limited pool of jobs should be reserved for men supporting families. The lack of jobs, public pressure to build greater access to a quality high school education, and shifting opinions about the importance of going to school among parents and teens contributed to a dramatic rise in secondary education. In 1929, less than 20 percent of all Americans had earned a high school diploma. Most young people left school before or immediately after completing the eighth grade. Over the next ten years, the rate of fourteen through seventeen year olds still in school full time rose 43 percent, and the number of high school graduates in the population nearly doubled, from 667,000 to 1,221,000. For the first time in American history, staying in school long enough to earn a high school diploma became the norm rather than the exception. At the same time, full-time employment rates among teens dropped, and attending college or some form of post-high school education became more common.

The identification of teens as a distinct group paralleled the rapid growth of radio. Advertisers faced with falling consumer markets quickly recognized radio as a new marketing tool that could reinvigorate business. Radio’s vast appeal, combined with broadcasters’ ability to target specific markets, quickly revealed adolescents as a desirable and distinct group of consumers. At the same time, movies provided a visible stereotype of the ideal American adolescent. Public policy was another important factor that fueled the nation’s transition to an ideal that defined teenagers as a generation that shared extended dependency, emotionalism, the need for experimentation, and the necessity for quality school-based education through at least age eighteen.

Hard Times and America’s Adolescents

The onset of the Great Depression hit younger Americans especially hard. Unemployed and underemployed parents left many children and teens without even the basic necessities. Individuals coming of age during the 1930s shared the experience of growing up during a time when opportunities for their future seemed limited. Natural disasters, such as the floods that struck parts of Arkansas, Kentucky, and Tennessee in 1931; similar events in 1937 along the Ohio River and its tributaries; and drought that hit the nation’s Dust Bowl from 1934 through 1936 only made matters appear worse. National unemployment rates rose dramatically and reached a height of 25 percent by the winter of 1932-1933. Despite federal efforts to turn the tide, unemployment rates remained in the double digits until the United States entered World War II. The problem was especially acute for young Americans. The U.S. Department of Labor estimated that unemployment was twice as high among teens seeking full-time work. This was a special burden since by the early 1930s, 28 percent of American households did not include a single wage earner.

In this atmosphere, competition from adult job-seekers made it difficult for teens to find paid work. In 1932, sixteen-year-old Duval Edwards decided that quitting school and finding a job would be a good way to help his parents. In Bossier City, Louisiana, Edwards saw a sign that read, “Dishwasher Wanted—only college graduates need apply.” A high school dropout, Edwards asked about the job anyway. The owner responded, “We mean it, sonny. We are helping those who have finished college and can’t find any other work” (Edwards 1992, 31).

The lack of jobs and a viable future for teens like Edwards was part of what some policymakers and activists began to call “America’s Youth Problem.” On July 4, 1936, members of the American Youth Congress (AYC), a volunteer organization made up of high school and college-age youth, passed a manifesto they called “A Declaration of the Rights of American Youth.” The document outlined what members of the AYC believed was the lamentable situation of young Americans. They asked that adolescent dependency be extended so that young Americans would have time to prepare adequately for adulthood. Members of the AYC argued that special circumstances in the 1930s demanded new strategies: “Today our lives are threatened by war; our liberties threatened by reactionary legislation; and our right to happiness remains illusory in a world of insecurity” (American Youth Congress 1936). The AYC declared that young Americans had a right “to a useful, creative, and happy life … the guarantees of which are: full educational opportunities, steady employment at adequate wages, security in time of need, civil rights, religious freedom, and peace” (American Youth Congress 1936). Careful to showcase their loyalty to the United States, the AYC continued,

We look at this country of ours. We love it dearly; we are its flesh and marrow: Therefore, we the young people of America reaffirm our right to life, liberty and the pursuit of happiness … To those ends we dedicate our lives, our intelligence and our unified strength. (American Youth Congress 1936)

Such ideas were popular. By 1939, the organization claimed 4,697,915 members and had the influential ear of First Lady Eleanor Roosevelt.

Like Eleanor Roosevelt, many adults shared the AYC’s fears and sentiments about the insecure future facing young Americans. In 1932, the U.S. Children’s Bureau, a federal agency dedicated to investigating and lobbying on behalf of the nation’s children and adolescents, released a report outlining the special problems brought on by the Great Depression. As evidence of a national crisis, the bureau noted that an estimated 250,000 homeless young people were hitchhiking the nation’s roads and hopping its freight trains. Most were boys and youth from thirteen to twenty-five years old. The bureau made it clear that, unlike earlier times when most Americans believed that transients became hobos by choice or immorality, the current army of young people was not composed of irresponsible runaways, drug addicts, juvenile delinquents, or illiterates. Instead, most were simply adolescents and young people in their early twenties looking for work. Far from illiterate, many had spent at least some time in high school or college. Sociologist Kingsley Davis echoed the bureau’s findings in another publication and reminded Americans that the generation coming of age in the 1930s faced a world with circumstances quite different from those in the past. “The last frontier disappeared some forty years ago,” Davis remarked.

When young men now want to move on, they find there is no place to go … The machinery by which young people are drawn into the work of the nation had broken down; and youth, bearing the burden of this breakdown were seeking blindly for some way out. (Davis 1935, 8, 29)

For Davis, rising crime rates and even revolution were inevitable if something was not done to help the nation’s adolescents and youth.

The Media and America’s Youth Problem

By the fall of 1933, newspapers, magazines, and movies also began to pay attention to “America’s Youth Problem.” The fall 1933 release of Warner Brothers’ Wild Boys of the Road raised public debate about society’s responsibility for young transients. The movie’s director, William A. Wellman, hoped the film would act as a cautionary tale for boys and girls considering hitting the road. He also wanted to highlight the costs of ignoring the vulnerability of American adolescents and youth during the Great Depression.

The film starred sixteen-year-old Frankie Darro as “Eddie Smith.” Eddie is an honest and optimistic teenager from an unidentified town in the Midwest. He lives with his middle-class parents and attends high school. Eddie’s best friend, “Tommy,” played by a young-looking twenty-year-old Edwin Philips, is not as lucky. Tommy does not have a father in his life, and his mother makes a poor living taking in boarders. Tommy has always struggled, but the Great Depression also hits Eddie’s family when his father is laid off from a job at the local cement plant. Wanting to help, Eddie sells his “jalopy” car for $22.00, but it is not enough to help his devastated parents. Eddie convinces Tommy that the two friends should hit the road in search of jobs. The boys hop a freight train and soon see that they are not the only young Americans on the road in search of work. They soon team up with a freckle-faced “boy” who turns out to be a girl named “Sally,” played by another young-looking nineteen year old, Dorothy Coonan.

The three friends experience difficulties at every turn. Eddie, Tommy, and Sally are happy to find friends and companionship among the other young wanderers. However, they also face violence and sexual abuse from older transients, police, and railroad authorities. The film’s drama rises as the trio settles in with young squatters in a railroad yard in Columbus, Ohio. Local authorities violently clear the area and Tommy’s leg is cut off by a passing train as he tries to flee. Tommy is hospitalized and survives, but he is unable to buy a prosthetic leg that will help him continue to travel with his friends. In response, Eddie breaks into a local prosthetics store and steals an artificial leg. The honest and earnest all-American boy has become a criminal and feels justified in his crime.

In the movie’s final scenes, Eddie, Tommy, and Sally make their way to New York City. They are fooled into committing a crime and arrested. All three are charged with “vagrancy, petty theft, resisting police, breaking and entering and hold-up” and are sent to juvenile court. They refuse to give their full names, stating that they cannot go home because their parents are too poor to care for them. “What good will it do for you to send us home to starve?” Eddie asks the judge. “You say that you’ve got to send us to jail to keep us off the streets. Well, that’s a lie. You’re sending us to jail ‘cause you don’t want to see us; you want to forget us,” Eddie argues. “Well, you can’t do it ‘cause I’m not the only one. There’s thousands like me and there’s more like me hitting the road every day!” Eddie’s tough exterior unravels as he breaks into uncontrollable sobs. Aware that the tough young man is really a sympathetic teen, the judge wants to help. He tells him that “Things are going to be better now.” Franklin Roosevelt’s election means that “unemployed parents would be going back to work soon.” The assumption for the audience is that the teens would soon be able to resume their normal places in society—a place where adolescents live with their parents, experience relatively carefree lives, and attend high school full time in preparation for adult independence (Wellman 1933).

Like Wellman, Thomas Patrick Minehan also wanted to stir Americans to do more to help the younger generation. A graduate student in sociology at the University of Minnesota, Minehan undertook a study of transients from 1931-1933. He went undercover, conducted 1,465 interviews (1,377 boys and 88 girls), and documented 509 case studies (493 boys and 16 girls) of other young people on the road. Minehan published his findings in a 1934 book, Boy and Girl Tramps of America. The young people in Minehan’s sample were primarily in their teens and early twenties. Of the 548 individuals identified by age, 291 were thirteen through sixteen year olds, 282 were seventeen through twenty-one year olds, and only four were under thirteen. All twenty-nine girls in the study were less than nineteen years of age. Ninety-five percent of the transients in the sample were born in the United States and most claimed to be Protestant, Catholic, or Jewish. Sixty-two identified themselves as “nonbelievers.” Only twenty-seven had earned a high school diploma, but more than half had graduated from at least the eighth grade, and twenty-five had attended a few college classes. Most had been on the road more than six months but less than two years. As further evidence of the widespread nature of the problem, they came from both rural and urban America in thirty different states and looked to many adults like the girl and boy next door (Minehan 1934, 253-262).

“Hard times” was the most frequent reason young transients gave for leaving home. Some admitted they sought adventure. Ina Máki left her home in rural Minnesota for a summer of hopping trains because “there was little else to do” (Uys 2003, 9-30), and a few simply hated school. “Nothing I ever did ever satisfied anybody in school,” one girl told Minehan. Some ran away to escape abusive parents or guardians. Asked “if he ever got a licking at home,” a boy named Nick told Minehan, “That’s all I ever got. The old man would lick me if I did something. The old lady if I didn’t” (Minehan 1934, 35, 48, 51).

Minehan concluded that being on the road was not good for the young people he met or for the future of the nation. The longer the boys and girls stayed on the road, the more likely they were to develop attitudes and behaviors contrary to mainstream American values. A boy nicknamed Happy Joe said that when begging for food it was a good strategy to lie. “Tell them you got a sick mother and a lot of younger kids at home hungry … That way you get more and will have a nice lunch for later” (Minehan 1934, 77-78). Young transients stole from anyone and did not seem to feel guilty about their behavior. A few talked openly about injuring others or even killing someone. Many talked about sex in ways that shocked adults. In one incident, Minehan tried to help a sixteen-year-old girl who was having sex with men in a freight train boxcar. Asked by a railroad brakeman if she was all right, the girl “came to the door a little bit drunk and very undressed.” Cursing and yelling, she told the brakeman, “You big fat fool. You Y.M.C.A. dummy. Why do you have to spoil it all? Why can’t you let a girl alone when she isn’t hurting you? Everything was fine, all right, and now you’ve spoiled it all” (Minehan 1934, 141-142). Through his study, Minehan warned Americans that such circumstances left the nation in danger of corrupting an entire generation and thereby shaking the foundation of the nation’s future.

A New Deal for Youth

Calls by advocates such as Minehan contributed to Franklin Roosevelt’s efforts to address America’s youth problem as part of his administration’s New Deal. On March 30, 1933, members of Congress approved the selection of enrollees for the president’s new Civilian Conservation Corps (CCC). The CCC was designed as a work relief program for unmarried males age eighteen to twenty-six from families receiving aid and for unemployed veterans of the Great War of any age. It also evolved into a modest education program and expanded to seventeen year olds in 1935. Some CCC recruits earned a General Educational Development (GED) credential as part of the program. Before ending in 1943, the CCC established camps in Hawaii, Alaska, Puerto Rico, and the Virgin Islands as well as throughout the continental United States. During its existence, approximately 3 million unmarried teens and youth spent time as “Soil Soldiers” in “Roosevelt’s Tree Army.” The average CCC enrollee was a teen who had just reached his eighteenth birthday. He had completed the eighth grade, but read at only a sixth-grade level. His health was good, but he was probably malnourished and underweight. Sixty percent of CCC recruits came from rural areas and small towns. Most were native-born whites, because black enrollment was limited to 10 percent, the same proportion of African Americans as in the general population. This quota was much lower than the poverty rate among black families in the United States. The CCC added special units for American Indian recruits on reservations, and about 80,000 served.

The popularity of the CCC program encouraged the creation of a similar program for girls in 1934. This effort, which became known as the “She-She-She,” was much smaller and short-lived. Only 8,000-10,000 adolescent girls and young women took part before the program ended in 1937. Many Americans did not support the idea of sending daughters away from their families, even to gender-segregated camps. In addition, most Americans did not view unemployment among girls as a danger to the nation’s future. In an era of strong gender-based stereotypes, the failure of and weak support for a CCC-type effort for girls is not a surprise.

The seventeen-year-old minimum age requirement and the rule that CCC enrollees come from families on relief also limited the program’s effectiveness for males. Young male transients under seventeen were ineligible, and those who did not have families did not qualify. Another problem was that the program’s emphasis on work over education made it somewhat inconsistent with the model for American teens that emphasized high school education.

Despite its shortcomings, the CCC was a very popular New Deal program, even among most recruits. Camps operated under rules similar to military service. Recruits earned $30 per month in exchange for forty-hour workweeks on conservation projects. Of a recruit’s earnings, $25 per month went home to his family. One mother of a CCC recruit explained to a Federal Writers’ Project interviewer that her son’s CCC pay saved his family from starvation. Still, when the boy’s father was unable to find work, the boy had to give up returning to high school for his senior year.

President Roosevelt tried to address such dilemmas by taking the CCC idea further with Executive Order No. 7086, issued in June 1935. Roosevelt named his new program for young Americans the National Youth Administration (NYA). It included girls as well as boys and lowered the age of eligibility to sixteen with an upper limit of twenty-five. In another important shift, the NYA emphasized full-time education over work relief and required participants to live at home with their parents or a guardian. NYA participants received stipends similar to today’s work-study programs. An NYA youth from Pittsburgh wrote Eleanor Roosevelt in 1937 praising the program: “Words cannot express my gratitude to our President who has made this possible for me and thousands of others, and I trust that you and the President will continue your good work and remain at the White House for a ‘long time’” (Cohen 2002, 158). Another NYA participant, an eleventh-grader from Des Moines, Iowa, noted that he had stayed in school because of the program. “When I started school in September, I did not know whether I was going to continue to go or not. When I got my first check I was so tickled I could have shouted.” He explained that the money enabled him to go “to town that evening and [get] some bread for my brothers’ and sisters’ and my own lunches. Next I got some shoes. Even if they weren’t high priced, I was proud of them, because I had bought them with my own money” (Cohen 2002, 158).

During its existence, the NYA enrolled 1.5 million high school students and 600,000 college recruits. An additional 2.6 million unemployed teens and young people in their early twenties who had already left school also participated in NYA training programs. The NYA’s director, Aubrey Williams, saw the effort as not only practical, but also as a social experiment promoting civil rights and greater social equality. As part of that vision, Williams named African American reformer Mary McLeod Bethune to head a special NYA division for black youth. For Williams and Bethune, access to high-quality school-based education and vocational training for all young Americans was the key to leveling the playing field for a generation that faced special difficulties linked to the Great Depression. It could also help to eliminate historical inequities connected to racism, socioeconomic class, and ethnic prejudice.

Other New Deal efforts also helped to level America’s playing field. For example, the federal government channeled money into ailing schools through a wide range of New Deal programs. Work relief efforts paid for new school construction or the renovation of existing facilities. These efforts were coupled with growing acceptance among the general public of the idea that school-based education through high school graduation was a necessity for success as an adult. As noted earlier, in the 1930s the majority of high-school-age teens attended high school for the first time in American history. The irony is that, during the first years of the Depression, many school districts closed or cut short the academic year because of a loss of tax revenue. For example, authorities from Chicago’s public schools testified before Congress in 1933 that the system was bankrupt and teachers had not been paid in eight months. Students and teachers protested by marching in the streets. About the same time, many school districts in the rural South failed to open at all, even for limited three-month terms.

The Roosevelt administration responded by sending federal funds to schools through a variety of New Deal programs. The Federal Emergency Relief Administration (FERA) paid teachers’ salaries and offered work-relief employment for parents so that teenage sons and daughters could stay in school. In addition, bolstered by an influx of federal funding, states and local communities across the United States constructed new schools and consolidated small schools into state-of-the-art education facilities. Consolidated schools offered broader curriculums than the small schools of the past. The idea was to keep all young Americans in school through graduation, even those not planning to attend college. The transition to large multipurpose high schools that ran both traditional academic programs and vocational education curriculums had occurred in most cities by 1930. The influx of federal money sent to the states as part of the New Deal helped to bring the change to rural America as well. Not every community benefited, but for the first time in history, a majority of adolescents had the opportunity to attend high schools with a more diversified curriculum within a reasonable distance of their homes.

The changes in New London, Texas, during the mid-1930s were a good example of the trend favoring consolidated schools. Families in the area were drawn by jobs in the region’s recently tapped oil fields. The influx of families strained the region’s tiny and outdated schools, which rarely accommodated students over fourteen years of age. The new $1 million consolidated school that opened in the fall of 1936 housed modern facilities and offered an up-to-date curriculum for students from the first grade through high school. Although most parents did not have a high school diploma themselves, they believed that extended education opportunities were good for their own children and the community’s future.

A massive explosion at the school during its first year rocked the New London community and the nation. The New York Times reported that on March 18, 1937, natural gas used to heat the New London campus ignited, causing a massive explosion that collapsed the building’s walls and ceiling. Tons of debris came crashing down on the 700 children and forty teachers inside the facility before they could escape. The building was fireproof, so there was little fire, but the explosion’s concussion and falling debris killed most of those inside. Approximately 10,000 area residents quickly gathered at the disaster site, hearing the cries of victims still buried under the rubble. Some students and teachers were saved; by nightfall, the school’s powerful stadium floodlights revealed a sad scene composed of grief-stricken parents, relatives, and onlookers next to a lengthening row of bodies covered by white bed sheets. In the end, more than 500 students and teachers were killed. Approximately 200 individuals escaped with minor to serious injuries. Fifteen-year-old Doris Derring told a New York Times reporter that she witnessed “100 of her classmates blown from their desks into the schoolyard” (New York Times, March 21, 1937).

Before the heartbreaking disaster, the New London consolidated school campus served as an important source of community pride. It showed the working-class parents’ and community leaders’ commitment to providing quality education opportunities for the area’s children and adolescents. An investigation after the blast, however, showed that even high-minded goals could include costly shortcomings. To save on long-term utility costs, planners used natural gas in an unsafe manner to heat the building and hot water supply. News of the disaster led some Americans to question the wisdom of putting so many children together in large, consolidated schools under any conditions. But even in the midst of such fears, the trend toward consolidated schools continued. As an article in the New York Times three days after the explosion concluded, larger modern schools allowed for expanded curriculums and better opportunities for students. In addition, even with the risk of disasters, modern school buildings were safer than the old-fashioned wooden structures of the past. New schools are

made of sand and stone,” explained the Times. They “have stairways of slate and cement, and are classed as fireproof. Broad corridors, fire towers, fire escapes, sprinkling systems are provided. They furnish a striking contrast to the wooden firetraps still used as schools. No structure, however built, when converted into a holder for inflammable gas, can be made to resist explosion. (New York Times, March 24, 1937)

The rapid spread of consolidated schools marked the end of an era in American education. During the 1930s, a combination of federal, state, and community funds shifted the emphasis on schools in the American countryside from small one- and two-room facilities to grade-level and age-based programs for students in the elementary grades through high school. In 1916, 200 of every 1,000 schools in the United States employed only one teacher. Further progress was made in cities during the 1920s. By 1940, only 114 of every 1,000 schools employed only one teacher. By 1960, such tiny schools had virtually disappeared from the landscape as viable education alternatives, even in rural America.

Besides broader curriculums and more modern facilities, larger schools also redefined the experience of being an adolescent in America. As part of the effort to keep young people in school through graduation, school authorities also paid more attention to extracurricular activities. Schools used sports for boys and girls, as well as special interest clubs, to engage teens in their education outside the classroom. These activities put teens, for greater parts of their day, under the supervision of adults other than their own parents. High schools across the United States sponsored dances and made junior and senior proms a rite of passage among older adolescents. Even rural high schools organized football teams, sometimes with only six players on the field at a time, and published yearbooks as a way to build school pride. Peer pressure helped keep teens in school, but it could also have negative consequences. Many young Americans expressed increasing dismay at not being able to afford the clothing, carfare, or other expenses associated with going to high school. For example, frustrated and depressed teens wrote Eleanor Roosevelt asking for money and hand-me-down clothing that would help them keep up with their peers. Physically handicapped youngsters found little accommodation for their special needs in most schools. Racial segregation and economic disparity linked to a school’s surrounding community persisted. Despite such hurdles, school attendance increased. What was called “quitting school” before graduation in the 1930s transformed into the more negative “dropping out” as earning a high school diploma grew into the expected norm for all American teenagers.

Adult-monitored recreational activities outside schools also adopted the school-based model and increased in the 1930s as by-products of the New Deal. The Works Progress Administration (WPA) alone constructed 770 community swimming pools and 5,598 athletic fields as part of the federal government’s $750 million spent on recreational facilities throughout the United States. Such places created safer environments for teens to gather. For example, after the opening of ten WPA pools in New York City in 1936, drowning deaths in the city fell from 450 in 1934 to less than 300. Schools and public recreational facilities helped focus the social world of adolescents into more adult-controlled environments. Churches, synagogues, and other religious institutions copied the school- and community-based model as well. School and social life built around age-based activities, even outside schools, rose along with the expansion of adolescence through age eighteen.

Still, even with these dramatic changes, the playing field was not level for all American adolescents. Fourteen-year-old Margaret Williams and Lucille Scott attended Colored School 21, a racially segregated one-room schoolhouse in Cowdensville, a small community of African Americans located in Baltimore County, Maryland. At the end of the 1934-1935 school year, Williams’ and Scott’s teacher recommended them, along with a male classmate, for promotion to the eighth grade. A school board official denied the request, saying he saw “no reason to pass the girls, because by the time [they] were fifteen or sixteen years old, [they] would be having babies” (Orser 1937). Williams and Scott also had to overcome the problem that in racially segregated Baltimore County, no high school admitted black students. Beginning in 1926, under pressure to make an accommodation for ambitious African American students, the county offered to pay the tuition of “qualified” black students to attend Baltimore City’s all-black junior and senior high schools. The privilege included the requirement that black students pass a test, but white students faced no similar requirement to attend the county’s white-only high schools. Lucille Scott and Margaret Williams took the test, but they failed to earn a score high enough to qualify for the tuition program.

Just before the start of the new school year in 1935, a National Association for the Advancement of Colored People (NAACP) lawyer, Thurgood Marshall, met with Margaret Williams’ and Lucille Scott’s parents to discuss the situation. He urged the girls to try to enroll at the high school nearest their homes. They tried, but the school principal told them that the matter was out of his hands because policy dictated that black students could not attend the white-only high school. Marshall filed a lawsuit on the girls’ behalf, Williams v. Zimmerman, demanding that the court require the Baltimore County School Board to admit Margaret Williams to the high school nearest her home. The judge rejected Marshall’s argument and decided the case in favor of the county.

Lucille Scott eventually retook the county’s required examination and passed. Finally eligible for the tuition program, Scott attended Baltimore City’s Booker T. Washington Junior High School and graduated from Frederick Douglass High School in 1941. Margaret Williams did not go to public school, but she earned a high school diploma from St. Frances Academy, a comprehensive elementary through secondary school run by the Oblate Sisters of Providence (a Roman Catholic Order of African American nuns founded in Baltimore in 1829).

The Williams v. Zimmerman case highlights the hurdles that still blocked many teenagers from a level playing field centered on a high school education and a social life built around school-based activities. However, another outcome of the case shows that times were changing. Fearing another expensive lawsuit, the Baltimore County School Board opened three “separate but equal” secondary school facilities for black students at the start of the 1938-1939 academic year. Williams and Scott were not able to take advantage of the change in policy, but their bravery set the stage for further progress. Learning from his failed suit in Baltimore County, over the next fourteen years Thurgood Marshall constructed a more aggressive argument that won the 1954 Brown v. Board of Education of Topeka, Kansas, case.

Adolescents and Work

The growth of high schools and the new emphasis on recreation reflected new attitudes about how teens should spend the majority of their time. Since the late nineteenth century, child welfare advocates had promoted local and state compulsory school attendance laws as a good strategy for curbing exploitive child labor practices. The early focus centered on children under the age of fourteen, but over time, advocates also called for changes that regulated the work of adolescents. In 1918, the U.S. Supreme Court declared the first attempt at federal regulation of the issue unconstitutional. Congress then passed a proposed constitutional amendment regulating child labor, but the necessary number of states never ratified the measure. As an alternative, advocates worked for the inclusion of federal child labor guidelines in the National Recovery Administration (NRA) codes passed as part of the early New Deal. The U.S. Supreme Court declared the NRA unconstitutional in 1935, however, nullifying those protections. Strikes by adolescent textile workers in Pennsylvania in 1933 highlighted the continued exploitation of young workers. Testimony before a state sweatshop commission revealed cases of sexual harassment, work for little or no pay, and unsafe and unhealthy working conditions.

Finally, passage of the Fair Labor Standards Act (FLSA) in 1938, and its eventual acceptance by the Supreme Court, put child labor regulations into federal law. President Roosevelt said the legislation “end[ed] child labor” in the United States. The president overstated the law’s consequences, but the FLSA signaled an important shift and remains the foundation of labor regulations for adolescents in America today. The law prohibited the employment of individuals less than sixteen years of age in industries engaged in interstate commerce or deemed hazardous by the U.S. Department of Labor. The legislation also connected school attendance to work by prohibiting sixteen and seventeen year olds from working past 10:00 P.M. on school nights. The FLSA included restrictions on employing adolescents in the sugar beet industry, a specific response to a study of that industry by the U.S. Children’s Bureau. The law failed to protect other young agricultural workers, however, and made no mention of domestic service. Both agriculture and domestic work touched many young Americans, but because of strong adult resistance to regulating work linked to traditional family responsibilities, neither was covered by the FLSA. Another loophole was that fourteen and fifteen year olds could apply for special work permits. While the FLSA did not eliminate child labor, it created an atmosphere in which school was recognized by the federal government as the most important work of childhood and adolescence. In addition, the special regulations for sixteen and seventeen year olds emphasized adolescence as a transition period to adulthood. Entrance into World War II somewhat reversed this trend among adolescents, but in the years immediately following the conflict, children under sixteen were virtually eliminated from the paid labor force and most sixteen and seventeen year olds worked only part time in a subgroup distinct from adults.

Robert Omata of Hanford, California, began serving customers at the meat counter of his family’s grocery business at age twelve in 1933. Despite his responsibilities to his family, Omata attended middle and high school, excelled in his studies, and qualified for the National Honor Society. He even found time to play football for Hanford Union High School and earned a diploma in 1938. Likewise, fifteen-year-old Jasper Harrell had family responsibilities. He lived with his single mother in Marion County, South Carolina. No one in Harrell’s family had ever graduated from high school and his two older brothers quit school in the sixth grade. Harrell went to school each morning and worked at a local grocery each day from 3:00 P.M. to 6:00 P.M. In the early evening, a neighbor helped Harrell with his schoolwork, but he returned to the grocery every night at about 8:30 P.M. so that he could help clean up and close the business for the day. For his efforts, Harrell earned from $2.00 to $3.00 a week. He used the money to pay for school supplies and other expenses.

Andy Hardy’s America

The movies, radio, and advertisers also provided messages that encouraged adolescents like Robert Omata and Jasper Harrell to stay in school. In the late 1930s, Hollywood child star Mickey Rooney became the most visible symbol of the modern American teenager through his role in the very successful Andy Hardy film series. Rooney’s fictional character, Andy Hardy, shared his days in high school with same-age friends from happy and stable middle-class families in small-town America. Rooney first appeared as Andy Hardy in the 1937 MGM film, A Family Affair. By the series’ fourth film, Love Finds Andy Hardy (1938), Rooney was the major character and symbolized modern American teens. Young female screen actors such as Judy Garland and Lana Turner also had roles in the series. They provided the sweater-girl model that framed the stereotype of female adolescence in America’s high schools. In the world according to Andy Hardy, the teen years were a period focused on same-age peers, school, self-exploration in a safe environment guided by wise adults, and romantic experimentation limited to what became known as puppy love. There was no mention of the Great Depression or hard times. Some Hollywood films, such as Dead End (released in 1937 and featuring the Dead End Kids), showed a more class-conscious and troubled adolescence, but the Andy Hardy series far surpassed such examples in popularity and staying power. Andy and his fictional friends lived lives that Americans wanted for teens and that many adolescents apparently wanted for themselves.

Radio provided another commercial avenue that spread the American adolescent ideal. At the start of the decade, less than half of all American households included a radio. By 1940, more than 80 percent owned at least one radio. Families often listened together to radio broadcasts, but teens also listened on their own to programs that connected them to their generation. Radio precipitated the rapid spread of information, cultural norms, and especially music among adolescents coming of age in the 1930s.

Swing defined teen music by the mid-1930s. In 1932, the elegant African American bandleader and composer Duke Ellington recorded one of his most popular tunes, “It Don’t Mean a Thing (If It Ain’t Got That Swing).” Ellington did not invent Swing music, but his song provided a name for the sound that became known as Swing. Two years later, twenty-five-year-old Benny Goodman was working as the leader of a twelve-piece band in New York City. He and his group joined NBC radio’s Saturday night broadcast, Let’s Dance. The Goodman band’s segment did not air until the program’s last hour, from 12:30 to 1:30 A.M. The audience at that hour was small but generally young, and it was larger on the West Coast because of the time difference. The band’s offerings included music arranged by Fletcher Henderson, the most successful African American jazz bandleader of the 1920s. In 1935, the band went on a road tour and found its biggest success in California among the young fans of Let’s Dance. Goodman’s hometown of Chicago gave him a similar response and likewise labeled the new sound Swing. In June 1936, the CBS network aired a radio program entitled Saturday Night Swing Session featuring Benny Goodman and his band. Hollywood also noted the group’s popularity, and the Goodman band made two films: The Big Broadcast of 1937 and Hollywood Hotel.

Swing bands usually included a piano player, trombones, cornets, saxophones, clarinets (also known as “licorice sticks”), and various drums and percussion instruments that formed the basis of the sound’s syncopated four-beat rhythm. Fans expressed their enthusiasm through spontaneous shouts and applause as well as raucous dances such as the Lindy Hop, Suzy Q, and the Big Peach—collectively known as the Jitterbug. Radio shows spread Swing’s sound, and dance venues celebrating the style sprang up throughout the United States. The Savoy Ballroom in Harlem, New York, was the mecca for young Swing fans. The Savoy attracted both blacks and whites. Such integrated settings were rare, but they pointed to the racial mixing that became part of rock ‘n’ roll culture and controversy in a later generation. Record companies in the 1930s took advantage of teens’ love for Swing music, but no avenue was more important to the genre’s success than radio. Some adults worried that Swing music encouraged antisocial sexual behavior among teens. Despite such concerns, the music prospered, and radio made Swing’s appeal among young Americans impossible to stop.

Advertisers recognized that the teen market could make up for losses in adult buying power. Comic books, first published in 1933, were marketed directly to adolescents. Advertisers such as Charles Atlas used psychology to appeal to teenage boys who were insecure about the physical changes they were experiencing in adolescence. Atlas sold a body-building method and health program that he said would turn any “97 lb. weakling” into “a new man.” For girls, companies such as Breck shampoo sold the idea that “ordinary girls” could be models if they used the right products. The “Breck Girl” was one of the most successful advertising campaigns of the mid-twentieth century. The desire for healthy skin and the elimination of acne also drew the teen market. Sanitary napkins and other female products were marketed in schools. Teens were told that the road to successful adulthood included improvements in their physical appearance as well as their educational achievements.

Extending American Adolescence through Dependency

Hindsight highlights the important shifts of the 1930s that had long-term consequences for ideas about adolescence in the United States. By the end of the decade, Americans had exchanged some of the autonomy adolescents traditionally held for new protections that extended their dependency. New federal labor laws limited teens’ access to the job market, but provided legal protections for young workers. Compulsory school attendance laws regulated by state governments required most adolescents to stay in school until they reached at least sixteen years of age. New attitudes emphasized this period of life as a time that should be free of the full range of adult responsibilities.

This shift is perhaps most clearly evident in moves by states to raise minimum-age-at-marriage laws. Beginning in the 1880s, states raised minimum-age-of-sexual-consent laws from as low as age eight to at least sixteen. Such laws meant that any male having sex with a female younger than the specified minimum age could be charged and convicted of rape, even if the girl had consented to intercourse. The Women’s Christian Temperance Union led the effort to raise the minimum age of consent as part of a broad purity crusade. By the early twentieth century, child welfare advocates, psychologists, and morality crusaders generally agreed that adolescents were too immature, both physically and emotionally, to indulge in most adult vices (such as tobacco, alcohol, and gambling). They turned to states to pass laws restricting such behaviors to adults. Engaging in sexual intercourse and having children were viewed as the most important signals that someone had reached adulthood. Like the age of consent, the minimum age of marriage was a legal marker regulated by the states. In the 1920s, attempts to pass a federal marriage law included raising and standardizing the minimum age of marriage, but the effort failed. By the mid-1930s, most states had altered their marriage laws to prohibit marriage before age eighteen without parental consent, although some states provided for girls to marry at younger ages if their parents approved.

The extension of adolescent dependency, however, has not been uniform. Americans continue to harbor some ambivalence about the status and autonomy of teens. Attending and graduating from college is now viewed as even more important than in the past, but accessibility to college remains a hurdle for many. High school graduation rates are up to 89 percent, but there is great disparity in the quality of the education. Eighteen year olds were eligible for military service during World War II and thereafter, but eighteen, nineteen, and twenty year olds did not receive the right to vote until 1971. Juvenile courts generally accept seventeen as the maximum age of individuals under their jurisdiction. The idea became fairly uniform among states by the late 1930s, but in recent years some states have reversed longstanding practices. Many states now try teens accused of murder as adults. At another level, more teens die in car accidents than from any other cause, yet sixteen year olds may drive in all fifty states. Only recently have some reformers linked graduated drivers’ licensing to protecting the lives of young Americans. Purchasing and consuming alcohol is restricted to those twenty-one and over in the United States, a higher minimum age than in most European countries. Eighteen year olds may buy cigarettes in all fifty states, but most adults who smoke admit they started at much younger ages. The U.S. Supreme Court upholds the right of states and the federal government to restrict access to birth control and abortion by females under age eighteen at the same time that the average age when girls start menstruating has declined to eleven. These are just a few examples of the contradictions concerning American adolescents embedded in public policy. Perhaps keeping in mind that such policies and opinions are shaped by a combination of public attitudes, popular culture, economics, public policy, and biology will help Americans make wiser decisions that influence the lives of young Americans.

The dramatic economic crisis of the 1930s created an atmosphere that established a strong foundation for extending adolescence as a distinct period of life that should be available to all young Americans. Inequities and unevenness remained, but by 1940 Americans shared the idea that during the teen years education outweighed all other responsibilities as preparation for adulthood. Racial, ethnic, and gender stereotypes persisted, and respect for individual differences remained far from universal. Still, calls by the American people for the federal government to do something to end the economic crisis of the 1930s resulted in new public policies that intentionally, and sometimes unintentionally, reshaped American adolescence for that decade and the balance of the twentieth century. Members of the generation that came of age during the 1930s expected a more level playing field for their own children.