Tuesday, September 30, 2014

Going GoPro



It’s all a matter of perspective.

Despite the promise of the ad above, the television I grew up with was a third-person medium. In drama and comedy, the preferred shot was a medium-length two-shot, because the camera was bulky and mounted on a rolling tripod. For sports, the distance was even greater; cameras were often mounted high up, far from the playing field, using long telephoto lenses. The viewer was, in fact, a spectator, and even more isolated from the action than if he or she were present at the event. At least at the event one can choose where to look; on television, one saw only what the director chose.

From this point of view, we saw many spectacular things, of course. One of my earliest experiences occurred in 1955, during qualifying for the Gold Cup hydroplane race in Seattle.


The pride of Seattle, Slo-Mo-Shun V, was near the end of its qualifying run: Spectacular, unforgettable … but distant, out “there”. We could never know what driver Lou Fageol felt as the boat’s bow lifted, floated higher, soared into the sky, and then crashed back down onto the concrete-hard water. In some ways, we might as well have been reading about it.

So it was with most sports; viewers were “outside” the action, distanced and divorced from the players on the field or racecourse. No matter how much we might identify with a jersey, no matter how loudly we might shout at the screen during a touchdown or a homerun, television kept us away.

No longer. Television technology has evolved along a steady path taking us from passivity to participation. Want to know what driving an unlimited hydroplane feels like? Watch this:



From first-person shooter games to combat footage shot by soldiers themselves, we’ve transformed our experience into something more closely resembling real life. Instead of being told about the experience by a narrator, we’re, as much as possible, sharing it (without the sometimes painful consequences). And instead of being consumers of video, we have increasingly become producers of it, documenting our lives as they happen in a way never before possible.

And what’s really remarkable is how quickly this has all happened and how quickly we’ve internalized it as “normal”. YouTube was only founded in early 2005, nine years ago. GoPro, which manufactures the cameras that have made first-person video so popular, was founded in 2002. The kinds of stunning, technically revolutionary action sequences that were once available only to moviemakers like Steven Spielberg (the opening of Saving Private Ryan in 1998, for example) are now available to us all. In terms of living vicariously, of “participating” in lives and events we could never know in “real” life, television has finally pulled even with reading and its 500-year head start—maybe even ahead.

Friday, September 12, 2014

It Was (Is?) A White, White World

            When I first encountered television, the “people” who populated it were surprisingly diverse (kinda).  Yes, they were stereotypes, but at least characters from different races, ethnicities and classes were present.  Amos and Andy made a brief transition from radio (and demonstrated that blatant stereotypes that had been relatively inoffensive on radio couldn’t survive the visibility of television), and Eddie “Rochester” Anderson was the comic foil to Jack Benny (again, using a time-worn vaudeville stereotype, but at least visible).  The Cisco Kid chased outlaws in the Old West.  Ricky Ricardo spoke English with a Cuban accent and even, as her on-air pregnancy made apparent, slept with red-haired Lucy.  The Lone Ranger had Tonto to back him up, even if Tonto did speak an invented “Indian” language.  I Remember Mama celebrated an immigrant Norwegian family, and, of course, The Honeymooners honored the work of a bus driver and a sewer maintenance worker.  The characters were exaggerated and irredeemably offensive to modern-day audiences—but they at least gave an impression of an America that was composed of many streams.
  
            Then it virtually disappeared.  The screen went white.  It matched the rest of my world.

            Seattle in the 1950s was a segregated city, not by law but by bank policy and social practice. Before the war the population had been overwhelmingly Nordic.  While wartime airplane production at Boeing had brought an influx of African-American workers, primarily from the South, bank redlining and restrictive covenants had created the “CD” (Central District); the Lake Washington Ship Canal created a northern boundary as effective as a moat.  The Lake City neighborhood I grew up in remained entirely white—with new homes for white Boeing workers entering the middle class. 

            Radio had been relatively cheap and accessible to all segments of American culture.  But television was expensive, and advertisers wanted to attract those with the money.  So the dramatic and comedic world of television quickly came to look like my community—not the one south of the Ship Canal.  And, symbiotically, my mental universe came to look increasingly like the world of television.

            Starting in the late 1960s, a Hungarian immigrant named Dr. George Gerbner began formulating a theory of media influence called cultivation.  I think it explains a lot.

            Gerbner started with an observation.  We have historically learned about ourselves through stories.  Who are the good people?  Who are the bad?  How’d we come to be here?  What’s expected of us?  What happens if we make bad decisions?  How, in short, are we supposed to behave as decent, civilized human beings?  Yes, we can observe those around us, and we do—but our personal experience is limited, often extremely limited, and distorted.  Storytellers supplement our experience.  They shape our view of the world, both within and without our village.

            Storytellers captivated us around the campfires and the Homeric halls.  They told the tales of prophets and the exploits of gods.  They wrote the scripts that were acted out on the Elizabethan stages.  And they, in the guise of scholars, wrote our histories.

            Gerbner realized that, in America in the 1960s, the storytellers were on television, and they were promoting the myths and values of the corporate titans who fed and sustained them. 

            The measure of a story’s value was the number of people (or, more precisely, the number of the “right kind” of people) who listened to it, and to maximize that, the storytellers told stories that comforted rather than challenged.  They created a televised world of stability, predictability, and familiarity—a mirror that showed viewers not who and what they were, but rather who and what they thought they were.

            It was a world of befuddled white men who conversed with palomino horses.  White patriarchs ran households of sons from My Three Sons to Bonanza.  White men tamed the frontiers, caught the criminals and solved the crimes, and lived with sexy witches and genies.

            The problem with this world wasn’t just that it told patriarchal stories; it was that those stories were ubiquitous, across every genre, available at all hours.  In Nazi Germany, Joseph Goebbels had demonstrated, with his “big lie” theory, that even the most preposterous tales could be accepted as truth if they were repeated often enough and allowed to go unchallenged by conflicting stories.  Something of the sort happened with American television.

            Gerbner called the phenomenon “cultivation.”  An individual stereotype here or there, he discovered, was unlikely to change most people’s perceptions.  But a relentless barrage of such stereotypes, day after day, could eventually come to be more “real” than reality.  For heavy television viewers especially, the television world could become the real world, all the more as it reinforced (or “cultivated”) pre-existing attitudes.

            So, for a child growing up in an all-white, middle-class suburb populated by commuting dads and stay-at-home moms, a televised world of white, middle-class commuting dads and stay-at-home moms became the norm, the default.  In the world of television, those who conformed to this norm succeeded; those who did not either failed or, if their racial, ethnic, or sexual characteristics made it clear that they could never fit in, simply disappeared.

            For over three decades, Gerbner and his research assistants did massive studies of the characters who populated the television world, watching hundreds of hours of entertainment programs and methodically cataloguing the demographics of the casts.  The numbers alone were startling enough, and they remained remarkably stable over the years.  For every woman character, there were three men.  And women didn’t age as well as men, either; as they got older, they became evil—or they disappeared altogether.  Poor people disappeared, too, appearing only about one percent of the time, and then most likely as criminals.  African-Americans appeared in roughly the same proportion as their actual number in the “real” America—but almost always as secondary characters, seldom as leaders or successful professionals.  Asian-Americans and Native Americans were virtually invisible.

            Professor Gerbner died in 2005, and his studies would seem to be out of date.  Our television screens seem to be full, now, of successful people of color and women, even characters who are identifiably gay.  Mad Men has revealed to millions how artificial and stilted that image was.  We even have a black President.  The “default” has surely changed, hasn’t it?  

            It’s a work in progress.  Two years ago, Cheerios produced a commercial featuring an unbelievably cute little girl asking her mom if it were true that Cheerios were good for the heart.  Assured that they were, she then covered her sleeping dad with them.  It was a warm and funny family moment.


            The mom is white.  The dad is black.  The reaction, from some quarters, was vicious.  Little Gracie’s family may be reality, but for many, it still hasn’t displaced 50 years of television “reality.”

Saturday, September 6, 2014

The Remote Revolution

            On the last day of February, 1983, CBS broadcast the final two-hour episode of M*A*S*H.  The doleful guitar, the opening strains of “Suicide Is Painless”, the Bell helicopter landing in a dusty field—we gathered around to see what would become of Hawkeye, Margaret, Radar, B.J., and the rest of a company of battlefield surgeons we’d come to know over the last decade.  And America, or at least a good portion of it, came to a stop.

            That episode, “Goodbye, Farewell, and Amen”, was seen by over 125 million people—all at the same time.  Its rating was 60.2, which means that over 60% of all televisions in the country were tuned to it.  Even more astounding was its share, the percentage of sets actually in use at the time that were tuned to it; that was 77.  For two hours that night, more than three-quarters of all television viewers shared the same experience.
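
            A quick aside on the jargon: a rating counts every television household in the country, while a share counts only the sets that were switched on at the time.  Divide one by the other and you get a rough sense of how many sets were in use at all.  Here is a minimal sketch of that arithmetic, using only the figures quoted above (the variable names are mine):

```python
# Back-of-the-envelope check on the rating/share figures quoted above.
# Rating = percent of ALL TV households tuned in.
# Share  = percent of sets actually IN USE that were tuned in.
rating = 60.2
share = 77.0

# Fraction of all TV households with a set switched on that night
# (what the industry calls "households using television").
hut = rating / share * 100
print(f"Roughly {hut:.0f}% of TV households had a set on,")
print(f"and {share:.0f}% of those sets were showing the finale.")
```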

            For years, stories had been told of streets being emptied of cars and crime rates plummeting during such program finales.  One popular report (probably an urban myth) said that water consumption in major cities spiked enormously when people rushed to the bathroom during commercials.  But this was the high-point of the phenomenon.  Huge audiences still watch certain programs—the Super Bowl for one obvious example—but television no longer has the power to hold us in thrall, to command an entire culture, or at least a major portion of it, to sit down and watch when network executives dictate.

            It’s ironic.  We viewers overthrew the corporate titans by adopting a technology that helps us be lazy:  the remote control.


            Oddly enough, M*A*S*H, the program that eventually earned the largest audience in television history, was almost cancelled for poor ratings its first season.  It was saved by being moved in the network schedule to follow the highly popular All in the Family, and it finally took off on its own.  The strategy was common in network television at the time (and still is today):  pair a ratings success with a new or struggling program to follow it.  The tactic was rooted in a simple assumption:  most people would rather sit passively and watch whatever comes next than get up, walk a few steps, and manually change the channel.  It worked.

            It also worked for the economics of network television.  Blocks of commercials could precede and follow programs and, of course, interrupt them at predictable intervals.  The audience would wait patiently and absorb the commercials’ messages.  It was an efficient, tidy scheme, all based on passivity.

            The very first remote control, a wired device marketed by Zenith in 1950, was called the “Lazy Bones”.  A classic example of the Law of Unintended Consequences:  the remote was conceived as a device to enhance our passivity.  In practice, it had a startlingly different result, one whose consequences are still reverberating.

            Look at the ad above.  One of the advertised benefits of “Flash-matic” is that it allows the viewer to mute “long, annoying commercials.”  It also, of course, allows the viewer to change the channel without getting up.  Sounds simple enough—but it was the storming of the networks’ Bastille.

            If viewers don’t have to listen to commercials, then what’s the point of paying for them?  And if viewers have more choice over what to watch (disregarding the fact that they always had the choice to watch nothing at all), then what’s the point of devising elaborate schedules?  This wasn’t a major problem in the 1950s, when there were only three networks and a handful of local stations to choose from and all the networks ran commercials at the same time.  But what if … ?

            What if, instead of merely flipping channels and muting commercials, viewers could flip time itself?  What if they could decide when to watch a particular program?  What if they could speed through the commercials without even having to watch their silenced images?  What if they could pause to go to the bathroom or make a meal, or go back and watch something they’d missed?  Again, a new technology and the law of unintended consequences raised—and answered—these questions.

            Video tape was, in 1956, a powerful problem-solver for national networks; it allowed them to avoid repeating programs for different time zones and to store them much more easily than with the film they’d used before.  But the machines were complicated and expensive, far too much for home users.  And besides, who’d want to record TV programs, anyway?  That would take even more effort than standing up to change the channel.  Maybe the movie studios could sell some old movies to play on them … that appeared to be the best future use.

            Yes, a few companies, including Sony and RCA, sold a few videotape machines for home users in the 1960s.  But they had limited capacities and reel-to-reel mechanisms; only the geekiest were attracted by them. As a community organizer in the early 1970s, I used one, with a black and white camera, but it was bulky, complicated, impossible to edit, and more of an annoyance than a useful tool.

            Sony, though, struck again in 1975 with the first Betamax, using a cassette tape that was easy to insert and eject.  And  everything changed.  We at home could actually use these things, even though the “flashing 12:00” (from the user’s bafflement over how to program it to record) became iconic.  With competition and the VHS format came affordable prices.  With a reason to own one came a consumer revolution.

            We still have boxes full of old VHS tapes at home:  tapes of movies recorded off the air, complete with commercials.  Tapes of all available early episodes of “Dr. Who” (which my wife discovered on late-night TV after getting home from a night job).  Tapes of sequential episodes of favorite programs, which could have been the material for primordial binge watching.  And, yes, pre-recorded tapes of movies.  Granted, we were a bit atypical perhaps (OK, obsessive)—but we weren’t alone.  And the corporate giants finally recognized the monster they’d created and went all the way to the Supreme Court to try to kill it.  Why, recording programs was theft!  They owned television, not us!  Jack Valenti, representing the film industry, put it this way, testifying before Congress:  “I say to you that the VCR is to the American film producer and the American public as the Boston strangler is to the woman home alone.”

            They lost; we won the right to watch whatever they offered, whenever and however we wanted.  And, with the growth of cable in the 1980s (which the networks also did everything in their power to stifle), we had far more to choose from as well.


            Think of it as the Berlin Wall of broadcasting.  With the “Flash-matic” and the Betamax, the crumbling of corporate power began.

Wednesday, September 3, 2014

Zero Dockus ...

Zero dockus, mucho crockus, hallabolooza bub,
That’s the secret password that we use down at the club.

            Almost every afternoon after I walked home from school I’d rush to my bedroom, turn the knob on the portable TV, wait for a tiny point of light to expand into a full-screen image, and sing along with a local icon, Stan Boreson.  And then I’d settle in for an afternoon of stupid puns, parody songs, old movies and ethnic stereotyping.

            In the 1950s, before the advent of telecommunication satellites and continental microwave relays, much of television was local.  KING, the first and, for four years, only station on the air in Seattle, was, at that time, an ABC affiliate, but that didn’t mean an awful lot.  Network programming had to be physically delivered by messenger from the production centers in New York and Hollywood, and ABC itself was a struggling, nearly bankrupt network without much to offer, at least until Walt Disney got involved.

            With a growing demand for more hours of programming, stations like KING enlisted local performers to host daytime chat shows for housewives and “educational” variety shows for children.  The sets were cardboard (on the same level of sophistication as 1920s German Expressionist films or early Dr. Who), and the performers came cheap—and the audience, entranced by the very existence of television, wasn’t very critical.  By the mid-fifties, almost all stations had hours of programming for children:  Wunda Wunda, a woman in a clown get-up who read stories; Captain Puget, a seafarer who showed old movies and told stories; Brakeman Bill, a railroad engineer who showed old movies and told stories; and, at the top, J.P. Patches, an improvisational clown, and King’s Klubhouse with Stan Boreson, a former radio performer of some renown.

            There were others as well, some supplied by the networks:  Kukla, Fran and Ollie, Howdy Doody, Shari Lewis and Lamb Chop, and, of course, the Mickey Mouse Club.  But, at least in the first years, it was the locals who “hooked” us on television, who gave us hours of entertainment, and who implanted corny skits and silly songs we can still recite verbatim after half a century.  And they gave us more:  these were the creators of an identity—the postwar, baby boom generation that first experienced the world through a medium totally foreign to anything their parents had known.  And they taught many of us what it meant to come from the Pacific Northwest.

            Stan Boreson, in particular, created the “Scandahoovian.”  Seattle, before the war, had been heavily shaped by immigrants, primarily from Sweden and Norway, and Boreson developed an exaggerated Scandinavian persona, complete with a thick accent, an ever-present accordion for accompaniment, and a plethora of “Scandahoovian” folk songs like The Lutefisk Song, Catch a Pickled Herring, I Just Go Nuts at Christmas, and Valking in My Vinter Undervear.  Even for those of us who were not Scandinavian, the Northwest became a place of immigrants who spoke something other than Harvard English.  His constant companion was a nearly inert Basset hound, No-Mo (even that name, a play on the unlimited hydroplane Slo-Mo-Shun IV, had regional meaning).  His humor was broad, irreverent but gentle, and full of puns, and there was utterly no didactic content.

            Even with cardboard and plywood sets, it couldn’t last forever, and Boreson went off the air in 1967, replaced by nationally syndicated and network shows and Hanna-Barbera cartoons.  So, too, disappeared all of the local contemporaries, including, last of all, J.P. Patches.  There was still children’s programming, of course (at least while the F.C.C. still required it of licensees), but it was slicker, and the identity more urban and national.  As much as Sesame Street and Mister Rogers’ Neighborhood have contributed, they could never have given a Seattle kid as strong a sense of regional identity as Stan Boreson and No-Mo did.

            And zero dockus, mucho crockus, hallabolooza ban,
Means now you are a member of King’s TV club with Stan.