Saturday, January 30, 2010

The Fight Against Early Millennial Extinction

A developing but increasingly popular debate today deals with how the Internet, with its seemingly immeasurable wealth of information, affects the minds of the young people who depend on it. While another popular debate about kids using electronic devices – which the NY Times hosted an online discussion about only yesterday – concludes that scaling back their use is ultimately beneficial for a child, the one I'm interested in doesn't have such a clear answer. The biggest reason for this is that, for all its diversion and superfluity, the Internet has quickly become our generation's most valuable tool for finding information, whereas most electronic devices don't carry such value.

An article I read last May in New York Magazine called "The Benefits of Distraction and Overstimulation" provided perhaps the most extensive investigation into this issue to date. With greater panache than I will ever have, author Sam Anderson takes a deep look into the many problems plaguing us so-called Millennials: our inability to focus on and complete tasks from start to finish; our relatively weak resistance to distraction; and our desire to remedy these problems by "doping" our minds into focus (my college audience shouldn't have any trouble comprehending this).

In spite of all these hurdles impeding our ability to truly focus (if we even can), Anderson concludes that we will emerge with a unique ability to multitask and bring disparate pieces of information together to create something original. Here’s a little summary: “More than any other organ, the brain is designed to change based on experience, a feature called neuroplasticity. As we become more skilled at the 21st-century task [David] Meyer calls “flitting,” the wiring of the brain will inevitably change to deal more efficiently with more information. The neuroscientist Gary Small speculates that the human brain might be changing faster today than it has since the prehistoric discovery of tools. Research suggests we’re already picking up new skills: better peripheral vision, the ability to sift information rapidly.”

While I don't doubt that our brains are adapting to meet the ever-changing needs of an ever-abundant Internet, I do doubt that this change is necessarily good for everyone. All it really means is that those with an inherent ability to multitask will benefit while those with less natural ability will struggle to keep up. Obviously this is all part of the Darwinian1 selection process, but it also raises a number of interesting questions about its implications. Chief among these questions, I think, is whether or not some of us are sacrificing our "natural" talents in favor of Internet-related skills that society tells us we should be good at.

Now, as a senior getting ready to graduate, I recognize my own subjectivity and stake in what I'm about to say (especially since I'm making it on a blog). Yet I can't help but notice how employers, particularly media employers, look for an increasingly varied skill set: HTML coding, proficiency in Adobe programs, proficiency in Microsoft programs, knowledge of social media/networking sites, or plain Internet "savvy," just to name a few. I understand that having a diverse range of skills is helpful in today's occupational environment, but I also think cursory knowledge of many skills leads to less than expert knowledge of any one skill.

I know that many people are multi-talented and can learn to be proficiently skilled at more than one thing. I also know that people can adapt their talents to the requirements of the online culture. What I'm really arguing is that not everyone who has talent is talented at more than one thing, and pushing that person to try may only inhibit his or her ability to pursue that one thing.

The Internet culture we've become so accustomed to has, I believe, forced us into the mindset that we are losing if we can't do everything, and that we are stupid if we don't know everything. It seems that devoting real attention to any one aspect of our lives is often an exercise in futility, given everything else we may be missing out on. Recently I read that, due to the acceleration of the media, the amount of information produced up until 1940 equals the amount produced since.2 While this wasn't the first time I came across this statistic, it was the first time I thought about what it actually means for our cultural psyche. I thought about the competitive, almost ruthless nature with which we, as self-respecting cultural citizens, are expected to digest this galaxy of information. We expect that because access to this information has increased, we should all know more. Yet we can't possibly make ourselves know more. We can know about a lot more things, but the more things we know about, the less we know about each of them, which gets to the heart of the problem with our new information age. We simply know less about more.

I suppose this might not be a problem that I have with culture (who am I to argue for or against a trend?) but more of a problem that culture has with me. I'll be the first to admit that I'm not the type of person who likes to load my plate with thousands of different stimuli. Whereas many of my peers listen to music or watch TV while doing work, I typically have no such distractions. Whereas the average person in my generation has 8 tabs open in his or her Internet browser, I rarely have more than 4 and even become anxious when I do because I think I forgot to do something or left a tab there for a reason (self-diagnosed "browser-tab anxiety"). Yet I am the type of person who likes to think through almost everything I encounter, who won't stop until I understand something to its absolute core, and who lies awake at night thinking about concepts like "post-irony" until I can wrap my head around them.

Does this mean I'm a dying breed? Well, not exactly. I used to lament the fact that I wasn't born 100 years ago, when everything seemed so much simpler and I wouldn't have had to worry about these types of problems. But I now realize that it's people like me who are forming the nascent backlash against such lofty ideals as "cultural globalization," where people adopt universal and/or random ideas into the way they act because there's simply nothing new. We form the link from new age eclecticism back to specialization and individuality. We will usher in a new era where people will forgo 30-second sound bites and 500-word blog posts in favor of depth that requires a greater and more nuanced attention span.

So yes, my societal value may be down for now, but, as we all know, history moves in cycles. I'm no dinosaur; I just hope I subsist long enough to witness my revival.

1I am loath to say "natural selection" because we are talking about something as diametrically opposed to nature as the Internet.
2I also read this in a book published over 20 years ago, which means that, due to the advent of the Internet, the latter has now probably far surpassed the former.

Sunday, January 24, 2010

Defusing Avatar (without footnotes, for now)

If the past is any indication – and it almost always is – then "Avatar" shouldn't be bringing home the Oscar for Best Picture this year. While "Avatar" deserves praise for its impressive computer graphics, costume design, and cinematic innovation, it plainly falls short of what we've come to expect out of a Best Picture winner. The reasons for this may be manifold, but for now I want to focus on the one quality that is the film's most telling foible, at least in the eyes of those in the Academy: metaphor.

Beyond its import in "Avatar," metaphor has become the primary reason why science fiction has lasted as a successful and viable film genre. Science fiction/fantasy filmmakers usually project society's current problems onto a future setting where these problems are exaggerated. Add a few special effects, some compelling drama, a fabricated language and voila, you've created an imagined world that almost everyone can relate to. Audiences (myself included) time and again eat this formula up, and everything else we buy into is just an added bonus – like masks, toys, conventions, or anything the studio can dream up to make up for the astronomical budget it used to depict humans living in an alternate universe.

For this reason, Academy voters generally prefer authentic, lifelike portrayals to their metaphorical, fantasy counterparts, though this isn't to say that the latter have gone unrecognized. "The Lord of the Rings: The Return of the King" received the 2003 award for Best Picture despite clearly being in the fantasy category. Even the 1999 winner "American Beauty" is a highly metaphorical and implausible depiction of society, though it carries a markedly poignant and novel message.

Clearly the past doesn’t provide a precise rubric for how the Academy votes, but it does make one thing quite clear: as beautiful as it might look, idealistic filmmaking isn’t rewarded unless it’s a wholly unique achievement. And that’s where “Avatar” runs into trouble. For all its glamour and technological progression, it doesn’t really tell us anything new about the human condition (or the Na’vi, for that matter). While not all award-winning movies can do this – it’s in fact rare that any movie in a given year can – expectations increase enormously for a science fiction contender like “Avatar.” To be taken seriously, the movie must approach perfection in almost every aspect that it can be judged.

Yet James Cameron’s vision for what his characters would do on Pandora frankly pales in comparison to his vision for Pandora itself. His stunning creation remained the only payoff in the midst of a predictable and derivative storyline. Cameron himself even acknowledged that the story draws upon the “Indian removal” theme from movies like 1990 Best Picture winner “Dances With Wolves.” Though they might make me eat my words, for now I’ll give the Academy more credit than to reward a plot that’s already been done.

What's even more interesting about that storyline is what it's intended to mean. Remember what I was saying before about how all science fiction films are metaphoric representations of society's imperfections? Well, let's quickly think about what aspects of society Cameron is commenting on. The film's primary target areas are American imperialism/militarism, capitalism, and environmentalism. While important, these are also issues that probably don't necessitate a $237 million budget to represent (which makes it that much more ironic that this one did). Yet another contender for this year's award tackles many of these concepts just as effectively, and without the excessive spending or disillusionment.

"The Hurt Locker," directed, ironically, by Cameron's ex-wife Kathryn Bigelow, tells the riveting tale of three soldiers in the army's venerable EOD (explosive ordnance disposal) squad in Iraq. Unlike "Avatar," Bigelow's flick doesn't hit you over the head with political statements about the army's occupation and depravity; this serves only as the pretext. The director understands the audience's knowledge of the situation's severity and is only out to expose the humanity (or inhumanity) within it. What we are left with are characters that aren't entirely heroic like Jake Sully or entirely villainous like Colonel Miles Quaritch. Despite their unmatched talents in defusing bombs and resisting threats on the battlefield, the soldiers also exhibit tangible human traits. They drink, they curse, they smoke, some are even racist, but in the end they are a relatable, albeit contemptible, bunch. As viewers we can't help being drawn into the tension and raw unpredictability that surround their lives and, subsequently, the lives of the thousands who serve in the Middle East.

Though “The Hurt Locker” merits accolades on a number of different levels, the one I hope it’s recognized for most is its authenticity. In an age where everything is meant to have more than one meaning and stories unfold just the way we expect them to, the film is refreshing in that we never know what’s coming next. Unlike almost anything else I’ve seen, it actually gives you the experience of being at war, from its handheld camera work, to the rattling of a dilapidated car frame during an explosion, to the volatile, genuine human emotions elicited by characters you seem to truly know and care for.

Academy Award voters – along with all movie critics – typically differ in their beliefs about cinema's capacity to capture reality versus its ability to create something new and magical. While in the past they have recognized both, this year they have the unique opportunity to choose between them. If it comes down to it, this choice will be the difference between a hopeful, less-than-subtle message set in a future that will never exist and an actual manifestation of that message in a world that for some is all too real (the parallel between the military's incursion into Iraq and its incursion into Pandora should be more than obvious, though I'll include it parenthetically just in case).

Although budget allowances, camera innovations, and swanky computer graphics are hard to ignore, they can't make up for obvious platitudes and naive predictability. I hope voters will, as I have, look past the glitz and finally reward something that's real.

Sunday, January 17, 2010

Don't Get Fooled, Again

A couple of months ago I had a relatively brief conversation with one of my roommates about the effectiveness of different network news stations. Among other things, we debated whether a partisan news network like Fox News is more "effective" than a self-proclaimed unbiased network like CNN. If you judge "effectiveness" by advertising revenues and viewership, it's no contest; Fox blows CNN out of the water, along with every other news network for that matter.

A week ago The New York Times reported that Fox News makes more money than CNN, MSNBC, and the evening newscasts of ABC, NBC, and CBS combined, and it's no surprise why. Fox is the only network that explicitly caters to a conservative audience and thus stands out as the lone dissenting voice in a purportedly "liberal" media. Looking at it from this point of view, I told my friend that it's almost impossible for any unbiased news entity to compete because people can't remain on the fence forever and will inevitably make up their minds. This may be partially true, but there are also a host of well-researched reasons why this is so, the most prominent being that people look to Fox News to feed the a priori beliefs they have about current affairs. This means that the network's constituents generally view current issues a certain way – which is why they are watching that network to begin with – and confirm these views through facts and arguments presented by the network.

Yet there's another aspect to "effectiveness" that can't necessarily be measured by profit margins or numbers of viewers. This has to do with the degree to which viewers' beliefs are shaped by the very media enterprises they depend on for news. I'm not talking about the a priori beliefs mentioned at the end of the previous paragraph either, even though these may be related. What I'm interested in is the viewpoint that arises once any previously unadulterated issue enters the news. To be clear, this isn't, for example, about those who are for/against health care reform and watch MSNBC/Fox News to support their belief. It's about those who are for/against health care reform because of what they learned on MSNBC/Fox News, and I think we can all agree this is a whole different story.

A slightly different, albeit cynical, perspective on this dilemma says that anything we know, or pretend to know, through the media is manufactured in a way that we have no control over. This applies not only to what's in the news, but to anything we learn through any form of media. For instance, I could tell you off the top of my head that the distance from home plate to the right-field foul pole in Yankee Stadium is 314 feet. The reason I know this isn't because I've been to the stadium so many times to observe, but because I've watched so many Yankee games on television that it's embedded in my memory. Yet theoretically, the number on the outfield fence could turn out to be some televised illusion1, in which case I would have no empirical evidence for the distance and would probably begin to question whatever else I think television might be lying to me about. It may seem farfetched, but I can't really argue with the logic and find it pretty fascinating nonetheless.

But how does it apply to media bias? Naturally, I'll acknowledge two ways in which it's a stretch. First, every time we receive information through the media we enter into an unspoken contract with said media that the information is accurate, based primarily on a past reputation of journalistic integrity and truthfulness that they would have no discernible reason to deviate from. Second, a fact is a fact, and whether we get it from Fox News or the Associated Press doesn't make it any less true. The way it does apply, however, has nothing to do with distorting facts or misreporting news (that we can leave to Jon Stewart). It's that instead of not trusting news providers enough to report information accurately, we tend to trust them too much. We'll believe anything they say as long as it sounds good and can be backed by a few verifiable facts. We therefore struggle to perceive the essential line between the reporting of news/information and the analysis of it, a line that some try all too hard to blur.

As insidious a gambit as it may seem, subjective media producers have many a good reason to distort the line between fact and opinion (if it is in fact their goal to push a particular political agenda onto their viewers). One reason has to do with the scarcity of on-air time dedicated to any specific issue in the news. Since most broadcasts must fit into an allotted time slot, producers must be selective with what they're going to show on the air. This affords them the latitude to devote as much time as they want, within the aforementioned time restrictions, to any matter they feel is important enough or that they feel strongly about. Instead of discussing a variety of issues that all have great import, they can discuss certain issues they deem significant, for a seemingly inordinate amount of time, under the guise that they don't actually have enough time to talk about everything. As viewers, we complacently let this happen because (a) we have strong beliefs on certain topical matters and (b) we think it's important to remain knowledgeable about these matters. The creators know this, and therefore give us more detail about fewer things (especially if they have to do with politics) rather than less detail about more things, which we probably didn't care about in the first place.

What they also know is that most Americans have day jobs (though maybe not for long) and that they don't watch TV at these jobs. But they love to when they get home, just in time to hear Bill O'Reilly or Chris Matthews proselytize on their screens about what was "important" that day. This, as you might have guessed, isn't exactly a coincidence.

I'm not entirely sure about this, but prima facie the intended American news cycle goes something like this: (1) Americans read the morning newspaper, an allegedly objective presentation of the day's news; (2) Americans discuss some of that news with co-workers; (3) Americans go home to listen to "expert" analysis from gloating cable news personalities. Unfortunately (or fortunately, for some) it doesn't always work that way.2 Believe it or not, some Americans actually receive certain news items for the first time during these prime-time broadcasts, which makes it virtually impossible for them to disentangle what actually happened from the broadcaster's spin on it. There you have the type of dilemma I brought up earlier: people whose knowledge and opinions are fabricated almost exclusively by what they learn from any one news network. Sorry it took so long to get here.

Now, this might sound way too simple and contrived, and there are thousands of other variables to consider, like other forms of print media, radio, and of course the Internet. Yet this brings me to my next point, and why I hope this problem will become less and less relevant to my generation. Ostensibly, the Internet eliminates the two problems confronting televised news broadcasts: there is an almost unlimited scope for content, and it can be accessed at any time. Unlike the previous few generations, we don't depend on the 3-step process mentioned earlier to receive and analyze our news. This relative3 level of autonomy on the Web is why we can turn 50 years of normative media culture on its head.

Obviously those who hold the reins will try everything in their power to prevent this from happening, and I must admit that their efforts to discredit the Internet as a viable news medium aren't entirely spurious. For all its breadth and diversity, the Web has many more people than TV seeking to push radical or mendacious content on their readers. Also, just because people get their news on the Web doesn't mean they are getting it from more than one source, which may or may not present information objectively. Despite all this, the success of sites like the Huffington Post – which aggregates content and opinions from different sources – may be a testament to an online community looking to rectify its own subjectivity and extremism. The Huff Post may have tapped a heterogeneous audience longing for equality and diversity in the news4, though it's also just very good at what it does.

Look, I understand I'm not the first person to proclaim some quixotic Internet revolution, and I'm probably not even the first to make this observation. But I genuinely believe that the Internet restores a possibility that we as media consumers haven't had in a while. Perhaps more than at any other point in our scant media history, we have the opportunity to think on our own. Now it's time to start taking advantage.

1Similar to the way advertisements are superimposed onto backstops behind home plate.
2Especially due to the growing irrelevance and obsolescence of daily newspapers.
3I say “relative” because of the theory I depicted earlier; we still have no control over the information we receive through the Internet. It’s ultimately always up to someone else.
4I understand that the Huff Post publishes mostly liberal opinion pieces, but this clearly isn't the only reason why it has had such success.

Saturday, January 9, 2010

Procrastinating Post-Irony

This week I completed 85% of the essays in Chuck Klosterman's new book, Eating the Dinosaur. As this is the third of Klosterman's books I've read (or am in the process of reading), I believe this makes me 96% prepared1 to share my thoughts on his writing.

My usual comprehension process when reading his prose entails reading, re-reading what concepts I failed to understand the first time, pondering these concepts for a few moments, and then completely forgetting whatever the fuck I just learned approximately 1.5 years later. This time, however, I’m hoping to have a little more longevity – this is the first time I’m actually translating my ideas into writing – though I still can’t say I won’t forget this all in a year and a half.

The one piece that I'm having the most trouble trying to extricate from my memory is titled "T is for True," in which Klosterman discusses three individuals (Ralph Nader, German filmmaker Werner Herzog, and Weezer frontman Rivers Cuomo) who are all too literal for our universally ironic society. His basic premise is that as media consumers our minds are so busy trying to pin down what's ironic that we struggle to discern anything intended to be taken at face value. This I generally accept; we all understand that guy who never ceases to be disingenuous or sarcastic, we all understand basically everything that's funny about the show The Office, and we all question the "reality" contained within certain reality television. Yet I'm still struggling to wrap my head around this later hypothesis: "I often wonder if we would all be better off if we looked at all idioms of art in a completely literal fashion, all the time. It would be confusing as hell for the first twenty or so years, but I suspect the world would eventually make more sense than it does now. At least we could agree on whatever it is we're pretending to understand."

For the moment, I'm going to loosely term what Klosterman just described as "post-irony." Although he never uses this term and would likely lament my doing so, I believe it's a fitting label based upon sheer logic and the cursory Google investigation I just conducted. This proved "post-ironic" to be an incipient term describing anything from self-recognized sincerity following a period of sarcasm to "a way for pretentious assholes to be even bigger pretentious assholes" that ultimately ends in nothing (see: Youtube, The Qwesi). For the purposes of this argument, I'm going to create an ambiguous2 spin on the former definition and state that any artistic endeavor considered post-ironic should satisfy these conditions: (1) the work must be a genuine attempt in whatever genre it's categorized in, without being self-deprecating or sarcastic in any way; (2) there can be no ulterior, superfluous, or otherwise sexual association therein; and (3) said work must demonstrate a recognition of ironic possibility but continue to deny it in favor of something more real.

The third condition is what separates that which is simply very literal from that which is post-ironic, and also why I didn’t rush to classify Klosterman’s quote under that category. To use one of his examples, when Ralph Nader said that Obama needed to decide whether he’s going to be “Uncle Sam for the people of this country, or Uncle Tom for the giant corporations,” he was using the term “Uncle Tom” in the strict definition outlined by Harriet Beecher Stowe, not in the cultural context it later received. Even though Nader was probably aware of the racial connotation to the term, he did not use it because of this connotation or try to reverse the normative perspective on it. He is therefore not post-ironic because irony seems to rarely, if ever, penetrate his mind.

However, irony does penetrate the minds of lots and lots of people, and especially the minds of really important people like television producers. Every time they conceive a new idea for a show, they must make certain audio and visual decisions based upon how they think it should be laid out. While many of these decisions are uninteresting to anyone outside the established media and, more importantly, irrelevant to this discussion, there is one that I believe at least scratches the surface of post-irony as a quantifiable idea. And that would be the marooned stepchild of American television: the laugh track.

As many of you have noticed, canned laughter has become all but extinct in recent years, with a growing preference for sitcoms like Curb Your Enthusiasm and It's Always Sunny in Philadelphia that don't include it. It has become so obsolete, in fact, that those in the Academy of Television Arts and Sciences hardly continue to associate it with serious artistic merit: in 2000 Sex and the City was the only show nominated for the Outstanding Comedy Series Emmy not to use a laugh track, while How I Met Your Mother was the only show nominated in the same category in 2009 to use one.

It may not be a surprise to anyone who's read his work that Chuck Klosterman is not a fan of the laugh track, though it's possible he may be more of one than he realizes. In an earlier essay from Eating the Dinosaur, Klosterman says that he "can't think of anything philosophically stupider than laugh tracks." Twelve pages later he concludes, "canned laughter is a lucid manifestation of an anxious culture that doesn't know what is (and isn't) funny." Let me start by saying that on most levels, I agree with this assertion. But I also don't think it's anything new to say that the American audience is generally ignorant and relies on the media to direct whatever it's supposed to feel. Direction is everywhere, from the canned laughter in sitcoms to the melodramatic orchestras in tear-jerking dramas. Yet what makes the laugh track a particularly interesting narrative device, especially in today's culture, is that it remains a viable way for television studios to get people to laugh. Despite the frequency of ironic parodies3 and attempts to render it kitsch, the laugh track continues to survive in the form of a few CBS sitcoms (The Big Bang Theory, Two and a Half Men, HIMYM) and a host of children's shows.

Why? Well, before I get into my spiel about how the laugh track is the paragon of post-irony and how the folks at CBS may or may not4 be geniuses, let me say that I have no clue what goes on in production meetings or whether the laugh track is or isn't a selling point. What I do know is that, ostensibly, the idea of integrating a laugh track into a sitcom is post-ironic. This is defined by the three conditions I set forth earlier:
(1) the sitcom is using the laugh track for its primary intention – to make people laugh;
(2) the laugh track may indicate sexual or even ironic moments within the context of a particular plot, but this is about the laugh track irrespective of content (in other words, the shows themselves are not post-ironic, just the fact that they use a laugh track);
(3) somewhere along the line of production there was, for whatever reason, a conscious decision to include a laugh track, which contrasts with the steady trend in post-2000 sitcoms to do otherwise.

Now, if you’ve made it this far you might think this is just a solipsistic gimmick I set up to make myself feel good about my own deranged theories. While I don’t doubt that some of that may be going on, I’m probably more interested in understanding the reasoning behind using the laugh track as a post-ironic device. Do the producers at CBS really think we are that stupid and need our social cues spoon-fed? Or are they consciously attempting to fill a collective cultural void that yearns for honesty and is blighted by irony?

I think it's probably some combination of the two, but either way, heralding a completely literal, post-ironic world becomes dangerous when you start to consider what might become the norm. Unfortunately for Klosterman and the rest of its detractors, the laugh track almost has to exist in a post-ironic society. Yet I'm not advocating or accepting irony as the normative cultural standard either; I agree that it can lead to confusion, deception, and possibly even worse. All I'm saying is that wholesale sincerity or post-irony isn't the solution – a show like How I Met Your Mother wouldn't even exist were irony not such a prevalent comedic device – and we should recognize that irony can inhabit a reasonable place within society.

For one reason or another, irony became ingrained into our cultural code and maybe even our brains. It may be time to scale it back, but I can say with certainty that it’s also not going anywhere.

1Calculated based on the number of chapters I’ve read divided by the number of chapters he’s written in these 3 books.
2Ambiguous because post-irony and irony can exist at the same time, as some aspects of the work fall into each category. This will hopefully become clearer as this progresses.
3At least three episodes of Family Guy have used canned laughter and/or applause to play up the absurdity of the tactic. Anyone who has seen even one episode understands why.
4Even if this were what they were going for, would this strategy even work? Maybe it's best to save that for another entry.