Friday, 24 March 2017

THE MOVIE WITH A THOUSAND PLOTLINES

Daniel Kwan and Daniel Scheinert, young directors who go by the joint film credit Daniels, are known for reality-warped miniatures—short films, music videos, commercials—that are eerie yet playful in mood. In their work, people jump into other people’s bodies, Teddy bears dance to hard-core dubstep, rednecks shoot clothes from rifles onto fleeing nudists. Last year, their first feature-length project, “Swiss Army Man”—starring Daniel Radcliffe, who plays a flatulent talking corpse that befriends a castaway—premièred at Sundance, and left some viewers wondering if it was the strangest thing ever to be screened at the festival. The Times, deciding that the film was impossible to categorize, called it “weird and wonderful, disgusting and demented.”

Perhaps it is no surprise, then, that when the Daniels were notified by their production company, several years ago, that an Israeli indie pop star living in New York wanted to hire them to experiment with technology that could alter fundamental assumptions of moviemaking, they took the call.

The musician was Yoni Bloch, arguably the first Internet sensation on Israel’s music scene—a wispy, bespectacled songwriter from the Negev whose wry, angst-laden music went viral in the early aughts, leading to sold-out venues and a record deal. After breaking up with his girlfriend, in 2007, Bloch had hoped to win her back by thinking big. He made a melancholy concept album about their relationship, along with a companion film in the mode of “The Wall”—only to fall in love with the actress who played his ex. He had also thought up a more ambitious idea: an interactive song that listeners could shape as it played. But by the time he got around to writing it his hurt feelings had given way to more indeterminate sentiments, and the idea grew to become an interactive music video. The result, “I Can’t Be Sad Anymore,” which he and his band released online in 2010, opens with Bloch at a party in a Tel Aviv apartment. Standing on a balcony, he puts on headphones, then wanders among his friends, singing about his readiness to escape melancholy. He passes the headphones to others; whoever wears them sings, too. Viewers decide, by clicking on onscreen prompts, how the headphones are passed—altering, in real time, the song’s vocals, orchestration, and emotional tone, while also following different micro-dramas. If you choose the drunk, the camera follows her as she races into the bathroom, to Bloch’s words “I want to drink less / but be more drunk.” Choose her friend instead, and the video leads to sports fans downing shots, with the lyrics “I want to work less / but for a greater cause.”

Bloch came to believe that there was commercial potential in the song’s underlying technology—software that he and his friends had developed during a few intense coding marathons. (Bloch had learned to write programs at an early age, starting on a Commodore 64.) He put his music career on hold, raised millions of dollars in venture capital, and moved to New York. Bloch called his software Treehouse and his company Interlude—the name hinting at a cultural gap between video games and movies which he sought to bridge. What he was selling was “a new medium,” he took to saying. Yet barely anyone knew of it. Treehouse was technology in need of an auteur, which is why Bloch reached out to the Daniels—encouraging them to use the software as they liked. “It was like handing off a new type of camera and saying, ‘Now, use this and do something amazing,’ ” he recalled. “ ‘I don’t want to tell you what to do.’ ”

Bloch was offering for film an idea that has long existed in literature. In 1941, Jorge Luis Borges wrote a story about a learned Chinese governor who retreated from civilization to write an enormous, mysterious novel called “The Garden of Forking Paths.” In Borges’s telling, the novel remained a riddle—chaotic, fragmentary, impenetrable—for more than a century, until a British Sinologist deciphered it: the book, he discovered, sought to explore every possible decision that its characters could make, every narrative bifurcation, every parallel time line. By chronicling all possible worlds, the author was striving to create a complete model of the universe as he understood it. Borges apparently recognized that a philosophical meditation on bifurcating narratives could make for more rewarding reading than the actual thing. “The Garden of Forking Paths,” if it truly explored every possible story line, would have been a novel without any direction—a paradox, in that it would hardly say more than a blank page.

Daniel Kwan told me that while he was in elementary school, in the nineteen-nineties, he often returned from the public library with stacks of Choose Your Own Adventure novels—slim volumes, written in the second person, that allow readers to decide at key moments how the story will proceed. (“If you jump down on the woolly mammoth, turn to page 29. If you continue on foot, turn to page 30.”) The books were the kind of thing you could find in a child’s backpack alongside Garbage Pail Kids cards and Matchbox cars. For a brief time, they could offer up a kind of Borgesian magic, but the writing was schlocky, the plot twists jarring, the endings inconsequential. As literature, the books never amounted to anything; the point was that they could be played. “Choose Your Own Adventure was great,” Kwan told me. “But even as a kid I was, like, there is something very unsatisfying about these stories.”

Early experiments in interactive film were likewise marred by shtick. In 1995, a company called Interfilm collaborated with Sony to produce “Mr. Payback,” based on a script by Bob Gale, who had worked on the “Back to the Future” trilogy. In the movie, a cyborg meted out punishment to baddies while the audience, voting with handheld controllers, chose the act of revenge. The film was released in forty-four theatres. Critics hated it. “The basic problem I had with the choices on the screen with ‘Mr. Payback’ is that they didn’t have one called ‘None of the above,’ ” Roger Ebert said, declaring the movie the worst of the year. “We don’t want to interact with a movie. We want it to act on us. That’s why we go, so we can lose ourselves in the experience.”

Gene Siskel cut in: “Do it out in the lobby—play the video game. Don’t try to mix the two of them together. It’s not going to work!”

Siskel and Ebert might have been willfully severe. But they had identified a cognitive clash that—as the Daniels also suspected—any experiment with the form would have to navigate. Immersion in a narrative, far from being passive, requires energetic participation; while watching movies, viewers must continually process new details—keeping track of all that has happened and forecasting what might plausibly happen. Good stories, whether dramas or action films, tend to evoke emotional responses, including empathy and other forms of social cognition. Conversely, making choices in a video game often produces emotional withdrawal: players are either acquiring skills or using them reflexively to achieve discrete rewards. While narratives help us to make sense of the world, skills help us to act within it.

As the Daniels discussed Bloch’s offer, they wondered if some of these problems were insurmountable, but the more they talked about them, the more they felt compelled to take on the project. “We tend to dive head first into things we initially want to reject,” Kwan said. “Interactive filmmaking—it’s like this weird thing where you are giving up control of a tight narrative, which is kind of the opposite of what most filmmakers want. Because the viewer can’t commit to one thing, it can be a frustrating experience. And yet we as human beings are fascinated by stories that we can shape, because that’s what life is like—life is a frustrating thing where we can’t commit to anything. So we were, like, O.K., what if we took a crack at it? No one was touching it. What would happen if we did?”

The Daniels live half a mile from each other, in northern Los Angeles, and they often brainstorm in informal settings: driveway basketball court, back-yard swing set, couch, office. After making an experimental demo for Bloch, they signed on for a dramatic short film. “Let us know any ideas you have,” Bloch told them. “We’ll find money for any weird thing.” By then, Interlude had developed a relationship with Xbox Entertainment Studios, a now defunct wing of Microsoft that was created to produce television content for the company’s game console. (The show “Humans,” among others, was first developed there.) Xbox signed on to co-produce.

While brainstorming, the Daniels mined their misgivings for artistic insight. “We’d be, like, This could suck if the audience was taken out of the story right when it was getting good—if they were asked to make a choice when they didn’t want to. And then we would laugh and be, like, What if we intentionally did that?” Scheinert told me. “We started playing with a movie that ruins itself, even starts acknowledging that.” Perhaps the clash between interactivity and narrative which Ebert had identified could be resolved by going meta—by making the discordance somehow essential to the story. The Daniels came up with an idea based on video-game-obsessed teen-agers who crash a high-school party. “We wanted to integrate video-game aesthetics and moments into the narrative—crazy flights of fancy that were almost abrasively interactive,” Scheinert said. “Because the characters were obsessed with gaming, we would have permission to have buttons come up in an intrusive and motivated way.”

For other ideas, the two directors looked to previous work by Bloch’s company. Interlude had designed several simple games, music videos, and online ads for Subaru and J. Crew, among others, but the scope for interaction was limited. “It was, like, pick what color the girl’s makeup is, or, like, pick the color of the car and watch the driver drive around,” Scheinert recalled. One project that interested them was a music video for Bob Dylan’s “Like a Rolling Stone.” While the song plays, viewers can flip among sixteen faux cable channels—sports, news, game shows, documentaries, dramas—but on each channel everyone onscreen is singing Dylan’s lyrics. The video attracted a million views within twenty-four hours, with the average viewer watching it three times in a row. The Daniels liked the restrained structure of the interactivity: instead of forking narratives, the story—in this case, the song—remained fixed; viewers were able to alter only the context of what they heard.

With this principle in mind, the Daniels came up with an idea for a horror film: five strangers trapped in a bar visited by a supernatural entity. “Each has a different take on what it is, and you as a viewer are switching between perspectives,” Kwan said. “One person thinks the whole thing is a prank, so he has a cynical view. One is religious and sees it as spiritual retribution. One sees it as her dead husband. The whole thing is a farcical misunderstanding of five characters who see five different things.”

Their third idea was about a romantic breakup: a couple wrestling with the end of their relationship as reality begins to fragment—outer and inner worlds falling apart in unison. “We got excited about it looking like an M. C. Escher painting,” Scheinert said. “We were playing with it getting frighteningly surreal. Maybe there’s, like, thousands of versions of your girlfriend, and one of them is on stilts, and one of them is a goth—”

“It was us making fun of the possible-worlds concept, almost—but that became overwhelming,” Kwan said.

“And so we started to zero in on our theme,” Scheinert said. “We realized, Oh, all the silliness is icing more than substance.” The premise was that the viewer would be able to explore different versions of the breakup but not alter the dialogue or the outcome. “We thought there was something funny about not being able to change the story—about making an interactive film that is thematically about your inability to change things.”

The Daniels submitted all three ideas—three radically different directions—for Bloch and his team to choose from. Then they waited.

Interlude operates from behind a metal security door on the sixth floor of a building off Union Square. The elevator opens into a tiny vestibule. On a yellow table is a wooden robot, alongside a stack of Which Way books—a copycat series in the style of Choose Your Own Adventure. A pane of glass reveals a bright office space inside: a lounge, rows of workstations, people who mostly postdate 1980.

Yoni Bloch occupies a corner office. Thin, smiling, and confident, he maintains a just-rolled-out-of-bed look. In summer, he dresses in flip-flops, shorts, and a T-shirt. Usually, he is at his desk, before a bank of flat-screen monitors. An acoustic guitar and a synthesizer sit beside a sofa, and above the sofa hangs a large neo-expressionist painting by his sister, depicting a pair of fantastical hominids.

Bloch’s world is built on intimate loyalties. He wrote his first hit song, in 1999, with his best friend in high school. He co-founded Interlude with two bandmates, Barak Feldman and Tal Zubalsky. Not long after I met him, he told me about the close bond that he had with his father, a physicist, who, starting at the age of nine, wrote in a diary every day: meticulous Hebrew script, filling page after page. After his father passed away, Bloch began reading the massive document and discovered a new perspective on conversations they had shared long before, experiences they had never spoken about. When he yearned to confer with his father about Interlude, he went looking for passages about the company; when his son was born, last year, he searched for what his father had written when his first child was born. Rather than read straight through, Bloch took to exploring the diary sporadically, out of time—as if probing a living memory.

Treehouse is an intuitive program for a nonintuitive, nonlinear form of storytelling, and Bloch is adept at demonstrating it. In his office, he called up a series of video clips featuring the model Dree Hemingway sitting at a table. Below the clips, in a digital workspace resembling graph paper, he built a flowchart to map the forking narrative—how her story might divide into strands that branch outward, or loop backward, or converge. At first, the flowchart looked like a “Y” turned on its side: a story with just one node. “When you start, it is always ‘To be, or not to be,’ ” he said. The choice here was whether Hemingway would serve herself coffee or tea. Bloch dragged and dropped video clips into the flowchart, then placed buttons for tea and coffee into the frame, and set the amount of time the system would allow viewers to decide. In less than a minute, he was previewing a tiny film: over a soundtrack of music fit for a Philippe Starck lobby, Hemingway smiled and poured the beverage Bloch had selected. He then returned to the graph paper and added a blizzard of hypothetical options: “You can decide that here it will branch again, here it goes into a loop until it knows what to do, and here it becomes a switching node where five things can happen at the same time—and so on.”
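
To make the structure concrete, here is a minimal sketch, in Python, of what such a branching graph amounts to. The node fields, the clip names, and the preview function are assumptions made for illustration; they are not Treehouse’s actual data model or API.

from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class Node:
    clip: str                                                  # video clip played at this node
    choices: Dict[str, "Node"] = field(default_factory=dict)   # button label -> next node
    timeout: float = 4.0                                       # seconds the viewer has to decide
    default: Optional["Node"] = None                           # branch taken if no button is pressed

def preview(node, picks):
    # Walk the graph, consuming the viewer's picks; fall back to the default branch.
    while node is not None:
        print("play", node.clip)
        if not node.choices:
            break
        node = node.choices.get(next(picks, None), node.default)

# "To be, or not to be": a single node with two branches.
tea = Node(clip="hemingway_pours_tea.mp4")
coffee = Node(clip="hemingway_pours_coffee.mp4")
start = Node(clip="hemingway_at_table.mp4", choices={"tea": tea, "coffee": coffee}, default=coffee)

preview(start, iter(["tea"]))   # plays the opening clip, then the tea clip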

As Bloch was getting his company off the ground, a small race was under way among like-minded startups looking for financial backing. In Switzerland, a company called CtrlMovie had developed technology similar to Interlude’s, and was seeking money for a feature-length thriller. (The film, “Late Shift,” had its American première last year, in New York.) Closer to home, there was Nitzan Ben-Shaul, a professor at Tel Aviv University, who, in 2008, had made an interactive film, “Turbulence,” using software that he had designed with students. Ben-Shaul, like the Daniels, felt some ambivalence about the form, even as he sought to develop it. “What I asked myself while making ‘Turbulence’ was: Why am I doing this?” he told me. “What is the added value of this, if I want to enhance the dramatic effect of regular movies?” The questions were difficult to answer. Some of his favorite films—“Rashomon,” for instance—prodded viewers to consider a story’s divergent possibilities without requiring interactivity. As a result, they maintained their coherence as works of art and, uncomplicated by the problems of audience participation, could be both emotionally direct and thought-provoking. “Rashomon” ’s brilliance, Ben-Shaul understood, was not merely the result of its formal inventiveness. Its director, Akira Kurosawa, had imbued it with his ideas about human frailty, truth, deceit, and the corrupting effects of self-esteem.

Ben-Shaul feared that, as technology dissolved the boundaries of conventional narrative, it could also interfere with essential elements of good storytelling. What was suspense, for example, if not a deliberate attempt to withhold agency from audience members—people at the edge of their seats, screaming, “Don’t go in there!,” enjoying their role as helpless observers? At the same time, why did the mechanisms of filmmaking have to remain static? Cautiously, he embraced the idea that interactivity could enable a newly pliant idea of cinematic narrative—“one that is opposed to most popular movies, which are built on suspense, which make you want to get to the resolution, and focus you on one track, one ending.” Perhaps, he thought, such films could even have a liberating social effect: by compelling audiences to consider the multiplicity of options a character could explore, and by giving them a way to act upon those options, movies could foster a sense of open-mindedness and agency that might be carried into the real world. He began pitching his technology to investors.

Yoni Bloch and his bandmates, meanwhile, were lining up gigs in the Pacific Northwest to pay for a flight from Tel Aviv, to present Treehouse to Sequoia Capital, the investment firm. The trip had grown out of a chance meeting with Haim Sadger, an Israeli member of the firm, who had handed Bloch his business card after seeing a demo of “I Can’t Be Sad Anymore” at a technology convention in Tel Aviv. Bloch, who hadn’t heard of Sequoia and thought it sounded fly-by-night, filed the card away. But, once the significance of the interest was explained to him, he worked to get his band to the group’s headquarters, in Menlo Park, California.

Bloch speaks with a soft lisp, and in a tone that betrays no urgency to monetize, but he is a skilled pitchman. Once, he gave a presentation to a Hollywood director who was recovering from a back injury and had to stand. “Even if you’re standing and he’s sitting, it feels the other way round,” the director recalled. “He owns the room.” Sadger told me that three minutes into his presentation Bloch had everyone’s attention. Coming from the worlds of music videos and video games, rather than art films, Bloch and his band spoke earnestly, and with little hesitancy, about revolutionizing cinematic narratives. “They didn’t see at the time the tremendous business potential that their creative idea and evolving technology had,” Sadger said. The Sequoia investors recognized a business that could not only earn revenue by licensing the technology but also harvest data on viewer preferences and support new advertising models; they offered Bloch and his bandmates more than three million dollars. “They beat us in getting large investments,” Ben-Shaul recalled. “Our investment fell through—and they took off.”

By the time Bloch moved to New York, in 2011, and contacted the Daniels, Interlude had raised an additional fifteen million dollars in venture capital. Bloch told the directors that if there were creative options that Treehouse did not provide he could build them. The role of enabler comes naturally to him. (His best songs, a critic at Haaretz told me, were those he had written and produced for other people.) Bringing a music producer’s sense of discrimination to video, Bloch told the Daniels that they should make the breakup story. “Right away, it was, like, Let’s go with the hardest concept,” he told me. “Love stories have been written billions of times, especially love tragedies. It’s the oldest story in the book. Finding out how to make it different while using the audience is something you can’t do easily.”

“Possibilia” is a term of art in metaphysics, and it is also the title that the Daniels placed on the cover sheet of a six-page treatment for their breakup film—alongside mug shots of twenty-three uniformed schoolgirls, each with an orange on her shoulder. The schoolgirls don’t signify anything, except, perhaps, that the remaining pages are going to get weird, and that a serious idea will be toyed with.

In the treatment, the Daniels sketched out a cinematic poem: a brief investigation of indecision and emotional entropy in a dissolving romance. The story starts with a couple, Rick and Polly, seated at a kitchen table. They begin to argue, and, as they do, reality begins to unravel. Soon, their breakup is unfolding across parallel worlds that divide and multiply: first into two, then four, eight, sixteen. The Daniels envisioned viewers using thumbnails to flip among the alternate realities onscreen.

Translating the treatment into a script posed a unique challenge: because the dialogue needed to be identical across the sixteen different performances, so that viewers could shift from one to another seamlessly, Rick’s and Polly’s lines had to be highly general. “Early on, we came up with all sorts of specific lines, and they kept falling by the wayside, because we couldn’t come up with different ways to interpret them,” Scheinert said. “It got vaguer the harder we worked on it, which is the opposite of good screenwriting.” Kwan added, “Basically, we allowed the location, the performance, and the actions to give all the specificity.”

At one moment of tension, as the film splinters into eight parallel worlds, Polly declares, “I need to do something drastic!” The script notes that her line will be delivered, variously, in the kitchen, in a laundry room, on the stairs, in a doorway, on the porch, in the front and back yards, and on the street—and that in each setting she will make good on her outburst differently: “slaps him and starts a fight / starts making out with him / flips the table / breaks something / gets in a car and begins to drive away / etc.” Like a simple melody harmonized with varied chords, the story would change emotional texture in each world. To keep track of all the permutations, the Daniels used a color-coded spreadsheet.

The Daniels cast Alex Karpovsky (of “Girls”) and Zoe Jarman (of “The Mindy Project”) as Rick and Polly, and then recorded the two actors improvising off the script. “We kind of fell in love with their mumbly, accidental, awkward moments,” Scheinert said. But these “accidents,” like the written dialogue, would also have to be carefully synchronized across the many possible versions of the story. The Daniels edited the improvisations into an audio clip and gave it to the actors to memorize. Even so, to keep the timing precise, the actors had to wear earpieces during shooting—listening to their original improvisation, to match their exact rhythm, while interpreting the lines differently. “At first, it was very disorienting,” Karpovsky told me. “I had to keep the same pace, or the whole math at Interlude would fall apart: this section has to last 8.37 seconds, or whatever, so it seamlessly feeds into the next branch of our narrative.”
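
A small sketch of the bookkeeping this timing constraint implies, with all figures and names invented for illustration (the article says only that segment lengths had to line up): each parallel-world take of a segment is checked against a common duration, so that switching worlds mid-scene never breaks sync.

# Hypothetical duration check: every parallel-world take of a segment must run the
# same length (within a small tolerance), or switching worlds would break the sync.
segments = {
    "I need to do something drastic": {          # segment, keyed by its shared line (illustrative)
        "kitchen": 8.37, "laundry room": 8.37, "porch": 8.37, "street": 8.41,
    },
}
TOLERANCE = 0.02   # seconds of drift allowed between takes (an assumed figure)

for line, takes in segments.items():
    reference = min(takes.values())
    for world, length in takes.items():
        if abs(length - reference) > TOLERANCE:
            print(f"'{line}': the {world} take runs {length}s, expected about {reference}s")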

The result, empathetic and precise, could easily work as a gallery installation. The multiple worlds lend a sense of abstraction; the vagueness of the lines lends intimacy. As Scheinert told me, “It reminded me of bad relationships where you have a fight and you are, like, What am I saying? We are not fighting about anything.” While working on “Possibilia,” the Daniels decided to make the story end in the same place that it begins, dooming Rick and Polly to an eternal loop. Watching the film, toggling among the alternate worlds while the characters veer between argument and affection, one has the sense of being trapped in time with them. There is almost no narrative momentum, no drive to a definite conclusion, and yet the experience sustains interest because viewers are caught in the maelstrom of the couple’s present.

As a child, reading Choose Your Own Adventure books, I often kept my fingers jammed in the pages, not wanting to miss a pathway that might be better than the one I had chosen. In “Possibilia” there is no such concern, since all the pathways lead to the same outcome. The ability to wander among the alternate worlds serves more as a framing device, a set of instructions on how to consider the film, than as a tool for exhaustive use. “Possibilia” is only six minutes long, but when a member of Interlude roughly calculated the number of different possible viewings, he arrived at an unimaginably large figure: 3,618,502,788,666,131,106,986,593,281,521,497,120,414,687,020,801,267,626,233,049,500,247,285,301,248—more than the number of seconds since the Big Bang. It is unfeasible to watch every iteration, of course; knowing this is part of the experience. By the time I spoke with Karpovsky, I had watched “Possibilia” a dozen times. He gleefully recalled a moment of particular intensity—“I got to light my hand on fire!”—that I hadn’t yet seen.
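
The quoted figure is easy to sanity-check: it is exactly 2^251, the equivalent of 251 independent binary forks, though the article does not say how Interlude arrived at it. A few lines of Python confirm the count and the comparison; the only added assumption is an age of the universe of about 13.8 billion years.

possible_viewings = 2 ** 251
print(possible_viewings)
# 3618502788666131106986593281521497120414687020801267626233049500247285301248

# Rough seconds since the Big Bang, assuming an age of about 13.8 billion years.
seconds_since_big_bang = int(13.8e9 * 365.25 * 24 * 3600)
print(possible_viewings // seconds_since_big_bang)   # about 8 * 10**57 times larger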

The film, in its structure, had no precedent, and one’s response to it seemed to be at least partly a function of age and technological fluency. When a screening of the project was arranged for Xbox, the studio’s head of programming, Nancy Tellem—a former director of network entertainment at CBS—was uncertain what to do. “I was sitting at a table with my team, and my natural response was to sit back and say, ‘O.K., I want to see the story,’ ” she told me. “But then, all of a sudden, my team, which is half the age that I am, starts screaming, ‘Click! Click! Click!’ ”

In 2014, a version of the film hit the festival circuit, but it quickly became impossible to see. Just after its début, Microsoft shuttered Xbox Entertainment Studios, to reassert a focus on video games—stranding all its dramatic projects without distribution. Last August, Interlude decided to make “Possibilia” viewable online, and I stopped by to watch its producers prepare it for release. Alon Benari, an Israeli director who has collaborated with Bloch for years, was tweaking the film’s primary tool: a row of buttons for switching among the parallel worlds. The system took a few seconds to respond to a viewer’s choice. “A lot of people were clicking, then clicking again, because they didn’t think anything happened,” he told me. “At the moment that viewers interact, it needs to be clear that their input has been registered.” He was working on a timer to inform a viewer that a decision to switch between worlds was about to be enacted. Two days before “Possibilia” went online, Benari reviewed the new system.

“Is it good?” Bloch asked him.

“Yeah,” Benari said. “I was actually on the phone with Daniel, and he was happy.” All that was left was the advertising. Interlude had secured a corporate partnership with Coke, and Benari was working on a “spark”—five seconds of footage of a woman sipping from a bottle, which would play before the film. Watching the ad, he said, “The visuals are a bit too clean, so with the audio we are going to do something a bit grungy.” After listening to a rough cut, he walked me to the door. He was juggling several new projects. He had recently shown me a pilot for an interactive TV show, its mood reminiscent of “Girls.” The interactivity was light; none of the forking pathways significantly affected the plot. Benari thought that there was value in the cosmetic choices—“You still feel a sense of agency”—but he was hoping for more. Wondering if the director was simply having trouble letting go, he said, “We like the storytelling, and the acting, but we feel he needs to amp up the use of interactivity.”

Trying to invent a new medium, it turns out, does not easily inspire focus. Early on, Interlude applied its technology to just about every form of visual communication: online education, ads, children’s programming, games, music videos. But in the past year Bloch has steered the company toward dramatic entertainment. After Microsoft shut down Xbox Entertainment Studios, he invited Nancy Tellem to serve as Interlude’s chief media officer and chairperson. Tellem, impressed by the way Interlude viewers tended to replay interactive content, accepted. “The fact that people go back and watch a video two other times—you never see it in linear television,” she told me. “In fact, in any series that you might produce, the hope is that the normal TV viewer will watch a quarter of it.” Interactive films might have seemed like a stunt in the nineties, but for an audience in the age of Netflix personalized content has become an expected norm; L.C.D. screens increasingly resemble mirrors, offering users opportunities to glimpse themselves in the content behind the tempered glass. Employees at Bloch’s company envision a future where viewers gather around the water cooler to discuss the differences in what they watched, rather than to parse a shared dramatic experience. It is hard not to see in this vision, on some level, the prospect of entertainment as selfie.

Six months after Tellem was hired, Interlude secured a deal with M-G-M to reboot “WarGames,” the nineteen-eighties hacker film, as an interactive television series. (M-G-M also made an eighteen-million-dollar investment.) Last April, CBS hired Interlude to reimagine “The Twilight Zone” in a similar way, and in June Sony Pictures made a multimillion-dollar “strategic investment.” By August, Interlude was sitting on more than forty million dollars in capital—the money reflecting the growing industry-wide interest. (Steven Soderbergh recently completed filming for a secretive interactive project at HBO.) Business cards from other networks, left behind in Bloch’s office like bread crumbs, suggested additional deals in the making; a whiteboard listing new projects included a pilot for the N.B.A. To signify the corporate transformation, Bloch told me, his company had quietly changed its name, to Eko.

“As of when?” I asked.

“As of four days ago,” he said, smiling.

Even as the company was expanding, Bloch was striving to preserve a sense of scrappy authenticity. “We are a company run by a band,” he insisted. “Everything sums up to money—I have learned this—but we still believe that if you make the work about the story it will be powerful.” One of Eko’s creative directors was overseeing a grass-roots strategy to attract talent, giving seminars at universities and conferences, encouraging people to use the software, which is available for free online. Hundreds of amateurs have submitted films. The best of them have been invited to make actual shows.

Of the marquee projects, “WarGames” is the furthest along in development, with shooting scheduled to begin this winter. When Bloch’s team pitched M-G-M, they had in mind a project tied to the original film, which is about a teen-age hacker (Matthew Broderick) who breaks into a military server and runs a program called Global Thermonuclear War. He thinks the program is a game, but in fact it helps control the American nuclear arsenal, and soon he must reckon with the possibility that he has triggered a real nuclear war.

Sam Barlow, the Eko creative director overseeing the reboot, worked in video-game design before Bloch hired him. He told me, “The premise in the pitch was that there is a game, a literal game, that you are playing, and then—as with the original—it becomes apparent that there is a more nefarious purpose behind it. The idea was that you would be able to see the reaction to what you are doing as live-action video.”

This proposal was soon set aside, however, out of fear that toggling between a game and filmed segments would be jarring. Instead, Barlow pulled together a new pitch. Hacking was still central, but it would be explored in the present-day context of groups like Anonymous, and in the murky post-Cold War geopolitical environment: terrorism, drone warfare, cyber attacks. The story centered on a young hacker and her friends and family. Viewers would be seated before a simulacrum of her computer, viewing the world as she does, through chat screens, Skype-like calls, live streams of cable news. On a laptop, Barlow loaded a prototype: three actors chatting in separate video windows on a neutral background. With quick swipes, he moved one window to the foreground. The show’s internal software, he said, would track the feeds that viewers watched, noting when they took an interest in personal relationships, for instance, or in political matters. The tracking system would also gauge their reactions to the protagonist, to see if they preferred that her actions have serious consequences (say, putting lives at risk) or prankish ones (defacing an official Web site).

“Suppose you have a significant story branch,” Barlow said. “If that’s linked to an explicit decision that the viewer must make, then it feels kind of mechanical and simple.” In contrast, the show’s system will be able to customize the story seamlessly, merely by observing what viewers decide to watch. This design acknowledged that key life choices are often not guided by explicit decisions but by how we direct our attention—as Iris Murdoch once noted, “At crucial moments of choice most of the business of choosing is already over.” Before an impending story branch, for instance, the system would know if a particular viewer was interested in the protagonist’s personal life, and her serious side, and could alter the story accordingly—perhaps by killing off a close relative and having her seek revenge.
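
What Barlow describes can be sketched as a simple scoring model. This is a hypothetical illustration, since the article does not specify how the show’s software weighs attention: watch time on each feed accrues to themes, and at a branch point the variant whose themes the viewer has dwelt on most is chosen, with no explicit prompt.

from collections import defaultdict

FEED_THEMES = {                            # illustrative mapping of feeds to themes
    "chat with mother": {"personal"},
    "drone-strike news stream": {"political", "serious"},
    "prank planning call": {"prankish"},
}
BRANCHES = {                               # illustrative story variants at one branch point
    "a relative is killed; she seeks revenge": {"personal", "serious"},
    "an official website gets defaced": {"prankish"},
}

def choose_branch(watch_log):
    # watch_log: (feed, seconds the viewer kept that feed in the foreground) pairs.
    theme_scores = defaultdict(float)
    for feed, seconds in watch_log:
        for theme in FEED_THEMES.get(feed, ()):
            theme_scores[theme] += seconds
    # Pick the branch whose themes received the most accumulated attention.
    return max(BRANCHES, key=lambda branch: sum(theme_scores[t] for t in BRANCHES[branch]))

print(choose_branch([("chat with mother", 40), ("drone-strike news stream", 55), ("prank planning call", 12)]))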

Barlow was uncertain how much of the “WarGames” tracking mechanics he should reveal to the viewer. “The two-million-dollar question is: Do we need to show this?” he said. He believed that interactive films will increasingly resemble online ads: unobtrusively personalized media. “When ads first started tracking you, for the first few months you’d be, like, ‘How did they know?’ A couple of months later, you’d be, like, ‘Of course they knew. I was Googling baby formula.’ And now it’s, like, ‘I’m still getting spammed for vacation properties around Lake Placid, and I’m, like, Dude, we went. You should already know!’ ”

In many ways, the swiping system that Barlow had designed was a work-around for technological limitations that will soon fall away. Advances in machine learning are rapidly improving voice recognition, natural-language processing, and emotion detection, and it’s not hard to imagine that such technologies will one day be incorporated into movies. Brian Moriarty, a specialist in interactive media at Worcester Polytechnic Institute, told me, “Explicit interactivity is going to yield to implicit interactivity, where the movie is watching you, and viewing is customized to a degree that is hard to imagine. Suppose that the movie knows that you’re a man, and a male walks in and you show signs of attraction. The plot could change to make him gay. Or imagine the possibilities for a Hitchcock-type director. If his film sees you’re noticing a certain actor, instead of showing you more of him it shows you less, to increase tension.”

Moriarty believes that as computer graphics improve, the faces of actors, or even political figures, could be subtly altered to echo the viewer’s own features, to make them more sympathetic. Lifelike avatars could even replace actors entirely, at which point narratives could branch in nearly infinite directions. Directors would not so much build films around specific plots as conceive of generalized situations that computers would set into motion, depending on how viewers reacted. “What we are looking at here is a breakdown in what a story even means—in that a story is defined as a particular sequence of causally related events, and there is only one true story, one version of what happened,” Moriarty said. With the development of virtual reality and augmented reality—technology akin to Pokémon Go—there is no reason that a movie need be confined to a theatrical experience. “The line between what is a movie and what is real is going to be difficult to pinpoint,” he added. “The defining art form of the twenty-first century has not been named yet, but it is something like this.”

In mid-October, Bloch showed me a video that demonstrates the cinematic use of eye tracking—technology that is not yet commercially widespread but will likely soon be. The Daniels had directed the demo, and they had imbued it with their usual playfulness. It opens on a couple in a café. Behind them, a woman in a sexy dress and a muscleman walk in; whichever extra catches your gaze enters the story. Throughout, an announcer strives to describe the tracking system, but the story he uses as a showcase keeps breaking down as the characters, using a magical photo album, flee him by escaping into their past. Viewers, abetting the couple, send them into their memories by glancing at the photos in their album. At key moments, the story is told from the point of view of the actor you watch the most. “People start out looking at both, and then focus on one—and it is not necessarily the one who talks,” Bloch said. “When you look at her, she talks about him, then you care about him.”

The future that the demo portended—entertainment shaped by deeply implicit interactivity—was one that the Daniels later told me they found exhilarating and disconcerting. “In some ways, as artists, we are supposed to be creating collective experiences,” Kwan said. “This could get really messy if what we are actually doing is producing work that creates more isolated experiences.” Alternatively, it is possible to imagine the same technology pulling audiences into highly similar story patterns—narratives dominated by violence and sex—as it registers the basest of human responses.

“On the upside, interactivity has the potential to push you to reflect on your biases,” Scheinert said. Psychological experiments suggest that people who inhabit digital avatars of a race, gender, or age unlike their own can become more empathetic. “Done right, interactivity can shed light on what divides us,” he added. “We find ourselves talking a lot about video games lately. Video games have blossomed into an art form that’s become pretty cool. People are now making interactive stories that can move you, that can make you reflect on your own choices, because they make you make the kinds of choices that a hero really has to make. At the same time, it is really hard to make films with multiple endings, and I wonder what shortcuts will present themselves, what patterns. Right now, we don’t have many to fall back on.”

On the morning that Bloch showed me the eye-tracking demo, the Eko offices were humming with anticipation. Two weeks earlier, the company had been divided into ten teams that competed, in a two-day hackathon, to produce mockups for new shows or games. The competition was a search for effective ways to tell stories in a new medium. The solution that the Daniels had worked out for “Possibilia”—a fixed narrative playing out across multiple worlds—might have sufficed for a short film, where abstract dialogue could be tolerated, but it was not scalable to feature-length projects. That morning, an Eko creative director told me that he was wrestling with the magnitude of the creative shift. “What does character development even mean if a viewer is modifying the character?” he said. If a film has five potential endings, does it constitute a single work of art, or is it an amalgam of five different works?

All the teams had completed their mini films, and Bloch, in his office, was ready to announce the winner. An employee knocked on his door. “Should I yalla everyone?” he asked—using the Arabic term for “Let’s go.”

“Yalla everyone,” Bloch said.

In the office cafeteria, there was a long wooden table, beanbag chairs, a drum kit. People munched on popcorn. On a flat-screen monitor, the staff of Eko’s Israeli satellite office, which does the technical work, had video-conferenced in. Bloch held a stack of envelopes, as if he were at the Oscars, and began to run through the submissions. Two groups had used voice recognition, making it possible to talk to their films. Others had toyed with “multiplayer” ideas. Sam Barlow’s team had filmed a man in Hell trying to save his life in a game of poker. Viewers play the role of Luck, selecting the cards being dealt—not to win the game but to alter the drama among the players. Bloch said he had cited the film in a recent presentation as a possibility worth exploring. “The expected thing in that kind of story is that you would be the guy who comes to Hell,” he said. “Playing Luck—something that is more godlike—is much more exciting.”

The winner, Bloch announced, was “The Mole,” in which the viewer plays a corporate spy. The team had written software that made it possible to manipulate objects in the film—pick them up and move them. Bloch thought the software had immediate commercial potential. Walking back to his office, he expressed his excitement about what it would mean to permanently alter a scene: to tamper with evidence in a crime drama, say, and know that the set would stay that way. “It makes the world’s existence more coherent,” he said.

Even as a number of Bloch’s creative directors were working to make the interactivity more implicit, he did not think that explicit choices would fade away. Done well, he believed, they could deepen a viewer’s sense of responsibility for a story’s outcome; the problem with them today was the naïveté of the execution, but eventually the requisite artistic sophistication would emerge. “Every time there is a new medium, there’s an excessive use of it, and everyone wants to make it blunt,” he told me. “When stereo was introduced—with the Beatles, for example—you could hear drums on the left, singing on the right, and it didn’t make sense musically. But, as time went on, people started to use stereo in ways that enhanced the music.” Bloch had started assembling a creative board—filmmakers, game designers, writers—to think about such questions. “We have to break out of the gimmicky use of interactivity, and make sure it is used to enhance a story. We are in the days of ‘Put the drums on the left.’ But we’re moving to where we don’t have to do that. People, in general, are ready for this.”

Saturday, 18 March 2017

In praise of cash


Cash might be grungy, unfashionable and corruptible, but it is still a great public good, important for rich and poor alike

I recently found myself facing a vending machine in a quiet corridor at the Delft University of Technology in the Netherlands. I was due to speak at a conference called ‘Reinvent Money’ but, suffering from jetlag and exhaustion, I was on a search for Coca-Cola. The vending machine had a small digital interface built by a Dutch company called Payter. Printed on it was a sentence: ‘Contactless payment only.’ I touched down my bank card, but rather than dispensing Coke, it beeped a message: ‘Card invalid.’ Not all cards are created equal, even if you can get one – and not everyone can.
In the economist’s imagining of an idealised free market, rational individuals enter into monetary-exchange contracts with each other for their mutual benefit. One party – called the ‘buyer’ – passes money tokens to another party – called the ‘seller’ – who in turn gives real goods or services. So here I am, the tired individual rationally seeking sugar. The market is before me, fizzy drinks stacked on a shelf, presided over by a vending machine acting on behalf of the cola seller. It’s an obedient mechanical apparatus that is supposed to abide by a simple market contract: If you give money to my owner, I will give you a Coke. So why won’t this goddamn machine enter into this contract with me? This is market failure.
To understand this failure, we must first understand that we live with two modes of money. ‘Cash’ is the name given to our system of physical tokens that are manually passed on to complete transactions. This first mode of money is public. We might call it ‘state money’. Indeed, we experience cash like a public utility that is ‘just there’. Like other public utilities, it might feel grungy and unsexy – with inefficiencies and avenues for corruption – but it is in principle open-access. It can be passed directly by the richest of society to the poorest of society, or vice versa.
Alongside this, we have a separate system of digital fiat money, in which our money tokens take the form of ‘data objects’ recorded on a database by an authority – a bank – granted power to ‘keep score’ of them for us. We refer to this as our bank account and, rather than physically transporting this money, we ‘move’ it by sending messages to our banks – for example, via mobile phones or the internet – asking them to edit the data. Money ‘moves’ to your landlord if your two respective banks can agree to edit your accounts, reducing your score and increasing your landlord’s score.
This second mode of money is essentially private, running off an infrastructure collectively controlled by profit-seeking commercial banks and a host of private payment intermediaries – like Visa and Mastercard – that work with them. The data inscriptions in your bank account are not state money. Rather, your bank account records private promises issued to you by your bank, promising you access to state money should you wish. Having ‘£500’ in your Barclays account actually means ‘Barclays PLC promises you access to £500’. The ATM network is the main way by which you convert these private bank promises – ‘deposits’ – into the state cash that has been promised to you. The digital payments system, on the other hand, is a way to transfer – or reassign – those bank promises between ourselves.
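
It can help to see how little ‘moving’ digital money actually involves. The following is a deliberately toy sketch, in Python, of the essay’s point that a digital payment is nothing more than an edit to account records held by intermediaries; real payments also involve interbank clearing and settlement, which the essay does not cover.

accounts = {"you (Barclays)": 500, "landlord (HSBC)": 100}   # balances in pounds, purely illustrative

def transfer(ledger, payer, payee, amount):
    if ledger[payer] < amount:
        raise ValueError("payment refused")   # an intermediary can simply decline
    ledger[payer] -= amount                   # your recorded promise-balance is reduced
    ledger[payee] += amount                   # the landlord's is increased

transfer(accounts, "you (Barclays)", "landlord (HSBC)", 450)
print(accounts)   # {'you (Barclays)': 50, 'landlord (HSBC)': 550}
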
This dual system allows us the option to use private digital bank money when buying pizza at a restaurant, but we can always resort to public state money drawn out of an ATM if the proprietor’s debit card system crashes. This choice seems fair. At different times, we might find either form more or less useful. As you read this, though, architects of a ‘cashless society’ are working to remove the option of resorting to state cash. They wish to completely privatise the movement of money tokens, pushing banks and private-payments intermediaries between all interactions of buyers and sellers.
The cashless society – which more accurately should be called the bank-payments society – is often presented as an inevitability, an outcome of ‘natural progress’. This claim is either naïve or disingenuous. Any future cashless bank-payments society will be the outcome of a deliberate war on cash waged by an alliance of three elite groups with deep interests in seeing it emerge.
The first is the banking industry, which controls the core digital fiat money system that our public system of cash currently competes with. It irritates banks that people do indeed act upon their right to convert their bank deposits into state money. It forces them to keep the ATM network running. The cashless society, in their eyes, is a utopia where money cannot leave – or even exist – outside the banking system, but can only be transferred from bank to bank.
The second is the private payments industry – the likes of Mastercard – that profits from running the infrastructure that services that bank system, streamlining the process via which we transfer digital money between bank accounts. They have self-serving reasons to push for the removal of the cash option. Cash transactions are peer-to-peer, requiring no intermediary, and are thus transactions that Visa cannot skim a cut off.
The third – perhaps ironically – is the state, and quasi-state entities such as central banks. They are united with the financial industry in forcing everyone to buy into this privatised bank-payments society for reasons of monitoring and control. The bank-money system forms a panopticon that enables – in theory – all transactions to be recorded, watched and analysed, good or bad. Furthermore, cash’s ‘offline’ nature means it cannot be remotely altered or frozen. This hampers central banks in implementing ‘innovative’ monetary policies, such as setting negative interest rates that slowly edit away bank deposits in order to coerce people into spending.
Governments don’t really mention that monetary policy agenda. It isn’t catchy enough. Rather, the key weapons used by the alliance are more classic shock-and-awe scare tactics. Cash is used by criminals! People buy drugs with cash! It’s the black economy! It supports tax evasion! The ability to present control as protection relies on constant calls to imagine an external enemy, the terrorist or Mafiosi. These cries of moral panic are set in contrast to the glossy smiling adverts about digital payment. The emerging cashless society looms like a futuristic sunrise, cleansing us of these dangerous filthy notes with rays of hygienic, convenient, digital salvation.
Supporting this core alliance are auxiliary corps of establishment academics, economists and futurists, living life in leafy suburbs, flying business class to speak at technology conferences, attended to by a wall of sycophantic media pundits and innovation journalists preaching the gospel of cashlessness. The Curse of Cash (2016) by Kenneth Rogoff, economics professor at Harvard, was longlisted for the Financial Times and McKinsey Business Book of the Year award, undoubtedly accompanied by invitations to financial industry-sponsored conference parties in five-star hotel lobbies.
The psychological assault is working. The Netherlands – where I face my vending machine – has become one key front in the war on cash. Here cash is becoming viewed like an illegal alien on the run, increasingly excluded from the formal economy, drawing dirty looks from shop assistants. Signs say ‘Card only’. Who is Card? Card is a glamorous socialite, welcomed into stores. Card is superior. Look at the bank adverts showcasing their accessories for Card. Nobody is building accessories for Cash.
The frontlines, though, are now creeping to poorer countries. India’s recent so-called ‘demonetisation’ was a brutal overnight retraction of rupee notes by the prime minister Narendra Modi to bring discipline to the ‘black economy’. It was an exercise that necessitated choking the poorest Indians, who depend on cash and who often lack access to bank accounts. Originally cast in popular terms as an attempt to stem corruption, the message was later ironically altered to cast cashlessness as a way to create economic progress for India’s poor.
This message is given humanitarian credentials by the UN-based Better Than Cash Alliance, which promotes ‘the shift from cash to digital payments to reduce poverty and drive inclusive growth’, and which counts Visa, Mastercard and Citi Foundation as key partners. The Modi action was also preceded by the initiation of the Cashless Catalyst programme, ‘an alliance between the Government of India and USAID, to expand digital payments in India’, backed by a panoply of digital payments companies. These official alliances of states, corporations and public academics are impressive. In India, well-heeled urban elites who applauded Modi’s actions from the sidelines can safely point to Rogoff’s Financial Times-nominated book of the year to justify it.
Rogoff, though, has appeared spooked, writing articles stating that he was advocating removing cash only from advanced economies with advanced banking systems. Oh damn. Highly influential and politically powerful Harvard economist releases a global anti-cash book and is concerned when poorer nations take him seriously?
The attempt to present the cashless bank-payments society as a benefit to marginalised people is tenuous at best. If you’re a vulnerable denizen of the informal economy, an off-the-grid hustler, or a low-income precarious worker, banks and payments intermediaries have little interest in prioritising you. The bank-payments society will not process the activity that takes place in the peripheral cracks that form the basis of your livelihood. Indeed, it is intended to shut down those spaces. That might be characterised as ‘progress’, but equally we might say you’re being firewalled out of the economy in an act of economic cleansing. Under the guise of destroying the ‘shadow economy’, the underclass, the unwatched, the eccentric and the untamed will be coercively corralled into the hands of the state-corporate mainstream.
I have no special love of cash. I don’t really care for nostalgic reveries on the beautiful aesthetics of the banknote, or its texture and cultural importance within a market system, though I understand this is important to many. I also don’t really care about the pedantic history of cash, whether it was the Tang or Song dynasty in China who first issued notes. What I care about is the unaccountable callousness of this vending machine, the one that has just blocked me from engaging in free trade.
Old vending machines didn’t do this. They had a little slot for coins, one that allowed even a ragged beggar to convert his tiny income into sustenance. Look closely at the machine. It’s actually two machines. The Payter device fused into its body does not work for the cola seller. It works for payments corporations. You see, the cola seller has one bank account, but there are many people with many accounts at different banks approaching the vending machine. Those banks need to identify which of their account holders wishes to transfer how much money to which account at which other bank. The device is there to deliver my card information into the transmission lines of the card payments networks, where it will be – in theory – routed to facilitate the transfer of money tokens from my account into the seller’s account, for a small fee.
This is no longer a deal between me and the seller. I am now dealing with a complex of unknown third parties, profit-seeking money-passers who stand between us to act as facilitators of the money flow, but also as potential gatekeepers. If a gatekeeper doesn’t want to do business with me, I can’t do business with the seller. They have the ability to jam, monitor or place conditions upon that glorious core ritual of capitalism – the transfer of money for the transfer of goods. This innocuous device exudes mechanical indifference, reporting only to invisible bosses far away, running invisible algorithms in invisible black boxes that don’t like me.
If we are going to refer to bank payments as ‘cashless’, we should then refer to cash payments as ‘bankless’. Because that’s what cash is, and right now it is the only thing standing between us and a completely privatised money system.
As in the case of previous privatisations, we’ll hear suited TV pundits arguing that if the digital payments companies don’t work for people they will be outcompeted by better private systems. Yeah right. When did you last see a credible competitor to the likes of Mastercard and Visa? They preside over huge network systems, subject to intense network effects. It’s in no shopkeeper’s interest to use a competitor to Visa when it’s so utterly dominant already.  
The most we can hope for, then, is a benign oligopoly of payments corporations, heavily exposed to the geopolitical aspirations of the states they reside within. The Chinese state encouraged the creation of China UnionPay precisely because they don’t want US payment megacorps installing themselves as gatekeepers into transactions made by Chinese citizens.
When mounting a defence, there are always two options. You either block an incoming attack, or you launch a strategic counterstrike, sometimes summed up as ‘offence is the best defence’.
In the former strategy, you focus on pointing out that the arguments against cash are either exaggerated, inaccurate or incomplete. Exaggeration and inaccuracy are both present in anti-cash tirades, but incompleteness is crucial. For example, let’s say we agree that criminals prefer cash. Does that translate into ‘We should ban cash’? Banning everything that criminals favoured would almost certainly lead to a constrained, suffocating existence for everyone. Congratulations, we ended crime, but only at the expense of ending privacy and free creative space too. The end of crime comes accompanied by an overbearing surveillance state, always standing next to you, reaching into your most private moments, treating you like a small child that cannot be trusted. Enjoy your life.
The second mode of defence-as-offence involves attacking the proposed alternative. We point out that the new bank-payments society, firstly, does not actually solve the old problems – crime just goes digital, and your account gets hacked rather than your wallet stolen – and, even worse, causes a whole range of new problems that were not explicitly mentioned in Mastercard’s marketing material. Let me reveal the fine-print written in invisible ink: Did we mention that in removing the ability to transact with cash we can now see everything you do and can also censor you? Cheer up, if you have nothing to hide, you have nothing to fear!
Oh yes, I can use scare tactics too. I can point out that removing cash takes us one step closer to potentially realising the most powerful and automated state-corporate financial control complex the world has ever seen. Very few people seem to either understand this or care. Like a slow-boiled frog, we don’t seem to notice the process of locking ourselves into daily dependence on an alienating, unaccountable infrastructure that makes us increasingly subservient to bureaucratic processes we cannot see.
Maybe I need to turn up the shock-and-awe. Maybe I can drum up an argument about how, in a cashless society, terrorists could target the electrical grid to bring entire regional economies to a halt.
No. My main defence of cash will be simple and intuitive. As unsexy and analogue as cash is, it is resilient. It is easy to use. It requires little fancy infrastructure. It is not subject to arbitrary algorithmic glitches from incompetent programmers. And, yes, it leaves no data trail that will be used to project the aspirations and neuroses of faceless technocrats and business analysts into my daily existence. It comes with criminals, but hey, it’s good old friendly normal capitalism rather than predictive Minority Report surveillance-capitalism. And ask yourself this: do you really want to live in the latter society without the ability to buy drugs? Believe me, you’ll need something to dull the existential pain.
Read More

domenica 12 marzo 2017

WHY EVER STOP PLAYING VIDEO GAMES

09:58 0


MANY AMERICANS have replaced work hours with game play — and ENDED UP HAPPIER. Which wouldn’t surprise most gamers.

On the evening of November 9, having barely been awake to see the day, I took the subway to Sunset Park. My objective was to meet a friend at the arcade Next Level.
In size, Next Level resembles a hole-in-the-wall Chinese restaurant. It does indeed serve food — free fried chicken and shrimp were provided that night, and candy, soda, and energy drinks were available at a reasonable markup — but the sustenance it provides is mostly of a different nature. Much of Next Level’s space was devoted to brilliant banks of monitors hooked up to video-game consoles, and much of the remaining space was occupied by men in their 20s avidly facing them. It cost us $10 each to enter.
I had bonded with Leon, a graphic designer, musician, and Twitter magnate, over our shared viewership of online broadcasts of the Street Fighter tournaments held every Wednesday night at Next Level. It was his first time attending the venue in person and his first time entering the tournament. I wasn’t playing, but I wanted to see how he’d do, in part because I had taken to wondering more about video games lately — the nature of their appeal, their central logic, perhaps what they might illuminate about what had happened the night before. Like so many others, I played video games, often to excess, and had done so eagerly since childhood, to the point where the games we played became, necessarily, reflections of our being.
To the uninitiated, the figures are nothing if not staggering: 155 million Americans play video games, more than the number who voted in November’s presidential election. And they play them a lot: According to a variety of recent studies, more than 40 percent of Americans play at least three hours a week, 34 million play on average 22 hours each week, 5 million hit 40 hours, and the average young American will now spend as many hours (roughly 10,000) playing by the time he or she turns 21 as that person spent in middle- and high-school classrooms combined. Which means that a niche activity confined a few decades ago to preadolescents and adolescents has become, increasingly, a cultural juggernaut for all races, genders, and ages. How had video games, over that time, ascended within American and world culture to a scale rivaling sports, film, and television? Like those other entertainments, video games offered an escape, of course. But what kind?
In 1993, the psychologist Peter D. Kramer published Listening to Prozac, asking what we could learn from the sudden mania for antidepressants in America. A few months before the election, an acquaintance had put the same question to me about video games: What do they give gamers that the real world doesn’t?
The first of the expert witnesses at Next Level I had come to speak with was the co-owner of the establishment. I didn’t know him personally, but I knew his name and face from online research, and I waited for an opportune moment to approach him. Eventually, it came. I haltingly asked if he’d be willing, sometime later that night, to talk about video games: what they were, what they meant, what their future might be — what they said, perhaps, about the larger world.
“Yes,” he replied. “But nothing about politics.”
In June, Erik Hurst, a professor at the University of Chicago’s Booth School of Business, delivered a graduation address and later wrote an essay in which he publicized statistics showing that, compared with the beginning of the millennium, working-class men in their 20s were on average working four hours less per week and playing video games for three hours more. As a demographic, they had replaced the lost work time with playtime spent gaming. How had this happened? Technology, through automation, had reduced the employment rate of these men by reducing demand for what Hurst referred to as “lower-skilled” labor. He proposed that by creating more vivid and engrossing gaming experiences, technology also increased the subjective value of leisure relative to labor. He was alarmed by what this meant for those who chose to play video games and were not working; he cited the dire long-term prospects of these less-employed men; pointed to relative levels of financial instability, drug use, and suicide among this cohort; and connected them, speculatively, to “voting patterns for certain candidates in recent periods,” by which one doubts he meant Hillary Clinton.
But the most striking fact was not the grim futures of this presently unemployed group. It was their happy present — which he neglected to emphasize. The men whose experiences he described were not in any meaningful way despairing. In fact, the opposite. “If we go to surveys that track subjective well-being,” he wrote, “lower-skilled young men in 2014 reported being much happier on average than did lower-skilled men in the early 2000s. This increase in happiness is despite their employment rate falling by 10 percentage points and the increased propensity to be living in their parents’ basement.” The games were obviously a comforting distraction for those playing them. But they were also, it follows, giving players something, or some things, their lives could not.
The professor is nevertheless concerned. If young men were working less and playing video games, they were losing access to valuable on-the-job skills that would help them stay employed into middle age and beyond. At the commencement, Hurst was not just speaking abstractly — and warning not just of the risk to the struggling working classes. In fact, his argument was most convincing when it returned to his home, and his son, who almost seemed to have inspired the whole inquiry. “He is allowed a couple of hours of video-game time on the weekend, when homework is done,” Hurst wrote. “However, if it were up to him, I have no doubt he would play video games 23 and a half hours per day. He told me so. If we didn’t ration video games, I am not sure he would ever eat. I am positive he wouldn’t shower.”
My freshman year, I lived next door to Y, a senior majoring in management science and engineering whose capacity to immerse himself in the logic of any game and master it could only be described as exceptional. (This skill wasn’t restricted to electronic games, either: He also played chess competitively.) Y was far and away the most intrepid gamer I’d ever met; he was also an unfailingly kind person. He schooled me in Starcraft and let me fiddle around on the PlayStation 2 he kept in his room while he worked or played on his PC. An older brother and oldest child, I had always wanted an older brother of my own, and in this regard, Y, tolerant and wise, was more or less ideal.
Then, two days before Thanksgiving, a game called World of Warcraft was released. The game didn’t inaugurate the genre of massively multiplayer online role-playing games (MMORPGs), but given its enormous and sustained success — augmented by various expansions, it continues to this day — it might as well have. Situated on the sprawling plains of cyberspace, the world of World of Warcraft was immense, colorful, and virtually unlimited. Today’s WoW has countless quests to complete, items to collect, weapons and supplies to purchase. It was only natural that Y would dive in headfirst.
This he did, but he didn’t come out. There was too much to absorb. He started skipping classes, staying up later and later. Before, I’d leave when it was time for him to sleep. Now, it seemed, the lights in his room were on at all hours. Soon he stopped attending class altogether, and soon after that he left campus without graduating. A year later, I learned from M, his friend who’d lived next door to me on the other side, that he was apparently working in a big-box store because his parents had made him; aside from that, he spent every waking hour in-game. Despite having begun my freshman year as he began his senior one, and despite my being delayed by a yearlong leave of absence, I ended up graduating two years ahead of him.
Y’s fine now, I think. He did finally graduate, and today he works as a data scientist. No doubt he’s earning what economists would term a higher-skilled salary. But for several years he was lost to the World, given over totally and willingly to a domain of meanings legible only to other players and valid only for him. Given his temperament and dedication, I feel comfortable saying that he wasn’t depressed. Depression feels like an absence of meaning, but as long as he was immersed in the game, I believe that his life was saturated with meaning. He definitely knew what to do, and I would bet that he was happy. The truth is, as odd as it might sound, considering his complete commitment to that game, I envy this experience as much as I fear it. For half a decade, it seems to me, he set a higher value on his in-game life than on his “real” life.
What did the game offer that the rest of the world could not? To begin with, games make sense, unlike life: As with all sports, digital or analog, there are ground rules that determine success (rules that, unlike those in society, are clear to all). The purpose of a game, within it, unlike in society, is directly recognized and never discounted. You are always a protagonist: Unlike with film and television, where one has to watch the acts of others, in games, one is an agent within it. And unlike someone playing sports, one no longer has to leave the house to compete, explore, commune, exercise agency, or be happy, and the game possesses the potential to let one do all of these at once. The environment of the game might be challenging, but in another sense it is literally designed for a player to succeed — or, in the case of multiplayer games, to have a fair chance at success. In those games, too, players typically begin in the same place, and in public agreement about what counts for status and how to get it. In other words, games look like the perfect meritocracies we are taught to expect for ourselves from childhood but never actually find in adulthood.
And then there is the drug effect. In converting achievement into a reliable drug, games allow one to turn the rest of the world off to an unprecedented degree; gaming’s opiate-like trance can be delivered with greater immediacy only by, well, actual opiates. It’s probably no accident that, so far, the most lucid writing on the consciousness of gaming comes from Michael Clune, an academic and author best known for White Out, a memoir about his former heroin addiction. Clune is alert to the rhetoric and logic of the binge; he distinguishes prosaic activities, where experience is readily rendered in words, from activities like gaming and drugs, where the intensity eclipses language. Games possess narratives that have the power to seal themselves off from the narratives in the world beyond. The gamer is driven by an array of hermetic incentives only partially and intermittently accessible from without, like the view over a nose-high wall.
In Tony Tulathimutte’s novel Private Citizens, the narrator describes the feeling near a porn binge’s end, when one has “killed a week and didn’t know what to do with its corpse.” An equally memorable portrait of the binge comes from the singer Lana Del Rey, who rose to stardom in 2011 on the strength of a single titled “Video Games.” In the song, Del Rey’s lover plays video games; he watches her undress for him; later, she ends up gaming. Pairing plush orchestration with a languid, serpentine delivery, the song evokes an atmosphere of calm, luxurious delight where fulfillment and artifice conspire to pacify and charm. The song doesn’t just cite video games; it sounds the way playing video games feels, at least at the dawn of the binge — a rapturous caving in.



Images from Javier Laspiur’s “Controllers” series, in which he photographed himself with each video-game system he played over the years, beginning with Teletenis in 1983 and ending with Playstation Vita in 2013. The composite image that opens this story was built by Laspiur from these images. Photo: Javier Laspiur

Of course, it was not video games generally that removed Y from school but, allegedly, one specific and extraordinary game. In much the same way that video gaming subsumes most of the appeals of other leisure activities into itself, World of Warcraft fuses the attractions of most video games into a single package. It’s not just a game; in many ways, it’s the game of games. Set in a fantasy universe influenced by Tolkien and designed to support Tolkienesque role-playing, the game, digitally rendered, is immeasurably more colorful and elaborate than anything the Oxford don ever wrote: If The Lord of the Rings books are focused on a single, all-important quest, World of Warcraft is structured around thousands of quests (raids, explorations) that the player, alone or teaming with others, may choose to complete.
Whether greater or lesser, the successful completion of these quests leads to the acquisition of in-game currency, equipment, and experience points. Created by the Irvine-based developer Blizzard (in many ways the Apple of game developers), WoW is rooted in an ethos of self-advancement entirely alien to that of Tolkien’s Middle-Earth, where smallness and humility are the paramount virtues. There is little to be gained by remaining at a low level in WoW, and a great deal to be lost. The marginal social status of the gamer IRL has been a commonplace for some time — even for those who are, or whose families are, relatively well-off. What a game as maximalist and exemplary as WoW is best suited to reveal is the degree to which status is in the eye of the beholder: There are gamers who view themselves in the light of the game, and once there are enough of them, they constitute a self-sufficient context in which they become the central figures, the successes, by playing. At its peak, WoW counted 12.5 million subscribers, each of them paying about $15 monthly for the privilege (after the initial purchase). When you consider how tightly rationed status is outside the game, how unclear the rules are, how loosely achievement is tied to recognition, how many credentials and connections and how much unpleasantness are required to level up there, it seems like a bargain.
Of course, there are other games, and other reasons to play beyond achieving status. Richard Bartle, a British game-design researcher and professor, constructed a much-cited taxonomy of gamers based on his observations of MUD, an early text-based multiplayer game he co-created in 1978. These gamers, according to Bartle, can be subdivided into four classes: achievers, competing with one another to reap rewards from the game engine; explorers, seeking out the novelties and kinks of the system; socializers, for whom the game serves merely as a pretext for conversations with one another; and killers, who kill. It isn’t hard to extend the fourfold division from gamers to games: Just as there are video games, WoW chief among them, that are geared toward achievers, there are games suited to the other three branches of gamers.
In many major games of exploration, like Grand Theft Auto or Minecraft, the “objectives” of the game can be almost beside the point. Other times, the player explores by pursuing a novel-like narrative. The main character of the tactical espionage game Metal Gear Solid 3 is a well-toned Cold War–era CIA operative who finds himself suddenly in the forests of the USSR; the hero of the choose-your-own-adventure game Life Is Strange is a contemporary high-school student in Oregon, and her estrangement results from her discovery that she can, to a limited extent, reverse time. These games are all fundamentally single-player: Solitude is the condition for exploring within games in much the same way that it is for reading a novel.
While explorers commune with a story or storyteller, socializers communicate with one another: The games that serve as the best catalysts for conversation are their natural preference. Virtually any game can act as a bonding agent, but perhaps the best examples are party games like Nintendo’s Mario Party series, which are just board games in electronic form, or the Super Smash Brothers series, in which four players in the same room select a character from a Nintendo game with which to cheerfully clobber the other. The story, in these games, isn’t inside the game. It’s between the players as they build up camaraderie through opposition.
The ultimate games for killers aren’t fighting games so much as first-person shooters: Counter-Strike, when played in competitive mode, obliges you to play as one member of a team of five whose task is to eliminate an enemy quintet. The teams take turns being terrorists, whose task is to plant and detonate a bomb, and counterterrorists, whose task is to deny them. What beauty exists is found only in feats of split-second execution: improbable headshots, inspired ambushes, precisely coordinated spot rushes.
What’s odd is that across these groups of games there’s perhaps as much unity as difference. Many of the themes blend together. Achievement can be seen as a mode of exploration and seems as viable a basis for socializing as any other. Socializing can be grouped with achievement as a sign of self-actualization. And killing? Few things are more ubiquitous in gaming than killing. Each of the novel-like games cited above forces the player-protagonist to kill one or more of his or her closest friends. Even a game as rudimentary as Tetris can be framed as an unending spree of eliminations.
Perhaps psychological types are a less useful rubric than, say, geological strata. As much as games themselves are divided into distinct stages, levels divide the game experience as a whole.
The first, most superficial level is the most attractive: the simple draw of a glowing screen on which some compelling activity unfolds. There will always be a tawdry, malformed aspect to gaming — surely human beings were made for something more than this? — but games become more than games when displayed vividly and electronically. Freed from the pettiness of cardboard and tokens, video games, like the rest of screen culture, conjure the specter of a different, better world by contrasting a colorful, radiant display with the dim materials of the dusty world surrounding them.
Second: narrative. Like film and television, many video games rely heavily on narrative and character to sustain interest, but just as those mediums separated themselves from theater by taking full advantage of the camera’s capacity for different perspectives, video games distinguish themselves from film and television in granting the viewer a measure of control. What fiction writing achieves only rarely — the intimate coordination of reader and character — the video-game system achieves by default. Literary style pulls together character and reader; technology can implant the reader, as controller, within the character.
Third: objectives, pure and simple. Action games and platformers (like Mario) in which the player controls a fighter; strategy games in which the player controls an army; grand strategy games in which the player controls an empire; racing games in which the player controls a vehicle; puzzle games in which the player manipulates geometry; sports games; fighting games; SimCity: These are genres of games where plot is merely a function of competition, character is merely a function of success, and goals take precedence over words. Developing characters statistically by “leveling up” can feel more important, and gratifying, than developing characters psychologically by progressing through the plot. The graphics may or may not be polished, but the transactional protocol of video games — do this and you’ll improve by this much — must remain constant; without it, the game, any game, would be senseless.
Fourth: economics. Since every game is reliant on this addictive incentive system, every gamer harbors a game theorist, a situational logician blindly valorizing the optimization of quantified indices of “growth” — in other words, an economist. Resource management is to video games what African-American English is to rap music or what the visible sex act is to pornography — the element without which all else is unimaginable. In games as in the market, numbers come first. They have to go up. Our job is to keep up with them, and all else can wait or go to hell.
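As a rough illustration of that “do this and you’ll improve by this much” contract, here is a minimal Python sketch of an incentive loop; the reward value and the level curve are invented for the example, not drawn from any actual game.

# Minimal illustration of a "do this, improve by this much" incentive loop.
# The payout and the level curve are made up; any real game tunes these carefully.

xp = 0
level = 1

def xp_needed(next_level):
    """Experience required to reach a given level; an arbitrary quadratic curve."""
    return 100 * (next_level - 1) ** 2

def complete_task(reward):
    """Grant a fixed, predictable reward and level up when the threshold is crossed."""
    global xp, level
    xp += reward
    while xp >= xp_needed(level + 1):
        level += 1
        print(f"Level up! Now level {level} with {xp} XP.")

for _ in range(12):          # grind the same task over and over
    complete_task(50)

Grind the same task a dozen times and the counter dutifully rises: the numbers go up, which is the whole contract.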
And there is something sublime, though not beautiful, about the whole experience: Video games are rife with those Pythagorean vistas so adored by Americans, made up of numbers all the way down; they solve the question of meaning in a world where transcendent values have vanished. Still, the satisfaction found in gaming can only be a pale reflection of the satisfaction absent from the world beyond. We turn to games when real life fails us — not merely in touristic fashion but closer to the case of emigrants, fleeing a home that has no place for them.
Gamers have their own fantasies of prosperity, fantasies that sometimes come true. For a few, gaming has already become a viable and lucrative profession. Saahil Arora, an American college dropout who plays professional Dota 2 under the name UNiVeRsE, is reportedly the richest competitive gamer: He has earned $2.7 million in his career so far. But even Arora’s income is dwarfed by those of a handful of YouTube (and Twitch) broadcasters with a fraction of his talent: Just by filming themselves playing through games in a ludicrously excitable state for a young audience of fellow suburbanites, they parlay ads and subscriptions into earnings in the mid seven figures. The prospects for those who had gathered at Next Level that chilly November night were not quite so sunny. The fighting-game community (FGC), which has developed around one-on-one games like Street Fighter, and for which Next Level serves as a training ground, has yet to reach the popularity of multiplayer online battle arenas (MOBAs) like Dota 2, or first-person shooters, such as Counter-Strike. (The scene is taking steps in that direction: 2016 marked the first year that the Street Fighter V world championships were broadcast on ESPN2 as well as the first time that an American FGC player — Du Dang, from Florida — took the title over top players from Japan.)

Next Level itself is not financially self-sufficient: Without additional income, including from its co-owner and co-founder Henry Cen (a former day trader), it couldn’t pay the rent. “Only rich countries can have places like this,” says the bespectacled and crane-thin Cen. “You wouldn’t see this in Third World countries.” He describes the people who make up the majority of New York’s FGC as coming from blue-collar families: “They’re not the richest of people. There are some individuals that are, but most people that do have money, they want to do something more with their money.” He’s relatively pessimistic about the possibility of becoming a professional gamer: Considering the economic pressures on FGC members and the still small size (roughly 100,000 viewers at most) of the viewing audience, it’s a career that’s available only to the top “0.01 percent” of players. Family pressures to pull back from gaming are strong: Even Justin Wong, one of the happy few who succeeded in becoming a professional, reportedly hid the fact from his family for a long time. “His family did not accept him as a gamer, but recently, they have changed their opinion,” says Cen.

“Because he started bringing in money,” I speculated.

“Yes. If you’re doing gaming, especially if you’re an Asian, your progress in life is measured by only one thing: money.”

Still, according to a veteran of the community (16 of his 34 years), Sanford Kelly, the fighting-game scene has a long way to go. Though he personally isn’t fond of Street Fighter V, the latest iteration in the series, his energies are devoted to guiding the New York FGC to become more respectable and therefore more attractive to e-sports organizations that might sponsor its members: “We have to change our image, and we have to be more professional.” Compared with other branches of American e-sports, dominated by white and Asian players, the FGC has a reputation that’s always been more colorful: It’s composed primarily of black players like Kelly, Asian players like his longtime Marvel rivals Justin Wong and Duc Do, and Latino gamers, and its brash self-presentation is influenced by the street culture that gave rise to hip-hop. With a typical mixture of resignation and determination, Kelly internalized the fact that, locally and nationally, his scene would have to move away from its roots to move to a larger stage. But the competitive gaming economy had already reached the point where, as the streamer, commentator, and player Arturo Sanchez told me, the earning potential of the FGC was viable, “so long as you don’t have unrealistic ambitions.” Between the money gleaned from subscriptions to his Twitch channel, payments for streaming larger tournaments, sponsor fees from businesses that pay for advertising in the breaks between matches, crowdfunding, merchandise, and YouTube revenue, Sanchez is able to scratch out a living, comfortably if not prosperously, as a full-time gamer.
Like Professor Hurst, I was interested in the political valence of gaming: Was there something fundamental to the pastime that inevitably promoted a dangerous politics? I was intrigued by the data Hurst cited, and during the recent campaign and immediately after, a number of writers noted the connection between Trump supporters and the world of militant gamer-trolls determined to make gaming great again through harassment and expulsion. But as a gamer myself, I found this ominous vision incomplete at best: Most gamers weren’t Trump-adjacent, and if Trumpism corresponded to any game, I thought, it was one that, in its disastrous physicality, could never become a video game: not Final Fantasy but Jenga. (Jenga is now on Nintendo Wii, I’m told.) On the other hand, I’ve never found it easy to trust my own perceptions, so I reached out to friends and acquaintances who were also gamers to learn from their experiences.
Though none of us is a Trumpist, no discourse could unite us. We were trading dispatches atop the Tower of Babel. We got different things out of gaming because we were looking for different things. Some of us greatly preferred single-player games, and some could barely stand to play games alone. Some of us held that writing about games was no more difficult than writing about any other subject; some of us found, and find, the task insanely difficult. Some of us just played more than others — Tony Tulathimutte listed 28 games as personal favorites. He and Bijan Stephen, also a writer, both had a fondness for secondary characters. (Stephen: “I love the weird helpers like Toad and the wizards in Gauntlet — not because they’re necessarily support characters but because they’ve got these defined roles that only work in relation to the other players.”) Meanwhile, Emma Janaskie, an associate editor at Ecco Books, spoke about her favorite games’ main characters, especially Lara Croft. Janaskie’s longest run of gaming lasted ten hours, compared with Stephen’s record of six hours and Tulathimutte’s of 16. When likewise queried, the art critic and gaming writer Nora Khan laughingly asked if she could go off the record, then recalled: “I’ve gotten up to take breaks and stuff, but I’ve played through all of Skyrim once,” adding parenthetically that “Skyrim is a 60-to-80-hour game.”
Janaskie and Tulathimutte made strong avowals that gaming fell squarely within the literary field (Tulathimutte: “Gaming can be literary the same way books can be. DOS for Dummies and Tetris aren’t literary, but Middlemarch and The Last of Us are, and each has its purpose”); I found the proposition more dubious.
“It seems to me that writers get into games precisely because it’s almost the antithesis of writing,” I said to Khan.
“Absolutely,” she said.
“When you’re writing, you don’t know what the stakes are. The question of what victory or defeat is — those questions are very hard to pin down. Whereas with a game, you know exactly what the parameters are.”
“Yes. I wouldn’t say that for everyone. Completing a quest or completing the mission was never really very interesting to me personally. For me, it’s more meditative. When I play Grand Theft Auto V, it’s just a way to shut off all the noise and for once be in a space where I don’t need to be critical or intellectualize something. Because I’m doing that all the time. I just go off and drive — honestly, that’s what I do in real life, too. When I just want to drop out of the situation, I’ll go and drive outside of the city.”
I wouldn’t trade my life or my past for any other, but there have been times when I’ve wanted to swap the writing life and the frigid self-consciousness it compels for the gamer’s striving and satisfaction, the infinite sense of passing back and forth (being an “ambiguous conduit,” in Janaskie’s poignant phrase) between number and body. The appeal can’t be that much different for nonwriters subjected to similar social or economic pressures, or for those with other ambitions, maybe especially those whose ambitions have become more dream state than plausible, actionable future. True, there are other ways to depress mental turnout. But I don’t trust my body with intoxicants; so far as music goes, I’ve found few listening experiences more gratifying or revealing than hearing an album on repeat while performing some repetitive in-game task. Gaming offers the solitude of writing without the strain of performance, the certitude of drug addiction minus its permanent physical damage, the elation of sports divorced from the body’s mortality. And, perhaps, the ritual of religion without the dogma. For all the real and purported novelty of video games, they offer nothing so much as the promise of repetition. Life is terrifying; why not, then, live through what you already know — a fundamental pulse, speechless and without thought?
After college graduation, once I’d been living back home unemployed with my father for a few months, he confronted me over the dinner table with a question. Given the vast sums of time he’d witnessed me expend on video games both recently and in my youth, wouldn’t it be right to say that gaming, not writing, was what I really wanted to do with my life?
I responded that my goal was to become a writer, and I meant it. But first I had to pause a few seconds to be sure. It’s true that the postgraduate years I spent jobless with my father laid the foundation for what I can do as a writer. I read literature, read history, studied maps, watched films and television, listened to music. I lifted weights in the basement. I survived my final episode of clinical depression and finished translating a mid-19th-century French poet who laid the foundation for literary modernism. But when I was too weak to do these things, and I often was, that so-called writer (zero pitches, zero publications) was, in Baudelaire’s phrase, a “drunkard of his own blood” obsessively replaying the video games of his adolescence — so as to re-create a sense, tawdry and malformed but also quantifiable, of status advancement in an existence that was, by any worldly standard, I knew, stagnant and decrepit. It didn’t matter that the world, by its own standards of economic growth, was itself worn down and running on fumes. Regardless of the rightness of the world, one cannot help but feel great individual guilt for failing to find a meaningful activity and position within it. And regardless of whether it benefits one in the long run, video games can ease that guilt tremendously in the immediate present.
The strange thing is that that guilt should be gone now. I have made a name as a writer. Yet I can’t say that I’ve left the game. In the weeks after writing my first major piece, a long book review, I fired lasers at robots in space for 200 hours. Two summers ago, I played a zombie game in survival mode alone for a week; eventually the zombies, which one must take down precisely and rapidly lest one be swarmed, started to remind me of emails. A few months back, I used a glitch to amass, over the course of several hours, a billion dollars in a game where there’s nothing to buy besides weapons, which you can get for free anyhow. I reinstalled the zombie game on Election Night.

Is it an addiction? Of course. But one’s addiction is always more than a private affair: It speaks to the health and the logic of society at large. Gaming didn’t impact the election, but electing to secede from reality is political, too. I suspect that the total intensity of the passion with which gamers throughout society surrender themselves to their pastime is an implicit register of how awful, grim, and forbidding the world outside them has become — the world that is gaming’s ultimate level, a space determined by finance and labor, food and housing, race and education, gender and art, with so many tests and so many bosses. Just as a wrong life cannot be lived rightly, a bad game cannot be played well. But for lack of an alternative, we live within one, and suffer from its scarcity.
Read More
