Daily Interesting News: Longreads

Daily Interesting News: daily long-form, long-length articles across general categories, for fans of long reads. Miscellaneous news, general news, amazing stories, and longreads from around the internet.


Showing posts with the label Longreads.

Friday, March 24, 2017

The Sense of an Ending

The last Woolly Mammoth died on an island now called Wrangel, which broke from the mainland twelve thousand years ago. They inhabited it for at least eight millennia, slowly inbreeding themselves into extinction. Even as humans developed their civilizations, the mammoths remained, isolated but relatively safe. While the Akkadian king conquered Mesopotamia and the first settlements began at Troy, the final mammoth was still here on Earth, wandering an Arctic island alone.
The last female aurochs died of old age in the Jaktorów Forest in 1627. When the male perished the year before, its horn was hollowed, capped in gold, and used as a hunting bugle by the king of Poland.
The last pair of great auks had hidden themselves on a huge rock in the northern Atlantic. In 1844, a trio of Icelandic bounty hunters found them in a crag, incubating an egg. Two of the hunters strangled the adults to get to the egg, and the third accidentally crushed its shell under his boot.
Martha, the last known passenger pigeon, was pushing thirty when she died. She’d suffered a stroke a few years earlier, and visitors to her cage at the Cincinnati Zoo complained the bird never moved. It must have been strange for the older patrons to see her there on display like some exotic, since fifty years before, there were enough of her kind to eclipse the Ohio sun when they migrated past.
Incas, the final Carolina parakeet, died in the same Cincinnati cage that Martha did, four years after her. Because his long-term mate, Lady Jane, had died the year before, it was said the species fell extinct thanks to Incas’s broken heart.
When Booming Ben, the last heath hen, died on Martha’s Vineyard, they said he’d spent his last days crying out for a female that never came to him. The Vineyard Gazette dedicated an entire issue to his memory: “There is no survivor, there is no future, there is no life to be recreated in this form again. We are looking upon the uttermost finality which can be written, glimpsing the darkness which will not know another ray of light.”
Benjamin, the last thylacine—or Tasmanian tiger—perished in a cold snap in 1936. His handlers at the Beaumaris Zoo had forgotten to let him inside for the night and the striped marsupial froze to death.
The gastric brooding frog—which incubates eggs in its belly and then vomits its offspring into existence—was both discovered and declared extinct within the twelve years that actor Roger Moore played James Bond.
Turgi, the last Polynesian tree snail, died in a London zoo in 1996. According to the Los Angeles Times, “It moved at a rate of less than two feet a year, so it took a while for curators…to be sure it had stopped moving forever.”
The same year, two administrators of a Georgia convalescent center wrote the editor of the journal Nature, soliciting a name for an organism that marks the last of its kind. Among the suggestions were “terminarch,” “ender,” “relict,” “yatim,” and “lastline,” but the new word that stuck was “endling.” Of all the proposed names, it is the most diminutive (like “duckling” or “fingerling”) and perhaps the most storied (like “End Times”). The little sound of it jingles like a newborn rattle, which makes it doubly sad.
While Nature’s readers were debating vocabulary, a research team in Spain was counting bucardos. A huge mountain ibex, the bucardo was once abundant in the Pyrenees. The eleventh Count of Foix wrote that more of his peasants wore bucardo hides than they did woven cloth; one winter, the count saw five hundred bucardos running down the frozen outcrops near his castle. The bucardo grew shyer over the centuries—which made trophy hunters adore it—and soon disappeared into the treacherous slopes for which it was so well designed.
Though a naturalist declared it hunted from existence at the turn of the twentieth century, a few dozen were spotted deep in the Ordesa Valley in the 1980s. Scientists set cage traps, which caught hundreds of smaller, nonendangered chamois. It was frustrating work, and bucardo numbers dwindled further as the humans searched on. By 1989, they’d trapped only one male and three females. In 1991, the male died and eight years later, the taxon’s endling, Celia, walked right into the researchers’ trap.
She was twelve when they shot her with a blow dart and tied white rags over her eyes to keep her calm. They fit her with a tracking collar and a pulse monitor and biopsied two sections of skin: at the left ear and the flank. Then Celia was released back into the wild to live out the rest of her days. Of the next ten months we know nothing; science cannot report what life was like for Earth’s final bucardo. But the Capra pyrenaica before her had, probably since the late Pleistocene, moved through the seasons in sex-sorted packs. In the female groups, a bucarda of Celia’s age would serve as leader. When they grazed in vulnerable spaces, she’d herd her sisters up the tricky mountain shelves at the first sign of danger, up and up until the group stood on cliffs that were practically vertical. Celia, however, climbed to protect only herself that final winter—and for at least three winters before that, if not for most winters in her rocky life.
It is dangerous to assume that an endling is conscious of its singular status. Wondering if she felt guilty, or felt the universe owed her something—that isn’t just silly; it’s harmful. As is imagining a bucardo standing alone on a vertical cliff, suppressing thoughts of suicide. As is assuming her thoughts turned to whatever the mountain ungulate’s version of prayer might be. Or hoping that, in her life, she felt a fearlessness impossible for those of us that must care for others.
The safe thought is that Celia lived the life she’d been given without any sense of finality. She climbed high up Monte Perdido to graze alone each summer, and hobbled down into the valley by herself before the winters grew too frigid. She ate and groomed and slept, walked deep into the woods, and endured her useless estrus just as she was programmed to do—nothing further.
But then again, a worker ant forever isolated from its colony will walk ceaselessly, refusing to digest food, and a starling will suffer cell death when it has no fellow creature to keep it company. A dying cross spider builds a nest for her offspring even though she’ll never meet them, and a pea aphid will explode itself in the face of a predator, saving its kin. An English-speaking gray parrot once considered his life enough to ask what color he was, and a gorilla used his hands to tell humans the story of how he became an orphan. Not to mention the countless jellyfish that, while floating in the warm seas, have looked to the heavens for guidance.
Though problematic, it’s still easy to call these things representative of what unites our kingdom: we are all hardwired to live for the future. Breeding, dancing, nesting, the night watch—it’s all in service to what comes later. On a cellular level, we seem programmed to work for a future which doesn’t concern us exactly, but that rather involves something that resembles us. We all walk through the woods, our bodies rushing at the atomic level toward the idea that something is next. But is there space in a creature’s DNA to consider the prospect of no next? That one day, nothing that’s us—beyond ourselves—will exist, despite the world that still spins all around us?
Six days into the new millennium, Celia’s collar transmitted the “mortality” beep. A natural death—crushed by a falling tree limb, her neck broken and one horn snapped like a twig. In a photo taken by the humans that fetched her, she seems to have been nestled on her haunches, asleep. They sent Celia to a local taxidermist and then turned to the cells they’d biopsied. After a year spent swimming in liquid nitrogen at 321 degrees below zero, the cells were primed to divide. The Los Angeles Times ran a long article about what might happen next, quoting an environmentalist who warned, “We don’t have the necessary humility in science.”
At the lab, technicians matched a skin cell from Celia with a domestic goat’s egg cell. The goat-egg’s nucleus was removed, and Celia’s nucleus put in its place. Nearly all the DNA of any cell lives inside its nucleus, so this transfer was like putting a perfect Celia curio into the frame of a barnyard goat.
After a mammal’s egg cell is enucleated, it is common for nothing to happen. But sometimes, the reconstructed cell reprograms itself. Thanks to a magic humans don’t totally grasp, the nucleus decides it is now an egg nucleus and then replicates not as skin, but as pluripotent, able to split into skin cells, blood cells, bone cells, muscle cells, nerve cells, cells of the lung.
While this DNA technology evolved, the Celia team cultivated an odd harem of hybrid surrogates—domestic goats mated with the last female bucardos. They had hybrid wombs that the scientists prayed would accept the reconstructed and dividing eggs. In 2003, they placed 154 cloned embryos—Celia in a goat eggshell—into 44 hybrids. Seven of the hybrids were successfully impregnated, and of those seven, just one animal carried a zygote to term. The kid was born July 30, 2003, to a trio of mothers: hybrid womb, goat egg, and magical bucardo nucleus. Genetically speaking, however, the creature was entirely Capra pyrenaica. And so, thirteen hundred days after the tree fell on Celia, her taxon was no longer extinct—for about seven minutes.
The necropsy photos of the bucardo kid are strangely similar to those of Yuka, the juvenile mammoth found frozen in permafrost with wool still clinging to her body. Wet, strangely cute, and lying stretched out on her side, the newborn looks somehow timeless. Her legs seem strong and kinetic, as if she were ready to jump up and run. All of her systems were apparently functional, save her tiny lungs.
In her hybrid mother’s womb, the clone’s lung cells mistakenly built an awful extra lobe, which lodged in her brand-new throat. The kid was born struggling for air and soon died of self-strangulation. Lungs seem the trickiest parts to clone from a mammal; they’re what killed Dolly the sheep as well. How fitting that the most difficult nature to re-create in a lab is the breath of life.
***
The term we now use for the procedure of un-ending an endling has been around for decades, though it was rarely used. “De-extinction” first appeared in a 1979 fantasy novel, after a future-world magician conjures domestic cats back from obscurity. But when the Celia team reported their findings to the journal Theriogenology, they didn’t use the word. A few scientific papers in fields ranging from cosmology to paleobiology name-check the term, but it was left almost entirely to science fiction until a dozen years post-bucardo. A MacArthur Fellow chided the term’s clunkiness, calling it “painful to write down, much less to say out loud.” But eventually, the buzzword stuck.
“De-extinction” made its popular debut in 2013, in a National Geographic article. To celebrate the coming-out of the term—and the new ways it would allow humans to mark animal lives—the magazine held a conference at their headquarters with lectures organized into four categories: Who, How, Why/Why Not, and Wild Again. Among the How speakers was Ordesa National Wildlife Park’s wildlife director, who recounted the Celia saga. The four-syllable term tangled with the director’s Castilian accent, but people still applauded when he called Celia’s kid “the first ez-tinc-de-tion.” As the audience clapped, the director bowed his head, obviously nervous. Behind him was a projected image of the cloned baby, fresh from her hybrid mother and gagging in the director’s latexed hands. The clone’s tongue lolled out the side of her mouth.
Earlier that morning, an Australian paleontologist confessed his lifelong obsession with thylacines, despite being born nine years after the Tasmanian tiger’s demise. “We killed these things,” he said to the audience. “We shot every one that we saw. We slaughtered them. I think we have a moral obligation to see what we can do about it.” He then explained how he’d detected DNA fragments in the teeth of museum specimens. He vowed to first find the technology to extract the genetic code from the thylacine tooth-scraps, then to rebuild the fragments to make an intact nucleus, and finally to find a viable host womb where a Tasmanian tiger’s egg could incubate—in a Tasmanian devil, perhaps.
The man’s research group, called the Lazarus Project, had just announced their successful cloning of gastric brooding frog cells. The fact that the cells only divided for a few days and then died would not deter his enthusiasm. “Watch this space,” he said. “I think we’re gonna have this frog hopping glad to be alive in the world again.”
Later in the conference, a young researcher from Santa Cruz outlined a plan that allowed humans to “get to witness the passenger pigeon rediscover itself.” But after de-extinction, he said, the birds would still need flying lessons. So why not train homing pigeons to fly passenger routes? To convince the passenger babies they were following their own kind, the young scientist suggested coating the homers with blue and scarlet cosmetic dyes.
That afternoon, the chair of the Megafauna Foundation mentioned how medieval tales and even the thirty-thousand-year-old paintings in Chauvet Cave would help prepare Europe for the herds of aurochs he hoped to resurrect. The head of the conference’s steering committee sounded almost wistful when he concluded at the end of his speech, “Some species that we killed off totally, we could consider bringing back to a world that misses them.” And a Harvard geneticist hinted that mouse DNA could be jiggered to keep the incisors growing from the jawline until they protruded, tusk-like, from the mouth. This DNA patchworking could help fill a gap in our spotty rebuild of the mammoth genome, he said.
Shortly after that talk, a rare naysayer—a conservation biologist from Rutgers—addressed the group: “At this very moment, brave conservationists are risking their lives to protect dwindling groups of existing African elephants from heavily armed poachers, and here we are in this safe auditorium, talking about bringing back the woolly mammoth; think about it.”
But what exactly is there to think about? What can thinking do for us, really, at a moment like this one? We’re knee-deep in the Holocene die-off, slogging through neologisms that remind us what is left. These speeches—of extravagant plans, of Herculean pipe dreams, and of missing—are more than thought; they admit to a spot on our own genome. Perhaps we’ve always held, with submicroscopic scruples, the fact of this as our next. The first time a forged tool sliced a beast up the back was the core of this lonely cell, and then that cell set to split, and now each scientist—onstage and dreaming—is a solitary cry of this atomic, thoughtless fate.
To dispatch animals, then to miss them. To forget their power and use our own cockeyed brawn to rebuild something unreal from the scraps. Each speech, at this very moment, is a little aria of human understanding, but it’s the kind of knowledge that rests on its haunches in places far beyond thought.
And at that very moment, the last Rabbs’ fringe-limbed tree frog was dodging his keepers at a biosecure lab in Atlanta. Nicknamed Toughie, the endling hadn’t made a noise in over seven years.
And at that very moment, old Nola and Angalifu, two of the six remaining northern white rhinos, stood in the dirt of the Safari Park at the San Diego Zoo with less than twenty-four months to live. Their keepers had already taken Angalifu’s sperm and would do the same for Nola’s eggs, housing the samples in a lab that had already cataloged cells from ten thousand species. It was a growing trend—this new kind of ark, menagerie, or book of beasts—and it carried a new term for itself: the “frozen zoo.”
The planet’s other northern whites, horns shaved down for their own protection, roamed Kenya’s Ol Pejeta Conservancy under constant armed watch. And Celia’s famous cells were buzzing in their cryogenic state, far from Monte Perdido, still waiting for whatever might come next.
And at that very moment, way up in northwest Siberia, a forward-thinking Russian was clearing a space to save the world. As the permafrost melted, he said, it would eventually release catastrophic amounts of surface carbon into the atmosphere. To keep the harmful gases in the rock-hard earth, the Russian and his team wanted to turn the tundra back into the mammoth steppe: restoring grassland and reintroducing ancient megafauna that would stomp the dirt, tend the grass, and let the winter snows seep lower to cool the deep land. The reintroduced beasts, he swore, would send the tundra back in time.
He proposed that for every square kilometer of land there be “five bison, eight horses, and fifteen reindeer,” all of which had already been transported to his “Pleistocene Park.” Here was a space where earlier versions of all these beasts had lived in the tens of thousands of years prior. Eventually, once the science caught up, he would bring one elephant-mammoth hybrid per square kilometer, too.
And so here is a picture of next: some model of gargantuan truck following the Kolyma River—rolling over the open land where mammoths once ran for hundreds of miles. Like a growing many of us, the Russian sees the moment in which that truck’s cargo door opens and a creature—not quite Yuka but certainly not elephant—lumbers out into the grass. Her first steps would be less than five hundred miles as the crow flies, out and out over the Arctic, from the island where the last living mammoth fell into the earth 3,600 years ago.
The Russian’s process—making new beasts to tread on the bones of what are not quite their ancestors—has a fresh label for itself, as everything about this world is new. The sound of this just-coined word, when thrown by a human voice into a safe auditorium, carries with it the hope of a do-over, and the thrust of natural danger.
That new word is re-wilding.

Sunday, March 12, 2017

The Easiest Way To Protect Your Devices From Hacks? Keep Them Updated


This week’s WikiLeaks revelations, which showed that the CIA can compromise a huge range of devices, shouldn’t send you into paroxysms of fear over your smartphone. It should, though, be a solid reminder that one of the best ways to keep yourself safe from hackers is also one of the simplest: Update your gear.
What allows hackers access to your devices, after all, are breakdowns and vulnerabilities in the firmware (read: operating system) that runs them. Many companies push out updated versions of that firmware regularly, and those releases often include important security updates. A recent example? In January, Apple pushed out iOS 10.2.1, which patched more than a dozen vulnerabilities—some of them major.
That’s an extreme but not isolated example. And while there are plenty of other tin-foil-hat strategies for securing your digital life, the absolute simplest, surest way to achieve a baseline of protection is usually just to hit “update.”
With that in mind, here’s how to keep all of your gear as up to date as possible. Find a few hours some weekend to back up your stuff, and then bang out the update. It won’t make you anything close to bulletproof, but it should grant you some much-needed peace of mind.

iOS

Okay, the easy one’s first. Apple updates iOS pretty regularly, and will badger you with notifications until you catch up. Nearly 80 percent of iOS devices, for instance, are already on iOS 10. Good work!
If you’re one of the laggards, or if you’ve been skipping the iterative updates, getting caught up is easy. If your battery level’s healthy, head to Settings > General > Software Update. Then tap Download and Install, at which point you can decide to install at that moment or schedule it for overnight. If you go with the latter, plug your phone in before bedtime.
And that’s it! You’re up to speed. Back to Clash Royale.

Android

Here’s where things get trickier (yes, already). The good news? Google releases monthly security updates for Android devices. That’s great if you own a Nexus or Pixel smartphone, but less helpful across the rest of the big wide Android world.

While Google shares that updated code with its hardware partners, it can take a long, long time for it to reach devices that aren’t sold by Google itself. That’s because manufacturers often run modified versions of the operating system, meaning regular changes aren’t quite as simple as plug-and-play. Carriers also sometimes weigh in on when and how a smartphone or tablet update happens. Some manufacturers, like LG and Samsung, have committed to the monthly patch process, but it’s a situation that still leaves millions of devices potentially exposed. As of publication, fewer than three percent of Android devices had received last fall’s Nougat update, with only half a percent on the latest push, Android 7.1.
The best way to keep an Android device updated, then, is to stick with a Nexus or Pixel. Regardless of your specific device, though, go to Settings > About Phone > System Updates to see what you’re running, and whether a more recent version is available to you.
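If you have Android’s developer tools handy, you can pull the same information over USB. The snippet below is only a minimal sketch, not part of any official update flow: it assumes the adb command-line tool is installed and on your PATH and that USB debugging is enabled on the phone, and it simply reads two standard build properties.

# Minimal sketch: query an attached Android device's OS version and
# security patch level via adb. Assumes adb is installed and USB
# debugging is enabled on the device.
import subprocess

def getprop(name: str) -> str:
    """Read a system property from the connected device over adb."""
    result = subprocess.run(
        ["adb", "shell", "getprop", name],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    print("Android version:      ", getprop("ro.build.version.release"))
    print("Security patch level: ", getprop("ro.build.version.security_patch"))

The security patch level it prints should match the date shown on the About Phone screen, which is the quickest way to see how far behind the monthly patches a given device has fallen.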

MacOS

Easy as pie! Your Mac is probably already up to speed, because Apple hounds you with daily reminders until you concede. Just in case you want to be extra-sure you’re up to date, though, click on the Apple icon at the top of your screen and hit Software Update. That’ll get you to the Mac App Store, which will show you what needs downloading under the Updates tab. Select the latest and greatest, and do a little air guitar solo while you wait for the install to finish.
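If you prefer a terminal to the App Store window, Apple also ships a command-line utility called softwareupdate that lists and installs the same system updates. Here’s a minimal sketch of a wrapper around it; the utility is Apple’s, but this script is only an illustration.

# Minimal sketch: list pending macOS system updates using Apple's
# built-in softwareupdate utility. Running `sudo softwareupdate -i -a`
# afterward installs everything the listing reports.
import subprocess

def list_pending_updates() -> str:
    """Return softwareupdate's report of available system updates."""
    result = subprocess.run(
        ["softwareupdate", "-l"],
        capture_output=True, text=True,
    )
    # softwareupdate reports "No new software available." when you're current;
    # combine both streams since it splits output between them.
    return result.stdout + result.stderr

if __name__ == "__main__":
    print(list_pending_updates())

Note that this only covers system and App Store updates; third-party apps you installed some other way still update on their own schedules.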

Windows

Windows 10 features automatic updates, so you should be doing just fine. It’s worth checking just in case you’re behind, though. To do so, head to Start > Settings > Update & security > Windows Update > Check for Updates.
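If you’d rather confirm what you’re running without clicking through Settings, the build information also lives in the registry. The following is a hedged sketch using Python’s standard winreg module; the registry path is the one Windows stores its version details under, but the ReleaseId value only exists on Windows 10, so the script treats it as optional.

# Minimal sketch: report the installed Windows edition and build by
# reading the registry. Windows only; run with a stock Python install.
import winreg

KEY_PATH = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion"

def read_value(name: str) -> str:
    """Read a single value from the CurrentVersion registry key."""
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        value, _type = winreg.QueryValueEx(key, name)
        return str(value)

if __name__ == "__main__":
    print("Edition:   ", read_value("ProductName"))
    print("Build:     ", read_value("CurrentBuild"))
    try:
        # ReleaseId (e.g. "1607") exists on Windows 10; older versions lack it.
        print("Release ID:", read_value("ReleaseId"))
    except FileNotFoundError:
        print("Release ID: not recorded on this version of Windows")

Comparing the build number it prints against the newest build Microsoft lists will tell you whether Windows Update has actually been keeping up.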
Getting to Windows 10 from an older version of Windows may be slightly more of a pain, but not by much. Just head to https://www.microsoft.com/en-us/software-download/windows10, select which version of Windows 10 you want, click Download Tool Now, launch the tool, and follow the instructions from there. It’ll take a while, but it’s worth it.

Your TV

One WikiLeaks revelation that took casual observers aback? The CIA used malware to turn a specific set of Samsung televisions into eavesdropping machines.
Creepy! But maybe not as surprising as it seemed. Smart TVs have gotten in trouble for tracking their viewers for years; the perpetrators have just been advertisers instead of spies.
Many smart TVs handle updates automatically by default. If not, or if you just want to double-check, a manual update usually sits just a few remote-clicks away. On an LG webOS TV, go to Menu > System Settings > About > System Updates. For Samsung, go to Menu > Support > Software Update. For Sony, hit the Home button on your remote, then go to Help > System software update > Check for a system software update. Vizio? Press the Menu button on your remote, click on System > System Information > Service Check.

Routers

Oh boy. OK. This one’s not fun. And we’re not going to be able to cover all of them here. But know that routers act as the first line of defense against hackers and botnet recruiters, so keeping yours up to date really can be worth the hassle.
For Netgear, head to your router’s admin page on a device that’s connected to your network. Enter the user name and password; the defaults are “admin” and “password,” respectively. (Also: Change the defaults.) Go to Advanced > Administration > Firmware/Router Update > Check. If there’s an update available, click Yes, and hope you don’t have to fiddle with it again any time soon.
Linksys has an auto firmware update feature, but if you’d rather go manual, head to this site http://support.linksys.com/, then enter your router’s model number. Click on Downloads, select which version of the hardware you have, and then click Download. Agree to the terms of service, save the file to your computer, access your Linksys Smart Wi-Fi Account (more details on that here), click Connectivity, then Router Firmware Update, then Choose File, and run that sucker and follow the instructions from there.
And so on. Fortunately, it’s not always so annoying. You can update Apple’s AirPort routers through the AirPort Utility on your Mac. The Google Wi-Fi mesh network routers update automagically, as do Eero and lots of other next-generation routers, which saves all kinds of headaches.
That should just about cover your most critical devices. Or at least, going any further would mean unpacking every single IoT system you’ve signed on with. For those, just make sure autoupdates are turned on, and that should help plenty.
Or, you know, don’t put your dishwasher on the internet in the first place.

WHY EVER STOP PLAYING VIDEO GAMES


MANY AMERICANS have replaced work hours with game play — and ENDED UP HAPPIER. Which wouldn’t surprise most gamers.

On the evening of November 9, having barely been awake to see the day, I took the subway to Sunset Park. My objective was to meet a friend at the arcade Next Level.
In size, Next Level resembles a hole-in-the-wall Chinese restaurant. It does indeed serve food — free fried chicken and shrimp were provided that night, and candy, soda, and energy drinks were available at a reasonable markup — but the sustenance it provides is mostly of a different nature. Much of Next Level’s space was devoted to brilliant banks of monitors hooked up to video-game consoles, and much of the remaining space was occupied by men in their 20s avidly facing them. It cost us $10 each to enter.
I had bonded with Leon, a graphic designer, musician, and Twitter magnate, over our shared viewership of online broadcasts of the Street Fighter tournaments held every Wednesday night at Next Level. It was his first time attending the venue in person and his first time entering the tournament. I wasn’t playing, but I wanted to see how he’d do, in part because I had taken to wondering more about video games lately — the nature of their appeal, their central logic, perhaps what they might illuminate about what had happened the night before. Like so many others, I played video games, often to excess, and had done so eagerly since childhood, to the point where the games we played became, necessarily, reflections of our being.
To the uninitiated, the figures are nothing if not staggering: 155 million Americans play video games, more than the number who voted in November’s presidential election. And they play them a lot: According to a variety of recent studies, more than 40 percent of Americans play at least three hours a week, 34 million play on average 22 hours each week, 5 million hit 40 hours, and the average young American will now spend as many hours (roughly 10,000) playing by the time he or she turns 21 as that person spent in middle- and high-school classrooms combined. Which means that a niche activity confined a few decades ago to preadolescents and adolescents has become, increasingly, a cultural juggernaut for all races, genders, and ages. How had video games, over that time, ascended within American and world culture to a scale rivaling sports, film, and television? Like those other entertainments, video games offered an escape, of course. But what kind?
In 1993, the psychologist Peter D. Kramer published Listening to Prozac, asking what we could learn from the sudden mania for antidepressants in America. A few months before the election, an acquaintance had put the same question to me about video games: What do they give gamers that the real world doesn’t?
The first of the expert witnesses at Next Level I had come to speak with was the co-owner of the establishment. I didn’t know him personally, but I knew his name and face from online research, and I waited for an opportune moment to approach him. Eventually, it came. I haltingly asked if he’d be willing, sometime later that night, to talk about video games: what they were, what they meant, what their future might be — what they said, perhaps, about the larger world.
“Yes,” he replied. “But nothing about politics.”
In June, Erik Hurst, a professor at the University of Chicago’s Booth School of Business, delivered a graduation address and later wrote an essay in which he publicized statistics showing that, compared with the beginning of the millennium, working-class men in their 20s were on average working four hours less per week and playing video games for three hours. As a demographic, they had replaced the lost work time with playtime spent gaming. How had this happened? Technology, through automation, had reduced the employment rate of these men by reducing demand for what Hurst referred to as “lower-skilled” labor. He proposed that by creating more vivid and engrossing gaming experiences, technology also increased the subjective value of leisure relative to labor. He was alarmed by what this meant for those who chose to play video games and were not working; he cited the dire long-term prospects of these less-employed men; pointed to relative levels of financial instability, drug use, and suicide among this cohort; and connected them, speculatively, to “voting patterns for certain candidates in recent periods,” by which one doubts he meant Hillary Clinton.
But the most striking fact was not the grim futures of this presently unemployed group. It was their happy present — which he neglected to emphasize. The men whose experiences he described were not in any meaningful way despairing. In fact, the opposite. “If we go to surveys that track subjective well-being,” he wrote, “lower-skilled young men in 2014 reported being much happier on average than did lower-skilled men in the early 2000s. This increase in happiness is despite their employment rate falling by 10 percentage points and the increased propensity to be living in their parents’ basement.” The games were obviously a comforting distraction for those playing them. But they were also, it follows, giving players something, or some things, their lives could not.
The professor is nevertheless concerned. If young men were working less and playing video games, they were losing access to valuable on-the-job skills that would help them stay employed into middle age and beyond. At the commencement, Hurst was not just speaking abstractly — and warning not just of the risk to the struggling working classes. In fact, his argument was most convincing when it returned to his home, and his son, who almost seemed to have inspired the whole inquiry. “He is allowed a couple of hours of video-game time on the weekend, when homework is done,” Hurst wrote. “However, if it were up to him, I have no doubt he would play video games 23 and a half hours per day. He told me so. If we didn’t ration video games, I am not sure he would ever eat. I am positive he wouldn’t shower.”
My freshman year, I lived next door to Y, a senior majoring in management science and engineering whose capacity to immerse himself in the logic of any game and master it could only be described as exceptional. (This skill wasn’t restricted to electronic games, either: He also played chess competitively.) Y was far and away the most intrepid gamer I’d ever met; he was also an unfailingly kind person. He schooled me in Starcraft and let me fiddle around on the PlayStation 2 he kept in his room while he worked or played on his PC. An older brother and oldest child, I had always wanted an older brother of my own, and in this regard, Y, tolerant and wise, was more or less ideal.
Then, two days before Thanksgiving, a game called World of Warcraft was released. The game didn’t inaugurate the genre of massively multiplayer online role-playing games (MMORPGs), but given its enormous and sustained success — augmented by various expansions, it continues to this day — it might as well have. Situated on the sprawling plains of cyberspace, the world of World of Warcraft was immense, colorful, and virtually unlimited. Today’s WoW has countless quests to complete, items to collect, weapons and supplies to purchase. It was only natural that Y would dive in headfirst.
This he did, but he didn’t come out. There was too much to absorb. He started skipping classes, staying up later and later. Before, I’d leave when it was time for him to sleep. Now, it seemed, the lights in his room were on at all hours. Soon he stopped attending class altogether, and soon after that he left campus without graduating. A year later, I learned from M, his friend who’d lived next door to me on the other side, that he was apparently working in a big-box store because his parents had made him; aside from that, he spent every waking hour in-game. Despite having begun my freshman year as he began his senior one, and despite my being delayed by a yearlong leave of absence, I ended up graduating two years ahead of him.
Y’s fine now, I think. He did finally graduate, and today he works as a data scientist. No doubt he’s earning what economists would term a higher-skilled salary. But for several years he was lost to the World, given over totally and willingly to a domain of meanings legible only to other players and valid only for him. Given his temperament and dedication, I feel comfortable saying that he wasn’t depressed. Depression feels like an absence of meaning, but as long as he was immersed in the game, I believe that his life was saturated with meaning. He definitely knew what to do, and I would bet that he was happy. The truth is, as odd as it might sound, considering his complete commitment to that game, I envy this experience as much as I fear it. For half a decade, it seems to me, he set a higher value on his in-game life than on his “real” life.
What did the game offer that the rest of the world could not? To begin with, games make sense, unlike life: As with all sports, digital or analog, there are ground rules that determine success (rules that, unlike those in society, are clear to all). The purpose of a game, within it, unlike in society, is directly recognized and never discounted. You are always a protagonist: Unlike with film and television, where one has to watch the acts of others, in games, one is an agent within it. And unlike someone playing sports, one no longer has to leave the house to compete, explore, commune, exercise agency, or be happy, and the game possesses the potential to let one do all of these at once. The environment of the game might be challenging, but in another sense it is literally designed for a player to succeed — or, in the case of multiplayer games, to have a fair chance at success. In those games, too, players typically begin in the same place, and in public agreement about what counts for status and how to get it. In other words, games look like the perfect meritocracies we are taught to expect for ourselves from childhood but never actually find in adulthood.
And then there is the drug effect. In converting achievement into a reliable drug, games allow one to turn the rest of the world off to an unprecedented degree; gaming’s opiate-like trance can be delivered with greater immediacy only by, well, actual opiates. It’s probably no accident that, so far, the most lucid writing on the consciousness of gaming comes from Michael Clune, an academic and author best known for White Out, a memoir about his former heroin addiction. Clune is alert to the rhetoric and logic of the binge; he distinguishes prosaic activities, where experience is readily rendered in words, from activities like gaming and drugs, where the intensity eclipses language. Games possess narratives that have the power to seal themselves off from the narratives in the world beyond them. The gamer is driven by an array of hermetic incentives only partially and intermittently accessible from without, like the view over a nose-high wall.
In Tony Tulathimutte’s novel Private Citizens, the narrator describes the feeling near a porn binge’s end, when one has “killed a week and didn’t know what to do with its corpse.” An equally memorable portrait of the binge comes from the singer Lana Del Rey, who rose to stardom in 2011 on the strength of a single titled “Video Games.” In the song, Del Rey’s lover plays video games; he watches her undress for him; later, she ends up gaming. Pairing plush orchestration with a languid, serpentine delivery, the song evokes an atmosphere of calm, luxurious delight where fulfillment and artifice conspire to pacify and charm. The song doesn’t just cite video games; it sounds the way playing video games feels, at least at the dawn of the binge — a rapturous caving in.



Images from Javier Laspiur’s “Controllers” series, in which he photographed himself with each video-game system he played over the years, beginning with Teletenis in 1983 and ending with Playstation Vita in 2013. The composite image that opens this story was built by Laspiur from these images. Photo: Javier Laspiur

Of course, it was not video games generally that removed Y from school but, allegedly, one specific and extraordinary game. In much the same way that video gaming subsumes most of the appeals of other leisure activities into itself, World of Warcraft fuses the attractions of most video games into a single package. It’s not just a game; in many ways, it’s the game of games. Set in a fantasy universe influenced by Tolkien and designed to support Tolkienesque role-playing, the game, digitally rendered, is immeasurably more colorful and elaborate than anything the Oxford don ever wrote: If The Lord of the Rings books are focused on a single, all-important quest, World of Warcraft is structured around thousands of quests (raids, explorations) that the player, alone or teaming with others, may choose to complete.
Whether greater or lesser, the successful completion of these quests leads to the acquisition of in-game currency, equipment, and experience points. Created by the Irvine-based developer Blizzard (in many ways the Apple of game developers), WoW is rooted in an ethos of self-advancement entirely alien to that of Tolkien’s Middle-Earth, where smallness and humility are the paramount virtues. There is little to be gained by remaining at a low level in WoW, and a great deal to be lost. The marginal social status of the gamer IRL has been a commonplace for some time — even for those who are, or whose families are, relatively well-off. What a game as maximalist and exemplary as WoW is best suited to reveal is the degree to which status is in the eye of the beholder: There are gamers who view themselves in the light of the game, and once there are enough of them, they constitute a self-sufficient context in which they become the central figures, the successes, by playing. At its peak, WoW counted 12.5 million subscribers, each of them paying about $15 monthly for the privilege (after the initial purchase). When you consider how tightly rationed status is outside the game, how unclear the rules are, how loosely achievement is tied to recognition, how many credentials and connections and how much unpleasantness are required to level up there, it seems like a bargain.
Of course, there are other games, and other reasons to play beyond achieving status. Richard Bartle, a British game-design researcher and professor, constructed a much-cited taxonomy of gamers based on his observations of MUD, an early text-based multiplayer game he co-created in 1978. These gamers, according to Bartle, can be subdivided into four classes: achievers, competing with one another to reap rewards from the game engine; explorers, seeking out the novelties and kinks of the system; socializers, for whom the game serves merely as a pretext for conversations with one another; and killers, who kill. It isn’t hard to extend the fourfold division from gamers to games: Just as there are video games, WoW chief among them, that are geared toward achievers, there are games suited to the other three branches of gamers.
In many major games of exploration, like Grand Theft Auto or Minecraft, the “objectives” of the game can be almost beside the point. Other times, the player explores by pursuing a novel-like narrative. The main character of the tactical espionage game Metal Gear Solid 3 is a well-toned Cold War–era CIA operative who finds himself suddenly in the forests of the USSR; the hero of the choose-your-own-adventure game Life Is Strange is a contemporary high-school student in Oregon, and her estrangement results from her discovery that she can, to a limited extent, reverse time. These games are all fundamentally single-player: Solitude is the condition for exploring within games in much the same way that it is for reading a novel.
While explorers commune with a story or storyteller, socializers communicate with one another: The games that serve as the best catalysts for conversation are their natural preference. Virtually any game can act as a bonding agent, but perhaps the best examples are party games like Nintendo’s Mario Party series, which are just board games in electronic form, or the Super Smash Brothers series, in which four players in the same room select a character from a Nintendo game with which to cheerfully clobber the other. The story, in these games, isn’t inside the game. It’s between the players as they build up camaraderie through opposition.
The ultimate games for killers aren’t fighting games so much as first-person shooters: Counter-Strike, when played in competitive mode, obliges you to play as one member of a team of five whose task is to eliminate an enemy quintet. The teams take turns being terrorists, whose task is to plant and detonate a bomb, and counterterrorists, whose task is to deny them. What beauty exists is found only in feats of split-second execution: improbable headshots, inspired ambushes, precisely coordinated spot rushes.
What’s odd is that across these groups of games there’s perhaps as much unity as difference. Many of the themes blend together. Achievement can be seen as a mode of exploration and seems as viable a basis for socializing as any other. Socializing can be grouped with achievement as a sign of self-actualization. And killing? Few things are more ubiquitous in gaming than killing. Each one of the trio of novel-like games cited above forces the player-protagonist to kill one or more of his or her closest friends. Even a game as rudimentary as Tetris can be framed as an unending spree of eliminations.
Perhaps psychological types are a less useful rubric than, say, geological strata. As much as games themselves are divided into distinct stages, levels divide the game experience as a whole.
The first, most superficial level is the most attractive: the simple draw of a glowing screen on which some compelling activity unfolds. There will always be a tawdry, malformed aspect to gaming — surely human beings were made for something more than this? — but games become more than games when displayed vividly and electronically. Freed from the pettiness of cardboard and tokens, video games, like the rest of screen culture, conjure the specter of a different, better world by contrasting a colorful, radiant display with the dim materials of the dusty world surrounding them.
Second: narrative. Like film and television, many video games rely heavily on narrative and character to sustain interest, but just as those mediums separated themselves from theater by taking full advantage of the camera’s capacity for different perspectives, video games distinguish themselves from film and television in granting the viewer a measure of control. What fiction writing achieves only rarely — the intimate coordination of reader and character — the video-game system achieves by default. Literary style pulls together character and reader; technology can implant the reader, as controller, within the character.
Third: objectives, pure and simple. Action games and platformers (like Mario) in which the player controls a fighter; strategy games in which the player controls an army; grand strategy games in which the player controls an empire; racing games in which the player controls a vehicle; puzzle games in which the player manipulates geometry; sports games; fighting games; SimCity: These are genres of games where plot is merely a function of competition, character is merely a function of success, and goals take precedence over words. Developing characters statistically by “leveling up” can feel more important, and gratifying, than developing characters psychologically by progressing through the plot. The graphics may or may not be polished, but the transactional protocol of video games — do this and you’ll improve by this much — must remain constant; without it, the game, any game, would be senseless.
Fourth: economics. Since every game is reliant on this addictive incentive system, every gamer harbors a game theorist, a situational logician blindly valorizing the optimization of quantified indices of “growth” — in other words, an economist. Resource management is to video games what African-American English is to rap music or what the visible sex act is to pornography — the element without which all else is unimaginable. In games as in the market, numbers come first. They have to go up. Our job is to keep up with them, and all else can wait or go to hell.
And there is something sublime, though not beautiful, about the whole experience: Video games are rife with those Pythagorean vistas so adored by Americans, made up of numbers all the way down; they solve the question of meaning in a world where transcendent values have vanished. Still, the satisfaction found in gaming can only be a pale reflection of the satisfaction absent from the world beyond. We turn to games when real life fails us — not merely in touristic fashion but closer to the case of emigrants, fleeing a home that has no place for them.
Gamers have their own fantasies of prosperity, fantasies that sometimes come true. For a few, gaming has already become a viable and lucrative profession. Saahil Arora, an American college dropout who plays professional Dota 2 under the name UNiVeRsE, is reportedly the richest competitive gamer: He has earned $2.7 million in his career so far. But even Arora’s income is dwarfed by those of a handful of YouTube (and Twitch) broadcasters with a fraction of his talent: Just by filming themselves playing through games in a ludicrously excitable state for a young audience of fellow suburbanites, they pull in ad and subscription earnings in the mid seven figures. The prospects for those who had gathered at Next Level that chilly November night were not quite so sunny. The fighting-game community (FGC), which has developed around one-on-one games like Street Fighter, and for which Next Level serves as a training ground, has yet to reach the popularity of multiplayer online battle arenas (MOBAs) like Dota 2, or first-person shooters, such as Counter-Strike. (The scene is taking steps in that direction: 2016 marked the first year that the Street Fighter V world championships were broadcast on ESPN2 as well as the first time that an American FGC player — Du Dang, from Florida — took the title over top players from Japan.)

Next Level itself is not financially self-sufficient: Without additional income, including from its co-owner and co-founder Henry Cen (a former day trader), it couldn’t pay the rent. “Only rich countries can have places like this,” says the bespectacled and crane-thin Cen. “You wouldn’t see this in Third World countries.” He describes the people who make up the majority of New York’s FGC as coming from blue-collar families: “They’re not the richest of people. There are some individuals that are, but most people that do have money, they want to do something more with their money.” He’s relatively pessimistic about the possibility of becoming a professional gamer: Considering the economic pressures on FGC members and the still small size (roughly 100,000 viewers at most) of the viewing audience, it’s a career that’s available only to the top “0.01 percent” of players. Family pressures to pull back from gaming are strong: Even Justin Wong, one of the happy few who succeeded in becoming a professional, reportedly hid the fact from his family for a long time. “His family did not accept him as a gamer, but recently, they have changed their opinion,” says Cen.
“Because he started bringing in money,” I speculated.
“Yes. If you’re doing gaming, especially if you’re an Asian, your progress in life is measured by only one thing: money.”
Still, according to a veteran of the community (16 of his 34 years), Sanford Kelly, the fighting-game scene has a long way to go. Though he personally isn’t fond of Street Fighter V, the latest iteration in the series, his energies are devoted to guiding the New York FGC to become more respectable and therefore more attractive to e-sports organizations that might sponsor its members: “We have to change our image, and we have to be more professional.” Compared with other branches of American e-sports, dominated by white and Asian players, the FGC has a reputation that’s always been more colorful: It’s composed primarily of black players like Kelly, Asian players like his longtime Marvel rivals Justin Wong and Duc Do, and Latino gamers, and its brash self-presentation is influenced by the street culture that gave rise to hip-hop. With a typical mixture of resignation and determination, Kelly internalized the fact that, locally and nationally, his scene would have to move away from its roots to move to a larger stage.
But the competitive-gaming economy had already reached the point where, as the streamer, commentator, and player Arturo Sanchez told me, the earning potential of the FGC was already viable. “So long as you don’t have unrealistic ambitions.” Between the money gleaned from subscriptions to his Twitch channel, payments for streaming larger tournaments, sponsor fees from businesses that pay for advertising in the breaks between matches, crowdfunding, merchandise, and YouTube revenue, Sanchez is able to scratch out a living, comfortably if not prosperously, as a full-time gamer.
Like Professor Hurst, I was interested in the political valence of gaming: Was there something fundamental to the pastime that inevitably promoted a dangerous politics? I was intrigued by the data Hurst cited, and during the recent campaign and immediately after, a number of writers noted the connection between Trump supporters and the world of militant gamer-trolls determined to make gaming great again through harassment and expulsion. But as a gamer myself, I found this ominous vision incomplete at best: Most gamers weren’t Trump-adjacent, and if Trumpism corresponded to any game, I thought, it was one that, in its disastrous physicality, could never become a video game: not Final Fantasy but Jenga. (Jenga is now on Nintendo Wii, I’m told.) On the other hand, I’ve never found it easy to trust my own perceptions, so I reached out to friends and acquaintances who were also gamers to learn from their experiences.
Though none of us is a Trumpist, no discourse could unite us. We were trading dispatches atop the Tower of Babel. We got different things out of gaming because we were looking for different things. Some of us greatly preferred single-player games, and some could barely stand to play games alone. Some of us held that writing about games was no more difficult than writing about any other subject; some of us found, and find, the task insanely difficult. Some of us just played more than others — Tony Tulathimutte listed 28 games as personal favorites. He and Bijan Stephen, also a writer, both had a fondness for secondary characters. (Stephen: “I love the weird helpers like Toad and the wizards in Gauntlet — not because they’re necessarily support characters but because they’ve got these defined roles that only work in relation to the other players.”) Meanwhile, Emma Janaskie, an associate editor at Ecco Books, spoke about her favorite games’ main characters, especially Lara Croft. Janaskie’s longest run of gaming lasted ten hours, compared with Stephen’s record of six hours and Tulathimutte’s of 16. When likewise queried, the art critic and gaming writer Nora Khan laughingly asked if she could go off the record, then recalled: “I’ve gotten up to take breaks and stuff, but I’ve played through all of Skyrim once,” adding parenthetically that “Skyrim is a 60-to-80-hour game.”
Janaskie and Tulathimutte made strong avowals that gaming fell squarely within the literary field (Tulathimutte: “Gaming can be literary the same way books can be. DOS for Dummies and Tetris aren’t literary, but Middlemarch and The Last of Us are, and each has its purpose”); I found the proposition more dubious.
“It seems to me that writers get into games precisely because it’s almost the antithesis of writing,” I said to Khan.
“Absolutely,” she said.
“When you’re writing, you don’t know what the stakes are. The question of what victory or defeat is — those questions are very hard to pin down. Whereas with a game, you know exactly what the parameters are.”
“Yes. I wouldn’t say that for everyone. Completing a quest or completing the mission was never really very interesting to me personally. For me, it’s more meditative. When I play Grand Theft Auto V, it’s just a way to shut off all the noise and for once be in a space where I don’t need to be critical or intellectualize something. Because I’m doing that all the time. I just go off and drive — honestly, that’s what I do in real life, too. When I just want to drop out of the situation, I’ll go and drive outside of the city.”
I wouldn’t trade my life or my past for any other, but there have been times when I’ve wanted to swap the writing life and the frigid self-consciousness it compels for the gamer’s striving and satisfaction, the infinite sense of passing back and forth (being an “ambiguous conduit,” in Janaskie’s poignant phrase) between number and body. The appeal can’t be that much different for nonwriters subjected to similar social or economic pressures, or for those with other ambitions, maybe especially those whose ambitions have become more dream state than plausible, actionable future. True, there are other ways to depress mental turnout. But I don’t trust my body with intoxicants; so far as music goes, I’ve found few listening experiences more gratifying or revealing than hearing an album on repeat while performing some repetitive in-game task. Gaming offers the solitude of writing without the strain of performance, the certitude of drug addiction minus its permanent physical damage, the elation of sports divorced from the body’s mortality. And, perhaps, the ritual of religion without the dogma. For all the real and purported novelty of video games, they offer nothing so much as the promise of repetition. Life is terrifying; why not, then, live through what you already know — a fundamental pulse, speechless and without thought?
After college graduation, once I’d been living back home unemployed with my father for a few months, he confronted me over the dinner table with a question. Given the vast sums of time he’d witnessed me expend on video games both recently and in my youth, wouldn’t it be right to say that gaming, not writing, was what I really wanted to do with my life?
I responded that my goal was to become a writer, and I meant it. But first I had to pause a few seconds to be sure. It’s true that the postgraduate years I spent jobless with my father laid the foundation for what I can do as a writer. I read literature, read history, studied maps, watched films and television, listened to music. I lifted weights in the basement. I survived my final episode of clinical depression and finished translating a mid-19th-century French poet who laid the foundation for literary modernism. But when I was too weak to do these things, and I often was, that so-called writer (zero pitches, zero publications) was, in Baudelaire’s phrase, a “drunkard of his own blood” obsessively replaying the video games of his adolescence — so as to re-create a sense, tawdry and malformed but also quantifiable, of status advancement in an existence that was, by any worldly standard, I knew, stagnant and decrepit. It didn’t matter that the world, by its own standards of economic growth, was itself worn down and running on fumes. Regardless of the rightness of the world, one cannot help but feel great individual guilt for failing to find a meaningful activity and position within it. And regardless of whether it benefits one in the long run, video games can ease that guilt tremendously in the immediate present.
The strange thing is that that guilt should be gone now. I have made a name as a writer. Yet I can’t say that I’ve left the game. In the weeks after writing my first major piece, a long book review, I fired lasers at robots in space for 200 hours. Two summers ago, I played a zombie game in survival mode alone for a week; eventually the zombies, which one must take down precisely and rapidly lest one be swarmed, started to remind me of emails. A few months back, I used a glitch to amass, over the course of several hours, a billion dollars in a game where there’s nothing to buy besides weapons, which you can get for free anyhow. I reinstalled the zombie game on Election Night.

Is it an addiction? Of course. But one’s addiction is always more than a private affair: It speaks to the health and the logic of society at large. Gaming didn’t impact the election, but electing to secede from reality is political, too. I suspect that the total intensity of the passion with which gamers throughout society surrender themselves to their pastime is an implicit register of how awful, grim, and forbidding the world outside them has become — the world that is gaming’s ultimate level, a space determined by finance and labor, food and housing, race and education, gender and art, with so many tests and so many bosses. Just as a wrong life cannot be lived rightly, a bad game cannot be played well. But for lack of an alternative, we live within one, and suffer from its scarcity.