Just Watched: Weeds
Intro
Andy's Plan
Masturbation lesson
(little background: kid's uncle is asked to teach nephew how to masturbate since the kid kept throwing "used" socks into the toilet)
Labels: Weeds
During the winter of 2007, a UCLA professor of psychiatry named Gary Small recruited six volunteers—three experienced Web surfers and three novices—for a study on brain activity. He gave each a pair of goggles onto which Web pages could be projected. Then he slid his subjects, one by one, into the cylinder of a whole-brain magnetic resonance imager and told them to start searching the Internet. As they used a handheld keypad to Google various preselected topics—the nutritional benefits of chocolate, vacationing in the Galapagos Islands, buying a new car—the MRI scanned their brains for areas of high activation, indicated by increases in blood flow.
The two groups showed marked differences. Brain activity of the experienced surfers was far more extensive than that of the newbies, particularly in areas of the prefrontal cortex associated with problem-solving and decision-making. Small then had his subjects read normal blocks of text projected onto their goggles; in this case, scans revealed no significant difference in areas of brain activation between the two groups. The evidence suggested, then, that the distinctive neural pathways of experienced Web users had developed because of their Internet use.
The most remarkable result of the experiment emerged when Small repeated the tests six days later. In the interim, the novices had agreed to spend an hour a day online, searching the Internet. The new scans revealed that their brain activity had changed dramatically; it now resembled that of the veteran surfers. “Five hours on the Internet and the naive subjects had already rewired their brains,” Small wrote. He later repeated all the tests with 18 more volunteers and got the same results.
When first publicized, the findings were greeted with cheers. By keeping lots of brain cells buzzing, Google seemed to be making people smarter. But as Small was careful to point out, more brain activity is not necessarily better brain activity. The real revelation was how quickly and extensively Internet use reroutes people’s neural pathways. “The current explosion of digital technology not only is changing the way we live and communicate,” Small concluded, “but is rapidly and profoundly altering our brains.”
What kind of brain is the Web giving us? That question will no doubt be the subject of a great deal of research in the years ahead. Already, though, there is much we know or can surmise—and the news is quite disturbing. Dozens of studies by psychologists, neurobiologists, and educators point to the same conclusion: When we go online, we enter an environment that promotes cursory reading, hurried and distracted thinking, and superficial learning. Even as the Internet grants us easy access to vast amounts of information, it is turning us into shallower thinkers, literally changing the structure of our brain.
Back in the 1980s, when schools began investing heavily in computers, there was much enthusiasm about the apparent advantages of digital documents over paper ones. Many educators were convinced that introducing hyperlinks into text displayed on monitors would be a boon to learning. Hypertext would strengthen critical thinking, the argument went, by enabling students to switch easily between different viewpoints. Freed from the lockstep reading demanded by printed pages, readers would make all sorts of new intellectual connections between diverse works. The hyperlink would be a technology of liberation.
By the end of the decade, the enthusiasm was turning to skepticism. Research was painting a fuller, very different picture of the cognitive effects of hypertext. Navigating linked documents, it turned out, entails a lot of mental calisthenics—evaluating hyperlinks, deciding whether to click, adjusting to different formats—that are extraneous to the process of reading. Because it disrupts concentration, such activity weakens comprehension. A 1989 study showed that readers tended just to click around aimlessly when reading something that included hypertext links to other selected pieces of information. A 1990 experiment revealed that some “could not remember what they had and had not read.”
Even though the World Wide Web has made hypertext ubiquitous and presumably less startling and unfamiliar, the cognitive problems remain. Research continues to show that people who read linear text comprehend more, remember more, and learn more than those who read text peppered with links. In a 2001 study, two scholars in Canada asked 70 people to read “The Demon Lover,” a short story by Elizabeth Bowen. One group read it in a traditional linear-text format; they’d read a passage and click the word “next” to move ahead. A second group read a version in which they had to click on highlighted words in the text to move ahead. It took the hypertext readers longer to read the document, and they were seven times more likely to say they found it confusing. Another researcher, Erping Zhu, had people read a passage of digital prose but varied the number of links appearing in it. She then gave the readers a multiple-choice quiz and had them write a summary of what they had read. She found that comprehension declined as the number of links increased—whether or not people clicked on them. After all, whenever a link appears, your brain has to at least make the choice not to click, which is itself distracting.
A 2007 scholarly review of hypertext experiments concluded that jumping between digital documents impedes understanding. And if links are bad for concentration and comprehension, it shouldn’t be surprising that more recent research suggests that links surrounded by images, videos, and advertisements could be even worse.
In a study published in the journal Media Psychology, researchers had more than 100 volunteers watch a presentation about the country of Mali, played through a Web browser. Some watched a text-only version. Others watched a version that incorporated video. Afterward, the subjects were quizzed on the material. Compared to the multimedia viewers, the text-only viewers answered significantly more questions correctly; they also found the presentation to be more interesting, more educational, more understandable, and more enjoyable.
The depth of our intelligence hinges on our ability to transfer information from working memory, the scratch pad of consciousness, to long-term memory, the mind’s filing system. When facts and experiences enter our long-term memory, we are able to weave them into the complex ideas that give richness to our thought. But the passage from working memory to long-term memory also forms a bottleneck in our brain. Whereas long-term memory has an almost unlimited capacity, working memory can hold only a relatively small amount of information at a time. And that short-term storage is fragile: A break in our attention can sweep its contents from our mind.
Imagine filling a bathtub with a thimble; that’s the challenge involved in moving information from working memory into long-term memory. When we read a book, the information faucet provides a steady drip, which we can control by varying the pace of our reading. Through our single-minded concentration on the text, we can transfer much of the information, thimbleful by thimbleful, into long-term memory and forge the rich associations essential to the creation of knowledge and wisdom.
On the Net, we face many information faucets, all going full blast. Our little thimble overflows as we rush from tap to tap. We transfer only a small jumble of drops from different faucets, not a continuous, coherent stream.
Psychologists refer to the information flowing into our working memory as our cognitive load. When the load exceeds our mind’s ability to process and store it, we’re unable to retain the information or to draw connections with other memories. We can’t translate the new material into conceptual knowledge. Our ability to learn suffers, and our understanding remains weak. That’s why the extensive brain activity that Small discovered in Web searchers may be more a cause for concern than for celebration. It points to cognitive overload.
The Internet is an interruption system. It seizes our attention only to scramble it. There’s the problem of hypertext and the many different kinds of media coming at us simultaneously. There’s also the fact that numerous studies—including one that tracked eye movement, one that surveyed people, and even one that examined the habits displayed by users of two academic databases—show that we start to read faster and less thoroughly as soon as we go online. Plus, the Internet has a hundred ways of distracting us from our onscreen reading. Most email applications check automatically for new messages every five or 10 minutes, and people routinely click the Check for New Mail button even more frequently. Office workers often glance at their inbox 30 to 40 times an hour. Since each glance breaks our concentration and burdens our working memory, the cognitive penalty can be severe.
The penalty is amplified by what brain scientists call switching costs. Every time we shift our attention, the brain has to reorient itself, further taxing our mental resources. Many studies have shown that switching between just two tasks can add substantially to our cognitive load, impeding our thinking and increasing the likelihood that we’ll overlook or misinterpret important information. On the Internet, where we generally juggle several tasks, the switching costs pile ever higher.
The Net’s ability to monitor events and send out messages and notifications automatically is, of course, one of its great strengths as a communication technology. We rely on that capability to personalize the workings of the system, to program the vast database to respond to our particular needs, interests, and desires. We want to be interrupted, because each interruption—email, tweet, instant message, RSS headline—brings us a valuable piece of information. To turn off these alerts is to risk feeling out of touch or even socially isolated. The stream of new information also plays to our natural tendency to overemphasize the immediate. We crave the new even when we know it’s trivial.
And so we ask the Internet to keep interrupting us in ever more varied ways. We willingly accept the loss of concentration and focus, the fragmentation of our attention, and the thinning of our thoughts in return for the wealth of compelling, or at least diverting, information we receive. We rarely stop to think that it might actually make more sense just to tune it all out.
The mental consequences of our online info-crunching are not universally bad. Certain cognitive skills are strengthened by our use of computers and the Net. These tend to involve more primitive mental functions, such as hand-eye coordination, reflex response, and the processing of visual cues. One much-cited study of videogaming, published in Nature in 2003, revealed that after just 10 days of playing action games on computers, a group of young people had significantly boosted the speed with which they could shift their visual focus between various images and tasks.
It’s likely that Web browsing also strengthens brain functions related to fast-paced problem-solving, particularly when it requires spotting patterns in a welter of data. A British study of the way women search for medical information online indicated that an experienced Internet user can, at least in some cases, assess the trustworthiness and probable value of a Web page in a matter of seconds. The more we practice surfing and scanning, the more adept our brain becomes at those tasks. (Other academics, like Clay Shirky, maintain that the Web provides us with a valuable outlet for a growing “cognitive surplus”; see Cognitive Surplus: The Great Spare-Time Revolution.)
But it would be a serious mistake to look narrowly at such benefits and conclude that the Web is making us smarter. In a Science article published in early 2009, prominent developmental psychologist Patricia Greenfield reviewed more than 40 studies of the effects of various types of media on intelligence and learning ability. She concluded that “every medium develops some cognitive skills at the expense of others.” Our growing use of the Net and other screen-based technologies, she wrote, has led to the “widespread and sophisticated development of visual-spatial skills.” But those gains go hand in hand with a weakening of our capacity for the kind of “deep processing” that underpins “mindful knowledge acquisition, inductive analysis, critical thinking, imagination, and reflection.”
We know that the human brain is highly plastic; neurons and synapses change as circumstances change. When we adapt to a new cultural phenomenon, including the use of a new medium, we end up with a different brain, says Michael Merzenich, a pioneer of the field of neuroplasticity. That means our online habits continue to reverberate in the workings of our brain cells even when we’re not at a computer. We’re exercising the neural circuits devoted to skimming and multitasking while ignoring those used for reading and thinking deeply.
Last year, researchers at Stanford found signs that this shift may already be well under way. They gave a battery of cognitive tests to a group of heavy media multitaskers as well as a group of relatively light ones. They discovered that the heavy multitaskers were much more easily distracted, had significantly less control over their working memory, and were generally much less able to concentrate on a task. Intensive multitaskers are “suckers for irrelevancy,” says Clifford Nass, one of the professors who conducted the research. “Everything distracts them.” Merzenich offers an even bleaker assessment: As we multitask online, we are “training our brains to pay attention to the crap.”
There’s nothing wrong with absorbing information quickly and in bits and pieces. We’ve always skimmed newspapers more than we’ve read them, and we routinely run our eyes over books and magazines to get the gist of a piece of writing and decide whether it warrants more thorough reading. The ability to scan and browse is as important as the ability to read deeply and think attentively. The problem is that skimming is becoming our dominant mode of thought. Once a means to an end, a way to identify information for further study, it’s becoming an end in itself—our preferred method of both learning and analysis. Dazzled by the Net’s treasures, we are blind to the damage we may be doing to our intellectual lives and even our culture.
What we’re experiencing is, in a metaphorical sense, a reversal of the early trajectory of civilization: We are evolving from cultivators of personal knowledge into hunters and gatherers in the electronic data forest. In the process, we seem fated to sacrifice much of what makes our minds so interesting.
Labels: New York Times, Saramago
Engraved ochre block, c.75,000 BCE
Labels: God, London Review of Books
cormorant astounding-
ly, in one sleek involuted arabesque, a vertical
turn on a dime, goes into that inimitable
vanishing-and-emerging-from-under-the-briny-deep act.

One reason sonnets have come to be thought of as the natural form of what we now call ‘lyric poetry’ is that they can imply stories, or gesture to wider truths which might be immanent in a simple daily action, like praying, grieving, cutting hay, watching birds or reading a letter. Sonnets have often been written in larger groups, or ‘sequences’ as they’re misleadingly called, which can make these larger ambitions apparent. Groups of sonnets do not form sequences in the sense that the numbers 3, 6, 9 do: they don’t follow one from another in a predictable order but work together to imply a personal history, or an argument. Individual sonnets within a sequence are not bound to fit in, and sometimes single poems or small groups of poems can suddenly hint at a different version of the story from that which seems to be related in the larger sequence – as the small number of sonnets about the supposed ‘Dark Lady’ in Shakespeare’s sequence appear to do.
Ye holy Towers that shade the wave-worn steep,
Long may ye rear your aged brows sublime,
Though, hurrying silent by, relentless Time
Assail you, and the winter whirlwind’s sweep!

Bowles, creaking though he now sounds, was a big influence on the sonnets of the major Romantic poets, which were generally written to stand on their own rather than in sequences, and which were often inspired by Miltonic vehemence as well as by a belief that Shakespearean sonnets revealed personal emotion; the love theme tends to drop out or be transformed. So Coleridge’s ‘Work without Hope’ (1825) adapts the commonplace of the Petrarchan tradition that the year renews and birds and bees fall in love while the speaker remains alone and unloved, a theme on which the Earl of Surrey (who had been freshly edited in 1815), among others, had composed variations. Coleridge, though, is not a frustrated lover but a frustrated poet, yearning to produce a larger artwork which lies beyond the scope of the poem and beyond the capacity of the poet: ‘Bloom, o ye amaranths! bloom for whom ye may,/For me ye bloom not!’ The sonnet had arrived as an apologia for a non-existent longer work, or as a testament to broken spiritual energy: as Hopkins put it (perhaps echoing Coleridge, perhaps Surrey), ‘birds build – but not I build.’
‘My name is Ozymandias, king of kings:
Look on my works, ye Mighty, and despair!’
Nothing beside remains. Round the decay
Of that colossal wreck, boundless and bare
The lone and level sands stretch far away.

A sonnet without a sequence is a part without a whole, and that is one reason ‘Ozymandias’ is so powerful. We see only a part of an artwork, ‘vast and trunkless legs of stone’ and ‘a shatter’d visage’. Around those fragments lie deserts of ‘lone and level sands’. A part can reverberate with the force of a whole, and can convey nostalgia, fear or excitement about the absence of that whole. Keats works in a similar way when he describes the ‘dizzy pain’ elicited by the Elgin Marbles, ‘That mingles Grecian grandeur with the rude/Wasting of old Time – with a billowy main –/A sun – a shadow of a magnitude.’ ‘A shadow of a magnitude’ magnificently evokes a larger structure that isn’t there, and also suggests the curious power of the sonnet to evoke a larger lost form. Rilke’s ‘Archaic Torso of Apollo’, in the same vein, describes a headless statue that ‘holds fast and shines’ even though ‘We never knew his head and all the light/that ripened in his fabled eyes.’ The poem exploits the power that comes from seeing only a part of things; it ends with the abrupt order, ‘You must change your life,’ and the statue seems to generate a surplus of authority by being partly lost. Probably a poem which was not a sonnet could have done the same thing. But the post-Romantic sonnet, with its roots in a poetic of the sublime, and with its buried legacy of sequences that use single poems to articulate a larger story, is particularly well suited to create this kind of shock. A part has lost its whole, but gains from the loss.
Platonic England, house of solitudes,
rests in its laurels and its injured stone,
replete with complex fortunes that are gone,
beset by dynasties of moods and clouds.

That quatrain weights every word with time (‘Platonic England’ is an allusion to Coleridge), but also with mythologies which inspire intimacy and mistrust. The line ‘replete with complex fortunes that are gone’ is dazzling, and it suggests why the sonnet is such a good form in which to explore English histories and church histories in particular. It’s not just that Donne and Herbert wrote sonnets; rather the sonnet has become a short space that can be filled with time, ‘replete with complex fortunes that are gone’. A ‘house of solitudes’ (and sonnets are often figured as protective enclosures, a pretty room, a cell) that ‘rests in its laurels’ is not just like a great house surrounded by a laurel hedge: the phrase suggests something resting on its laurels, near to ruin.
Labels: London Review of Books, Sonnet
on Death
Labels: Marriage, New York Times
Jun 10th 2010
FROM financial traders’ propensity to make risky decisions to badly behaved schoolboys’ claims to be suffering from attention deficit hyperactivity disorder, testosterone makes a perfect scapegoat. In both of these cases, and others, many researchers reckon that the underlying cause is exposure to too much of that male hormone in the womb. Positive effects are claimed, too. Top-flight female football players and successful male musicians may also have fetal testosterone exposure to thank for their lot in life.
Yet the evidence that it is exposure in utero to testosterone that causes all these things relies on a shaky chain of causation. What these people actually share is a tendency for their ring fingers to be longer than their index fingers. This peculiarity of anatomy is often ascribed to fetal testosterone exposure because it is common in men and much rarer in women, and because there seems to be a correlation between the point in gestation when it appears and surges of testosterone in the womb. But the link has never been proved decisively. It has, rather, just become accepted wisdom.
Research carried out on birds now suggests that the accepted wisdom could be wrong. In their study of the feet of zebra finches published this week in the Proceedings of the Royal Society, Wolfgang Forstmeier and his colleagues at the Max Planck Institute for Ornithology in Seewiesen, Germany, conclude that oestrogen—the hormone of femininity—rather than testosterone, may be to blame.
Although it is well over 300m years since people and finches had a common ancestor, the basic vertebrate body plan is the same in both. So, a few years ago Dr Forstmeier, an expert on finch behaviour, wondered if the link between digit ratio and behaviour might show up in his animals, too.
It did. The ratio between a zebra finch’s second and fourth digits (which are not fingers but toes in birds) is associated with more courtship songs by males and fewer flirtatious hops by females—in other words with more masculine behaviour, regardless of the sex of the individual.
Dr Forstmeier probed the matter further. He has been investigating the birds’ oestrogen and androgen receptors—molecules that respond to female and male hormones, respectively.
The receptors in question orchestrate both behavioural and physical development, including some types of bone growth, in many vertebrate species. Different versions of a receptor (encoded by genes that have slightly different DNA sequences) can be more or less sensitive to the appropriate hormone. That led Dr Forstmeier to ask whether the type of hormone receptor a bird has influences its digit ratio, its sexual behaviour or both.
To find out, he looked for correlations between genes, ratios and behaviour in more than 1,100 zebra finches. Surprisingly, in view of the working assumption about humans, the type of testosterone receptor that a bird had proved to be irrelevant. Its oestrogen-receptor variant, however, had a significant impact on both digit ratio and courtship behaviour. This suggests that the sorts of predispositions that in people are blamed on fetal testosterone are caused in birds by fetal oestrogen (or, rather, the response to it).
That does not, of course, mean the same thing is true in people: 300m years is quite a long time for differences to emerge. It is also true that the digit ratio that predicts male-like behaviour in birds is the opposite of the one found in humans (ie, the second digit, rather than the fourth, is the longer of the two). But it does suggest that it would be worth double-checking. Though science likes to think of itself as rational, it is just as prone to fads and assumptions as any other human activity. That, plus the fact that most scientists are men, may have led to some lazy thinking about which hormone is more likely to control gender-related behaviour. Just possibly, the trader’s finger should be pointing at oestrogen, not testosterone.
Labels: Birds, The Economist
Labels: New Yorker Magazine, Poetry