Glenn Reynolds will be here in Wheeling in April. Here he talks about nuclear attack:
This Harvard report suggests the affirmative:
An inclusive deliberation process, bringing people together through the Internet, can increase the chances for consensus on contentious issues, including how to address the growing federal budget deficit. That is one conclusion expressed in a new report co-authored by Harvard Kennedy School Professor Archon Fung.
The report, titled “The Difference that Deliberation Makes: Evaluating the Our Budget, Our Economy Public Deliberation,” analyzed the process that brought together more than 3500 Americans in a set of 57 town hall meetings held simultaneously on June 26, 2010 in cities all across the country. Participants spanned a wide spectrum of ages, ethnicities, religions and political affiliations.
“[The] event appears to have achieved its goals of bringing together a diverse group of ordinary Americans to engage each other in constructive discussion,” the researchers wrote. “Both liberals and conservatives appear to have moderated in their policy views regarding spending cuts and tax increases. And the organizers appear to have been quite successful in creating a forum for open and balanced discussion, based on the self-reports of participants as well as the extensive observation by our 19 on-site research assistants.”
Participants spent much of the meeting learning about, discussing and voting on revenue and spending options that would reduce the federal deficit by $1.2 trillion by 2025. They were presented with 42 options that had been developed by a national advisory committee, and were encouraged to suggest additional options to meet the deficit cutting goal.
When the votes were tallied, a vast majority – 85 percent of participants – expressed support for cutting the defense budget by at least five percent. More than half favored reducing defense spending by at least 15 percent. More than six in ten participants expressed support for reducing health care spending by at least five percent. No options for reducing Social Security benefits received a majority of support.
Several options for raising revenue also received broad support from participants. Many supported raising the limit on taxable Social Security earnings to cover 90 percent of all income. Participants also supported raising tax rates on wealthy individuals: 54 percent favored an extra five percent tax on millionaires and 38 percent favored raising the personal tax rate by 20 percent for the top two income brackets (married couples earning more than $209,000/year). Forty-four percent favored raising the top corporate income tax rate from 35 to 40 percent. Fifty-four percent of participants favored a new carbon tax and half supported a financial securities transaction tax.
“By bringing people together in this type of virtual meeting space, we found that extreme opinions became more moderated, and this, in turn, allowed participants to find common ground on some previously contentious issues,” said Fung. “This speaks to the power of citizen democracy to address the many problems that we face.”
Along with Fung, the report’s co-authors included Taeku Lee of the University of California, Berkeley, and Kevin Esterling of the University of California, Riverside, all of whom are national experts in public opinion and citizen deliberation.
The full report is posted at www.ash.harvard.edu/docs/AmericaSpeaks.pdf
After using an iPad for a short while since its release, I can safely say that the device — or another one like it — deserves to become an important part of the academic’s arsenal of gadgets. Choosing to plop down the money for an iPad is like Ingrid Bergman’s regret over leaving Casablanca with Humphrey Bogart. You will do it: not today, not tomorrow, but soon — and for the rest of your life.
Since I was traveling about last week, my posts were spotty. OK, they were non-existent. I hope to do some catch up this week. The stories I meant to post about last week were related to computers, technology and education, all from the WSJ.
The first story has to do with the rise of self-publishing. Ten years ago, a few predicted that authors would take charge and publish their own works. The criticism was that quality would fall, but has it? Have the music industry and the indie scene produced better music or worse? Some would say the proliferation of small private labels has been a boon. So, why not in publishing? It seems that even in what we call vanity self-publishing, providers (authors) are able to strike deals with places like Amazon and make some decent income in the process. There may be a lot of bad writing being self-published, but there appears to be much decent talent being overlooked by big publishers, as this article notes:
Eleven months later, Ms. McQuestion has sold 36,000 e-books through Amazon.com Inc.’s Kindle e-bookstore and has a film option with a Hollywood producer. In August, Amazon will publish a paperback version of her first novel, “A Scattered Life,” about a friendship triangle among three women in small-town Wisconsin.
Ms. McQuestion is at the leading edge of a technological disruption that’s loosening traditional publishers’ grip on the book market—and giving new power to technology companies like Amazon to shape which books and authors succeed.
Much as blogs have bitten into the news business and YouTube has challenged television, digital self-publishing is creating a powerful new niche in books that’s threatening the traditional industry. Once derided as “vanity” titles by the publishing establishment, self-published books suddenly are able to thrive by circumventing the establishment.
The next obvious question for higher ed is how this will change higher ed publishing, which thrives on, and finds legitimacy in, a “peer reviewed” process. My suspicion is that there will be a proliferation of scholarly, peer-reviewed journals. Professorial publishing is likely to follow the vanity market.
The next two articles are really companion pieces on the role of computers and digital stimulus in education. There are pros and cons to each. In “Does the Internet Make You Smarter or Dumber?” two writers speak about the proliferation of technology through the ages–back to when Luther lamented that books were “evil” because they brought a new means of communication to a wider and wider audience–an odd argument considering the Reformation depended on that same medium to garner support and followers.
On the “makes you smarter” side, we have an argument that is more political than mind-expanding:
Despite frequent genuflection to European novels, we actually spent a lot more time watching “Diff’rent Strokes” than reading Proust, prior to the Internet’s spread. The Net, in fact, restores reading and writing as central activities in our culture.
The present is, as noted, characterized by lots of throwaway cultural artifacts, but the nice thing about throwaway material is that it gets thrown away. This issue isn’t whether there’s lots of dumb stuff online—there is, just as there is lots of dumb stuff in bookstores. The issue is whether there are any ideas so good today that they will survive into the future. Several early uses of our cognitive surplus, like open source software, look like they will pass that test.
The past was not as golden, nor is the present as tawdry, as the pessimists suggest, but the only thing really worth arguing about is the future. It is our misfortune, as a historical generation, to live through the largest expansion in expressive capability in human history, a misfortune because abundance breaks more things than scarcity. We are now witnessing the rapid stress of older institutions accompanied by the slow and fitful development of cultural alternatives. Just as required education was a response to print, using the Internet well will require new cultural institutions as well, not just new technologies.
It is tempting to want PatientsLikeMe without the dumb videos, just as we might want scientific journals without the erotic novels, but that’s not how media works. Increased freedom to create means increased freedom to create throwaway material, as well as freedom to indulge in the experimentation that eventually makes the good new stuff possible. There is no easy way to get through a media revolution of this magnitude; the task before us now is to experiment with new ways of using a medium that is social, ubiquitous and cheap, a medium that changes the landscape by distributing freedom of the press and freedom of assembly as widely as freedom of speech.
Do we really think we would be better off only having, say, the NYT to read, or the WaPo? Have we really benefited from what the major publishing houses have chosen to publish and market? Sometimes, the answer is yes, but often it is no. The democratization that the net provides puts the power of choice into the consumer’s hands like no other innovation.
But is this really “good”? On the dumber side:
The Roman philosopher Seneca may have put it best 2,000 years ago: “To be everywhere is to be nowhere.” Today, the Internet grants us easy access to unprecedented amounts of information. But a growing body of scientific evidence suggests that the Net, with its constant distractions and interruptions, is also turning us into scattered and superficial thinkers.
The picture emerging from the research is deeply troubling, at least to anyone who values the depth, rather than just the velocity, of human thought. People who read text studded with links, the studies show, comprehend less than those who read traditional linear text. People who watch busy multimedia presentations remember less than those who take in information in a more sedate and focused manner. People who are continually distracted by emails, alerts and other messages understand less than those who are able to concentrate. And people who juggle many tasks are less creative and less productive than those who do one thing at a time.
The common thread in these disabilities is the division of attention. The richness of our thoughts, our memories and even our personalities hinges on our ability to focus the mind and sustain concentration. Only when we pay deep attention to a new piece of information are we able to associate it “meaningfully and systematically with knowledge already well established in memory,” writes the Nobel Prize-winning neuroscientist Eric Kandel. Such associations are essential to mastering complex concepts.
When we’re constantly distracted and interrupted, as we tend to be online, our brains are unable to forge the strong and expansive neural connections that give depth and distinctiveness to our thinking. We become mere signal-processing units, quickly shepherding disjointed bits of information into and then out of short-term memory.
So, perhaps the lesson is this: we should embrace the new freedoms provided by technology, but we should keep technology in its place and in perspective. Technology is not going to make us smarter, better human beings. This “problem” has always been with us–from the invention of movable type, to the radio, to television, and a host of other media inventions. We should use technology as a means, but not forget to think deeply. Many educators praise technology as the way to educate the modern student, but is it? Have humans changed over time? Unlikely. We should be more selective in our reading/online/consuming choices. The lesson from the “dumber” article is this: put away the smart phone and learn how to think in thoughtful simplicity.
There are many in higher ed who like the introduction of technology into the classroom, but I can say from experience that it can be a distraction. Here’s a prof who makes the latter point.
h/t Chronicle of Higher Ed–Tweed.