Since I was traveling last week, my posts were spotty. OK, they were non-existent. I hope to do some catching up this week. The stories I meant to post about last week all relate to computers, technology, and education, and all come from the WSJ.

The first story has to do with the rise of self-publishing. Ten years ago, a few predicted that authors would take charge and publish their own works. The criticism of that idea was that quality would fall, but has it? Has the indie scene resulted in better music being produced, or poorer? The proliferation of small private labels has been a boon, some would say. So, why not in publishing? It seems that even in what we call vanity self-publishing, providers (authors) are able to strike deals with places like Amazon and make some decent income in the process. There may be a lot of bad writing being self-published, but there also appears to be plenty of decent talent being overlooked by big publishers, as this article notes:

Eleven months later, Ms. McQuestion has sold 36,000 e-books through Amazon.com Inc.’s Kindle e-bookstore and has a film option with a Hollywood producer. In August, Amazon will publish a paperback version of her first novel, “A Scattered Life,” about a friendship triangle among three women in small-town Wisconsin.

Ms. McQuestion is at the leading edge of a technological disruption that’s loosening traditional publishers’ grip on the book market—and giving new power to technology companies like Amazon to shape which books and authors succeed.

Much as blogs have bitten into the news business and YouTube has challenged television, digital self-publishing is creating a powerful new niche in books that’s threatening the traditional industry. Once derided as “vanity” titles by the publishing establishment, self-published books suddenly are able to thrive by circumventing the establishment.

The obvious next question is how this will change publishing in higher ed, which thrives and finds legitimacy in the peer-review process. My suspicion is that there will be a proliferation of scholarly, peer-reviewed journals, and that professorial publishing will come to resemble the vanity market.

The next two articles are really companion pieces on the role of computers and digital stimulus in education, and there are pros and cons to each side. In "Does the Internet Make You Smarter or Dumber?" two writers take up the proliferation of communication technology through the ages, back to when Luther lamented the multitude of books as an evil because they brought a new means of communication to a wider and wider audience. That was an odd argument, considering the Reformation depended on that same medium to garner support and followers.

On the "makes you smarter" side, we get an argument that is more political than mind-expanding:

Despite frequent genuflection to European novels, we actually spent a lot more time watching “Diff’rent Strokes” than reading Proust, prior to the Internet’s spread. The Net, in fact, restores reading and writing as central activities in our culture.

The present is, as noted, characterized by lots of throwaway cultural artifacts, but the nice thing about throwaway material is that it gets thrown away. This issue isn’t whether there’s lots of dumb stuff online—there is, just as there is lots of dumb stuff in bookstores. The issue is whether there are any ideas so good today that they will survive into the future. Several early uses of our cognitive surplus, like open source software, look like they will pass that test.

The past was not as golden, nor is the present as tawdry, as the pessimists suggest, but the only thing really worth arguing about is the future. It is our misfortune, as a historical generation, to live through the largest expansion in expressive capability in human history, a misfortune because abundance breaks more things than scarcity. We are now witnessing the rapid stress of older institutions accompanied by the slow and fitful development of cultural alternatives. Just as required education was a response to print, using the Internet well will require new cultural institutions as well, not just new technologies.

It is tempting to want PatientsLikeMe without the dumb videos, just as we might want scientific journals without the erotic novels, but that’s not how media works. Increased freedom to create means increased freedom to create throwaway material, as well as freedom to indulge in the experimentation that eventually makes the good new stuff possible. There is no easy way to get through a media revolution of this magnitude; the task before us now is to experiment with new ways of using a medium that is social, ubiquitous and cheap, a medium that changes the landscape by distributing freedom of the press and freedom of assembly as widely as freedom of speech.

Do we really think we would be better off with only, say, the NYT or the WaPo to read? Have we really benefited from what the major publishing houses have chosen to publish and market? Sometimes the answer is yes, but often it is no. The democratization the net provides puts the power of choice into the consumer's hands like no other innovation has.

But is this really “good”?  On the dumber side:

The Roman philosopher Seneca may have put it best 2,000 years ago: “To be everywhere is to be nowhere.” Today, the Internet grants us easy access to unprecedented amounts of information. But a growing body of scientific evidence suggests that the Net, with its constant distractions and interruptions, is also turning us into scattered and superficial thinkers.

The picture emerging from the research is deeply troubling, at least to anyone who values the depth, rather than just the velocity, of human thought. People who read text studded with links, the studies show, comprehend less than those who read traditional linear text. People who watch busy multimedia presentations remember less than those who take in information in a more sedate and focused manner. People who are continually distracted by emails, alerts and other messages understand less than those who are able to concentrate. And people who juggle many tasks are less creative and less productive than those who do one thing at a time.

The common thread in these disabilities is the division of attention. The richness of our thoughts, our memories and even our personalities hinges on our ability to focus the mind and sustain concentration. Only when we pay deep attention to a new piece of information are we able to associate it “meaningfully and systematically with knowledge already well established in memory,” writes the Nobel Prize-winning neuroscientist Eric Kandel. Such associations are essential to mastering complex concepts.

When we’re constantly distracted and interrupted, as we tend to be online, our brains are unable to forge the strong and expansive neural connections that give depth and distinctiveness to our thinking. We become mere signal-processing units, quickly shepherding disjointed bits of information into and then out of short-term memory.

So, perhaps the lesson is this: we should embrace the new freedoms provided by technology, but we should keep technology in its place and in perspective. Technology is not going to make us smarter, better human beings. This "problem" has always been with us, from the invention of movable type to radio, television, and a host of other media inventions. We should use technology as a means, but not forget to think deeply. Many educators praise technology as the way to educate the modern student, but is it? Have humans changed over time? Unlikely. We should be more selective in our reading, online, and consuming choices. The lesson from the "dumber" article is this: put away the smart phone and learn how to think in thoughtful simplicity.