Writing in the December edition of Nature (online at nature.com), Andreas von Bubnoff notes in "Science in the web age: The real death of print" that our informational world is increasingly digital.
This supports a contention I have made previously: to be relevant in the 21st century, content needs to be digital. For anyone interested in research, I think this should be seen as a positive. Von Bubnoff writes:
[Vishwas] Chavan and other digitization visionaries paint a future in which books no longer gather dust on shelves, but exist as interconnected nodes in a vast web of stored literature, all accessible at the click of a mouse. So instead of hunting for specific books, scholars could search for specific information, customizing searches to suit their needs.
I have heard previously about Google Print (now apparently called "Google Book Search") and the Open Content Alliance projects, but not the Million Book Project. They are about a third of the way toward their goal of having a million out-of-copyright books online. According to this article:
Since the project began in 2002, about 600,000 out-of-copyright books have been scanned, although only about half of them are currently available online. The scanning takes place in India and China, with books being shipped there temporarily from libraries around the world.
This is really impressive, considering Project Gutenberg "only" has 17,000 free eBooks available now. It is significant that the scanning work is being done in India and China. The impact of job offshoring and outsourcing is only going to grow in the months and years to come, and we have got to figure out what to do about this in education.
The article includes an interesting quotation from Michael Gorman, President of the American Library Association, relating to the impact that digital content may be having on people's reading. This strikes a chord with me, as last night (as part of my MythTV setup, incidentally) I had to complete a survey that asked me how many hours per week I read. If you include reading digital texts, I really have no idea what that estimate would be. I am sure I spend at least 8 hours per day reading something somewhere, mostly on my computer.
But Gorman is worried that over-reliance on digital texts could change the way people read — and not for the better. He calls it the “atomization of knowledge”. Google searches retrieve snippets and Gorman worries that people who confine their reading to these short paragraphs could miss out on the deeper understanding that can be conveyed by longer, narrative prose. Dillon agrees that people use e-books in the same way that they use web pages: dipping in and out of the content.
This reminds me also of a humorous podcast I listened to recently about the myth of multitasking. Adults (digital immigrants) today seem at times to be agog and in awe of young people who are multitasking with 15 different IM windows open and simultaneous conversations, who are doing their homework, talking on their cell phone, and also playing their Xbox. Yes, there are generational differences, but I contend the human brain has always multitasked. Some of the technology today just makes this more apparent, and perhaps the degree of multitasking has increased. But to do really deep thinking, to think with quality and rigor, I personally think you have to ratchet down the multitasking. It would be interesting to conduct or read a study on this: comparing the quality of students' writing, for instance, against the number of things they are multitasking on at the time they write.
Other links on eBooks are available on http://del.icio.us/wfryer/eBooks.
Wow, we must be operating on the same neural network node. I posted this over at John Pederson’s blog before reading your entry:
"What's lamentable, and I'm a prime example of this, is that searching and amassing don't equate to knowledge. We have more and more access to information, but it doesn't necessarily make us any smarter. No big a-ha there, but something to keep in mind when we do something like 'research' an illness on the web. What did we really learn? We're all going to have PhDs in Google searches — to what end, who knows?"
Yes, I think that is why we have to focus on helping teachers change the types of traditional TASKS they are still giving to students. We need students to be engaging in authentic activities that require them to synthesize, evaluate, and communicate information in a way that can't be faked, and to do so for a real audience that cares about both the student and their message. This is not factory production-line education. But it is essential for the 21st century, I think. Google's flattening of our informational environment makes that more apparent to me every day.
As I absorb your thoughts along with Tom Hoffman's entry at Ed-Tech Insider, I start to wonder: Does Google make it EASIER to be an anti-intellectual pretend expert? A shallow content-dipper, if you will. That's what we have to be on guard against and try to prevent and/or remedy.