Categories
Book and Reading News

B&N’s Nook: Weirdly unrevolutionary

In addition to this posting, please visit this clarifications posting to get the whole picture.

It would be nice to say, as Matt Miller has, that the e-book and e-reader market was revolutionized today. Instead, it simply got more interesting. A careful reading of the $259 Nook’s features, and the comparison B&N offers to the $259 Amazon Kindle 2, reveals that, while it packs a lot of new ideas, the Nook is a mix of real innovation and the extraordinarily conventional.

Highlights:

  • Two screens: a 3.5-inch LCD for navigation and purchasing and a six-inch E-Ink display for reading;
  • Virtual keyboard via the LCD display;
  • ePub and PDF formats supported;
  • Free 3G connectivity when shopping via BN.com;
  • Sharing of books across Nook, smartphones and PCs;
  • Wi-Fi built in, but with strange limitations at launch (see below);
  • Synchronization of location, notes and annotations across multiple devices;
  • Audio is supported, though only MP3; Audible books are not supported.

There is much I like about this device, but I am not at the announcement today, where I would be asking a lot of questions I have not seen answered in any of the coverage so far. Here, with the apparent downsides first and foremost, is what is known to me at this moment.

An e-reader designed to get you into the physical Barnes & Noble store. This, and the question of how to get non-BN content onto the Nook, represent the most backward features of the device. When you visit a B&N retail store, you’ll receive offers and, soon, the ability to read some e-books in their entirety while in the store. Everything deleted below, while part of this critique, has been clarified and extended in this posting.

There, however, is the rub.

I’d pointed out before that wireless browsing of the 500,000+ titles available for free through Google Books, a notable feature of the Nook, probably wouldn’t be supported over the built-in 3G wireless service. It isn’t. You’ll need to download content to a PC and synch it to the Nook over a USB connection to move anything not sold by BN.com onto the device. From there, it gets bizarre.

According to The New York Times’s Motoko Rich, the built-in Wi-Fi networking works only inside Barnes & Noble retail stores:

With the market for electronic readers and digital books heating up by the day, Barnes & Noble sought to differentiate itself with the wireless feature that consumers can access in any of the chain’s 1,300 stores. Outside of the stores, customers can download books on AT&T’s 3G cellular phone network. (emphasis added)

A review of the BN.com tech specs for the Nook adds the caveat that free wireless service is available “from Barnes & Noble via AT&T.” Note that they are saying you get free wireless service when buying from or browsing BN.com, not when accessing other sites or services. Put this together with the quote from the Times and you get: free 3G service anywhere when buying from BN.com, and free Wi-Fi inside Barnes & Noble stores, but no Wi-Fi connectivity outside the stores, where shopping wirelessly on BN.com means using 3G.

Comments from riffraffy in TalkBack point to this section of the Nook FAQ, which I read but still find very vague, since the answers refer only to travel and Wi-Fi:

Q. Can I use my nook while traveling abroad?

A. Yes, when you travel abroad, you can read any files that are already on your nook. You can connect to Wi-Fi hotspots that do not use proxy security settings, such as those commonly used in hotels, and download eBooks and subscriptions already in your online digital library. You cannot, however, purchase additional eBooks and subscriptions.

Q. Will new issues of eNewspapers and eMagazines be downloaded to my nook while I’m traveling?

A. Yes, if you are traveling in the United States, or if you are abroad but connected to a supported Wi-Fi hotspot, new issues are delivered to your online digital library in both cases. When travelling abroad without Wi-Fi access, new issues are not downloaded to your nook (automatically or manually).

Two things:

In the first answer, they specifically say that you cannot purchase eBooks or subscriptions over an international Wi-Fi connection. That suggests it is not a fully functioning Wi-Fi connection. Maybe because you are connecting from overseas, maybe not. If you had full Wi-Fi access and a valid BN.com account, what should stop you?

What is a “supported hotspot” in the second answer? If they mean an AT&T hotspot, my concern remains.

I wrote that I hoped I was wrong. I think the language here and in the announcement is strangely vague (I have seen a lot of strangely vague FAQs turn out to bear bad news), and I would have liked to be present at the announcement to ask.

UPDATE: Paul Biba, who attended the event, added this to his report, which seems to answer clearly the question whether the Nook provides ad hoc Wi-Fi access:

Wifi can only be used in store for events and in store content. Plan to open up later on.

B&N should enable ad hoc Wi-Fi access at launch or, failing that, disclose much more clearly that it will not be available, to avoid disappointing all the people who expect to be able to use Wi-Fi at home or anywhere else not served by an AT&T hotspot. To do otherwise would damage the credibility of a very impressive piece of engineering.

The rest of the content you want to put on the Nook will have to be downloaded via a PC and synched to the Nook. That’s a step back from what the promise of built-in Wi-Fi would lead a buyer to expect—particularly because Nook is advertised as providing access to 500,000 Google Books titles that, in fact, aren’t accessible through the device, but must be synched.

I hope I am reading this wrong or that, if this is correct, B&N changes the Nook to support ad hoc Wi-Fi access to Google Books. Leaving it as it stands would be a blunder, forcing readers into retail stores when we want to get away from them and into virtual stores with much broader inventories.

UPDATE: Google Books, per the updated posting here, can be downloaded free of charge over 3G and Wi-Fi connections.

Synching is cumbersome and, frankly, it is what keeps most people, the non-early-adopting masses, from using dedicated e-readers. The popularity of smartphone e-reader

Categories
Book and Reading News

Google Editions defies digital economics

Google today announced it will enter the e-book distribution business with a service, Google Editions, which will sell electronic copies of as many as 500,000 books offered by traditional publishing houses. The service is amazing because the company has found a way to increase the retail distribution cost of e-books relative to paper books. Think about this: the zero-cost copy of an e-book will be the basis for Google and its partners keeping substantially more, as a share of list price, when a Google Editions e-book is delivered through a third-party retailer than when it is bought directly from Google.

It may seem attractive to retail partners, which will purportedly include Amazon, Sony and Barnes & Noble, but even they have got to be scratching their heads about the added overhead Google built into its pricing scheme. An e-book purchased from Google Editions will list for the same price as the same book offered by a publisher through Amazon or Sony, for example, and Google will pay the publisher 63 percent of the list price. But if the book is purchased in Google Editions format through Amazon or Sony, publishers will get only 45 percent of the list price.

Google said it will share the additional 18 percent with the retailer, though “most” of that 55 percent reportedly will go to the retailer. My guess is that by “most,” Google means the retailer will get 25 percent and Google 20 percent, or some approximation of that split. This seems to be a concession to make sure Google Editions books are carried by retailers.

Let’s break that down. For a bestseller, which the market has decided should be priced at $9.99, the publisher will earn $6.29 when Google Editions sells a copy. When that same Google Editions e-book is sold through a third party, the publisher will earn only $4.49. Intermediaries increase their share of revenue, even though they’ve taken on no inventory risk.
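To make the arithmetic concrete, here is a minimal sketch, in Python, of the split described above. It uses only the figures already cited in this post: a $9.99 list price, 63 percent to the publisher on a direct Google Editions sale, and 45 percent when the same e-book is sold through a third-party retailer; amounts are truncated to the cent, matching the numbers above.

```python
# A sketch of the revenue split described above, using only the post's figures:
# 63% to the publisher when Google Editions sells directly, 45% when the same
# e-book is sold through a third-party retailer such as Amazon or Sony.
from math import floor

LIST_PRICE = 9.99          # the bestseller example used above
DIRECT_SHARE = 0.63        # publisher's cut on a direct Google Editions sale
THIRD_PARTY_SHARE = 0.45   # publisher's cut when a retail partner makes the sale

def publisher_take(list_price: float, share: float) -> float:
    """Publisher's earnings per copy, truncated to the cent."""
    return floor(list_price * share * 100) / 100

direct = publisher_take(LIST_PRICE, DIRECT_SHARE)              # 6.29
via_retailer = publisher_take(LIST_PRICE, THIRD_PARTY_SHARE)   # 4.49

print(f"Direct sale:      publisher earns ${direct:.2f}")
print(f"Third-party sale: publisher earns ${via_retailer:.2f}")
print(f"Extra kept by intermediaries per copy: ${direct - via_retailer:.2f}")
```

Per copy, the publisher gives up $1.80 the moment an intermediary enters the chain, which is the whole oddity: the intermediary takes on no inventory risk, yet its cut grows.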

Publishers get 63 percent for selling directly and 45 percent for a Google Editions book sold through a third-party retail site. That defies all the economic logic of digital distribution. The notion that Google will really get more e-books from publishers on those terms than Amazon, Sony or Barnes & Noble can get on the terms they offer the same population of publishers strains credulity. But we shall see.

More bad news: DRM

While the Google Editions e-books will be readable in a browser, they will not be free of encryption. Google makes clear that the books will come with DRM, because it has created a way to let readers access files when not connected to the Net but without the ability to share those books with others. Books will be tied to a Google account, just as Gmail, Google Docs and other services are.

The retailers, all of whom have introduced proprietary e-readers and, except for Sony, which offers ePub formatted e-books, should be

Categories
Author & Publisher Strategies

Interesting free online conference on e-books

I’ve signed up to “attend” an interesting event, Digital Content Day @ Your Desk, a free virtual conference with what looks like a lively agenda. You can participate online on October 29, from 10:30 AM to roughly 5 PM Eastern, by registering here. They will be talking about everything from DRM to social media.

Categories
Author & Publisher Strategies Book and Reading News

Reading Steve Jobs: Why 45 e-reader devices don’t make a market

Thomas Jefferson hacked bookstands for partial continuous attention

As I develop the coverage here at BooksAhead, I have decided that trying to break news stories about e-reader devices doesn’t add a lot of value for the reader, especially when there are few differentiating features or functionality. Way back in the early 90s, when I was networking editor at MacWEEK and new Ethernet interface cards for the Mac arrived constantly, it became clear that an occasional summary article covering all the recent releases would be more useful than many individual articles announcing yet another Ethernet card.

However, sometimes a real breakthrough would come along, and that would get an individual article. The most important change in the early networking card market was something subtle and largely unheralded: the addition of writable ROM chips to cards eliminated the need to return a card when its software was defective. Yet, for several years, Ethernet card developers hesitated to include EPROMs in their products. Once they did, new features such as Simple Network Management Protocol (SNMP) proliferated, because cards could be updated in response to changing technology rather than having to be replaced. It sounds trivial, yet it made a huge difference.

The e-reader device market is looking a lot like the Ethernet card business back then: it’s a developing commodity market. Price is becoming the only differentiator, but the functionality is still very limited compared both to books and to what e-books could be. The action will soon turn squarely on format and the networking of documents, just as the Web became relevant when the browser changed hyperlinks from navigating between documents to navigating within parts of many documents. Two hundred years ago, Thomas Jefferson designed a bookstand for reading several titles at once, accommodating the limitations of printed books (the idea is older, but Jefferson’s is one of the most elegant solutions to the problem). Readers want to use books and the knowledge and enjoyment they contain, not just consume them.

I’ve been doing a lot of thinking about this issue since I wrote about the ePub standards maintenance process beginning a couple weeks back. There are huge business opportunities in the

Categories
Book and Reading News

DRM isn’t dead, it is always regrouping

Several triumphal postings declaring that the RIAA has pronounced DRM “dead” have been proved wrong. It turns out the Recording Industry Association of America’s spokesperson was not speaking emphatically, but ironically, in reply to the question, “DRM is dead, isn’t it?” The anti-DRM crowd rushed to affirm the truth of the statement, but, unfortunately, DRM isn’t dead. It’s regrouping. The simple fact is that most people, when offered a convenient form of playback with lock-in at the device level, so that they see playback on a particular device as a benefit, are perfectly content to accept DRM-protected content.

iTunes continues to encrypt movies and many songs, for example. Amazon’s movie and TV downloads are locked to an application for untethered playback but can be streamed in a browser, making its DRM a compromise that splits the difference for most buyers: Mac, Linux and smartphone users can’t play an Amazon movie when disconnected from the store, but the Windows crowd is happy. Amazon’s Kindle is a DRM system that interoperates with its e-books, as are various applications running on the iPhone. DRM is everywhere, often presented as a compatibility benefit rather than an anti-copying system.

I am not arguing for DRM, so please don’t assail me for doing so. The point is that when anti-DRM activists crow that the RIAA’s slow realization that it should not treat customers like criminals is a “victory” for open access, they create the impression that customers no longer need to ask, “Will this play on any device?” In the book world, DRM is so deeply ingrained that most of the e-books sold in the next five years will likely become inaccessible due to changes in devices and supported formats, DRM being just one of several factors that will change as the market matures.

Compatibility, particularly forward compatibility, should be the key benefit sold to readers. If you are going to sell an e-book today, make sure you are prepared to make it work on future platforms, or expect customers to drop your brand and your books like hot rocks when they learn that others do provide forward compatibility. The easiest way to ensure that compatibility today is to avoid using DRM. Enough said.

Categories
Book and Reading News

Why the Kindle 1984 deletions actually matter: post hoc restraint

The flash-fire reaction to news that Amazon had automatically and preemptively deleted copies of George Orwell’s 1984 from Kindle users’ accounts and devices, even though users had paid for the copies, was bizarre for how readily it rationalized a demonstration of a de facto censorship technology. It seems that Kindle enthusiasts want to believe in DRM when it makes the device look virtuous. I like my Kindles (we have two), but Amazon’s response to a pirated edition, deleting purchased copies, represents the potential for something new in publishing and in censorship: post hoc restraint.

First off, yes, Amazon did delete the books because they were pirated copies. It should prevent pirated copies from being sold in the first place and should have done a better job of checking what it publishes before making it available to buyers. It is incumbent upon Amazon, as the distributor, to confirm that it is doing business with the person or company that has the right to publish the work, no matter how cheap it is to publish to Kindle devices. Amazon has got to work harder to vet the publications it allows on its system. A rights registry would be a big help, just not one controlled by Google.

The wrong way to handle it was the way Amazon handled it. Rather than deleting the book and giving a refund, Amazon should have purchased legal copies and replaced the pirated copies. Then, it could have been presented as a benefit of buying from Amazon. Instead, they appeared to demonize the buyer of the unauthorized publication by taking it away.

Instead of sending the email that buyers of the pirated 1984 received, Amazon should have sent a note saying: “We’re sorry: you received a corrupted copy of George Orwell’s 1984 that is not up to the standards we expect when we sell a book. Here is a new copy of a really useful edition of the book.” Amazon lost money the way it handled the problem; it should, at least, have spent that money buying good will.

Let me take that back

For centuries, people have feared prior restraint, the judicial or institutional suppression of a publication, because the principle of free choice demands that people have the opportunity to read a document and decide for themselves. We’ve shed prior restraint only partially and unevenly in the past century. And, frankly, throughout history, information has managed to get out even when censored by the government or the church. The Reformation demonstrated the power of the press to circumvent institutions of censorship, as did the American and French Revolutions, the revolutions of 1848, and so on.

Now, however, it is clear that after a document has been distributed it can be revoked by an entity that controls access to the device on which the document is read; in other words, post hoc restraint (as distinct from the fallacious argument for causality known as “post hoc ergo propter hoc”). In an age when companies freely sign agreements with governments that limit what people may see on the Internet, such as the Chinese Communist Party’s government (which is, in effect, the government of both the party and the country, and so has a deeply vested interest in controlling information), a post hoc restraint technology is especially threatening to personal freedom. If, after you take the risk of downloading a controversial book, it can be removed from your e-reader and you can be targeted for police attention, reading becomes dangerous. A government could also circulate a sanctioned book simply to see who was interested in reading it.

This is also why the idea of advertising in books is terribly invasive to personal privacy. We have managed, to this point and only for the most part, to insulate our reading from commercial interest and surveillance.

Monopolies in publishing create powerful gatekeepers

A company that owns the end-to-end distribution infrastructure can become a servant of vested interests, whether its own or those of a government or political allies, and thereby a mechanism for political and intellectual control. Amazon, Sony, Apple (with the iPhone), Microsoft, and Google (with its Web services, applications and OSes) all represent potentially devastating systems for thought control, making the 1984 situation all the more ironic.

Open systems that allow a variety of devices and documents to interact, without any intervention by the providers of documents or of access to services, are necessary to ensure humans can still subvert institutional control of their reading. I’ve also argued that cryptography in books would facilitate a wide range of personal and communications features, such as private annotations and public discussion embedded in books, but those same features would be very useful for systemic control of reading if they are not offered as an opt-in service by providers dedicated to readers’ privacy. Hence, the providers of such services will need to be insulated from any economic interest in the content of what they encode and communicate.

Some have written that buying a Kindle is a waste of money because Amazon could simply delete all your books after you pay for them. That’s not the danger here, since any company that did that would not stay in business long. The fatal hazard is in a company that can selectively enforce censorship or otherwise restrict access to ideas, for whatever reason. There must always be alternative channels for getting books and having private access to books’ content.

It’s early in this e-volution. We haven’t hit our Reformation yet. Only after that conflagration will personal privacy and choice be self-evident, but the 1984 “refunds” gave us a brief glimpse of the potentially devastating downside of a fully digital information infrastructure.

Categories
Author & Publisher Strategies

When “evil” destroys dialogue

Digital rights management, or DRM, the technology used by many publishers to prevent unauthorized copying of their digital titles, is the subject of intensely emotional debate, so much so that the discussion seldom rises above claims that the technology is “evil” or “not evil.” Michael Bhaskar of Pan Macmillan leaps into this perennial debate with a nicely reasoned piece that, nevertheless, seeks to justify the idea of limits on the use of a text. (TeleRead also likes the effort.)

I don’t think DRM is a good idea. It makes using a digital product harder than it needs to be. It also represents the fear, among publishers and some authors, that their work will be undermined by people who would give it away freely.

However, DRM is also built on something that could be incredibly useful in a shared e-book: cryptographic identification of multiple readers, so that their annotations and discussions can be parsed logically and presented selectively. I’ve written about this before, in hopes of lifting the smog of DRM from the potentially useful features that underlie it.

If the publishing industry let books be copied freely across more than a few devices, for example, it would create business opportunities: even those who receive a book at no cost could pay a small fee, comparable to the price of an e-book, to add their own thoughts to the page or to discuss the book with others, with cryptographic technology coordinating who could see their notes, or selected notes. (Some annotations may need to remain private because they are controversial or too sensitive to be exposed publicly, but they still provide a personal point of reference for framing a discussion linked to the same place in the book.)
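To make that more concrete, here is a minimal, hypothetical sketch of how cryptographic identification of readers could gate who sees which annotations. Every name in it is illustrative, and a simple HMAC stands in for the per-reader signatures a real service would use; this is not a scheme any publisher or vendor has proposed.

```python
# Hypothetical sketch: each annotation is tied to a reader identity and a
# location in the book, carries a visibility setting, and is authenticated
# with the reader's key (HMAC here as a stand-in for a real signature).
import hmac
import hashlib
from dataclasses import dataclass

@dataclass
class Annotation:
    reader_id: str     # cryptographic identity of the annotator
    book_id: str       # the edition or copy the note is anchored to
    location: str      # e.g. a chapter/paragraph anchor
    text: str
    visibility: str    # "private", "group", or "public"
    tag: str = ""      # authentication tag proving who wrote the note

def sign(note: Annotation, reader_key: bytes) -> Annotation:
    """Attach an HMAC tag so the note can be attributed to its reader."""
    payload = f"{note.reader_id}|{note.book_id}|{note.location}|{note.text}".encode()
    note.tag = hmac.new(reader_key, payload, hashlib.sha256).hexdigest()
    return note

def visible_to(note: Annotation, viewer_id: str, group: set[str]) -> bool:
    """Decide whether a viewer may see a note, based on its visibility."""
    if note.visibility == "public":
        return True
    if note.visibility == "group":
        return viewer_id in group
    return viewer_id == note.reader_id   # private notes stay with their author

# Example: one note shared with a reading group, one kept private.
key = b"reader-alice-secret"
reading_group = {"alice", "bob"}
notes = [
    sign(Annotation("alice", "1984-epub", "ch1-p3", "Compare to Zamyatin", "group"), key),
    sign(Annotation("alice", "1984-epub", "ch2-p1", "Too personal to share", "private"), key),
]
print([n.text for n in notes if visible_to(n, "bob", reading_group)])  # group note only
```

The point is simply that the same primitives DRM uses to lock readers out could instead let readers decide, note by note, who is let in.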

No, DRM isn’t “evil,” it’s just a barrier to greater use of the text. Turn the whole argument upside down—what could we do with a freely shared crypto-enabled document that let readers integrate their notes and other reading? How could we maintain vast personalized libraries and reference databases secured in the same way that a cash payment at a bookstore provides anonymity?

Then the problem isn’t unauthorized copies; it is how to identify one’s own copy, so that readers can share and use the information in more meaningful ways. If anyone doubts this is a viable approach, look at the growing use of OpenID and Facebook logins to parse and present social relationships with greater personal context online.

Forget “evil”; it’s a meaningless term in the context of computer science.

Categories
Author & Publisher Strategies Book and Reading News

Sourcebooks tries DRM-free, multi-format romance

Sourcebooks, an independent publisher of trade print and e-books, has partnered with self-publishing services developer Smashwords.com to offer DRM-free editions of 14 romance titles, Publishers Weekly reports. The company’s Casablanca romance imprint will release the titles in nine formats priced at $6.99. Readers will be able to access purchased e-book files on any compatible reader or application, allowing them to move e-books from one compatible device to another.

Sourcebooks offers Adobe eBook versions of its titles through its own site and is also launching titles, though not necessarily DRM-free, on Scribd.com.

“There is discussion surrounding DRM, and while partnering with Smashwords does not mean we endorse DRM-free across the board, it does mean that we’re open to exploring different possibilities to better serve our customers,” said Sourcebooks CEO Dominique Raccah in a statement.

In other words, this really is an experiment that will shape Sourcebooks’ strategy. It’s a chance to vote for DRM-free books with your hard-earned cash.