Social Networking and Changing Terms of Service

Last month brought a lot of hoopla over Facebook’s changes to its terms of service agreement with users. (See the references below for more reading.) Now it seems that Eastman Kodak Co. has also made a change that has generated some user ire. According to a recent AP story, Kodak’s free online photo hosting service is no longer free. It sounds like Kodak is asking users to make a modest minimum purchase in order to keep using the storage service. Users who fail to do that risk having their photos deleted.

These two cases sound like they are at opposite ends of the spectrum. Kodak’s change sounds reasonable to me: they don’t want to provide free storage for people who never make a purchase, so they’re asking customers to buy a few photos. On the other end, Facebook has essentially told its users that even if they delete their accounts, Facebook has the right to do what it wants with their content forever. Can you imagine Facebook taking one of your photos and using it in an advertising campaign? Sounds like they have given themselves the right to do just that.

Now as I said, Kodak sounds reasonable, and Facebook sounds unreasonable. The thing that really surprises me, though, is what people are getting upset about. From a lot of the reading I’ve done, people are not as upset about the new TOS as they are that the terms have changed at all. They somehow seem to think that they are entitled to unchanging usage agreements. Why? Yeah, we pretty much get that when we buy a piece of software, but TOS agreements change OFTEN with SERVICES. Anyone still paying the same cable, electricity, telephone, or water rates they were 10 years ago? I doubt it. Economic conditions change, management changes, company goals change, and terms of service agreements change. How does the Internet generate this sense of entitlement that makes people think they should have a free ride forever, and that companies should never be allowed to alter their terms of service? You know most providers include a clause that says they can change the TOS at any time. Or did you miss that? It’s interesting to note that enough people complained that Facebook reversed the decision.


References

Facebook’s New Terms Of Service: "We Can Do Anything We Want With Your Content. Forever."
Facebook Responds to Concerns Over Terms of Service
Facebook Terms of Use
Consumers can be stuck when Web sites change terms
Facebook Reverts Back to Old Terms of Service

Bandwidth Caps: Stifling Creativity and New Web Apps

A recent article caught my eye, and it reminded me of the bandwidth cap discussions I’ve read about. This article describes the effect that bandwidth caps could have on users of new services such as the OnLive game-streaming service. OnLive estimates that data usage will be roughly 1 gigabyte per hour of high-definition gaming. According to the article, Frontier Corp., a regional communications company, is imposing a bandwidth cap of 5 gigabytes per month. This means that potential users can play for approximately 5 hours per month before the company slaps them with extra charges.

TechRepublic also carried an article examining the impact that this could have on telecommuters. Many people are probably familiar with the Comcast decision to impose a 250 GB per month bandwidth cap on residential customers. Customers who go above the 250 GB limit will receive a pleasant little call from Comcast reps warning them about their “excessive usage.”

According to Comcast’s amendment to their acceptable use policy, they feel that their limit is ample for most customers. They provide these examples of customer data usage based on a 250 GB limit:

  • Send 50 million emails (at 0.05 KB/email)
  • Download 62,500 songs (at 4 MB/song)
  • Download 125 standard-definition movies (at 2 GB/movie)
  • Upload 25,000 hi-resolution digital photos (at 10 MB/photo)
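
As a sanity check, those per-item figures can be worked out directly from a 250 GB cap. Here’s a quick back-of-the-envelope calculation (using decimal units, 1 GB = 1,000 MB, which appears to be how the published figures were derived); interestingly, the email example doesn’t quite add up:

```python
# Back-of-the-envelope check of Comcast's examples against a 250 GB cap,
# using decimal units (1 GB = 1,000 MB).
CAP_MB = 250 * 1000  # 250 GB expressed in megabytes

print(CAP_MB // 4)           # songs at 4 MB each    -> 62500
print(CAP_MB // (2 * 1000))  # movies at 2 GB each   -> 125
print(CAP_MB // 10)          # photos at 10 MB each  -> 25000

# Dividing 250 GB across 50 million emails gives about 5 KB per email,
# so the "0.05 KB/email" figure in the announcement looks off by a
# couple orders of magnitude.
print(CAP_MB * 1000 // 50_000_000)  # KB per email   -> 5
```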

These numbers are interesting, but they are really only the beginning of helping customers understand their usage habits. What about customers who play MMORPGs such as World of Warcraft or Warhammer? What about people who play games on consoles such as the Xbox, PlayStation, or Wii over the network? What about those who stream movies from services such as Netflix?

I’m still trying to figure out the best billing model for home Internet users. The obvious way to look at it is by comparison with existing utility rates. Some utilities charge based on consumption: electricity may be billed by the kilowatt-hour, and water by the gallon or cubic foot. (However, people in apartments sometimes have leases that include unlimited power and water.) In contrast, cable or satellite TV service is unlimited for a single monthly fee, with extra charges for premium or pay-per-view services. I think perhaps a telephone/cell phone model may be more appropriate: depending on your expected usage, you can choose either a pay-per-minute plan or an unlimited plan.
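
To make the comparison concrete, here’s a minimal sketch of those two billing models in Python. The rates are entirely made up for illustration; no provider’s actual pricing is implied:

```python
def flat_rate_cost(monthly_fee: float) -> float:
    """Unlimited usage for a single fee (the cable-TV model)."""
    return monthly_fee

def metered_cost(gb_used: float, base_fee: float, included_gb: float,
                 per_gb_overage: float) -> float:
    """A base fee covers an allowance; overage is billed per GB
    (the cell-phone model)."""
    overage = max(0.0, gb_used - included_gb)
    return base_fee + overage * per_gb_overage

# Hypothetical rates: $45/month unlimited vs. $30 for 50 GB + $1/GB over.
for usage in (20, 50, 100):
    print(usage, flat_rate_cost(45.0), metered_cost(usage, 30.0, 50.0, 1.0))
```

Under these made-up rates, the metered plan wins for light users, and the flat plan wins once usage runs far enough past the allowance.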

I think one of the biggest potential problems of bandwidth caps lies in its effect on user adoption of new services, or perhaps users’ willingness to even try new services. Suppose you were considering any new Internet-enabled technology. If you didn’t know how it would impact your bandwidth consumption, you might be less willing to give it a try. Remember all those silly cell phone commercials where customers had to save their calls until the middle of the night when their rates were the lowest? Imagine an equally silly situation in which you can only try a new application at the very end of the month with your last half gigabyte of bandwidth.

The model established for high-speed residential Internet service is one of unlimited use for a flat fee. High-speed Internet service has undoubtedly spurred the development of many new services and programs, but if our Internet usage is going to be capped, maybe we won’t need those services after all.

References

Streaming games could be bane or boon for ISPs
ISP bandwidth limits may have unclear impact on telecommuters
It’s official: Comcast starts 250GB bandwidth caps October 1
Announcement Regarding An Amendment to Our Acceptable Use Policy

Your Social (After)Life

So what exactly happens when someone disappears from your social network and is never heard from again? Did they just move on to other activities? Or did they get mad at someone in the circle and write you all off? Or did they perhaps . . . die?

A recent AP story highlighted a few tales where the latter was actually the case. A person died, and relatives were left trying to make contact with online friends to let them know what had happened. Seems like a few enterprising folks have found a new way to make money out of death: a couple of online services will take care of these after-death notifications for you so your friends won’t be left wondering.

For more information . . .

http://www.deathswitch.com
http://www.slightlymorbid.com

And Another One Gone

We just had a major newspaper announcement last week, and it looks like the Ann Arbor News is the latest victim. It sounds like the economy, coupled with the new ways in which readers consume news, is really putting the hurt on newspapers. The word is that the paper “will be replaced by a Web-focused community news operation.” Sounds kind of like that 150-citizen-blogger approach we heard about from the Seattle Post-Intelligencer.

It seems that in casting about for a way to survive, these organizations are really struggling to find models that work. According to the news story, Ann Arbor folks are saying that “the new free Web site won’t simply be the old newspaper delivered in a new format.” I can understand their need to try new things, but a community information portal simply isn’t the same thing as a newspaper, and that leads me to wonder who will provide balanced, accurate, insightful news – not just in Ann Arbor, but in all markets affected by changes like this.

My next question is about how we will be able to preserve the local history captured in these new community blog-o-portals. Libraries understand what it means to preserve newspapers in various formats: paper, microfilm, digital, etc. The Internet Archive knows what it means to preserve websites. But is there a natural fit here? Assuming that these new electronic news outlets contain content that should be preserved, can The Internet Archive capture these newspapers on a daily basis? If it can, perhaps that will be enough for casual users and serious researchers. But if it can’t?

Another One Bites the Dust

It was announced yesterday that today’s edition of the Seattle Post-Intelligencer will be the final print version of this 146-year-old paper. One can’t help reading the story without hearing the “Print is dead” cries echoing in one’s ears. Amidst all the talk about new business models and transitioning to a new online format, I can’t help thinking that “20 news gatherers and Web producers,” “20 newly hired advertising sales staff,” and “150 citizen bloggers” will never be able to cover the news like an experienced news staff.

We know that the Kindle can deliver content from major U.S. newspapers. Is the SPI “major” enough to merit some Kindle attention? Even if it does, you still won’t be able to read it on the plane during takeoff and landing. And therein lies part of my concern with the whole “print is dead” movement. Now don’t get me wrong – I like electronic books. I’ve been through many, and I have about 50 on my Palm Treo now. But there are some places/times where/when my device is not allowed. Beyond that, traditional print books and newspapers never need to be recharged, never need a network connection, and never have to be migrated to a new hardware/software platform. I can easily loan my print book to a friend, but I’m certainly not going to loan them my Treo!

I hope that the various facets of the publishing industry can find a comfortable balance before the pendulum swings too far.

(For the record, I first read this story on my Palm Treo when it was delivered through Pocket Express. I read follow-up material on various web sites.)

References

Seattle P-I to publish last edition Tuesday
Seattle Post-Intelligencer prints final edition in online transition
First big US newspaper goes web only

Thinking ’bout that cloud thingy

Seems like every couple of days we’re hearing more about data in the cloud. I’ve thought about this in considering the iPhone, and I’m still thinking about it as I read more about the Palm Pre. One of my biggest gripes about the iPhone has to do with syncing data across multiple computers. I do this all the time with my old Palm Treo: I drop it in a cradle attached to my desktop computer, I drop it into a cradle attached to my home desktop, and I connect it to a cable attached to my laptop. The frequency varies, but the long and short of it is that if I’m in a network dead zone and I need to get something from a computer onto my handheld (or vice versa), I can connect a cable, push a button, and I’m done.

Not so with the iPhone. If I need to sync data across multiple computers with the iPhone, I have to subscribe to Apple’s $99 per year MobileMe service. Even after subscribing to that service, I wouldn’t be able to sync the way I want to: I can’t simply connect the two devices with a cable and push a button. MobileMe depends upon having a network connection so that my data can copy itself to Apple’s server in the cloud. Eventually the updates trickle across the network to the other computers.

As I read more about the Palm Pre and Palm’s new WebOS, it seems that this device will follow the same path, and that disappoints me.

As I’ve previously mentioned, I simply may not want some of my data to live in the cloud. Yet I still need to synchronize with multiple computers. What’s a person to do then? And even setting that aside, there are other considerations. I live in an area with MANY network dead zones, so it simply isn’t always possible to sync data. And think about travelers: do you really want to pay for an hour’s worth of network time in every airport you pass through just to keep all your data synchronized? Probably not.

I’m trying to keep an open mind about cloud data and apps. Some people love it, and it works well for them. That’s great for those people, but it shouldn’t come at a cost in functionality. If data synchronization with the cloud is just another option, then that’s a good way to go. Nothing wrong with giving people options. But why take away functionality that already works? Don’t do it, Palm!

Deep Web Indexing

I came across an interesting New York Times article several days ago: Exploring a ‘Deep Web’ That Google Can’t Grasp. The article explores a shortcoming of current search technologies that librarians have known about and struggled with for quite some time. As good as current search engines may be, they rely primarily on crawlers or spiders that essentially trace a web of links to their ends. That works for a lot of content out on the Internet, but it doesn’t do so well for information contained in databases. So . . . library catalogs, digital library collections, a lot of the things that libraries do aren’t being picked up by the major search engines.

Of course at some level that makes perfect sense. When a web crawler comes to a page with a search box, how is it supposed to know what to do? It needs to input search terms to retrieve search results, but what search terms are appropriate? Is it searching an online shopping website? A tech support knowledgebase? A library catalog? This discussion surfaces again and again particularly as we talk about one of our digital collections. There is a wealth of information here for people researching the history of accounting, but it resides in a database. The database works perfectly well for humans doing a search. The only problem is that they have to find out about the database first. Now we’ve done a number of things to get the word out: papers, conference presentations, a Wikipedia article . . . If we’re lucky, these things will get users to the top level of the collection. Hopefully once they’re there, their research will draw them in. (In case anyone notices, I should get credit for positioning that set of homonyms like that!)
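
The crawler’s dilemma is easy to see in miniature. Here’s a hedged sketch using only Python’s standard library: a naive link-following crawler happily harvests `<a href>` targets, but a search form gives it nothing to follow. (The toy page, its `/search` action, and the `q` field are all made up for illustration.)

```python
from html.parser import HTMLParser

class NaiveCrawlerView(HTMLParser):
    """Records what a link-following crawler can and can't use on a page."""
    def __init__(self):
        super().__init__()
        self.links = []   # followable: plain hyperlinks
        self.forms = []   # dead ends: search boxes with no URLs to follow

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])
        elif tag == "form":
            self.forms.append(attrs.get("action", ""))

# A toy catalog page: one navigation link, one search form.
page = """
<a href="/about.html">About the collection</a>
<form action="/search"><input name="q"></form>
"""

p = NaiveCrawlerView()
p.feed(page)
print(p.links)  # ['/about.html'] -- the crawler can follow this
print(p.forms)  # ['/search'] -- but it has no idea what to type into q
```

Everything behind that form – the actual database records – stays invisible unless the crawler can somehow guess meaningful queries.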

But getting them there in the first place – that’s the hard part. That’s why I have so much hope for deep web indexing. If researchers can build tools that will look into our databases intelligently, then extensive new levels of content will be opened up to everyone. In particular I think about students who decide that the first few search engine hits are “good enough” for their school project. Usually they’re not good enough, but the students don’t always realize that. If new search engines can truly open up the deep web, the whole playing field changes!