And All Work Stops

I’m sitting here late at night in the hotel working away as I build a survey. Let me say again, it’s late at night. Now I like my sleep as much as the next person, but late can be good. Late at night there is very little e-mail coming in. E-mail. You know – that nagging little thing that sits in your inbox and quietly demands a response. Late at night there are no phone calls. There are no updates from social networking sites. Late at night you can really dig into a project and just cruise . . .


Until . . .


Somewhere for some reason some part of the network goes down.


And all work stops.


The survey questions themselves are essentially written. I’m busy adding the login questions to branch my respondents to varying sets of questions depending upon their answers. Or rather, I was. Before the network went down.


This reminds me of an article I read earlier today. The point of the article is to highlight the inherent dangers of relying too much on cloud data and applications. A lot can happen. Servers can go down. Network connections can go down. Something between you and your data or app can go down. Of course when everything is working fine, it’s all very convenient.


But when something does go down, all you can do is sit, fume, and wait for services to be restored. I find it both interesting and frustrating that the very tools that enable us to do much of our work are also the same tools that prevent us from being able to do our work. Yes, a fascinating irony.


So I’m tired of sitting and waiting. I guess I’ll post this tomorrow. Sometime. When my network connection comes back up.


Reference: Google Users Live By the Cloud, Die By the Cloud

Social Networking and Changing Terms of Service

Last month brought a lot of hoopla over Facebook’s change to the terms of service agreements with users. (See references below for more reading.) Now it seems that Eastman Kodak Co. also has a change that has generated some user ire. According to a recent AP story, Kodak’s free online photo hosting service is no longer free. It sounds like Kodak is asking users to make a modest minimum purchase in order to keep using the storage services. Users who fail to do that risk having their photos deleted.

These two cases sound like they are at extreme ends of the spectrum. Kodak’s change sounds reasonable to me. They don’t want to just provide free storage for people who never make a purchase, so they’re asking customers to buy a few photos. On the other end, Facebook has essentially told its users that even if they delete their accounts, Facebook has the right to do what it wants to with their content forever. Can you imagine Facebook taking one of your photos and using it in an advertising campaign? Sounds like they have given themselves the right to do just that.

Now as I said, Kodak sounds reasonable, and Facebook sounds unreasonable. The thing that really surprises me, though, is what people are getting upset about. From a lot of the reading I’ve done, people are not as upset about the new TOS as they are that the terms have changed at all. They somehow seem to think that they are entitled to unchanging usage agreements. Why? Yes, we pretty much get that when we buy a piece of software, but TOS agreements change OFTEN with SERVICES. Is anyone still paying the same cable, electricity, telephone, or water rates they were 10 years ago? I doubt it. Economic conditions change, management changes, company goals change, and terms of service agreements change. How does the Internet generate this sense of entitlement that makes people think they should have a free ride forever, and that companies should never be allowed to alter their terms of service? You know, most providers include a clause that says they can change the TOS at any time. Or did you miss that? It’s interesting to note that enough people complained, and Facebook reversed the decision.


References

Facebook’s New Terms Of Service: "We Can Do Anything We Want With Your Content. Forever."
Facebook Responds to Concerns Over Terms of Service
Facebook Terms of Use
Consumers can be stuck when Web sites change terms
Facebook Reverts Back to Old Terms of Service

Bandwidth Caps: Stifling Creativity and New Web Apps

A recent article caught my eye, and it reminded me of the bandwidth cap discussions I’ve read about. This article describes the effect that bandwidth caps have on users of new services such as the OnLive gaming service. OnLive estimates that data usage will be roughly 1 gigabyte per hour of high-definition gaming. According to the article, Frontier Corp., a regional communications company, is imposing a bandwidth cap of 5 gigabytes per month. This means that potential users can play games for approximately 5 hours per month before the company slaps them with extra charges.

TechRepublic also carried an article examining the impact this could have on telecommuters. Many people are probably familiar with the Comcast decision to impose a 250 GB per month bandwidth cap on residential customers. Customers who go above the 250 GB limit will receive a pleasant little call from Comcast reps warning them about their “excessive usage.”

According to Comcast’s amendment to their acceptable use policy, they feel that their limit is ample for most customers. They provide these examples of customer data usage based on a 250 gb limit:

  • Send 50 million emails (at 5 KB/email)
  • Download 62,500 songs (at 4 MB/song)
  • Download 125 standard-definition movies (at 2 GB/movie)
  • Upload 25,000 hi-resolution digital photos (at 10 MB/photo)
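
The examples above can be sanity-checked with a quick calculation. This sketch uses decimal units (1 GB = 10^9 bytes), which is how the per-item figures work out; note that the 5 KB/email size is an assumption on my part, chosen because it is the value that makes the email example consistent with the 250 GB cap.

```python
# Check how many of each item fit under a 250 GB monthly cap.
# Sizes are the per-item estimates from the examples above.
CAP_BYTES = 250 * 10**9  # 250 GB, decimal units

examples = {
    "emails (5 KB each)":   5 * 10**3,
    "songs (4 MB each)":    4 * 10**6,
    "movies (2 GB each)":   2 * 10**9,
    "photos (10 MB each)": 10 * 10**6,
}

for item, size in examples.items():
    print(f"{item}: {CAP_BYTES // size:,} fit in 250 GB")
# -> 50,000,000 emails, 62,500 songs, 125 movies, 25,000 photos
```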

These numbers are interesting, but they are really only the beginning of helping customers understand their usage habits. What about customers who play MMORPGs such as World of Warcraft or Warhammer? What about people who play console games on an Xbox, PlayStation, or Wii over the network? What about those who stream movies from services such as Netflix?

I’m still trying to figure out the best billing model for home Internet users. The obvious way to look at it is by comparing it to existing utility rates. Some utilities charge based on consumption. Electricity may be charged based on kilowatt-hours, and water may be charged on a per-gallon or per-cubic-foot basis. (However, people in apartments sometimes have leases that include unlimited power and water.) In contrast, cable or satellite TV service is unlimited for a single monthly fee, with extra charges for premium or pay-per-view services. I think perhaps a telephone/cell phone model may be more appropriate. Depending on your expected usage, you can choose either a pay-per-minute plan or an unlimited plan.
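
The choice between a metered plan and an unlimited plan comes down to a break-even point, which is easy to sketch. All of the prices below are made-up numbers for illustration, not anyone’s actual rates.

```python
# Hypothetical break-even between a flat-rate plan and a metered plan.
FLAT_MONTHLY = 45.00   # flat fee, unlimited use (assumed)
METERED_BASE = 15.00   # metered plan base fee (assumed)
PER_GB = 0.50          # metered price per gigabyte (assumed)

def metered_cost(gb):
    """Monthly cost on the metered plan for a given usage in GB."""
    return METERED_BASE + PER_GB * gb

# Usage level at which the two plans cost the same.
break_even_gb = (FLAT_MONTHLY - METERED_BASE) / PER_GB
print(f"Flat plan wins above {break_even_gb:.0f} GB/month")

for usage in (20, 60, 120):
    better = "metered" if metered_cost(usage) < FLAT_MONTHLY else "flat"
    print(f"{usage} GB -> metered ${metered_cost(usage):.2f}, pick {better}")
```

With these assumed rates the light user (20 GB) is better off metered, while anyone past 60 GB should take the flat fee, which is exactly the trade-off cell phone plans present.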

I think one of the biggest potential problems of bandwidth caps lies in their effect on user adoption of new services, or perhaps users’ willingness to even try new services. Suppose you were considering a new Internet-enabled technology. If you didn’t know how it would impact your bandwidth consumption, you might be less willing to give it a try. Remember all those silly cell phone commercials where customers had to save their calls until the middle of the night when their rates were the lowest? Imagine an equally silly situation in which you can only try a new application at the very end of the month with your last half gigabyte of bandwidth.

The model established for high-speed residential Internet service is one of unlimited use for a flat fee. High-speed Internet service has undoubtedly spurred the development of many new services and programs, but if our Internet usage is going to be capped, maybe we won’t need those services after all.

References

Streaming games could be bane or boon for ISPs
ISP bandwidth limits may have unclear impact on telecommuters
It’s official: Comcast starts 250GB bandwidth caps October 1
Announcement Regarding An Amendment to Our Acceptable Use Policy

Your Social (After)Life

So what exactly happens when someone disappears from your social network and is never heard from again? Did they just move on to other activities? Or did they get mad at someone in the circle and write you all off? Or did they perhaps . . . die?

A recent AP story highlighted a few tales where the latter was actually the case. A person died, and relatives were left trying to make contact with online friends to let them know what had happened. Seems like a few enterprising folks have found a new way to make money out of death. A couple of online services will take care of these after-death notifications for you so your friends won’t be left wondering.

For more information . . .

http://www.deathswitch.com
http://www.slightlymorbid.com

And Another One Gone

We just had a major newspaper announcement last week, and it looks like the Ann Arbor News is the latest victim. It sounds like the economy and the new ways in which readers consume news are combining to really put the hurt on newspapers. The word is that the paper “will be replaced by a Web-focused community news operation.” Sounds kind of like that 150-citizen-blogger approach we heard about from the Seattle Post-Intelligencer.

It seems that in casting about for a way to survive, these organizations are really struggling to find models that work. According to the news story, Ann Arbor folks are saying that “the new free Web site won’t simply be the old newspaper delivered in a new format.” I can understand their need to try new things, but a community information portal simply isn’t the same thing as a newspaper, and that leads me to wonder who will provide balanced, accurate, insightful news – not just in Ann Arbor, but in all markets affected by changes like this.

My next question is about how we will be able to preserve the local history captured in these new community blog-o-portals. Libraries understand what it means to preserve newspapers in various formats: paper, microfilm, digital, etc. The Internet Archive knows what it means to preserve websites. But is there a natural fit here? Assuming that these new electronic news outlets contain content that should be preserved, can the Internet Archive capture them on a daily basis? If it can, perhaps that will be enough for casual users and serious researchers. But if it can’t?

Deep Web Indexing

I came across an interesting New York Times article several days ago: Exploring a ‘Deep Web’ That Google Can’t Grasp. The article explores a shortcoming of current search technologies that librarians have known about and struggled with for quite some time. As good as current search engines may be, they rely primarily on crawlers or spiders that essentially trace a web of links to their ends. That works for a lot of content out on the Internet, but it doesn’t do so well for information contained in databases. So . . . library catalogs, digital library collections, a lot of the things that libraries do aren’t being picked up by the major search engines.

Of course at some level that makes perfect sense. When a web crawler comes to a page with a search box, how is it supposed to know what to do? It needs to input search terms to retrieve search results, but what search terms are appropriate? Is it searching an online shopping website? A tech support knowledgebase? A library catalog? This discussion surfaces again and again particularly as we talk about one of our digital collections. There is a wealth of information here for people researching the history of accounting, but it resides in a database. The database works perfectly well for humans doing a search. The only problem is that they have to find out about the database first. Now we’ve done a number of things to get the word out: papers, conference presentations, a Wikipedia article . . . If we’re lucky, these things will get users to the top level of the collection. Hopefully once they’re there, their research will draw them in. (In case anyone notices, I should get credit for positioning that set of homonyms like that!)
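
The crawler’s dilemma described above can be sketched in a few lines. The page markup below is a made-up miniature of a library site: a crawler can harvest every link it sees, but when it hits the search form it has no idea what terms to type in, so everything behind that form stays invisible.

```python
# Minimal sketch of why crawlers miss database-backed ("deep web") content:
# links are followable, but a search form is a dead end without query terms.
from html.parser import HTMLParser

PAGE = """
<a href="/about.html">About</a>
<a href="/contact.html">Contact</a>
<form action="/catalog/search">
  <input type="text" name="query">
</form>
"""

class CrawlerView(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []   # URLs a crawler can follow
        self.forms = []   # form targets it cannot explore blindly

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])
        elif tag == "form":
            self.forms.append(attrs.get("action"))

p = CrawlerView()
p.feed(PAGE)
print("Can follow:", p.links)
print("Cannot explore without search terms:", p.forms)
```

Deep web indexing projects try to close exactly this gap, by probing forms with plausible query terms or by getting sites to expose their database contents in a crawlable way.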

But getting them there in the first place – that’s the hard part. That’s why I have so much hope for deep web indexing. If researchers can build tools that will look into our databases intelligently, then extensive new levels of content will be opened up to everyone. In particular I think about students who decide that the first few search engine hits are “good enough” for their school project. Usually they’re not good enough, but the students don’t always realize that. If new search engines can truly open up the deep web, the whole playing field changes!