
CK12 Solutions

Monolithic nature

Averaged over the entire school year (including weekends and holidays), the typical student has about 18 pages of textbook reading for every day of the year.

Contextualized content
Empower knowledge providers – parents, teachers, librarians
Collaborative platform
Flexible output

Create a collaborative online environment
Free and open content
Aligned with curriculum guidelines
Customized for learning styles

Technology goals
As simple as email
Collaborative like wikis

Content contributed under a Share-Alike license
Site will go live in August.

Online interface allows construction or compilation of flexbooks for customized instruction.

“Open Source, Open Services”

Darrel W. Gunter
Collexis Holdings, Inc.

“Collexis High Definition Search enables extraordinary knowledge retrieval and discovery quickly and accurately by utilizing fingerprinting technology. The Collexis Fingerprint empowers users to immediately identify and search for documents, experts, trends, and new discoveries more quickly, accurately – and deeply – than conventional search engines. For users the savings in research dollars are extraordinary. High Definition Search positions Collexis as a world leader in the vital areas of knowledge management and discovery software.” Taken from

The Semantic Web and Its Developments

The Semantic Wave
Project 10x
Phase 1: The first wave of the web
Phase 2: The Social web – connecting people and ideas
Phase 3: The semantic web, the ability to extract knowledge
Phase 4: The Ubiquitous Web, Artificial Intelligence

Collexis Fingerprint Engine

Software creates fingerprints of documents. Analogy to using a highlighter to mark key phrases and concepts.

Technology can indicate plagiarism as well as pointing out papers that support your own.

Explore instead of search – a pre-populated social network shows which scientists are working together, their geographical locations, co-authors, etc.
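The fingerprint idea can be illustrated with a minimal sketch. This is not Collexis's actual algorithm (which is proprietary); it simply shows the general technique the highlighter analogy describes: reduce each document to a weighted set of key terms, then compare documents by the overlap of their term sets.

```python
from collections import Counter
import math

# A tiny illustrative stopword list; a real system would use a curated thesaurus.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "for"}

def fingerprint(text):
    """Build a crude concept fingerprint: term -> weight (here, raw frequency)."""
    words = [w.strip(".,;:").lower() for w in text.split()]
    terms = [w for w in words if w and w not in STOPWORDS]
    return Counter(terms)

def similarity(fp_a, fp_b):
    """Cosine similarity between two fingerprints (0 = disjoint, 1 = identical)."""
    shared = set(fp_a) & set(fp_b)
    dot = sum(fp_a[t] * fp_b[t] for t in shared)
    norm = (math.sqrt(sum(v * v for v in fp_a.values()))
            * math.sqrt(sum(v * v for v in fp_b.values())))
    return dot / norm if norm else 0.0
```

Comparing one paper's fingerprint against a corpus would surface supporting papers (high similarity) and, at near-identical scores, possible plagiarism – the two uses mentioned above.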


Posted: June 30, 2008 in ala 2008, conferences, wifi, wireless

People goof. It happens sometimes. But there are goofs, and then there are GOOFS! ALA is providing wireless network access for this year’s conference. GREAT! Only problem is, it doesn’t really appear to be there.

According to both the ALA Program Guide and Website:
“Once again the ALA will be offering Wifi internet access to all attendees of the Annual Meeting at no charge. Wifi “hot zones” will be in ALL {emphasis added} of the public areas (lobbies, meeting rooms, and ballrooms) but not in the exhibit halls. Persons with Wifi-enabled devices will be able to access the internet simply by connecting to the “ALA2008” network.”

So far I’ve been to meetings in three hotels, and I have yet to find this seemingly-mythical ALA2008 network. I’ve looked. Oh believe me, I’ve looked. So where is the breakdown? Did ALA mean to provide Wifi only in the convention center? Surely not. There are meetings in numerous hotels, and people need network connections. Did ALA somehow forget to make appropriate arrangements with the conference hotels? Again, surely not. That would be a pretty big oversight. I wonder just what exactly happened?

ALA and some of its various units have been encouraging people to blog about what’s happening in the conference. If you enjoy reading or writing blogs, this could be an interesting way to expand a meeting’s discussion beyond the boundaries of the room. But . . . to blog you need a connection. If the connection isn’t there, nothing else can happen. Try again next year, ALA.

During LITA’s annual Top Tech Trends program, a number of experts are invited to offer comments on what they view as some of the trends to watch. A few highlights are listed below. In some cases these may be paraphrased, and in others they may be direct quotes. For more detail, check out the podcast listed on the LITA blog.

Marshall Breeding

Open source trends in library automation

For a couple of decades things have been done in a certain way by a certain set of vendors. Open-source has fundamentally changed this. Libraries have perhaps been underserved by their vendors.

Open source ILS options: Koha, Evergreen, OPAL.

Although a number of libraries are moving towards an open source ILS, there is much more action in the public side than on the academic. The number of academics moving to open-source systems is much smaller than the “swell” of public libraries moving in this direction. It may take another year or two before we start seeing greater involvement on the part of academics.

Instead of traditional licensing arrangements, the emerging model is support of open source with contract programming, moving the focus of the revenue from the licensing side to the services side.

There is also a move towards open data.

ILS Discovery Layer Interface Committee – Working on standards that define interoperability between library automation systems and the new generation of front-end interfaces.

The Berkeley Accord
Some automation vendors have already signed on as part of this project.

Anyone developing library automation software has to respond to libraries’ demands for more openness.

Openness is great, but beware of the marketing pitch. Read the fine print, be skeptical, look deep. Are things as open as the vendors portray them? Are they doing things that really deliver new value through openness?

Karen Schneider

“You know that open-source is viable because people make money at it.”

Broadband – We never have enough of it, and we seem to be in a perpetual cycle of catch-up. These limitations drive library policy and practice. Some libraries can’t explore certain initiatives because their broadband simply won’t support it. No federal broadband strategy.

Open-source – We have come full circle with our automation history. Librarians are now writing their own software, charting their own destinies.

Sarah Houghton-Jan

Now people have faster Internet access at home than at libraries. The problem is multimedia. When you have a lot of people in libraries watching videos, playing online games, and streaming radio stations, you wind up with clogged bandwidth. (Probably more of an issue in public libraries than in academic?) More of IT budgets are going to be dedicated to broadband.

People talk a lot about things that are new and beautiful, but not so much about sustainability. At the outset, people are not thinking enough about how much effort it takes to sustain new projects. How many abandoned library blogs are out there? How many library myspace sites are not being maintained?

Libraries as organizations are not nimble. We need to look at how we make decisions and how we encourage innovation. Innovation is discouraged in many libraries. The structures and practices of our organizations create these barriers to innovation. Part of this is attributable to the age-old librarians’ fear of failure. We can’t try anything new unless it’s been planned to death and it has already been tried in 80% of other libraries (so we’re pretty sure it won’t fail for us). Staff are hesitant to innovate because of the seemingly insurmountable multi-level bureaucracies. People don’t have the time in their workdays to think about innovation, and trying to wade through the bureaucracy is a waste of time.

Clifford Lynch

There is a growing enthusiasm about open-source in libraries, the higher ed community in general, and cultural heritage organizations. Open source is wonderful, but it isn’t a panacea. There seems to be a widespread belief that by declaring something to be open-source, you can solve a wide range of financial, technical, and design problems that are otherwise insoluble.

After this enthusiasm, a backlash against open source is likely coming, as people try to calibrate a realistic view of where it solves problems effectively and where the difficulty is simply conserved. You need to be smart about it, and not overreact in either direction. Think about when open source makes sense, and when you’re just appealing to it to solve a problem that basically nobody knows how to solve in the first place.

Virtual Organizations – Concept from the ideas of cyber-infrastructure, collaboration across the network.

People need to be able to work together and set up work arrangements in a fairly agile fashion. There is a need to be able to do this in a mixture of synchronous and asynchronous ways. Real indications that travel will get more expensive and more difficult in the future. Besides fuel, airlines are dropping a significant amount of air transportation capacity. We’re going to find that a mixture of physical and telepresence will become more of the norm at meetings for participants and audience. (Cliff said that this was being handled pretty badly at this Top Tech Trends session!) Needs to get much better very quickly. Has implications for teaching and learning.

Network storage, cloud storage. This is starting to take off in a bigger way.

Move by libraries and cultural heritage institutions to make their digitized materials available outside the library in places such as Flickr. This is a “letting go” of holdings so that they can be reused and so that they can be put into contexts that are valuable to people.

Information overload – Social networking systems/social software. How quickly and how severely will we run into overload situations? We’re already seeing early signs of this.

Roy Tennant

The age of experimentation – revealed through a number of different projects – VuFind, Scriblio, Extensible Catalog Project. People are taking control, experimenting, and trying to find out what might be usable techniques.

Game-changing surprises such as Google digitizing entire libraries.

Data – Everyone needs to get really good at extracting data from whatever system it resides in: ILS, ERM, etc. You will be throwing those systems away at some point, so you need to be able to get your data out. Even if you keep those systems, you need to be able to analyze your metadata: find missing elements, find bad elements, do transformations.
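The kind of metadata audit Roy describes can be sketched in a few lines. This is illustrative only: the record layout (a dict with `title`, `author`, `isbn` keys) is an assumption, not any real ILS export format.

```python
REQUIRED_FIELDS = ("title", "author", "isbn")

def audit(records):
    """Report records with missing or obviously malformed elements.

    Returns a list of (record index, field name, problem) tuples.
    """
    problems = []
    for i, rec in enumerate(records):
        # Find missing elements.
        for field in REQUIRED_FIELDS:
            if not rec.get(field, "").strip():
                problems.append((i, field, "missing"))
        # Find bad elements: an ISBN should be 10 or 13 characters,
        # all digits except possibly a final check character.
        isbn = rec.get("isbn", "").replace("-", "")
        if isbn and not (len(isbn) in (10, 13) and isbn[:-1].isdigit()):
            problems.append((i, "isbn", "malformed"))
    return problems
```

The same pattern scales up: dump the records out of the system, run validation passes over them, and fix or transform before they go anywhere else.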

People – Take responsibility for your own professional development. Don’t expect training courses. Support people in the organization with the opportunity to learn what they need going forward.

Systems – Take control of your systems. Don’t be locked into an outdated version just because it’s so hard to upgrade. If it’s that hard, get out of that system.

Question for Roy: What can library schools do to help prepare students for an environment of constant technological change?
Answer: Good luck. The trouble there is that personality traits are needed rather than specific knowledge, and that is really hard to impart in library school. Either you’re the kind of person who loves change and loves to learn new things, or you’re the kind of person who likes to get comfortable in a position without having to learn new things. Library schools can’t really affect that. However, library schools can focus more broadly on concepts that can be applied in different technological situations: information retrieval, precision, recall. Those kinds of things last. Specific systems don’t last. However, it would be useful to teach people at least one specific programming language because that helps them talk to programmers. This is a role that librarians increasingly need to play.

Meredith Farkas

Social software – The role of social software in collecting local knowledge. Local wikis for cities. Why can’t libraries collect local knowledge to benefit everyone? Having a space to collect knowledge is very important. Libraries can be the online hub of local communities. This presence will in turn drive more visitors to the library website.

Libraries can provide an important technical and educational role in communities. Some libraries are doing extensive technology training in their communities. Example: The Public Library of Charlotte & Mecklenburg County.

Archiving blogs as historical artifacts. Many of the real current library conversations are happening in blogs. Will this knowledge be preserved for use in future research? This could change the way we think about how we archive materials.

John Blyberg

Green technology – In 2005 American consumers discarded 2.5 million tons of electronic equipment. Most contained some amount of toxic material. Manufacturers are coming up with new types of material that are more eco-friendly. Energy efficiency is also a rising concern. The Internet (and related computers, monitors, hardware) consumes roughly 350 billion kilowatt-hours of electricity annually in America, almost 10% of total energy generated in the U.S. Most of that power is wasted in the form of heat generated and energy required to cool hardware. New devices/innovations for ultra-low voltage and heat abatement.

In the future conferences may be transformed by technologies that allow more and better virtual participation. Less physical presence also translates into less total energy consumed.

Semantic web
A new Reuters API adds semantic markup to unstructured HTML documents. As you do a search and refine it, the software helps you locate other connections that you might not otherwise have found.

Converged Media Hubs
Portable media devices are actually mini-PCs, such as the iPhone. They can bring together a wide variety of content such as RSS feeds and live TV, and this is available in the palm of your hand. We may not be ready to accept the idea of reading books and watching entire movies on a handheld device, but this drastically and fundamentally changes the expectations of our users when they look at us as information providers. Those who use these devices heavily customize their experience so that it’s tailored to their needs. This indicates that people are beginning to develop a very personal relationship with information. As users build these customized information frameworks, they begin to place a good deal of personal reliance on them. This puts us in a position where we can help them do this.

The Library as Content Creator (not just content provider)
When users leave the library, they enter a world that is very highly produced. Highly crafted radio, television, and Internet experiences. Not necessarily highly crafted content, but definitely a highly crafted container. This creates an expectation from the users that their user experience will be well-presented and professionally developed. The problem is that users don’t appreciate it, they just expect it. When it works well, they take it for granted. When it doesn’t work well, they are highly judgmental, and they will disregard the content if the container is not up to their expectations.

Karen Coombs

APIs are becoming very important to libraries. Beyond just bibliographic APIs, libraries need to consider media sources such as Flickr. These APIs will enable libraries to pull data back from those media sources, or make it possible for faculty to simultaneously upload content to the institutional repository and to these media sources. Libraries have to find ways for faculty to put content into institutional repositories, discipline-specific repositories, and government repositories with a single submission. Repositories are a way for this to happen.

Virtual Participation
New facets to virtual participation. More experiences with people in a physical space and in a virtual space collaborating together.

Eric Morgan

With regard to scholarly publishing, it is no longer just about the article anymore. It’s about the data that supports the article. How are we going to collect and provide access to that data, and maintain it for a long period of time?

Mobile devices
Mobile devices will become more the norm. As libraries, how are we going to get our content onto these small screens?

Web APIs. These are the pieces that fuel Web 2.0. This is a way to get your stuff out there.

It’s increasingly important to make sure that library websites have bling. People really do judge books by their covers. Libraries need more expertise when it comes to graphic design. We might know how to organize information bibliographically, but when it comes to organizing it visually, we stink.

Next-gen library catalogs/discovery systems
All of these next-generation type things are essentially indexes with services running against the index.

The next challenge for libraries is letting patrons use the content that they find. Finding content is not the problem. As libraries we can allow patrons to use the content in a different sort of way. The next-gen catalog is a tool to do other things that are exemplified by action verbs: tag it, review it, annotate it, compare and contrast.
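The "index plus services" framing can be made concrete with a toy sketch. This is not any real discovery product, just the shape of the idea: a small inverted index for finding things, with a user-facing service (tagging) layered on top of the same index.

```python
from collections import defaultdict

class Catalog:
    """A toy discovery layer: an inverted index plus services against it."""

    def __init__(self):
        self.index = defaultdict(set)   # term -> set of record ids
        self.tags = defaultdict(set)    # record id -> set of user tags

    def add(self, rec_id, text):
        """Index a record's text under each of its terms."""
        for term in text.lower().split():
            self.index[term].add(rec_id)

    def search(self, term):
        """The 'finding' part: look a term up in the index."""
        return sorted(self.index.get(term.lower(), set()))

    def tag(self, rec_id, tag):
        """A service running against the index: patron tagging."""
        self.tags[rec_id].add(tag)
```

Reviewing, annotating, and comparing would be further services of the same shape: operations on records the index already knows how to find.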

Libraries always serve a community: business, university, city, government. Your library will be able to provide services against your content better than Google can.

Karen Coyle

I want to be able to walk into the stacks and do catalog searches with handheld devices. The fact that when I’m in the library I have less access to resources than when I’m sitting at my desk at home is ridiculous.

What my job often is these days is that I’m paid to write reports you couldn’t pay me to read.

“The future of bibliographic control will be highly collaborative, decentralized, international in scope, and web-based. It will take place in cooperation with the private sector and users will be among the new partners who collaborate with libraries. Data will be gathered from multiple sources.”

Paraphrase of the opening section of the report of the Library of Congress Working Group on the Future of Bibliographic Control.

The future of bibliographic control will not have control. It is not going to be a controlled future. It will be a gigantic mash-up. My fear right now is that if we don’t make some extreme changes, it will not involve libraries. Other people will be doing it, we won’t. And we won’t because it is going to be about linking, it is going to be about everyone having access to the data. The data has to be unencumbered and anyone can use it in any way they want. Right now it is easier to get bibliographic data from Amazon and from the publishers than it is from libraries. So if we don’t really get on the ball, the future of bibliographic control is not even going to involve us. There are a lot of things that we have to give up in order to allow this future to include us, one of them being that it no longer matters if the data/records we create for bibliographic data are the same.

Need to be able to offer the users the option to see other items that are similar.

The increase of user to user interaction.

More and more the institution is not going to be the focus of the interaction. As users can connect and interact with each other, that will be their choice. Libraries have to make sure that their content is available and accessible while users are interacting with one another.

LITA’s Top Tech Trends program always provides interesting discussion and insight into some technologies that we should keep an eye on. This year’s format was both complex and interesting. In addition to the usual panel on the stage, a couple of remote panelists were included by way of videoconferencing software. As an additional level of fun, a Meebo chat room was running on another screen so that audience members could participate in a backchannel discussion of topics raised during the discussion.

It was interesting to observe the reactions of the panelists and the crowd to this new format. Some of the speakers were clearly rattled by the technological challenges this format presented. Some were visibly annoyed by technical glitches. However, others acknowledged that even though it’s not yet perfect, at least we were trying it. Two panelists probably would not have been able to participate without the remote conferencing. Likewise, although the audience members sometimes had trouble hearing the remote panelists, many appreciated the fact that LITA was at least trying to continue pushing the technology.

Just as interesting was audience reaction to the Meebo chat room. Some clearly found it a distraction – even when they were trying not to be distracted. From one area of the meeting room, it was easy to see that most people were following the onscreen chat rather than actually listening to the speakers. For those that had network connections and were able to connect, I think they truly enjoyed the backchannel discussion. For some though, the combination of live video, online chat, and on-stage speakers was simply too overwhelming, and they could not decide where to focus their attention.

I fully expect that this process will be refined over time. And, as some panelists noted, you have to have a starting point, and you have to let the technology mature over time. If you don’t practice with it, you’ll never work the kinks out.

My notes on speakers’ comments appear in a separate posting.

Q&A between John Houser from PALINET and Joshua Ferraro from Liblime
Notes from the June 28, 2008, program at the American Library Association Annual Conference.

The Open-Source Development Process
Product features are customer-driven. The community members can sponsor specific features that they need, and all other users benefit from that sponsorship. Example – Athens County Library sponsored the MARC 21 implementation for U.S. libraries.

Currently 25-30 active developers.

Current version is 2.2; version 3.0 is upcoming.

Q: Is there something about the open-source development model that results in better interfaces?
A: The sense is that since open-source development is so customer-driven, all components are built to do what the users want them to do the way they want to do them.

Q: Is open-source software quality code? The development process seems somewhat loose. Is it good? Is it secure?
A: Natural tendency is to assume that free things are not as good or not as secure as something that is purchased. For open-source, since there are a number of eyes watching the software, there are also many contributors pointing out needs for patches and areas for improvement.

Q: What about standards compliance?
A: Koha is already implementing a number of standards, such as Z39.50, Dublin Core, and MODS.
The whole open-source community is more interested in standards than in selling software. Since they’re focusing on the software, standards become very important.

Q: Can people expect to save money if they implement Koha?
A: Not necessarily. Instead of spending money on licensing fees to a vendor, that money can be redirected to other areas. Perhaps local development or possibly contributing to the Koha product by sponsoring modules.

Q: If I have a proprietary system, what are the most important things to think of if I want to implement Koha?
A: Do you have access to your data? This is very important for being able to extract and migrate data.

With Koha, you can actually try out a live system with no monetary investment.

Staff modules are run in a web browser rather than through a client application.

Product has a very granular permissions scheme.

Koha is built upon a number of standard open-source technologies: Linux, Apache, MySQL, Perl, etc.

Currently over 300 libraries using Koha.

Koha uses a number of APIs: LDAP, PatronAPI, Z39.50, OpenSearch, Cataloging
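Of the APIs listed, OpenSearch is the simplest to illustrate: a client fills in a URL template published in the service's OpenSearch description document. A minimal sketch follows; the template URL here is hypothetical, not Koha's actual endpoint.

```python
from urllib.parse import quote

# Hypothetical OpenSearch URL template, of the kind found in an
# OpenSearch description document. A real Koha endpoint may differ.
TEMPLATE = "https://catalog.example.org/opensearch?q={searchTerms}&page={startPage?}"

def build_query(template, terms, page=1):
    """Substitute OpenSearch template parameters.

    {searchTerms} is required; {startPage?} is optional (note the '?'),
    so it still gets a sensible default here.
    """
    url = template.replace("{searchTerms}", quote(terms))
    url = url.replace("{startPage?}", str(page))
    return url
```

Fetching the resulting URL would return results as RSS or Atom, which is what makes OpenSearch easy to plug into feed readers and aggregators.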

RFID in Libraries

Posted: June 28, 2008 in ala 2008, conferences, RFID
RFID in Libraries: Myths, FAQs, & ROI
LITA RFID Interest Group
Notes from the June 28, 2008, program at the American Library Association Annual Conference

Karen McPheeters, Farmington Public Library, NM
Lucie Osborn and Carey Hartman, Laramie County Library System, WY
Ross McLachlan, Phoenix Public Library, AZ

Karen McPheeters
Farmington Public Library, NM

Opening a new library of 52,000 square feet with the same full-time staffing level.
Needed to use technology to maximize staff, building, and collection.
180,000 tags purchased; tagged 140,000 items in two weeks
Purchased the technology without really knowing if they would like it.

Goals realized
Catalyst to simplifying all processes
Reduction in theft – whole library approach
Great inventory control – can do “on the fly” inventories
Reduced time from check-in to shelf – immediate to the patron – 45 minutes to the shelf
Better shelf management
Very few mistakes with check-in
Redeployment of staff – more time with patrons – we communicate
No repetitive motion injuries in RFID
More staff time spent on customer service
100% self check since August 2003 – six self-checks
No lines anywhere (except sometimes at the self-check machines)
The smart return generates a receipt with a coupon for $2.00 off on fines.
They collect more money on fines now than they did before
60% smart return
Easy conversion – 140,000 items in three weeks
Cataloging errors found and corrected

Unexpected returns on investment
Training aspects – learning that we could teach all of our patrons new tricks
Created a culture of change and a staff that desires to learn
Better balance between a “protection” driven and a “service” driven organization
They use RFID tags on CD cases and security tags on the discs.

Lucie Osborn and Carey Hartman
Laramie County Library System, Cheyenne, WY

The Reason for RFID
New facility
Current technology
Ease of use
More direct customer service
Moved from an old facility with 38,000 sq. ft. on one floor to a new facility with 103,000 sq. ft. on three floors.
This library previously had no security system.

What we would do differently
More careful training and more spot-checking. Some people were putting tags on top of the picture of the elephant that was an integral part of the story on the last page of a children’s book. Others were covering up maps at the end of the book.
Follow the vendor’s guidelines in tagging AV items. Local staff wanted to put tags on every piece of AV sets; the vendor said this wasn’t a good idea.
Utilize volunteers in a different manner.

Negative perceptions
Privacy concerns for patrons, job security concerns for employees. Public thought that the library was making them do all the work and the library would be providing less service.

All floors have a very obvious service point
Had trainers come in to discuss the roving support concept with employees. Helps patrons at the point of “puzzlement”.
14 self-checks scattered around three floors.

Lessons Learned
Physics of AV items and RFID – security aspects of RFID tags do not work with AV items – possibly due to the metals in CDs
Furniture – metal in furniture is also possibly interfering with RFID tags
This library wanted to achieve a 90% self-checkout rate. AV and table problems are making this impossible.
Some AV materials will not fit into self-service units.
Self check-in includes a conveyor belt and automated sorting system. Library has a window into the sorting room, and patrons like watching the sorting process.

Return on Investment
Open less than a year – don’t feel that they have a full idea of ROI yet
Customer service/job enhancement
More interesting work for employees – employees can work with patrons at the “point of puzzlement”; additional reader’s advisory services
Less repetitive motion
The most important ROI is that staff have more time to work with patrons.

Ross McLachlan
Phoenix Public Library

14 branches
Large central library
All locations open 72 hours per week
Collection of 2.2 million items
15 million+ annual circulation
900,000 – 1 million registered borrowers
The reasons for RFID
In the last 7 years
64% increase in circulation
25% increase in door count
87% increase in reading program
803% increase in website usage
In the last 3 years
145% increase in public PC usage
The reason why self-check led to RFID
Staff budget reductions in 2002/2003
Began using self checks at the 4 busiest locations in 2002
Achieved 80% check-outs via self-check machines the 1st year
Expand the convenience aspect of the customer experience at checkout
Achieve greater staff efficiencies at checkin
Studied the marketplace
Led to the conclusion to use RFID
Current status
Four branches are fully RFID and self-checkout
99% of the 2.2 million item collection has been converted to RFID since October
These branches have consistently achieved 92% to 95% check-outs on self check
The service has received positive customer feedback
In January 2009 the entire system will be totally RFID.