OLE: The Open Library Environment Project

The Open Library Environment Project: Building an ILS for Service Oriented Architecture Integration
McCormick Place West, Room: W-196a

Beth Forrest-Warner, University of Kansas
John Little, Duke University
Robert H. McDonald, Indiana University
Carlen Ruschoff, University of Maryland

What is the OLE Project?

Community source alternative to current ILS

International participation from libraries and consortia
100+ institutions, 350+ individuals

Planning phase: September 2008 – July 2009

The goal is to have a reference implementation model available in 2011

Why OLE?

"Our current library business technologies cost too much and deliver too little. We need to rethink our services and workflows, and to use technology that enables innovation rather than locking us into the status quo."

There is a growing need for library systems to integrate with other enterprise systems: Financial, identity management, course management, content management

Library technology systems have not kept pace with changing users and a changing information environment.

OLE Campus

Manages locations
Manages resource subscriptions
Integrated into: course/learning management system, accounting, student/HR, consortia
Flexibility
Community Ownership
Service oriented architecture
Enterprise-level integration
Efficiency
Sustainability

Audience poll: What do you think is most critical to the future of your library? (From above list)
Respondents ranked Flexibility as being much more important than Sustainability.

Why OLE now?

Current ILS products are inadequate
Growing need for library systems to interact with other enterprise systems
Vendor consolidation

Community Source Projects

A group of institutions signs an agreement to contribute specific resources. Under this model there is an established level of buy-in, as opposed to open source, in which a community may or may not develop. Community source participants have an ongoing commitment to participation and support.

Have sustainability over the course of the product development
Invest in the community of practice for long-term support and development
Fosters innovation and shared knowledge
Coordinates institutional goals rather than individual goals

Looking at better integration and interoperability with campus enterprise systems – not just "tacking on". Why are we looking at our own patron databases? Those are campuswide functions. This will ultimately result in more efficient processes and better use of campus investments.

From Theory to Reality

Approximately 30 months build time.
The project will build on existing pieces.
RICE – Enterprise level middleware

Kuali Nervous System
Kuali Service Bus
Kuali Enterprise Workflow
Kuali Enterprise Notification
Kuali Identity Management

Use existing systems
Existing data feeds
Open ERM data
Shared database feeds

Two Year Timeline

Year 1 Deliverables (will focus on one of these)
Management of Electronic Resources Services
Leased and owned eContent
Peer Resource Sharing Services
Sharing content – peer to peer
Sharing workflow – consortial
Acquisitions
CRM

Year 2 Deliverables

Integrations
Orchestrations
Functional Scope

Risks of Participation

No community source project has yet failed, but risks remain:
Achieve consensus
Acquire sufficient resources
Deliver software of adequate functionality
Problems could arise with contract software
Adoption
Build sufficiently large vendor services community

Benefits of Participation

Cost savings
Access to emerging technologies
Use monetary resources in a productive and directly influential fashion
Leverage ROI on campus for enterprise systems

Build partners are agreeing to put some portion of OLE into production. At the end of the 30-month build cycle, it will be possible to close out at least one module from the legacy ILS in favor of an OLE module.

Cash Contributions Needed

$5.2 million total partner contribution
7 partners – $185k per year
6 partners – $216k per year
5 partners – $260k per year
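The per-partner annual figures are consistent with each partner spreading its share over roughly a four-year window (an assumption; the notes quote the build itself at 30 months but don't state the contribution period). A quick sanity check:

```python
# Sanity check on the quoted OLE partner contribution figures.
# Assumption: a ~4-year contribution window, since
# $5.2M / 7 partners / 4 years is roughly $185k per year.
TOTAL = 5_200_000
YEARS = 4  # assumed window, not stated in the notes

for partners, quoted_annual in [(7, 185_000), (6, 216_000), (5, 260_000)]:
    annual_share = TOTAL / partners / YEARS
    # The quoted figure should match the computed share to within rounding.
    print(f"{partners} partners: ${annual_share:,.0f}/yr (quoted ${quoted_annual:,})")
```

Each computed share lands within about a thousand dollars of the quoted figure, which suggests the quoted numbers are simply the total divided evenly across partners and years.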

Top Tech Trends – Denver, pt. 2

ALA Midwinter 2009 Trendsters: Marshall Breeding, Karen Coombs, Roy Tennant, Clifford Lynch, Karen Schneider, Karen Coyle

 

Karen Coyle – A lot of what’s happening is not new technology but issues around management of technology

 

Karen Schneider – Libraries are recapturing tool creation. The '80s and '90s were dark ages in which other people created the tools for us.

 

Clifford Lynch – Flickr commons. Library of Congress and New York Public putting photos online. Some people are looking at ways to re-import this information into their own databases.

 

Question – Has anything served as the proof of the pudding that librarians can build and maintain our own tools?

 

Karen Schneider – The test for open-source software seems to be whether it can move past the founding library or founding community. The jury is still out on whether it can be successful in the long run.

 

Karen Coyle – If software is not allowed to fork in different directions, we’re locked into the same old model where everyone is doing exactly the same thing.

 

Forking (def.) – when a project divides significantly enough so that there is no one thing that people refer to as the core code.

 

Roy Tennant – Flickr Commons – We need to find ways to feed that information back into our systems more easily. Catalogers trying to feed that information back into our systems is not going to scale.

 

Clifford Lynch – People went to Flickr because it was there and it had a user base. What is significant is that it builds bridges between existing stores of knowledge.

 

Clifford Lynch – Widespread markup of biographical and historical narratives.

 

Karen Coyle – With the ubiquity of global positioning, information is going to be more location contextual.

 

Marshall Breeding – It’s going to take a while to get there.

 

Karen Coombs – There is a point at which GPS just isn’t good enough. Users need help finding items even within the building.

 

Clifford Lynch – GPS has largely been used for driving directions or missile strikes. There is a whole set of technologies that can be used to narrow this down much more. Now that GPS is moving ubiquitously into cell phones, we’ll see a second generation of spatial applications.

 

Marshall Breeding – We’re already getting location-targeted information. When we surf the web in a new city, we get location-targeted ads.

 

Karen Coombs – Geographical-based services. Too many locations are looking at IP address or asking users to input a zip code. Systems need to consider that where you are physically doesn’t necessarily have anything to do with your affiliation.

 

Karen Coombs – Google Scholar lets you set institutions with which you are affiliated.

 

Karen Coyle – OpenStreetMap for libraries. People are walking around with GPS units, replicating Google Street View with an open alternative.

 

Roy Tennant – People putting data on the web through stable URIs. We’re looking at putting data out. It will be interesting to see what kind of linkages people make with that data.

 

Marshall Breeding – What are some examples?

 

Roy Tennant – We don’t know yet, and that’s the interesting part. What will people find to do with it?

 

Clifford Lynch – In scientific communities people

 

Roy Tennant – Small slice of a particular discipline.

 

Question from audience – Does the new ORE standard have implications for this?

 

Karen Coyle – Data elements have to be on the web.

 

Clifford Lynch – ORE is really intended to allow you to work with objects or groups of objects rather than the metadata about those objects. It's built to be consistent with semantic web standards.

 

Karen Coombs – ORE is good for moving the objects themselves.

 

Karen Coyle – We have the amoeba form of linked data in hypertext. But all we have is a link that doesn’t tell you anything about what it means, and it’s only one-way. How do we get the links to be meaningful?

 

Karen Coombs – We code HTML in the simplest way possible and don’t use it to its full potential.

 

Karen Schneider – I think I’m seeing some controlled burn in libraries due to economic pressures. They’re having to make hard decisions that they would not otherwise have had to make. Public libraries have never had higher traffic but they’ve never had such economic pressures.

 

Karen Coyle – Public libraries that circulate their collections 3-4 times every year can make a good argument for RFID. The case may be more difficult for academics.

 

Karen Schneider – If you were opening a new library tomorrow, you’d have to think about RFID and self-checkout.

 

Karen Coyle – Most libraries in the study made the switch to RFID when opening a new branch or doing a renovation.

 

Karen Coombs – How many ILL requests do people cancel because you already have the item or because you don't loan textbooks? We have to work smarter.

 

Karen Schneider – How about RFID for item location in the stacks?

 

Karen Schneider – One vendor using advanced shipping notices for acquisitions. ASN is used ubiquitously in the commercial book world. Almost unknown in libraries.

 

Marshall Breeding – We’re concerned about processes and our control of material – not just how to fulfill user needs. We need to find a way to get that one-click user satisfaction.

 

Karen Coombs – Books have to go to cataloging and then to shelves or reserve. It would make patrons much happier if it went directly to faculty.

 

Karen Coyle – RFID in public libraries for self-check is much faster. Libraries that have a high level of self-check also circulate a high level of self-help materials, since patrons don't have to pass those materials through a staff member. More privacy.

 

Audience comment – No lines for check-out, but longer lines for check-in because the automated technology can’t keep up.

 

Karen Schneider – Brisbane, Australia – Amazing city library that is completely self-check. You can also watch robots check in materials. It takes something mundane and makes it fun and entertaining. Humans are used intelligently for error handling, while automation does what it does well.

 

Karen Schneider – You don’t want to tie people to routine, mundane tasks when they could be roaming around helping users.

 

Karen Schneider – There is one library that uses a biometric station for patrons who have forgotten their library cards.

 

Karen Coombs – We have to think carefully about our processes and apply cost-effective solutions. How many times does someone from systems have to work on a malfunctioning piece of hardware before we just replace it?

 

Karen Schneider – There is total neglect of getting good bandwidth to the extreme ends of rural areas. Very forward-thinking rural libraries are hampered by limited bandwidth. It's not a money problem; it's an end-of-the-road problem.

 

Karen Coombs – Utility companies (cable, cell, etc.) think it’s not cost effective to provide services in some areas.

 

Clifford Lynch – This is a public policy problem.

 

Marshall Breeding – The lack of bandwidth to rural libraries has an impact on how they automate. Can they do resource sharing? Can they participate in consortia?

 

Audience comment – Large new Gates program addressing rural telecommunications.

 

Karen Schneider – That’s wonderful, but it’s going to be a drop in the bucket.

 

Karen Coombs – Technology is like a ravenous puppy running around eating the whole house. If libraries can’t get funding to continuously replace equipment, it quickly goes back to being bad.

 

Marshall Breeding – WiMAX is supposed to solve some of the bandwidth problems. It just hasn't yet.

 

Karen Coombs – Some rural success stories come from municipalities that have partnered to provide higher bandwidth to residents.

 

Karen Coyle – Open and closed models of sharing data. Closed models are easy to understand. Open allows innovation, but its business model is harder to understand. I hope we're beginning to understand the difference between databases and the web as our data platform.

There are a number of people trying to use technology to solve rights questions.

 

Karen Schneider – The death of print publishing. It’s on life support. We’re seeing the death of paper with newspapers and magazines. For those of us who have been publishing in the traditional paper world, this is very serious.

We’re starting to see sensible measurements of the carbon footprint in data centers.

 

Marshall Breeding – I fly only on plug-in hybrid planes!

 

Clifford Lynch – Newspapers seem to be melting down economically.

Newspapers have ramifications for community building and community definition. If they move only to the web, the question of how they're archived changes in a radical way. The way people interact with displays is also beginning to change with new generations of technology: e-ink readers, and desktops where multiple monitors are commonplace.

Libraries are still locked into single-screen setups.

A recent study of how higher-ed costs have changed argues that all of the cost increases have gone into administration and overhead rather than teaching. The data looks strange because technology is lumped under overhead.

Evidence-based studies are needed about how technology enhances teaching and learning.

 

Roy Tennant – I don’t see the book publishing industry melting down.

There are new ways to publish that were not available before.

 

Clifford Lynch – Books – Distribution of what’s being published is changing. Authors are getting different options.

If libraries want to collect books, it’s no longer adequate to just look at what’s coming out of traditional publishing.

 

Karen Schneider – Book publishing is in serious trouble.

 

Roy Tennant – More important to focus on making good technology decisions.

How do we decide when to jump in? How do we decide when to get out?

 

Karen Coombs – What it takes to do true digital preservation is very scary. There are collections we rely on that other people curate, and I don't have a lot of confidence.

 

Clifford Lynch – The stuff that is already digital is probably in better shape than other things.

 

Karen Coombs – Some of the smaller journals – if they can’t get their content on the web, then I don’t trust their preservation.

 

Marshall Breeding – I worry about libraries not doing long-term digital preservation. Local libraries don’t necessarily have the resources to do that.

This is not something that every library needs to reinvent. There are a lot of local installations.

Discovery interfaces. Much work is being done on these be-all, end-all solutions. Looking for better ways to expose library collections and services.

There is an urgency for libraries to present a better front end to our users, but we are sluggish about doing it. We're taking our usual slow-and-cautious, wait-until-it's-perfect approach.

Taking user-supplied content and improving it through web 2.0 features.

LibraryThing for Libraries is being distributed through Bowker.

Open source companies – Open-source software is getting good, but not great, reviews. Maybe some growing pains as the software matures.

 

Clifford Lynch – If you’re a smaller scale library (smaller than national or major research)

We need to do a better job on collaborative arrangements, external services that smaller institutions can acquire.

Smaller libraries often simply cannot afford substantial preservation programs on their own. This is an incredibly hard problem because nobody wants to fund this stuff.

 

Marshall Breeding – It has to be done as a collaborative effort. It's simply too big and too expensive to be done library by library.

Developing and Building Koha: an Open-Source ILS

Q&A between John Houser from PALINET and Joshua Ferraro from LibLime.
Notes from the June 28, 2008, program at the American Library Association Annual Conference.

The Open-Source Development Process
Product features are customer-driven. The community members can sponsor specific features that they need, and all other users benefit from that sponsorship. Example – Athens County Library sponsored the MARC 21 implementation for U.S. libraries.

Currently 25-30 active developers.

Current version is 2.2; version 3.0 is upcoming.

Q: Is there something about the open-source development model that results in better interfaces?
A: The sense is that since open-source development is so customer-driven, all components are built to do what users want, the way users want them done.

Q: Is open-source software quality code? The development process seems somewhat loose. Is it good? Is it secure?
A: The natural tendency is to assume that free things are not as good or as secure as something purchased. With open source, since many eyes are watching the software, there are also many contributors pointing out needed patches and areas for improvement.

Q: What about standards compliance?
A: Koha already implements a number of standards, such as Z39.50, Dublin Core, and MODS.
The whole open-source community is more interested in standards than in selling software. Since they’re focusing on the software, standards become very important.

Q: Can people expect to save money if they implement Koha?
A: Not necessarily. Instead of spending money on licensing fees to a vendor, that money can be redirected to other areas. Perhaps local development or possibly contributing to the Koha product by sponsoring modules.

Q: If I have a proprietary system, what are the most important things to think of if I want to implement Koha?
A: Do you have access to your data? This is very important for being able to extract and migrate data.

With Koha, you can actually try out a live system with no monetary investment.

Staff modules run in a web browser rather than through a client application.

Product has a very granular permissions scheme.

Koha is built upon a number of standard open-source technologies: Linux, Apache, MySQL, Perl, etc.

Currently over 300 libraries are using Koha.

Koha uses a number of APIs: LDAP, PatronAPI, Z39.50, OpenSearch, Cataloging
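Because Koha supports OpenSearch, querying a catalog is just an HTTP GET against a URL built from the search terms. A minimal sketch of constructing such a query URL; the host name here is hypothetical, and the exact endpoint path and format parameter are assumptions that may vary by Koha version and installation:

```python
from urllib.parse import urlencode

# Hypothetical Koha OPAC host; the endpoint path and "format" value are
# assumptions and may differ across Koha versions and installations.
BASE = "http://opac.example.edu/cgi-bin/koha/opac-search.pl"

def opensearch_url(query, fmt="rss2"):
    """Build an OpenSearch-style query URL for a Koha OPAC."""
    return BASE + "?" + urlencode({"q": query, "format": fmt})

url = opensearch_url("open source integrated library system")
print(url)
# The resulting feed could then be fetched with any HTTP client and
# parsed as RSS to extract titles and links to catalog records.
```

The appeal of this API style is that any tool that speaks HTTP and RSS (a browser, a feed reader, a mashup script) can consume catalog search results without a dedicated client library.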