(NOTE June 13, 2016: This post has been republished at PhantomLibrarian.net)

Thoughts on the possibility of an actual open modular Library Services Platform
(wherein I make prolific use of Twitter’s “Embed Tweet” tool)

Mere days before Code4Lib 2016 I managed to get my “Whither Modularity” post out the door. A few days later I was able to fancy myself a little prophetic when Sebastian Hammer presented on “Constructive disintegration – re-imagining the library platform as microservices.” It looked like Index Data, the Kuali OLE folks, and EBSCO were poised to grant my wish for a truly modular ILS.

My first reaction was of course to immediately tweet out a link to my blog post, so I lost a few bits of the presentation in my buzz of excitement. I was also following the #c4l16 Twitter conversation, and I am not the best multitasker (read: lousy multitasker), so I am super thankful to Code4Lib for livestreaming the whole conference on YouTube. I can re-watch and remind myself of all the bits I lost track of. The “Constructive disintegration” presentation is here, from about 54:30 to 1:15:00.

NOTE: There is also a very recent American Libraries piece by Marshall Breeding, “EBSCO Supports New Open Source Project,” which goes into more detail than Hammer could in his short talk about the project’s background, structure, and intentions.

In the moment at Code4Lib, but especially now that I am looking back from the remove of a few weeks and reading over the American Libraries article, what Hammer presented looks pretty ambitious. Imagining the scope and flexibility is simultaneously energizing and unsettling.

Their main requirements for the end result are:

  • Easy + fun to extend and customize
  • Apache 2 license: Everyone can play
  • Cloud-ready, multi-tenant, built around an open knowledge base, linked data, electronic and print resource management
  • Can be hosted by commercial vendors, library networks, or locally
  • Community-based
  • Modular – snap-in modules (apps) can be contributed by libraries or vendors

If you read my “Whither Modularity” post, it will not surprise you to learn that I think this is an idea whose time has not just arrived but is overdue. Maybe it’s just under the wire before we declare it lost and start charging replacement fees.

One of the things I found most compelling is their approach to the problem of participation: not prescribing which language(s) contributors can use. Hammer mentioned that one important thing the OLE folks learned from their current OLE venture is that it’s easy to inadvertently create barriers to entry that make it difficult to attract people to the community to contribute to, change, and extend the software. In their case the barrier was choosing a basis for implementation that was “cumbersome.” They had a vision for widespread community participation which didn’t pan out.
The idea that with this project they won’t be dictating languages and tools for the developers, instead tying everything together with a common REST interface, is very interesting. It was also interesting to a few tweeters (twitterers? Seriously, if there is an official term for Twitter users, let me know!) who seemed wary of the scope of the flexibility being proposed.

Is this a concern best addressed by hosting (transferring risk to your vendor)? By being selective with which “apps” we install, with one selection criterion being language? By rewriting the modules we really want in our locally preferred languages? Seriously – how many of us have already found something awesome and rewritten it so it conforms to our local environment? Marshall Breeding’s American Libraries article notes that the new platform will expose APIs throughout and be centered on microservices, allowing any of us – library or vendor – to develop apps in any language.  Rewriting to suit local needs should be eminently feasible given time and inclination.
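To make the “common REST interface, any language” idea concrete, here is a loose sketch of how a contributed module and a consumer might interact. Everything here is hypothetical – the endpoint path, the JSON shape, and the field names are my invention, not anything from the actual project – the only point is that the contract is plain HTTP and JSON, so either side could be rewritten in any language:

```python
# A toy "circulation" module serves JSON over HTTP, and a client consumes it.
# All endpoint and field names are hypothetical illustrations.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class CirculationHandler(BaseHTTPRequestHandler):
    """Server side of the contract: one made-up read-only endpoint."""
    def do_GET(self):
        if self.path == "/circulation/loans":
            body = json.dumps(
                {"loans": [{"item": "b1234", "due": "2016-06-01"}]}
            ).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, fmt, *args):
        pass  # keep request logging quiet

def fetch_loans(base_url):
    """Client side of the contract: plain HTTP + JSON, no shared language."""
    with urllib.request.urlopen(base_url + "/circulation/loans") as resp:
        return json.load(resp)

# Run the module on an ephemeral local port and consume it once.
server = HTTPServer(("127.0.0.1", 0), CirculationHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
loans = fetch_loans("http://127.0.0.1:%d" % server.server_address[1])
server.shutdown()
```

The server half could just as easily be Java, Go, or Perl; as long as the HTTP contract holds, the client neither knows nor cares – which is exactly the barrier-lowering property Hammer described.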
Other worries in the Twitterverse included network load, migration headaches, and our oldest nemesis: BAD DATA.

BAD DATA will always bedevil us. It’s the nature of the work we do. Yesterday’s good data is tomorrow’s bad data (did we really all go back and fix older records with each AACR revision?). New data and data services arise continually in niches where no standards exist yet. In an ecosystem where we have more complete control over the software we use to manipulate data, we potentially have better control over the good data, the iffy data, and the bad data alike, and we can work to prevent it from interrupting our end users’ work instead of imploring the vendor to clean up their code.

@collingsruth & @redlibrarian gave voice to something that has been tickling the back of my brain in the last few months: that migrating to Alma and her sister all-in-one SaaS ILSes feels like we’re just sitting at the gate during a layover thinking “this is OK, but this isn’t where we need to be.”

With the ongoing consolidation of the library vendor universe, I worry that, at least in the ILS section of the library marketplace, we no longer have enough diversity. We’re becoming like captive white tigers – genetically un-diverse, inbred, and unable to innovate. (For an illuminating illustration of just how un-diverse we are, check out the graphic Marshall Breeding maintains here: http://librarytechnology.org/mergers/.)

Competition fosters innovation, and all this consolidation is the opposite of competition. The latest consolidation of ProQuest and ExLibris has the potential to be problematic. Though ProQuest wasn’t an ILS vendor (until they acquired ExLibris), they were in direct competition with ExLibris in the discovery and federated-indexing arenas with Summon vs. Primo. Given their plans to merge at least the Primo and Summon federated indexes, we are seeing an immediate reduction in the options available to us to meet the research needs of our communities. This might be good, bad, or neutral; only time will tell. In the meantime, though, I think it is only natural to worry that this is an omen of even tighter coupling of the backend and the frontend, further decreasing our opportunities to choose the discovery services that best meet our needs rather than choosing them based solely on which ILS we’ve chosen.

In contrast, the goal of this new community project is to construct a truly open and flexible central platform, intentionally created to allow, encourage, or even require third-party development of tools and apps and microservice thingamajigs. An intentional decoupling of the parts that make up the whole. This type of open source concept certainly would allow for greater library participation in what’s next for library platforms, even more so than “open” commercial options like Alma.

As a thought experiment: what if, when ProQuest acquired ExLibris, they had gone in a different direction and made Alma an open source platform? Yes, there are plenty of ways that making Alma open source would be impractical, but what if? What might that have meant for the library community, to have a relatively mature, open, free* platform on which to build? Alma might then have had a fighting chance to win the “total cost of ownership” race against the likes of Aleph or Voyager. Independent providers could have emerged to host and/or service the platform. A “free” software platform and free or low-cost plug-n-play modules, coupled with the option to have a reliable vendor host and service the whole kit & caboodle, might dramatically reduce total cost of ownership – not just software license costs but service costs as well, since we could shop around for the best deal based on our local needs. From both a technical and a financial point of view, that would have been truly disruptive for us as an industry – not just freeing up money but simultaneously sparking real transformative innovation.

Now back to reality. A library that chooses Alma implements a multi-tenant “latest-and-greatest” ILS, but is also effectively locked into a discovery layer (Primo/Primo Central) that doesn’t handle searching especially well (at least not as well as EDS, or even Academic Search or PubMed, for example) and lacks flexibility for local control of the user experience. For a few years now, Alma has been almost unopposed in the ILS market, but the OLE / Index Data / EBSCO LSP project proposes to deliver an option that is closer to a modular, interoperable, open ideal – a new option that will include not only a multi-tenant ILS but also modules that take it beyond the core ILS and officially into next-generation LSP territory. Adopters will not be compelled to implement a paired discovery layer that they might not select if they had the freedom to choose from multiple independent options. And now for the kicker: it is open source. Who has the wherewithal to create such a thing? The answer is: a large and capable library vendor (EBSCO), open source development gurus (Index Data), and academic libraries themselves (including at least some OLE libraries, as well as other libraries from all over the world). The future is knocking – let’s answer the door!

A major barrier to many libraries adopting open source software is that it typically shifts your big costs from software to people. People costs can be much harder to get budget commitment for, and in a small shop like mine, OSS can be unsustainable. We are not all interchangeable cogs; we have different skills, knowledge, and abilities, and it only takes one team member leaving for a new career opportunity to torpedo a project.

A team the size of mine might be able to participate fully in this type of transformation: opting for hosting, doing development and hands-on management when we had the people resources, and falling back on vendor support when we were under-peopled.

I sometimes feel that we as librarians are professional complainers and that the techies among us (myself included) especially relish shooting down big ideas by shouting our mantra of “we tried that, it didn’t work, therefore it can’t work, and there is no point in trying again.” It sounds to me like the OLE people, having already tried something like this with less than complete success, have learned from the experience and are ready to try again – differently this time.

I was surprised by my reaction to the sprinkle of skeptical tweets during Hammer’s talk. I felt strangely protective of something that isn’t even my idea. I resisted the urge to start fangirling immediately but was heartened by a few hopeful tweets.



Fangirl mode initializing…

I don’t think we’ll have to wait 10 years.

  1. This project is backed by a vendor (EBSCO), which means it has a stable source of funding. A wise person** once said, “sustainable financing is non-negotiable.” Many a good idea has been launched via grant funding and then petered out when the money dried up. The commercial backing should make a big difference in how quickly this project moves and brings something “to market” versus another unpredictably funded OSS project.
  2. I’ve heard rumors that we can get our hands on something in 2018. We’re already halfway through Q2 of 2016. That’s really not that far away. Time will fly.
  3. Backing by a vendor like EBSCO, with an established history of commercial software development, library business process analysis, etc., can mean a better chance for a sustainable “product” with ongoing progress.

I’d really like them to succeed and I’m planning to lobby at my institution to support this and participate in some way. I took the pledge at futureisopen.org – “I’d like to be a part of the community for the next generation open source library platform.”

I’d like us to stay in hopeful mode and keep trying.


*yeah, yeah, free-as-in-kittens

**me, paraphrasing Julie Swierczek at Code4Lib 2016