Thoughts on the possibility of an actual open modular Library Services Platform
(wherein I make prolific use of Twitter’s “Embed Tweet” tool)

Mere days before Code4Lib 2016 I managed to get my “Whither Modularity” post out the door. A few days later I was able to fancy myself a little prophetic when Sebastian Hammer presented on “Constructive disintegration – re-imagining the library platform as microservices.” It looked like Index Data, the Kuali OLE folks, and EBSCO were poised to grant my wish for a truly modular ILS.

My first reaction was of course to immediately tweet out a link to my blog post, so I lost a few bits of the presentation in my buzz of excitement. I was also following the #c4l16 Twitter conversation, and I am not the best multitasker (read: lousy multitasker), so I am super thankful to Code4Lib for livestreaming the whole conference on YouTube. I can re-watch and remind myself of all the bits I lost track of. The “Constructive disintegration” presentation is here, from about 54:30 to 1:15.

NOTE: There is also a very recent American Libraries piece about the project, “EBSCO Supports New Open Source Project” by Marshall Breeding, which goes into more detail than Hammer could in his short talk about the project’s background, structure, and intentions.

In the moment at Code4Lib, but especially now that I am looking back from the remove of a few weeks and reading over the American Libraries article, what Hammer presented looks pretty ambitious. Imagining the scope and flexibility is simultaneously energizing and unsettling.

Their main requirements for the end result are:

  • Easy + fun to extend and customize
  • Apache 2 license: Everyone can play
  • Cloud-ready, multi-tenant, built around an open knowledge base, linked data, electronic and print resource management
  • Can be hosted by commercial vendors, library networks, or locally
  • Community-based
  • Modular – snap-in modules (apps) can be contributed by libraries or vendors

If you read my “Whither Modularity” post it will not surprise you to learn that I think this is an idea whose time has not just arrived, but is overdue.  Maybe just under the wire before we declare it lost and start charging replacement fees.

One of the things I found most compelling is their approach to the problem of participation: not prescribing which language(s) contributors can use. Hammer mentioned that one important lesson the OLE folks learned from their current OLE venture is that it’s easy to inadvertently create barriers to entry that make it difficult to attract people to the community to contribute, change, and extend it. In their case it was choosing a basis for implementation that was “cumbersome.” They had a vision for widespread community participation which didn’t pan out.
The idea that with this project they won’t be dictating languages and tools for the developers, instead tying everything together with a common REST interface is very interesting.  It was interesting to a few tweeters (twitterers? seriously if there is an official term for twitter users, let me know!) who seemed wary of the scope of the flexibility being proposed.

Is this a concern best addressed by hosting (transferring risk to your vendor)? By being selective with which “apps” we install, with one selection criterion being language? By rewriting the modules we really want in our locally preferred languages? Seriously – how many of us have already found something awesome and rewritten it so it conforms to our local environment? Marshall Breeding’s American Libraries article notes that the new platform will expose APIs throughout and be centered on microservices, allowing any of us – library or vendor – to develop apps in any language.  Rewriting to suit local needs should be eminently feasible given time and inclination.
Other worries in the Twitterverse included network load, migration headaches, and our oldest nemesis: BAD DATA.

BAD DATA will always bedevil us. It’s the nature of the work we do. Yesterday’s good data is tomorrow’s bad data (did we really all go back and fix older records with each AACR revision?). New data and data services arise continually in niches where no standards exist yet. In an ecosystem where we have more complete control over the software we use to manipulate data, we potentially have better control over the good data, the iffy data, and the bad data alike, and we can work to prevent it from interrupting our end users’ work instead of imploring the vendor to clean up their code.

@collingsruth & @redlibrarian gave voice to something that has been tickling the back of my brain in the last few months: that migrating to Alma and her sister all-in-one SaaS ILSes feels like we’re just sitting at the gate during a layover thinking “this is OK, but this isn’t where we need to be.”

With the ongoing consolidation of the library vendor universe I worry that, at least in the ILS section of the library marketplace, we no longer have enough diversity. We’re becoming like captive white tigers – genetically un-diverse, inbred, and unable to innovate. (For an illuminating illustration of just how un-diverse we are, check out the graphic Marshall Breeding maintains at http://librarytechnology.org/mergers/.)

Competition fosters innovation. All this consolidation is the opposite of competition. The latest consolidation of ProQuest and ExLibris has the potential to be problematic. Though ProQuest wasn’t an ILS vendor (until they acquired ExLibris), they were in direct competition with ExLibris in the discovery and federated-indexing arenas with Summon vs. Primo. Given their plans to merge at least the Primo and Summon federated indexes, we are seeing an immediate reduction in the options available to us to meet the research needs of our communities. This might be good, bad, or neutral; only time will tell. In the meantime, I think it is only natural to worry about this as an omen of even tighter coupling of the backend and the frontend, further decreasing our opportunities to choose the discovery services that best meet our needs, rather than choosing them based solely on which ILS we’ve chosen.

In contrast, the goal of this new community project is to construct a truly open and flexible central platform, intentionally created to allow, encourage or even require third party development of tools and apps and microservice thingamajigs. An intentional decoupling of the parts that make up the whole. This type of open source concept certainly would allow for greater library participation in what’s next for library platforms, even more so than “open” commercial options like Alma.

As a thought experiment: what if, when ProQuest acquired ExLibris, they had gone in a different direction and made Alma an open source platform? Yes, there are plenty of ways that making Alma open source is impractical, but what if? What might that have meant for the library community, to have a relatively mature open free* platform on which to build? Alma might then have had a fighting chance to win the “total cost of ownership” race against the likes of Aleph or Voyager. Independent providers could have emerged to host and/or service the platform. A “free” software platform and free or low-cost plug-n-play modules, coupled with the option to have a reliable vendor host/service the whole kit & caboodle might dramatically reduce total cost of ownership, not just software license costs but service costs as well since we could shop around for the best deal based on our local needs.  From both a technical and a financial point of view, that would have been truly disruptive for us as an industry – not just freeing up money but simultaneously sparking real transformative innovation.

Now back to reality. A library that chooses Alma implements a multi-tenant “latest-and-greatest” ILS, but is also effectually locked into a discovery layer (Primo/Primo Central) that doesn’t handle searching especially well (at least not as well as EDS, or even Academic Search or PubMed, for example) and lacks flexibility for local control of the user experience. For a few years now, Alma has been almost unopposed in the ILS market, but the OLE / Index Data / EBSCO LSP project proposes to deliver an option that is closer to a modular, interoperable, open ideal – a new option that will include not only a multi-tenant ILS, but modules that will take it beyond the core ILS and officially into next-generation LSP. Adopters will not be compelled to implement a paired discovery layer that they might not select if they had freedom to choose from multiple independent options. And now for the kicker: it is open source. Who has the wherewithal to create such a thing? The answer is: A large and capable library vendor (EBSCO), open source development gurus (Index Data), and academic libraries themselves (including at least some OLE libraries, as well as other libraries from all over the world). The future is knocking – let’s answer the door!

A major barrier to many libraries adopting open source software is that it typically shifts your big costs from software to people.  People costs can be much harder to get budget commitment for, and in a small shop like mine OSS can be unsustainable. We are not all interchangeable cogs. We have different skills, knowledge and abilities, and it only takes one team member leaving for a new career opportunity to torpedo a project.  

A team the size of mine might be able to participate fully in this type of transformation: opt for hosting, do development and hands-on management when we have the people resources, and fall back on vendor support when we are under-peopled.

I sometimes feel that we as librarians are professional complainers and that the techies among us (myself included) especially relish shooting down big ideas by shouting our mantra of “we tried that, it didn’t work, therefore it can’t work, and there is no point in trying again.” It sounds to me like the OLE people, having tried something like this already that was less than successful, have learned from the experience and are ready to try again – differently this time.

I was surprised by my reaction to the sprinkle of skeptical tweets during Hammer’s talk. I felt strangely protective of something that isn’t even my idea. I resisted the urge to start fangirling immediately but was heartened by a few hopeful tweets.

Fangirl mode initializing…

I don’t think we’ll have to wait 10 years.

  1. This project is backed by a vendor (EBSCO), which means it has a stable source of funding. A wise person** once said “sustainable financing is non-negotiable.” Many a good idea has been launched via grant funding and then petered out when the money dried up. The commercial backing should make a big difference in how quickly this project moves and brings something “to market” versus another unpredictably funded OSS project.
  2. I’ve heard rumors that we can get our hands on something in 2018.  We’re already halfway through Q2 of 2016.  That’s really not that far away.  Time will fly.
  3. Backing by a vendor like EBSCO, with an established history of commercial software development, library business process analysis, etc., can mean a better chance for a sustainable “product” with ongoing progress.

I’d really like them to succeed and I’m planning to lobby at my institution to support this and participate in some way. I took the pledge at futureisopen.org – “I’d like to be a part of the community for the next generation open source library platform.”

I’d like us to stay in hopeful mode and keep trying.

*yeah, yeah, free-as-in-kittens

**me, paraphrasing Julie Swierczek at Code4Lib 2016

I do not consider myself a visionary.

I’ve watched plenty of ideas excite the library tech community, and thought “hm, that’s interesting.” But I’m a concrete person. It’s when an idea starts demonstrating practical applications that I start to get excited. E.g. I was aggressively ambivalent about linked data until I started learning about Bibframe – a practical application in my lib tech niche, and an exciting development for future advances in metadata management and especially discovery.

If you’d asked me in 2007 what I thought would happen in the ILS world (and my University Librarian did ask me) I would have said (and I did say) that modularity would finally win the day. We were about to enter an era where we could

  • choose our metadata database
  • choose our tools for adding, editing, and manipulating our data
  • choose our inventory management/circ tools
  • choose our discovery service to expose our resources to the public

and none of that needed to be from the same vendor. It just needed the right integration tools. APIs were a little on my radar in 2007 but I was really picturing buying different modules from different vendors expecting they’d have figured out how to plug & play.

Well, then ExLibris announced Alma, which at the time was called URM, and I was very disappointed, both in the direction of ILSes and in my own powers of prognostication. Especially when other vendors started falling into line with their giant “re-integrated” systems – Sierra, Intota, WMS – all of which seem to be called Library Services Platforms (LSPs).

We’ve headed back in the direction of brobdingnagian one-stop-shopping library management systems. Granted, they are ostensibly now fully integrated resource management systems for both print and e-resources, with the attendant acquisitions, inventory, etc, but their big transformative value-add is really just that we can now manage e and p in the same interface. The Library Services Platform as executed by ExLibris & OCLC is at its heart an ILS.

————— (awkward segue to the paper which sparked this post) —————

I recently read a paper from Ken Chad – Rethinking The Library Services Platform* – which sparked my above statement about today’s LSPs being essentially ILSes. The paper has me thinking seriously again about the type of modularity I wanted in 2007 and still want. I’m not here to summarize much of what he writes; this post is just my thoughts & reaction to the paper. You should read it!

Bottom line: the paper feels like a call to action. We need to transition to LSPs that are centered not around managing our store of MARC records (maybe someday Bibframe?) but around how our community needs to interact with us. “[L]ibraries are a means to an end and success ought to be measured in terms of the best possible customer experience and outcomes” (Chad 2016, 5). From the user perspective the library is the front end of all the systems we take care of – it’s the metadata (and objects) we present to them. And as much effort as we put into creating and curating metadata for them, I’d wager that the majority of the metadata they use is not ours and is not in our ILS or in our repository – it’s at EBSCO, it’s at ProQuest, it’s at JSTOR. And a lot of the metadata we create we also contribute to OCLC. So why is the ILS the centerpiece of how we serve our users?

Big dream moment: What if LSPs existed that were open to 3rd parties developing apps that plug in, and used standards effectively so that apps were portable between LSPs? What if we had true modularity with the tools we use to manage our data and – most importantly – address our users’ needs?

ExLibris has made an apparent commitment to “openness” with Alma – the developers network is open to the public, API documentation is open to the public. They are making noises about not being closed, about encouraging third parties to develop things that will interoperate with Alma and Primo.

If this kind of openness is real, and they don’t close things off either intentionally or through oversight, it offers the opportunity to develop some of the tools we want to use, that are better tailored to our local idiosyncratic workflows.

Even bigger dream moment: My dream is the equivalent of an app store where we can buy the circ module we really want. Or what if we were one of the third parties authoring the tools we and other libraries want? What if we could build our own request tool that didn’t require the user to know where they needed to ask for something – ILL vs a hold in the ILS vs scan-on-demand document delivery? What might we be able to author for ourselves given the right platform to build on?

Ken Chad talks about “interoperable applications from independent software vendors (ISVs).” Each of us, given the time and the inclination, could be one of those ISVs.

The open, interoperability-ready Library Services Platform we’d need to author our own tools or select from third-party add-ons doesn’t really exist. But in the meantime, we could encourage the trend by using and/or creating tools that take advantage of the purported openness in the existing services like Alma, Primo & EDS. We can test the vendors on whether they really mean to be and stay open, and somehow incentivize them to allow the type of modularity I wished for years ago.

*Rethinking the library services platform. By Ken Chad. Higher Education Library Technology (HELibTech) Briefing Paper (No. 2). January 2016. DOI: 10.13140/RG.2.1.5154.8248

The scene: A library on a stormy night. Four undergrads huddle in the info commons, working on a project.

Sophia: Let’s search the catalog for stuff to back up our idea.
Raj: What if we don’t find anything? Our idea is pretty obscure.
Sophia scoffs: Of course we’ll find SOMETHING.
Ben: I heard there’s a place, in the darkest reaches of the web, where you can search but you won’t find anything.
Greg: That’s just an old librarian’s tale. It’s not true. That couldn’t happen.

Suddenly the lights go out!
A beam of light pierces the darkness. The students look up and gasp.
A disembodied bespectacled face hovers, illuminated by a flashlight. It is the old Gen-X librarian!
She cackles: Well, ACK-tually, it’s true. In the olden days it wasn’t uncommon to search the catalog and find ZERO HITS!
(more gasping from the students; lightning; thunder)

And it really was true. “No results” or “Zero Hits” or however we labeled it, wasn’t that uncommon. It happened frequently enough that we coded our OPACs to keep statistics on it, and people put quite a bit of thought into how to solve the Zero Results Problem.

Enter the Next Gen Catalog.

One of the key selling points of so-called Next Gen Catalogs was the promise of never seeing zero results. A searcher would always get some results – and the more results the better.

That promise has never been realized, though. Typos occur, some searchers aren’t great at spelling, and if you’re searching for something obscure enough you might not get any results back. But in general, yes, you do get a big chunk of hits that you can explore further to find what you need. The prevailing theory seems to be that if you return enough results the searcher can use facets to winnow the results and this will inevitably lead to something useful.

What if, though, this embarrassment of riches we are presenting to searchers isn’t wholly helpful?
A deluge of hits can be overwhelming.
The hits we get may be irrelevant because we’re not choosing the right terms to search on.
The art of known-item-searching has suffered. Sometimes the searcher knows exactly what they seek and has trouble locating it in a pile of 10,000 results.

How do we solve this new problem we’ve introduced?

How do we serve:

  • The explorer who benefits from the serendipitous discovery of material they didn’t know to look for
  • The known-item searcher who just needs the thing they need to appear in the first couple of hits
  • The bad typist
  • The experienced searcher who doesn’t yet know the vocab of their new quest
  • The novice searcher who has no idea where or how to begin

All with the same system?

I don’t have any mind-blowing paradigm-shifting ideas, but I did have occasion to stop by the exhibits at ALA MidWinter in Boston to visit a few people I know from my misspent youth as a vendor.

I saw a couple of interesting features of EBSCO’s EDS platform and asked a few probing questions.

Attempts to address both zero results and irrelevant results

Did You Mean?
EDS brings up a “Results may also be available for” suggestion nearly every time a search would result in 0 hits, and sometimes when a search does bring up hits. E.g. it suggests I might mean golfers when I search for goobers, even though I find plenty of hits about peanuts. The version of this feature currently in the field performs poorly when given a misspelling like “guestss.” EBSCO has a newer version being deployed soon, I believe, so it will be interesting to see if it handles these types of typos better. Unsurprisingly, I was able to stump it with cat-like-typing gibberish. “ghksajhaekjdsdklk” stumps even Google.

Auto-complete
This is probably the feature I most wish Primo had. (I understand some form of it is available but only for Primo SaaS customers). EDS has two levels of “auto-complete” – their Popular Terms suggestions and their Publications suggestions.
Popular Terms are culled from previous searches by all EBSCO customers, and can change from day to day. If other people are searching for something it will appear in the list of Popular Terms. E.g. typing in obama might bring up a list of suggestions starting with the string obama – e.g. obamacare and obama, barack. When I first saw the auto-complete feature I hoped it was drawing directly from the metadata, so that a searcher wouldn’t be prompted to search for something they’d get no hits for (Infor’s Iguana product does auto-complete this way), but no such luck. Since EDS is searching such a big database, I am not sure how often this would be an issue for searchers, but I think it gives a false promise that something exists when it appears in auto-complete.
Publications – if a searcher enters a string which exactly matches the title of a publication, that publication will appear in the suggestions area. Seeing this was a real Hallelujah moment! It solves one of my colleague’s biggest frustrations with Primo (and next-gen catalogs in general) – we call it the Times of London problem. In our Primo catalog it is difficult to locate the resource you want when you search for Times of London. In EDS, however, if you begin typing “times o…”, Times of London appears in the Publications suggestions. Click the suggestion and the publication we want is the first hit. Time magazine is similarly easy to get to. Nature is still a bit elusive, since it has so many permutations, but still easier to locate than in Primo. I’m almost afraid to publish this blog post lest my colleague find out the answer to one of her biggest frustrations is out there and I can’t deliver it (yet).
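A metadata-driven auto-complete of the kind I was hoping for can be sketched in a few lines of JavaScript. This is purely an illustration (not EBSCO’s or Infor’s actual implementation, and the title list is made up): because suggestions are drawn only from titles that actually exist in the index, a suggestion can never lead to zero hits.

```javascript
// Hypothetical sketch of metadata-driven auto-complete: suggest only
// titles that really exist in the local index, matched by prefix.
function suggest(prefix, titles, limit) {
  var p = prefix.toLowerCase();
  return titles
    .filter(function (t) { return t.toLowerCase().indexOf(p) === 0; })
    .sort()
    .slice(0, limit || 5);
}
```

Typing “times o” against a small illustrative index would surface only Times of London, while a prefix with no matching titles yields an empty list rather than a dead-end prompt.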

Attempts to address insufficient or irrelevant results

Placards – this brilliant little feature of EDS was what really piqued the interest of the systems team. Placards are context-sensitive boxes/areas that appear when a search meets defined criteria. E.g. if someone searches for “library hours,” a placard can display the library’s hours, or a link to the library homepage, or… you get the idea. You can write code to link to an external subject guide based on a searcher’s keywords. My favorite placard I’ve seen so far (I got the impression it comes standard with EDS, but I haven’t actually fact-checked that, so don’t quote me) appears when your search is an exact match for a publication indexed in EDS. A search box appears allowing you to search immediately within that publication. The next logical step in my mind is to make sure that type of search box also appears if someone searches for JSTOR, which happens a lot. A lot a lot!
Primo could be tricked into doing something similar using tools like JavaScript, IF it’s code you can embed in the footer.html. It just looks like a lot less work and kludgery (it’s a word, I swear) in EDS, and it is theoretically less prone to breaking during every upgrade.
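For the curious, a placard rule boils down to a keyword-matching table. Here is a minimal hypothetical sketch (the rule format, keywords, and URLs are my own invention for illustration, not EDS’s actual configuration):

```javascript
// Hypothetical placard rules: each rule pairs trigger keywords with the
// HTML to display above the results. Keywords and URLs are illustrative.
var placardRules = [
  { keywords: ["library hours", "opening hours"],
    html: '<a href="/hours">See today\'s library hours</a>' },
  { keywords: ["jstor"],
    html: '<a href="/databases/jstor">Search JSTOR directly</a>' }
];

// Return the placard HTML for a query, or null if no rule matches.
function placardFor(query) {
  var q = query.toLowerCase();
  for (var i = 0; i < placardRules.length; i++) {
    var matched = placardRules[i].keywords.some(function (k) {
      return q.indexOf(k) !== -1;
    });
    if (matched) return placardRules[i].html;
  }
  return null; // no placard for this search
}
```

A search containing “library hours” would trigger the hours placard; an unrelated search like “goobers” would show nothing extra.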

These three features would ideally be available in any discovery system. Vendors have been flirting with the known-item-searching flaw in post-OPAC systems for years. I think EBSCO is moving in the right direction to solve some aspects of the problem in EDS. I’d really like to see all vendors acknowledge the problem and work to solve it.

I’m very interested to see what comes down the pike for these and other features in Primo, EDS and other discovery systems.

I’ll conclude my post-ALA musings with my wish list for ALL discovery systems:

  • Some form of Did You Mean that guides searchers toward choosing good search terms. We know they don’t really want to ask us for help. So how do we help them help themselves?
  • Customizability/context sensitivity of automated assistance
    • choice of auto-complete and Did You Mean source(s) – local indexes, recent/popular searches by others, one-offs defined by the library (libguides, webpages, local subjects, local authors, ??)
    • Interaction of auto-complete and placard-type features with user profile and preferences (e.g. demographics, field of study, enrolled courses in an LMS)
  • Fuzzy search, stemming, query expansion, tolerant search
  • And my biggest wish of all (probably a good topic for a future post) TRUE MODULARITY. Portability of all the hard work we put into our discovery system and/or ILS. If we’ve put 6 months of sweat into getting a discovery system to behave in ways that are useful to our community, we need to be able to take that work with us if we change ILSes.
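Several of the wishes above – Did You Mean, fuzzy search, tolerant search – come down to some flavor of approximate string matching. As a sketch of one common approach (Levenshtein edit distance; the vocabulary list here is illustrative, not any vendor’s index):

```javascript
// Classic dynamic-programming Levenshtein edit distance between two strings.
function editDistance(a, b) {
  var d = [];
  for (var i = 0; i <= a.length; i++) d[i] = [i];
  for (var j = 0; j <= b.length; j++) d[0][j] = j;
  for (i = 1; i <= a.length; i++) {
    for (j = 1; j <= b.length; j++) {
      d[i][j] = Math.min(
        d[i - 1][j] + 1,                                   // deletion
        d[i][j - 1] + 1,                                   // insertion
        d[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)  // substitution
      );
    }
  }
  return d[a.length][b.length];
}

// Suggest the closest vocabulary term within maxDistance edits, or null.
function didYouMean(query, vocabulary, maxDistance) {
  var best = null, bestDist = (maxDistance || 2) + 1;
  vocabulary.forEach(function (term) {
    var dist = editDistance(query.toLowerCase(), term.toLowerCase());
    if (dist > 0 && dist < bestDist) { best = term; bestDist = dist; }
  });
  return best; // null if nothing is close enough
}
```

A query like “guestss” is one edit away from “guests,” so that term is offered; cat-like-typing gibberish is farther than the cutoff from everything in the vocabulary and produces no suggestion at all.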

Once upon a time we wrote out longhand a list of our books in a ledger, and we chained the ledger to a lectern. And it was an OK system because we had so few books and so few people could read. There wasn’t exactly a crowd of people queued up to find a book.

Eventually we typed out all our data onto 3×5 index cards. And it was an OK system because we alphabetized and cross ref’d and made cryptic little pencil notations and filed the cards neatly in little drawers and let the public rifle through them.

Then in the 60s someone thought “Hey! Wouldn’t it be fun to be able to exchange cards with each other so that every library didn’t have to catalog every book from scratch, and when a member of the public (through carelessness or malice aforethought) damaged or lost one of our precious 3×5 cards we could snap our fingers and magically produce a new one instead of typing it all over again?” And Lo! Henriette Avram invented machine readable cataloging and the Library of Congress launched a world-scale game of Library: The Gathering. And it was an OK system because getting a package of cards in the mail was like Christmas every week!

Once we had machine readable cataloging all the librarians decided they also needed machines, and automated library systems spread like a plague of ones and zeroes across the globe. But the early automation systems were staff only. For Librarians By Librarians (also the name of my line of snazzy cardigans). The eureka moment arrived when someone (possibly in Ohio and doubtlessly bathed in a sickly VT100 glow) thought “Hey! Maybe the readers would like to look at all this data on a screen instead of card stock.” And the OPAC was born. And it was an OK system because the multitudes cried out in gratitude: “Well, it’s better than nothing!”

And it really was better than nothing. But it could have been so much better. When the web arrived a lot more options for presenting our list of books arrived with it, and the web-based OPAC was born. Or, rather, the web-based OPAC took the existing OPAC and made it point-and-clickable. It took another decade for librarians and systems vendors to attempt anything that really leveraged the new technology. And it was an OK system because at least you could browse for books from home in your pajamas (as long as you had an Internet connection that wasn’t WebTV.)

We’re making attempts now: using new technologies and new ways of thinking about search (thanks, Google!), sussing out the ways our users actually interact (or want to interact) with our data, and taking into account the huge amount of stuff we own or have access to that’s not a book. We’ve had varying degrees of success. As we implement new “next-generation” and “web-scale” systems that focus on user discovery of info resources rather than on the business of running a library, some features and functions have been set aside. It remains to be seen which, if any, of those will be re-adopted as time goes by. One feature that was missing from our OneSearch discovery system was the ability to view the original MARC record for the resource you’ve just found in the catalog. And it was missed. Primarily by library staff, not our users, but missed nonetheless, and missed enough to try to get back.

Investigative Process:
Q: Does the feature exist already in Primo? Maybe it’s just not turned on?
A: Nope. That was quick.

Q: What have other Primo customers done? Has anyone already done this and put their code somewhere and we can borrow it?
A: Yes – Jeff Peterson at University of Minnesota posted a pretty nice method to the Primo Discussion List, involving a small jsp file, and changes to the Primo normalization rules and mapping templates that will create an entry in the Links menu on the Primo Details tab.

Q: Did they do it in a way that will work in our environment?
A: Yes, absolutely this would work in our environment, though it would require a full renormalization to create the links, which is less than ideal.

Q: Is there a better way?
A: Better is debatable, but there was a quicker way for sure, involving jsp, XSL and Primo RTA. So that’s what we did. The result is a link in the Actions menu in the Details tab.

Intrigued? If you’ve read this far you must be anxious to get the inside story from Greg McClellan.

Convincing Primo to show us the MARC record

The ability to view the full MARC record is a feature that existed in our previous two catalogs – Louis (Aleph OPAC) and LouFind (VuFind). It’s a feature that is missed by staff, so we explored our options and settled on a modified version of a method devised by Jeff Peterson at University of Minnesota. Jeff’s method involved using jsp, Primo normalization rules, and Primo mapping templates to create an entry in the Links menu on the Primo Details tab. Since that method would require a full renorm/reindex of Primo, we looked for a shortcut, and found one that enabled us to build the links on-the-fly instead of through norm rules.

Only two file modifications are needed to do on-the-fly retrieval of the MARC record from Alma.

  1. footer.html
  2. marc.jsp


footer.html – added a new section

  • uses jquery (a javascript library)
  • checks that it’s on a full record page (details tab)
  • gets the Alma MMS ID from either the URL or embedded in the record in a hidden field (.EXLResultRecordId) (script looks for both because one or the other will exist depending on what path you took to arrive at the details tab)
  • uses javascript to find the Actions menu and append a link called Staff View
  • if the link is clicked, marc.jsp is invoked
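As a rough sketch of the footer.html logic described above (the .EXLResultRecordId class is Primo’s own; the docId parameter name, the other selectors, and the marc.jsp path are assumptions for illustration, not our exact code):

```javascript
// Pure helper: pull a docId-style record id out of a query string.
// The parameter name "docId" is an assumption for illustration.
function mmsIdFromQuery(search) {
  var m = /[?&]docId=([^&]+)/.exec(search);
  return m ? decodeURIComponent(m[1]) : null;
}

// Browser-only glue (requires jQuery, as in the real footer.html).
// Selector names other than .EXLResultRecordId are illustrative.
function addStaffViewLink() {
  if (!$(".EXLResultRecordId").length) return; // not on a details tab
  // The MMS ID comes from the URL or the hidden field, depending on
  // which path the user took to reach the details tab.
  var mmsId = mmsIdFromQuery(window.location.search) ||
              $(".EXLResultRecordId").first().text().trim();
  if (!mmsId) return;
  // Append a Staff View entry to the Actions menu that opens marc.jsp.
  $(".EXLActionsList").append(
    '<li><a href="/primo_library/custom/marc.jsp?id=' +
    encodeURIComponent(mmsId) + '">Staff View</a></li>');
}
```

Factoring the URL parsing into a pure helper makes that path easy to test on its own; the jQuery glue only runs in the browser, on the details tab.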

marc.jsp – new file

  • lives in a custom directory in fe_web (so it won’t be overwritten by service packs)
  • created by Jeff Peterson at University of Minnesota
  • only modifications needed were to insert our local hostname and institution
  • uses the Alma MMS ID (retrieved by the footer.html jquery section described above) and Primo RTA (Real Time Availability) to retrieve the full marc.xml record from Alma
  • uses embedded XSL to transform the marc.xml record into the human-readable form of MARC we know and love

Pros

  • Fast to implement
  • No Primo renorm/reindex needed

Cons

  • The MARC record retrieved is the live, up-to-date version from Alma, so it may reflect edits made since the record was last published to Primo – there is a potential lag of several hours between record edits in Alma and their appearance in Primo, so the Staff View can briefly disagree with what Primo displays. Because the Staff View is most likely to be used by staff, we felt there wasn’t a large risk of confusion.

If you’d like more details and code samples, please feel free to contact us via the “Leave a Reply” section.

 

[Screenshot: the Actions menu, showing the new Staff View link]

In December 2012 Brandeis signed contracts with ExLibris to implement Alma and Primo.

Alma will replace our current legacy ILS Aleph and a few related systems, and Primo will replace both the Aleph OPAC and our next-gen catalog LouFind.

No matter how much research you’ve conducted, how many demos you’ve attended, or how many RFP responses you’ve read, migrating to a new ILS is a leap of faith.  You never really know what you’ve bought until you’ve got it in your hot little hands.

We started our Primo implementation in March 2013. We’re bringing it up against Aleph, so we can offer expanded discovery functionality to our users now, and when we Go Live with Alma the transition should be pretty invisible to our community.

Stay tuned here for updates on our Primo progress during the spring semester, and the Alma implementation following that.

Hopefully we’ve chosen… wisely.

I went looking for interesting January 30 events with which to draw parallels to our current state of hopeful flux (new CIO, new Provost, new President) and found several gloomy events:

  • Charles I of England beheaded in 1649
  • Oliver Cromwell “ritually executed” in 1661
  • The Beatles’ last public performance in 1969
  • Bloody Sunday in 1972

I found little in the way of uplifting major events on January 30, but I suspect Wikipedia editors (like the rest of humanity) are more fixated on the dramatic and shocking than on the propitious.

What can we look forward to in the year ahead? We can’t know for sure, but I know what I am hopeful for:

  • strong, thoughtful, decisive leadership
  • a renewed and public commitment to library services in both the traditional and technological arenas
  • an acknowledgement of the resource shortages that have plagued us since the 2008 financial crisis and a plan to address them in the short and long term
  • efforts to build a supportive, inclusive and welcoming culture
  • free ice cream

It’s not easy to keep ourselves focused on the positive when there are plenty of negatives to focus on, but we have a whole new world ahead of us.  Which brings me to my post title.  This post was supposed to be about A Whole New World.  I am embarrassed to report that for 20 years I have been laboring under the misapprehension that the song “A Whole New World” was from An American Tail.  Turns out it’s from Aladdin.  But since I like the metaphor of LTS as Fievel, bravely embarking on a promising but scary new journey, I’m going to envision our road to the future as paved with cheese.  I like cheese.

The Library Systems group spent the last year upgrading every damn system we have, save one. As we wrap up the last of them, we stop to take a breath and assess where we are and where we are going. We stare down the months and years ahead and feel a bit directionless. How do the things we want to do fit with the overall LTS plan, and most importantly how do they fit with the direction of higher-ed librarianship?

So we scanned the higher-ed library landscape, technology and industry trends, gleaned what explicit LTS objectives we could, and we wrote a draft five-year plan for the library systems group. We’ve been careful to make it business-like, logical and sober. As un-manifesto-like as possible so we won’t be perceived as parvenu upstarts. Careful not to ruffle the feathers of anyone who may believe that helping to design the library’s future is outside our purview. Which raises the question – Ain’t I a Librarian? Why do we feel we need to be so cautious? Can we systems librarians not put forth ideas, contribute legitimately to strategic planning, and help lead the charge into the technology-saturated future?

So here goes. We’re making our goals public, though the draft five-year-plan is not ready for prime time yet.  The full plan identifies concrete steps and resources for achieving the goals described below.  At the end of the five year plan period, it is the overall goal of the Library Systems group to be in a position to fulfill the following principles and obligations:

  • Be agile, responsive, innovative
  • Position ourselves to respond to a rapidly changing environment
  • Help Brandeis University Libraries shape the information landscape instead of just being a consumer of it
  • Act as a bridge between traditional library activities and the technology-focused future
  • Establish and maintain systems, technology and integration between systems to support the mission and daily activities of Brandeis University Libraries

 

Library Systems Five Year Plan Goals:

1. Provide always-on, highly available, device-agnostic access to scholarly information resources

LTS needs to provide always-on, seamless access to owned and licensed materials that “just works,” for our users to get full benefit from our extensive investment in information resources.

The discovery environment has evolved dramatically over the past five to ten years, with the rise of next-generation catalogs, Google, social media and Web 2.0. The technological sophistication of the Brandeis user community is increasing and their expectations are increasing accordingly. Users expect a Google- and Amazon-like discovery experience, and expect participatory engagement via Web 2.0 functionality. They expect that information resources will be available to them round-the-clock, and that they will be able to access them using a variety of devices. As of early 2011, more than 25% of Brandeis community members use smartphones and the use of tablet computers is increasing. Our community expects that the library will come to them.

2. Facilitate and support collection sharing, new models of collection development, and data-driven collection management

Budget reductions, user preferences for electronic access to materials, limited physical space, and the inability to financially sustain comprehensive collections have led many academic libraries to shift from a “just-in-case” to a “just-in-time” philosophy.*

Due to these factors, the Brandeis libraries can no longer build and maintain a collection that is all things to all users. Collaboration with other institutions to coordinate collection development, patron-driven acquisition of materials, and partnerships with other libraries to share resources quickly and smoothly through services like RapidILL, BLC Resource Sharing and traditional ILL are all critical areas of growth. LTS needs to strengthen and expand the systems that support current and future services in this area.

3. Support initiatives to make available to the global academic community those materials that make Brandeis unique

The LTS FY11 E-Scholarship plan proposed “preserv[ing] and disseminat[ing] Brandeis’s unique digital assets related to academic and cultural programs and documenting our intellectual history” as an important strategic priority.

Digitization of little-used and hidden collections, collection and preservation of the scholarly output of the university in an institutional repository, providing a platform for open access publishing – all these activities serve to reveal the richness of Brandeis’ unique assets to the global academic community.

4. Identify and foster the development of core technical competencies needed by library staff in today’s information environment

In the face of a rapidly evolving information environment and the accelerating pace of change it can be challenging to build and maintain critical technical skills. First we must identify what the core technical competencies of today’s library staff should be – what skillset should be expected of all library staff: general staff, specialists, technology managers, and systems staff alike. Then we must find ways to foster and focus on these skills, while continually re-evaluating current competencies and anticipating what future competencies will be needed and how we should staff to meet those needs.

The members of the Library Systems group are in a unique position to help bridge the gap between traditional library activities and the technology-focused future. Library Systems staff cumulatively have 36 years as professional librarians, and a total of 65 years working in libraries. We have worked professionally in all areas of the library, from circulation to systems, and have a broad understanding of the mission and goals of libraries, and their place in the overall information landscape. We are also ideally situated to facilitate knowledge sharing between the two halves of LTS. Our daily activities and interactions with colleagues span all LTS units, from InterLibrary Loan to Information Security.

* ACRL Research Planning and Review Committee, “2010 top ten trends in academic libraries. A review of the current literature.” College & Research Libraries News 71:6 (June 2010): 286-92. http://crln.acrl.org/content/71/6/286.short (accessed April 5, 2011)

Anatomy of a Successful Viral Marketing Campaign

We haven’t blogged since October 29, 2009, but there have been good reasons for our silence. Over the past 5 months the systems group, with help from our good friends in public services, worked fiendishly to get our local implementation of the VuFind Next Generation Catalog, LouFind, out the door, and most recently have been engaged in clandestine operations to launch a viral marketing campaign to generate positive LouFind buzz on campus.

Our campaign involved a 3-pronged approach to spreading the LouFind message.

  1. Members of the LouFind team infiltrated student social hotspots on campus and planted our message directly in the ears of the students.
    • Usdan Game Room – one of our team hustled students at pool, betting them that LouFind was way more rad than Louis
    • Cholmondeley’s – one team member performed several acoustic songs extolling the wonders of LouFind and another performed a spoken-word piece reviling Louis for its lameness.
    • The post office – we bribed a mailroom clerk to pepper his conversations with phrases like “Louis is lame” and “LouFind rocks.”
    • Shuttle to Cambridge – a team member surfed LouFind on her iPhone and exclaimed frequently and loudly about how wonderful the experience was.
    • Sherman Dining Hall – disguised as students, LTS staff took over the dining hall. We held a sit in for two and a half days until campus administration agreed to issue a statement that “Louis is bogus” and “LouFind is way cool.” Our demand for more chicken wings was not as successful.
  2. We placed hidden speakers in the InfoCommons that broadcast “LouFind is da bomb!” and “Louis is bogus” messages at frequencies only detectable by people under 30.
  3. We placed subliminal messages in Louis screens that instilled in users an overwhelming desire to switch to LouFind.

You may have witnessed the astonishing success of our campaign. Undergrads were fighting over InfoCommons seats so they could use LouFind. Students maxed out their text message plans sending themselves Call Numbers. Copies of our LouFind table tents were stolen from the library and sold on eBay for hundreds of dollars. Campus Health Center reported a dramatic upswing in number of carpal tunnel complaints from students who couldn’t stop clicking through facets. Jonathan Coulton even wrote a song about LouFind.

Yours in bibliographic espionage,
– Mata Hari

Gimme a V! Gimme a U! Gimme an F! Gimme an I! Gimme an N! Gimme a D!

What’s that spell? PROGRESS!

Some days it’s been a struggle, but VuFind is finally cleared for takeoff.

We’ll be speeding through the rest of the technical implementation in the next five weeks, preparing for a soft launch right before Thanksgiving (followed by the official beta launch with fanfare in January). There are not very many technical tasks left; the primary work to be done is the marketing campaign for the January fanfare, which will be handled by the functional side of the VuFind team.
