Further into Alma

This week I dug into Alma a bit further, and learned how to do things like make sets and run jobs. I know I’m a bit behind the curve on this – other CUNY librarians have been doing these things for months – but it felt good to level up and figure this stuff out.

I am awed by the power of Alma. While it gives you the tools to do amazing things, it also makes it easy to cause serious problems if you’re not careful. It’s quite formidable. You definitely don’t want to make a misstep when changing hundreds or thousands of records. Nonetheless, I’m glad that – as an electronic resources librarian – I have it at my disposal.

Posted in alma | Leave a comment

Building incrementally

As soon as I finished the Open Journal Matcher and released it to the world, I wanted to rewrite it from the ground up. When I looked at the code, it was clear that so much could be improved: better variable names, clearer flow, more concise functions. I had to resist the strong urge to tear it all down and start again.

I did completely rewrite a project once before, and it was a good experience. My code got better, and I was happier with my work. It’s often said that a programming project is never finished, and I definitely agree.

I have not, however, rewritten the OJM, at least not yet. I decided it would be better to make incremental improvements, slowly molding the project into the shape I want. Over the past few weeks, I’ve added features and improved security, accessibility, and more. The code is already a lot better, with no big rewrite required.

On the upside, this incremental approach has kept the project more stable, since I’m not breaking everything at once. It also lets me make improvements on my own schedule, which is nice. Lastly, I’m gaining a deeper understanding of the code I’ve written, rather than seeking out newer, shinier solutions. This focused attention on a specific implementation seems valuable and constructive.

Posted in journal recommender | Leave a comment

Slow down, be thorough

Since (for now) the Open Journal Matcher is built without using a proper task queue, I’ve been spending a lot of effort handling the various errors thrown by my Google Cloud Function. This is both satisfying and annoying: it is nice to catch and handle each error properly, but it takes some digging to figure out how to deal with some of them.

Mostly, when my cloud function fails, I need to send the request again – sometimes more than once. This generates a lot of API calls, but it’s needed for the sake of thoroughness. I want the OJM to cover as many journals as possible, so I’m willing to wait for a 200 response. The underlying assumption is that people will tolerate the wait if the application returns the best results. My approach is probably not the most efficient, but for now it’s churning out the best recommendations I can muster.
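The resend-until-200 pattern can be sketched with a small helper. This is an illustrative sketch, not the actual OJM code – the function name and parameters are my own here, and `send` stands in for whatever call hits the cloud function:

```python
import time

def call_with_retry(send, max_tries=3, backoff=1.0):
    """Retry a request until it returns HTTP 200.

    `send` is any zero-argument callable that performs the request and
    returns a response object with a .status_code attribute (e.g. a
    functools.partial wrapping requests.post). Hypothetical sketch,
    not the actual OJM code.
    """
    for attempt in range(max_tries):
        resp = send()
        if resp.status_code == 200:
            return resp
        # Wait a little longer after each failure before retrying.
        time.sleep(backoff * (2 ** attempt))
    raise RuntimeError(f"no 200 response after {max_tries} tries")
```

A proper task queue would handle retries and backoff for you, which is part of why one is on the roadmap; a loop like this just covers the simple case.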

Posted in api, google, journal recommender | Leave a comment

On variable costs

Now that the Open Journal Matcher is live and receiving traffic, I’m wondering how much it is going to cost to keep running.

There isn’t an obvious answer, mostly because Google Cloud Functions scale automatically with demand. That’s great for scalability and availability, but it makes budgeting much more difficult. When I flipped the switch to turn on the OJM, I really had no idea how expensive it would be.

(Needless to say, this is a contrasting billing approach to PythonAnywhere’s, which I discussed in my last post.)

Now that the project has been online for two months, the costs are becoming clearer. There have been some big spikes in traffic ($141 worth on August 10th), as well as some quiet periods. At least I can start to calculate some meaningful averages.
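As a sketch of what those averages might look like – the daily figures below are made up, with only the $141 August 10th spike taken from real billing data – a plain mean gets badly distorted by one big day, while a trimmed mean gives a steadier baseline for budgeting:

```python
# Hypothetical daily Cloud Functions bills in USD; only the $141.00
# spike (August 10th) comes from actual billing data.
daily_costs = [2.10, 1.85, 141.00, 3.40, 0.95]

average = sum(daily_costs) / len(daily_costs)

# Dropping the highest and lowest day gives a trimmed mean that is
# less distorted by one-off traffic spikes.
trimmed = sorted(daily_costs)[1:-1]
trimmed_average = sum(trimmed) / len(trimmed)
```

With these numbers the plain average is about $29.86/day, while the trimmed average is about $2.45/day – a reminder that a few spikes can dominate the projection.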

I have a bit of time to figure out future funding, since my Google Cloud Platform credit grant runs until January. The looming question: how much will it cost to run the project after that?

Posted in budgeting, google, journal recommender | Leave a comment

On scaling

I’ve been using PythonAnywhere to host web projects for some time now, and while I am very happy with the service, one of its weak points is scalability. This is especially problematic when trying to handle unpredictable spikes in traffic. While plenty of platforms scale seamlessly with your traffic, PA is not one of them. With PA, you set a fixed number of web workers to serve your pages, and that number is shared across all of your projects. This is less than ideal. I haven’t yet been burned by having too few web workers, although it seems quite possible. On the other hand, I also don’t want to waste money on idle resources. For now – or until I can think of something better – my solution is to keep an eye on it and try to strike the right balance.

Posted in pythonanywhere | 1 Response


Recently, CUNY libraries migrated to Alma, our new library services platform. Alma is a pretty mighty piece of software that manages many, many library functions. Given how much it does, it amazes me that it works. I would love to look at the codebase for a bit, just to get a sense of how it is organized.

My experience as a user has so far been pretty good. Some things have confused me, but once I’ve learned their purpose, they’ve proven useful. I like how it brings disparate library workflows together and makes them work in concert. It’s going to be a journey getting it all figured out, but it seems like a positive step.

Posted in alma | Leave a comment

Access to readings during remote instruction

Our students are facing a potential textbook crisis this fall, and many may not even realize it yet. With in-person library services likely to be greatly curtailed, one crucial source of textbooks – the library reserve desk – may not be readily available to our students.

The librarians are working to mitigate this crisis on several fronts. But some of the most effective remedies can come from faculty themselves. Here are some strategies we recommend:

  • Assign readings from the library’s electronic collections. These are already paid for, and are free to our students.
  • Make readings available on Blackboard. As long as you’re vigilant about avoiding copyright violations, this can be an effective strategy.
  • Use open educational resources (OERs) as course textbooks. This is the best long-term solution: OERs can be reused and adapted by anyone, and they directly address prohibitive textbook costs.

Please don’t stick your students with huge textbook bills this fall! Take steps to reduce textbook costs while planning your reading list.

Posted in textbooks | Leave a comment


Now that the Open Journal Matcher is more or less complete and shared with the world, I feel like it’s time for a new beginning. I like this part of my job: occasionally I arrive at a point where I get to reset and consider what’s next.

I’m going to take a different approach. In the past, I’ve sought out technologies to learn, and then once I knew enough about them, figured out ways to apply them to my library or my job. While that can be a sensible way to learn a technology, it’s a bit myopic and doesn’t necessarily provide the most benefit to the people around me. Going forward, I will aim to focus more specifically on people. Making this more explicit will hopefully push me to build more constructive projects.

Posted in Uncategorized | Comments closed

Further into Mastodon

Joining Mastodon in 2017 turned out to be a positive change for me. Twitter had been making me depressed and anxious, and I needed a way out. At that point Mastodon – an open source social network – was unfamiliar to me and seemed rough around the edges. But the people I found there were compelling, and the interactions were weirdly real and non-algorithmic feeling. I was hooked.

This week, I began administering my own server on the Mastodon network. I’m not doing the technical bits of system administration; I’ve left that to the much more skilled people at Mastohost. But I’m now actively managing my own little corner of the network. Mastodon has powerful tools for moderation and countering abuse, which let me decide exactly what I allow in my timelines and on my server. It feels very positive and constructive.

Anyhow, it’s a work in progress. I’ll report back once I’ve put some more miles on it!

Posted in mastodon | Comments closed

Open Journal Matcher is here

I’m very pleased to let you know about a project I’m releasing today, the Open Journal Matcher. It’s a resource for people looking to find a suitable journal for their work. Paste in a draft abstract and see the best matching open access journals!

Open Journal Matcher uses a natural language processing library called spaCy to compare the text of a draft abstract with the abstracts of ~5000 journals from the Directory of Open Access Journals. It ranks all of these journals by similarity, and returns the top five matches.
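The ranking step can be sketched like this. The function and variable names are illustrative, not the actual OJM code; it assumes the abstracts have already been turned into spaCy Doc objects (e.g. via `nlp = spacy.load("en_core_web_md")`, a model with word vectors):

```python
def top_matches(draft_doc, journal_docs, n=5):
    """Rank journals by similarity to a draft abstract, return top n.

    `draft_doc` and the values of `journal_docs` are expected to be
    spaCy Doc objects, though any object with a .similarity() method
    works. Illustrative sketch, not the actual OJM code.
    """
    scores = [(title, draft_doc.similarity(doc))
              for title, doc in journal_docs.items()]
    # Sort by similarity score, highest first.
    scores.sort(key=lambda pair: pair[1], reverse=True)
    return scores[:n]
```

With spaCy installed, you would call it roughly as `top_matches(nlp(abstract), {title: nlp(text) for title, text in journals.items()})`.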

While there are other journal matching services available, to my knowledge this is the only one that is fully interdisciplinary and fully open source. The code for the matcher application, the code for the matching algorithm, and the content of the journals are all openly licensed.

Please enjoy, and let me know what you think of this tool!

Posted in doaj, journal recommender, open access, open source | Comments closed