These days, I’m working hard to improve the library’s web presence. In part, this means moving more content over to LibGuides. As a result, we’re using more of LibGuides’ features, specifically lots of custom JavaScript and CSS. It’s been nice to tap into these more advanced features and to discover that, so far, they work pretty well.

For example, I was initially a bit miffed that the custom JS/CSS for a guide is capped at 2,000 characters, but that turns out not to be a big deal: you can also put your JS/CSS files in a LibGuides S3 bucket and load them from there at your convenience.

To push the envelope a little bit, I made a JavaScript-based calendar, using this library, and LibGuides handled it pretty flawlessly. At one point, I accidentally killed all of the JS in the guide editor page by failing to close a style tag, but other than that I have had no problems. While the organization of code in LibGuides is not always super intuitive – it can often live in multiple spots – it works as advertised.

All in all, pretty satisfying as a CMS so far.

Posted in javascript, libguides | Comments closed

Nanogenmo 2020

November is Nanogenmo, or National Novel Generation Month, a cheeky variant on the better-known Nanowrimo, or National Novel Writing Month. The idea is that during the month of November, you write code that produces a 50,000-word novel.

Most of the “novels” produced are of course unreadable. But it’s a great opportunity to play with natural language processing tools, and to try to produce something that is at least interesting or amusing. The challenge has been run annually since 2013, and there have been some really remarkable entries.

My contribution this year consists of driving directions from Google Maps, where the itinerary follows the mentions of U.S. place names from Mark Twain novels. The idea is that you can follow the fictional travels of Twain’s characters, while mostly sticking to today’s U.S. interstate system. Tedious to read, but a somewhat unusual way to experience Mark Twain.
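The pipeline behind this can be sketched roughly as: scan the text for known place names, keep them in order of appearance, then chain them into a Google Maps directions URL. Here is a minimal sketch in Python; the gazetteer, sample sentence, and function names are my own illustrative assumptions, not the project’s actual code:

```python
import re

def place_mentions(text, gazetteer):
    """Return place names from gazetteer in the order they appear in text.

    Hypothetical helper: the real project would need a much larger
    gazetteer plus disambiguation ('Springfield' exists in many states).
    """
    hits = []
    for name in gazetteer:
        # word boundaries keep 'Hannibal' from matching inside other words
        for m in re.finditer(r"\b" + re.escape(name) + r"\b", text):
            hits.append((m.start(), name))
    ordered = []
    for _, name in sorted(hits):  # sort by position in the text
        if not ordered or ordered[-1] != name:  # drop immediate repeats
            ordered.append(name)
    return ordered

def directions_url(waypoints):
    """Chain waypoints into a single multi-stop Google Maps directions URL."""
    return "https://www.google.com/maps/dir/" + "/".join(
        w.replace(" ", "+") for w in waypoints
    )
```

For example, `place_mentions("Tom left Hannibal for St. Louis, then drifted back to Hannibal.", {"Hannibal", "St. Louis"})` yields the waypoints in narrative order, and `directions_url` turns them into one multi-stop route.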

I invite you to take a look at the many oddities from past Nanogenmos!

Posted in books, nanogenmo | Comments closed

Website committee

I was recently appointed chair of our library’s website committee. I am honored to take on this role, as I feel that working to improve our library’s web presence is one of the most useful contributions that I can make to our library. It is also work that I like doing. I can see how many people would find fixing code and solving minor problems with software very tedious or annoying, but I find it enjoyable.

I have some plans: reduce the overall number of pages, update or remove out-of-date content, move more pages over to LibGuides, improve the CSS in many places, and so on. I am glad that many of the other librarians on the committee are on board with these plans. I hope that we will be able to make some constructive improvements.

Anyhow, now it’s time to get to work on this. I will report back later.

Posted in committees | Comments closed

Further into Alma

This week I dug into Alma a bit further, and learned how to do things like make sets and run jobs. I know I’m a bit behind the curve on this – other CUNY librarians have been doing these things for months – but it felt good to level up and figure this stuff out.

I am awed by the power of Alma. While it gives you the tools to do some amazing stuff, it also gives you the power to cause a lot of problems if you’re not careful. It’s quite formidable. You definitely don’t want to make a misstep when changing hundreds or thousands of records. Nonetheless, I’m glad that – as an electronic resources librarian – I have it at my disposal.

Posted in alma | Comments closed

Building incrementally

As soon as I finished the Open Journal Matcher and released it to the world, I wanted to rewrite it from the ground up. When I looked at the code, it was clear that so much could be improved: better variable names, clearer flow, more concise functions. I had to resist the strong urge to tear it all down and start again.

I did completely rewrite a project once before, and it was a good experience. My code got better, and I was happier with my work. It’s often said that a programming project is never finished, and I definitely agree.

I have not, however, rewritten the OJM, at least not yet. I decided it would be better to make incremental improvements, slowly molding the project into the shape I want. Over the past few weeks, I’ve added features and improved security, accessibility, and more. The code is already a lot better, with no big rewrite required.

This incremental approach has also kept the project more stable, since I’m not breaking everything at once. It lets me build improvements on my own schedule, which is nice. Lastly, I’m gaining a deeper understanding of the code I’ve written, rather than chasing newer, shinier solutions. That focused attention on a specific implementation seems valuable and constructive.

Posted in journal recommender | Comments closed

Slow down, be thorough

Since (for now) the Open Journal Matcher is built without a proper task queue, I’ve been putting a lot of effort into handling the various errors thrown by my Google Cloud Function. This is both satisfying and annoying: it’s nice to catch and handle each error properly, but it takes some digging to figure out how to deal with some of them.

Mostly, when my cloud function fails, I need to send the request again, sometimes more than once. I’m making a lot of API calls, but it’s necessary for the sake of thoroughness. I want the OJM to cover as many journals as possible, so I’m willing to wait for a 200 response. The underlying assumption is that people will wait if the application returns the best results. This is probably not the most efficient approach, but for now it’s churning out the best recommendations I can muster.
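The retry pattern looks roughly like this. This is a generic sketch rather than the OJM’s actual code: in the real app the callable would wrap a `requests.post()` to the cloud function and raise on any non-200 response, and the function name and backoff values here are my own assumptions:

```python
import time

def call_with_retry(fn, max_attempts=5, base_delay=1.0):
    """Call fn until it succeeds, retrying with exponential backoff.

    Sketch only: in the OJM, fn would wrap a requests.post() to the
    Google Cloud Function and raise on any non-200 status code.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts; let the caller handle it
            # back off before retrying: 1s, 2s, 4s, ...
            time.sleep(base_delay * 2 ** attempt)
```

Backing off between attempts matters here: hammering a struggling cloud function immediately after a failure tends to produce more failures.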

Posted in api, google, journal recommender | Comments closed

On variable costs

Now that the Open Journal Matcher is live and receiving traffic, I’m wondering how much it is going to cost to keep running.

There isn’t an obvious answer. Mostly this is because Google Cloud Functions scale with your project. This is definitely good for scalability and availability, but it makes it much more difficult to budget. When I flipped the switch to turn on the OJM, I really had no idea how expensive it would be.

(Needless to say, this billing approach contrasts with PythonAnywhere’s, which I talked about in my last post.)

Now that the project has been online for two months, the costs are becoming clearer. There have been some big spikes in traffic ($141 worth on August 10th), as well as some quiet periods. At least now I can start to calculate some meaningful averages.
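The averaging itself is trivial once you have daily billing figures. In this sketch only the $141 spike is a real number from the bill; the other daily figures are made up for illustration:

```python
# Hypothetical daily costs in USD; only the 141.00 spike is a real figure
daily_costs = [0.40, 0.55, 141.00, 0.30, 2.10, 0.00, 0.75]

avg_per_day = sum(daily_costs) / len(daily_costs)
monthly_estimate = avg_per_day * 30  # rough projection

print(f"avg/day: ${avg_per_day:.2f}, est/month: ${monthly_estimate:.2f}")
```

With traffic this bursty, a single viral day dominates the mean, so the median may be a more honest baseline, with spikes budgeted separately.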

I have a bit of time to figure out funding for the future, since my Google Cloud Platform credit grant runs until January. The looming question is how much it will cost to fund the project after that.

Posted in budgeting, google, journal recommender | Comments closed

On scaling

I’ve been using PythonAnywhere to host web projects for some time now, and while I am very happy with the service, one of its weak points is scalability. This is especially problematic when trying to handle unpredictable spikes in traffic. While there are plenty of platforms that will scale seamlessly along with your traffic, PA is not one of these. With PA, you statically set the number of web workers that will serve up your pages. Not only is this number fixed, but it is the same across all of your various projects. This is less than ideal. I haven’t yet been burned by not having enough web workers, although this seems quite possible. On the other hand, I also don’t want to waste money by buying idle resources. For now – or until I can think of something better – my solution is to keep an eye on it and try to strike the right balance.

Posted in pythonanywhere | Comments closed


Recently, CUNY libraries migrated to Alma, our new library services platform. Alma is a pretty mighty piece of software; it can manage many, many library functions. Given how much it does, it amazes me that it works. I would love to look at the codebase for a bit, just to get a sense of how it is organized.

My experience as a user has so far been pretty good. Some things have confused me, although once I’ve learned their purpose, they have proved useful. I like how it brings disparate library workflows together and makes them work in concert. Getting it all figured out will be a journey, but it seems like a positive step.

Posted in alma | Comments closed

Access to readings during remote instruction

Our students are facing a potential textbook crisis this fall. Many may not even realize it yet. But with in-person library services potentially greatly curtailed, one crucial source of textbooks – the library reserve desk – may not be readily available to our students.

The librarians are working to mitigate this crisis on several fronts. But some of the most effective remedies can come from faculty themselves. Here are some strategies we recommend:

  • Assign readings from the library’s electronic collections. These are already paid for, and are free to our students.
  • Make readings available on Blackboard. As long as you are vigilant about avoiding copyright violations, this can be an effective strategy.
  • Use open educational resources (OERs) as course textbooks. This is the best long-term solution. OERs can be reused and adapted by anyone, and can be a very effective solution to prohibitive textbook costs.

Please don’t stick your students with huge textbook bills this fall! Take steps to reduce textbook costs while planning your reading list.

Posted in textbooks | Comments closed