In praise of buying low-quality books

I buy a lot of really bad books on Amazon. They’re minor books, on fringe topics, and they’re usually well out of date. They’re also usually incredibly cheap, like two or three dollars, or some such. And while they’re generally pretty uninteresting, they can often offer a citation to support a minor point in a literature review. (Citations don’t always need to be to authoritative sources. Citing an obscure work can also help make an argument, in a different way.)

So while I use these books for my research, I don’t really want to own a copy. But I buy them anyway, because Amazon is often the most effective way to get hold of them. These books are usually such trash that most libraries that might once have held a copy have long since discarded them. Spending $3 on Amazon is less of a headache than trying to coordinate an interlibrary loan (ILL) from a distant library. Leaving aside the obvious privilege of making such frivolous purchases, and the ethics of buying from a company like Amazon, buying trash books is really a pretty good solution.

Posted in books, research | Comments closed

Copilot

So I set up Copilot last night. Copilot is GitHub’s AI assistant for writing code. I don’t think I’d pay the list price of $10/month for this type of service, but it’s free to anyone with a GitHub educator account, which was enough to prompt me to try it.

Needless to say, I am suspicious. But ~44% of programmers reportedly use AI tools in their work, and I’m curious what (if anything) they find valuable about them. I’m not far enough into it yet to offer an informed opinion, but I do intend to post here again once I’ve thought this through a bit.

Lastly, getting this working was a bit of a journey. Debian Bullseye did not have a new enough version of neovim. To install a newer version of neovim, I had to install cmake. To install cmake, I had to install a library called gettext. All of this took quite a while, but it was worth it because I can try Copilot in vim. Asking me to use a different IDE would have been a non-starter.

Posted in ai, copilot | Comments closed

DX and vue in libguides

While I was very optimistic about Vue Single File Components (SFCs) a few posts ago, I’ve been having some trouble implementing them in LibGuides. While you can upload whatever JavaScript you want to a LibGuides group (as “Customization Files”), I’m beginning to realize that the LibGuides interface really wasn’t meant to accommodate an SFC workflow. In terms of developer experience (DX), attempting to use SFCs in LibGuides is like trying to fit a square peg into a round hole. While you’d ultimately want to automate your JavaScript build process, LibGuides actively works against this. You’re left moving a lot of files around manually, which is not ideal. It gets confusing.

Nonetheless, I learned a lot about Vue components as I slowly figured this out. I was able to take the methods I wrote for my SFCs and re-implement them in regular old components while drawing upon Vue from a CDN. LibGuides is much more suited to this approach. You lose scoped CSS and some nice build features, but as far as I can tell, that’s about it. I can keep my components in the Customization Files, and cut out the build process entirely. It’s the best approach I’ve found so far to integrate Vue into LibGuides.
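To sketch what this looks like: with Vue loaded from a CDN, a component is just a plain options object that you register globally. Everything below — the component name, the template id, and the data fields — is invented for illustration, not taken from the actual LibGuides code.

```javascript
// An illustrative component defined as a plain options object; no build
// step required. With Vue loaded from a CDN you would register it via
// Vue.createApp({}).component('hours-widget', HoursWidget).
// All names here are hypothetical.
const HoursWidget = {
  // The template can live in the LibGuides page HTML as an in-DOM template.
  template: '#hours-widget-template',
  data() {
    return { open: '9am', close: '5pm' };
  },
  methods: {
    summary() {
      return `Open ${this.open} to ${this.close}`;
    },
  },
};
```

The trade-off, as noted above, is that the component’s CSS is no longer scoped — styles go back to living at the page level.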

Posted in javascript, libguides, vue | Comments closed

Minimum viable website

In my last post I talked about pushing back on the complexity of JavaScript frameworks. Now I’m thinking about taking this further. In the name of maintainability, I think I am going to make an alternate version of our library page: a “minimum viable website”. I’ll strip out as much of the JavaScript as possible. I’ll simplify the CSS. I’ll format it nicely with prettier, then I’ll stick it in a LibGuides group (and on GitHub) as a backup, in case for some reason no one is around to maintain the current production page. This way, if someone needs to roll back to a technically very simple page, there will be something for them to turn to.

Posted in css, homepage, javascript, libguides | Comments closed

Pushing forward; pushing back

Lately, I’ve been trying to push forward with best practices for our library webpage. This has meant moving toward Vue and moving away from jQuery. The result is a more modern site. While a lot of the changes that I’ve made recently have been invisible to the end user, there has been a lot of progress behind the scenes on modernizing our code.

But I’m also pushing back. I know we can’t get into a situation where the site is unmaintainable by another librarian, so I’ve been trying to reduce complexity. Pushing back against complexity is necessary in the face of the (seemingly inevitable?) complexity of many JavaScript frameworks.

Pushing back can be strategic: we can pick the parts of a framework that help us and discard the ones that just make things unwieldy. There’s no need for an “all-in” mentality on a framework; a selective approach is likely more suitable for a small department like ours.

Posted in javascript, vue | Comments closed

In praise of single file components

One of the recommended ways to organize Vue code is to use Single File Components, and I have to say, they are wonderful. Instead of the traditional separation of concerns into HTML, CSS and JS, Single File Components allow you to separate your project into chunks that are intuitive to you. You get to decide what you want to make into a “component”. The component is written as a single file with a “.vue” extension and all of the relevant bits get included. As a bonus, the component gets its own scope, which you did not have when everything was dumped into the global namespace.
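For illustration, a minimal SFC might look like the following. The component name and its contents are invented, not taken from our site; the point is just that the template, logic, and scoped styles all live together in one file.

```vue
<!-- HoursWidget.vue — a hypothetical component. The template, the script,
     and the styles for this one piece of the page all live in this file. -->
<template>
  <p class="hours">Open {{ open }} to {{ close }}</p>
</template>

<script>
export default {
  data() {
    return { open: '9am', close: '5pm' };
  },
};
</script>

<style scoped>
/* "scoped" keeps these rules from leaking to the rest of the page */
.hours { font-weight: bold; }
</style>
```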

My hope is that using SFCs will also improve the maintainability of our library website. I am hoping that a librarian who is new to our codebase will more easily be able to make edits, because they can focus on a specific component that they want to modify, and they will not have to worry about what is going on with the rest of the page. SFCs provide a more modular approach, which I think is valuable.

So now I’m in the process of breaking down our page into SFCs. This will be fairly easy to do for the JavaScript, because there is not too much of it, but a bit more onerous for the CSS. The benefits of scoped CSS are obvious, however, and I am committed to getting us there.

Posted in vue | Comments closed

Up to date

There is more than one way to simplify code. You could, for example, adopt new abstractions that make things that were previously difficult easier to implement. Of course this requires learning new things, and it requires pushing yourself a bit to adapt.

Or you could stay simple by not adapting. By using the same tools you’ve always used, you could reduce the amount of mental effort you need to put into a new project. This is simplicity of a sort, too.[1]

The dilemma is salient for librarians. Do we use the old ways, so that more or less anyone in the department can maintain our web stuff, or do we try to stay up to speed with modern tooling, which many librarians will not touch? Both of these approaches have risks. Life will obviously not stand still, but we need to decide if, as a department, we’re going to keep up with the leading edge or the trailing edge of web technologies.

Anyhow, these problems raise important questions. I don’t have the answers. I do have my opinions, but I am very cognizant that not everyone will agree with me.

[1] There are probably other ways to simplify programming, but I’ll leave those aside for now.

Posted in libraries | Comments closed

May is library infrastructure month

In advance of a very busy summer, I have some free time right now to focus on the library website. So I decided that May is Library Infrastructure Month, and I have been going about on Mastodon pretending like this is actually a thing.

It’s not entirely a joke. Academic libraries are infrastructural projects through and through. Academic librarians make sure that students, faculty and others get the building blocks they need for their academic endeavors. But academic libraries aren’t usually thought of as infrastructure, except in the frustratingly old-timey sense that a library is a building to hold books. Yet library websites are infrastructure, discovery services are infrastructure, ILL is infrastructure, metadata is infrastructure, etc.

Those who build this infrastructure know that their work is foundational, but much of it goes mostly unseen by the academic community. Let’s add a bit of visibility this year by celebrating Library Infrastructure Month this May.

Posted in infrastructure, libraries | Comments closed

I successfully did a javascript build!

For a long time, I’ve been intimidated by JavaScript builds. They have a reputation for being complicated, difficult to understand, and unforgiving. This tracks with my past experiences. I have been burned before, so my approach has been to avoid builds, and just use script tags like it is 2005. Yes, it is possible to do this.

A blog post by Julia Evans offered a way forward. She describes her experiences with a simpler build tool called esbuild. Following Julia’s lead, I gave esbuild a try with some Kingsborough Library Vue code. It opened up more doors than I expected. Using a build for Vue allowed me to take advantage of Vue’s single-file component system, so I could make my code more modular and access nice features like scoped CSS. Most importantly, the build itself just worked! No impenetrable errors, no mysterious behaviors. It behaved as expected. This is a low bar, but I was thrilled.
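As a rough sketch of how small an esbuild setup can be: esbuild’s JavaScript API takes a single options object. The file paths below are placeholders, and bundling .vue single-file components additionally requires a Vue plugin for esbuild, which I’m omitting here.

```javascript
// Minimal esbuild options sketch. Entry and output paths are illustrative,
// not from our actual setup. With esbuild installed, you would run:
//   require('esbuild').build(buildOptions);
// (Handling .vue files also needs an esbuild Vue plugin, omitted here.)
const buildOptions = {
  entryPoints: ['src/main.js'], // where the bundle starts
  bundle: true,                 // follow imports and emit a single file
  outfile: 'dist/main.js',      // the one built artifact to upload
};
```

The appeal is that this is the entire configuration — no separate config file format to learn.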

I don’t think I trust esbuild enough to use it on the production library homepage yet, but I will keep experimenting with it. So far it is promising.

Posted in javascript | Comments closed

A tentative use case for machine learning in academic libraries

Being a subject selector in an academic library is pretty repetitive. I’m basically applying the same selection criteria to different materials over and over again. In my specific role, I’m almost always looking for books (and ebooks) that are for lower-division undergraduates/general readers; that are from reputable academic presses; and that fall within the subject in question. I have some other more subtle requirements that I won’t attempt to explain here, but the point is that I’m applying these same criteria every single time.
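To make the repetitiveness concrete, here is a toy version of that kind of filter written as hand-coded rules. Every field name and press name below is invented; the idea is that a trained model would, in effect, replace these hard-coded checks with learned weights.

```javascript
// Toy, hand-coded selection filter — the kind of repetitive criteria a
// model might learn. All field names and press names are hypothetical.
const reputablePresses = ['Oxford', 'Cambridge', 'Chicago'];

function matchesCriteria(book) {
  const targetAudiences = ['lower-division', 'general'];
  return targetAudiences.includes(book.audience) &&
         reputablePresses.some(p => book.publisher.includes(p)) &&
         book.subject === 'psychology'; // the selector's subject area
}
```

The limits of this rule-based version are exactly the “subtle requirements” mentioned above: the criteria that resist being written down are the ones a model would have to learn by watching.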

Perhaps we could train a machine learning model to do this work. If it watched me select books long enough, I bet such a model could even pick up on the finer nuances by which I’m choosing titles. It could then probably do a great job of applying those criteria. The relatively constrained nature of the task might make it a feasible problem to solve.

Librarians would probably want to train the model with their own preferences for their subject areas. A human-in-the-loop approach might be sensible. Maybe the team at ProQuest’s OASIS (for example) would be willing to build us such a feature?

Posted in acquisitions, ai | Comments closed