On indexing

Before working at CUNY, I occasionally made back-of-the-book indexes for books in religious studies, anthropology, and gender studies. Indexing is fun, though very time-consuming, work. It doesn’t pay much, but it’s gratifying and interesting.

Indexing is a field with a lot of potential: building conceptual maps of book-length texts is, in my opinion, very useful work. But the profession has its problems, and many of them have to do with technology. Most indexers rely on niche software tools that are easy to use but often de-skilling. Our dependence on these tools contributes to the perception that indexing is low-value work.

Rather than rely on vendors, indexers could build tools to support their own work. We could write and open-source code to move the profession forward. To make this point with a concrete example, I built a sample index visualization tool:

Screenshot of the index visualizer
Click the image to try it out.

The point is not that this tool is particularly sophisticated (it’s not), nor that it’s particularly novel (also not). Its purpose is rhetorical: it was made by a (former) indexer and open sourced for other indexers. It’s a very modest effort, but it suggests that if indexers prioritized open source work, we’d be much further along in developing our tools and our skill sets.
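To make that suggestion slightly more concrete, here is a minimal sketch (in Python, and not the visualizer’s actual code) of one building block such tools need: parsing a back-of-the-book index into structured data that a visualization could use. The entry format here is a hypothetical simplification; real indexes have subentries and cross-references, which this ignores.

    import re

    def parse_index(lines):
        """Map each heading to the page numbers in its locator string."""
        entries = {}
        for line in lines:
            heading, _, locators = line.partition(",")
            pages = [int(n) for n in re.findall(r"\d+", locators)]
            if pages:
                entries[heading.strip()] = pages
        return entries

    sample = ["kinship, 12, 45-47", "ritual, 3, 12, 88"]
    print(parse_index(sample))
    # {'kinship': [12, 45, 47], 'ritual': [3, 12, 88]}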

Posted in indexing, open source | Comments closed

Building usability testing tools

My colleague Carlos and I have been doing some usability testing recently, and have built some of our own tools to make it happen.

We created a testing interface using JavaScript, which has largely been a success. The interface gives students tasks to complete, and workspaces in which to complete them. Reassuringly, students participating in our study seem to understand it intuitively, and are generally able to work through the tests with very little guidance. Developing this tool has been a positive experience, letting me level up my JavaScript skills and teaching me some of the language’s subtleties.

We also needed to track the data produced by the tests. After a failed attempt with Hotjar, a subscription-based analytics tool, we decided to do this ourselves too. I made a Flask application that processes and saves the parameters passed in the URL when a user completes a test.

To do this, the Flask application serves up a “Congratulations, you’ve completed this task” page and records data from the URL about how the user got there. Ultimately the data is appended to a CSV file and saved. Analyzing this CSV will still require a few more steps, but we’ll get there.
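The core of that pattern is simple. Here is a minimal Flask sketch along the lines described above; the route name, parameter handling, and CSV path are hypothetical stand-ins, not our actual code.

    import csv
    from datetime import datetime, timezone
    from flask import Flask, request

    app = Flask(__name__)
    LOG_PATH = "results.csv"  # hypothetical output file

    @app.route("/complete")  # hypothetical route name
    def complete():
        # Record a timestamp plus every query parameter passed in the URL.
        row = [datetime.now(timezone.utc).isoformat()]
        row += [f"{key}={value}" for key, value in request.args.items()]
        with open(LOG_PATH, "a", newline="") as f:
            csv.writer(f).writerow(row)
        return "Congratulations, you’ve completed this task!"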

My lack of patience with existing tools led me to put far more time into building these solutions than I initially expected. Nonetheless, because we built them ourselves, they met our needs very well. Besides, building them has been a good learning experience.

Posted in javascript, python, usability | Comments closed

Programming language matters

While it is probably true that you can learn to code in any programming language, lately I’ve felt that language choice is nonetheless important. The languages we learn affect the kind of work we end up doing further down the road. I’ve recently begun to notice how learning to code in Python has subtly shaped the kind of projects I work on. Things could easily have turned out differently.

There are probably a couple of reasons why language choice matters. Here are two from my experience:

  • In my programming journey, I’ve enjoyed the weekend afternoons I’ve spent in the company of the Learn Python NYC meetup group. That IRL community has been formative for me. People really matter.
  • Tools also matter. Python has excellent data science libraries, so I’m not surprised I headed further down the scientific Python rabbit hole. It makes sense to follow the most interesting parts of a language. Those interesting parts both make the language unique and shape our development.

So perhaps language choice sorts us in some way. Python has been interesting for me, but I’m also curious about what’s next. Language will influence the journey ahead.

Posted in language, python | Comments closed

Making an annotation tool at CodexHack

I spent last weekend at CODEX Hackathon working on a tool called LitRen. LitRen is meant to make ebooks editable and annotatable. The idea behind this project was that editable ebooks would help people who write fan fiction: fanfic authors could insert their ideas and stories into ebooks, or even modify the existing text as they like. With public domain books, entirely reimagined versions of the story could be created by modifying and expanding on the original text.

Making this annotation tool required some heavy lifting in JavaScript. There are some existing JavaScript annotation tools that we adapted to work with ebooks. As much of the JavaScript was over my head, my small contribution was to watch others code and to build a responsive landing page with Bootstrap.

By the end of the weekend, the basic annotation/editing tool was working, though a lot of supporting functionality remained unbuilt. That was fine; our goal was to have a proof of concept. Anyhow, it was a fun project and a great weekend!

Posted in ebooks, hackathon | Comments closed

Archiving with TCAT

For quite some time now, our library has been archiving tweets about our college using twarc. This has worked fine, so I hadn’t dug any deeper into the world of Twitter archiving tools until earlier this week, when my colleague Shawna Brandle approached me about using TCAT, the Twitter Capture and Analysis Toolset.
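For context, twarc-based archiving looks roughly like this. This is a minimal sketch assuming twarc 1.x’s Python API, with placeholder credentials; our actual archiving script differs in its details.

    import json
    from twarc import Twarc

    # Placeholder credentials; use your own Twitter API keys.
    t = Twarc("consumer_key", "consumer_secret",
              "access_token", "access_token_secret")

    # Append matching tweets to a JSON-lines archive file.
    with open("college-tweets.jsonl", "a") as archive:
        for tweet in t.search("kingsborough"):
            archive.write(json.dumps(tweet) + "\n")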

TCAT has a lot more packaging than twarc: installation scripts, a GUI, and extensive reports. Once it’s set up, it is a lot more user friendly than twarc. If you want a web-based Twitter archiving tool that doesn’t require any command line knowledge of its users, TCAT is a good choice. The biggest technical hurdle is getting TCAT running on a Linux (virtual) server; I set it up on AWS, with help from the good installation documentation.

TCAT offers a great opportunity to give your colleagues a tool to archive tweets. Beyond that, it provides a lot of ways to analyze and export collections of tweets. It’s got a lot more overhead — as well as more user-focused functionality — than twarc.

Posted in archives, tcat, twarc, twitter | Comments closed

Integrating open source projects in our library

Recently, our library considered adopting Augur, a CUNY-made open source program for tracking reference desk transactions. It’s a nice program that fills a very specific niche. We tested Augur at our library for a couple of weeks. Yet despite its niftiness, we didn’t implement it at Kingsborough, mainly because it added an unnecessary layer of complication to what is currently a very simple, manual process for keeping reference desk statistics.

I wasn’t too disappointed that the project didn’t go ahead. Manual reference desk tracking has been working fine at Kingsborough, and in some regards there is no reason to interfere with that workflow.

Yet evaluating Augur got me thinking about the value of open source projects for libraries. I found myself revisiting some of the old tropes of open source advocacy: projects like Augur can provide opportunities for us to expand our technical skill sets. They allow us to build collaboratively and to contribute actively to open projects. And so on.

Even though these issues are not new, they are still important in our libraries. Building our librarians’ skills is an important long-term goal, as is creating software that benefits libraries. So I hope that we can find opportunities to integrate open source tools that meet the needs of our librarians and our communities.

Posted in open source | Comments closed

I made my own altmetric

I’m waiting for one of my colleagues to lend me some books on bibliometrics. In the meantime, in my naïveté, I have created a metric[1].

My metric is not a terribly good one, though perhaps it is no worse than some other well-established ones. While it somewhat defensibly measures reach and productivity, my metric also fails on other fronts.

I’ve called my metric gh-index. It works on the same math as the h-index, which is fairly widely known: a researcher has an h-index of h if h of their papers have been cited at least h times each. I’ve translated that logic to GitHub stars. The (questionable) assumption is that GitHub stars are the open source software equivalent of academic citations.
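The calculation itself is short. Here is a minimal sketch of it against the public GitHub API; the deployed tool differs in its details, and a real version would need to page through the full repository list.

    import requests

    def gh_index(username):
        # Fetch the user's public repos (first page only, for brevity;
        # a real tool would follow pagination).
        url = f"https://api.github.com/users/{username}/repos"
        repos = requests.get(url, params={"per_page": 100}, timeout=10).json()
        stars = sorted((r["stargazers_count"] for r in repos), reverse=True)
        # gh-index: the largest h such that h repos have at least h stars each.
        h = 0
        while h < len(stars) and stars[h] >= h + 1:
            h += 1
        return h

    print(gh_index("octocat"))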

It’s a comparison that I find kind of interesting. To be clear, I’m not trying to make OSS contributions equivalent to academic citations. But my rhetorical point is that GitHub stars and scholarly citations are both hard-earned recognition, even though they represent very different types of labor.

So, you can calculate the gh-index of any GitHub user here. This web tool queries the GitHub API and parses the resulting data to make gh-index calculations. If you’re interested, you can also see the code here.

[1] Here is an example of a much more well thought out analysis.

Posted in git, metrics | Comments closed

DIY Twitter analytics

Our library uses Twitter (@kbcclibrary) to communicate with our students and faculty. Along with our tweeting, we rely on metrics to keep tabs on our Twitter presence. We get these metrics exclusively from free tools: Twitter’s native analytics page, as well as third-party sites like Tweetstats and the free version of (the unfortunately named) SocialBro.

I like Tweetstats because it is clearly the passion project of a single developer. For some time, I regularly used its “tweets by month” chart to make sure we were on track to meet our self-imposed monthly tweet targets.

But problems came up in December and January, when Tweetstats stopped working reliably. It became so inconsistent that it was unusable. When I learned that the Tweetstats creator was trying to sell the site, I basically gave up on the service. I reluctantly looked at other free tools that might offer a similar display of tweets per month, and was quickly reminded that the world of third-party Twitter analytics sites is pretty unappealing.

An obviously better solution was to build the functionality I needed myself. It would be a good programming challenge, and we’d end up with a home-grown analytics tool. Our library has built tools on the Twitter API before, so I didn’t need to start from scratch.

Creating a list of tweet dates from the API was not too difficult; what proved more challenging was producing a visual representation of this data. I imported pandas, numpy and matplotlib, all of which were unfamiliar to me. I spent a lot of time messing with pandas dataframes. In the end, the result was a visualization that looks like this:

tweets-by-month

It’s not pretty, but it is exactly what I needed.
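For the curious, the core of the pandas/matplotlib step looks something like this (a sketch with placeholder dates; the real script pulls its dates from the Twitter API):

    import pandas as pd
    import matplotlib.pyplot as plt

    # Placeholder dates; the real list comes from the Twitter API.
    tweet_dates = ["2016-01-04", "2016-01-12", "2016-02-03", "2016-02-25"]

    # One row per tweet, indexed by date, then resampled to monthly counts.
    tweets = pd.Series(1, index=pd.to_datetime(tweet_dates))
    monthly = tweets.resample("M").sum()

    monthly.plot(kind="bar", title="Tweets by month")
    plt.tight_layout()
    plt.show()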

Posted in twitter, visualization | Comments closed

On trying (and failing) to learn shell scripting

I tried to learn shell scripting in the summer of 2000. It seemed doable: isn’t it basically executing a bunch of shell commands in a row? I got some books, which I read half-heartedly, tried a few things, and then gave up. Shell scripting, which I had hoped would be the easiest entrée into programming, was too hard.

I think part of the problem was the undeveloped state of learn-to-code resources in the early 2000s. I wanted the hand-holding of resources like Codecademy and Treehouse to make those first steps, but if something like that existed in the summer of 2000, I never found it.

Also, I think my approach was conceptually unhelpful. Stringing a bunch of shell commands together does not make a programmer, and there were a lot of core programming concepts that I was ignoring entirely. In hindsight, it makes more sense to learn the core ideas first – variables, loops, functions, booleans, and so on – and then bring those concepts back to shell scripting.

With a bit of perspective from a year spent learning things like Python and JavaScript, shell scripting recently began to make much more sense. I now have some ksh scripts automating library processes, like restarting certain programs when needed or clearing out logs periodically. A shell script, triggered by cron, is much more reliable at doing this than I am, and our library projects benefit from that reliability. Unfortunately, I just took the very long road to finally being able to write those scripts.

Posted in ksh, shell | Comments closed

Visualizing library data

Using Twarc-Report, a tool made by Peter Binkley at the University of Alberta Libraries, I made some visualizations of our library’s archive of Twitter data. Here’s one of them:

trviz

This shows how the hashtags in various tweets about Kingsborough are related. You can see the full interactive version of that visualization here.

Neat, right? Twarc-Report builds visualizations from data captured by twarc. It uses d3.js, a JavaScript library for data-driven manipulation of the DOM. Twarc-Report does this nicely, and it prompted me to try something similar with some other library data.

The APIs for Primo, CUNY’s discovery layer, provide interesting data and metadata about searches. Using d3.js and Flask, a Python framework, I made a web tool to visualize some of this information. This tool takes the user’s search terms and parameters, makes an API call to Primo, and passes the resulting data to a d3.js script (adapted from here) to make the visualization. The whole thing produces something that looks like this:

primoviz

This is a visual rendering of where Kingsborough books with the keyword “president” appear in the Library of Congress classification. You can try the tool yourself here; the code is also on GitHub.
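For anyone wanting to build something similar, the Flask side of the tool follows a simple proxy-and-reshape pattern. Here is a minimal sketch; the Primo endpoint, parameters, and response fields below are hypothetical stand-ins, since the real details depend on your Primo installation.

    import requests
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    PRIMO_API = "https://example.edu/primo/v1/search"  # hypothetical endpoint

    @app.route("/viz-data")
    def viz_data():
        # Pass the user's search terms through to Primo...
        resp = requests.get(PRIMO_API,
                            params={"q": request.args.get("q", "")},
                            timeout=10)
        records = resp.json().get("docs", [])  # hypothetical response key
        # ...then reshape the results for the d3.js script on the front end:
        # count holdings by the first letter of the LC call number.
        counts = {}
        for record in records:
            call_number = record.get("callNumber", "")  # hypothetical field
            lc_class = call_number[:1]
            counts[lc_class] = counts.get(lc_class, 0) + 1
        return jsonify(counts)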

Posted in d3, twarc, visualization | Comments closed