I made my own altmetric

I’m waiting for one of my colleagues to lend me some books on bibliometrics. However, in the meantime, in my naïveté, I have created a metric[1].

My metric is not a terribly good one, though perhaps it is no worse than some other well-established ones. While it somewhat defensibly measures reach and productivity, my metric also fails on other fronts.

I’ve called my metric gh-index. It works on the same math as h-index, which is fairly widely known. I’ve translated the logic of h-index to evaluate GitHub stars. The (questionable) assumption is that GitHub stars are the open source software equivalent to academic citations.
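For the curious, the math is easy to sketch. Here is the h-index logic applied to star counts, in a few lines of Python (a minimal sketch, not the actual gh-index code):

    def gh_index(star_counts):
        """Largest h such that h repos have at least h stars each."""
        stars = sorted(star_counts, reverse=True)
        h = 0
        while h < len(stars) and stars[h] >= h + 1:
            h += 1
        return h

    # a user whose repos have 50, 9, 5, 3 and 1 stars scores a 3:
    # three repos with at least 3 stars each
    print(gh_index([50, 9, 5, 3, 1]))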

It’s a comparison that is kind of interesting. To be clear, I’m not trying to make OSS contributions equivalent to academic citations. But my rhetorical point is that GitHub stars and scholarly citations are both hard-earned recognition, even though they represent very different types of labor.

So, you can calculate the gh-index of any GitHub user here.  This web tool queries the GitHub API, and parses the resulting data to make gh-index calculations. Also, if you’re interested, you can see the code here.
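Behind the scenes, the tool just pages through a user’s public repos and pulls out the star counts. A rough sketch of that query against the public GitHub REST API (again, a sketch, not the tool’s actual code):

    import requests

    def fetch_star_counts(username):
        """Collect star counts for a user's public repos, page by page."""
        counts, page = [], 1
        while True:
            r = requests.get("https://api.github.com/users/%s/repos" % username,
                             params={"per_page": 100, "page": page})
            r.raise_for_status()  # note: unauthenticated calls are rate-limited
            repos = r.json()
            if not repos:
                break
            counts += [repo["stargazers_count"] for repo in repos]
            page += 1
        return counts

    # combined with the gh_index() sketch above:
    # print(gh_index(fetch_star_counts("some-github-user")))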

[1] Here is an example of a much more well thought out analysis.

Posted in git, metrics

DIY Twitter analytics

Our library uses Twitter (@kbcclibrary) to communicate with our students and faculty. Along with our tweeting, we rely on metrics to keep tabs on our Twitter presence. We get these metrics exclusively from free tools: the native Twitter Analytics page, as well as third party analytics sites like Tweetstats and the free version of (the unfortunately named) SocialBro.

I like Tweetstats, because it is clearly the passion project of one developer. For some time, I regularly used Tweetstats’ “tweets by month” chart to make sure we were on track to meet our self-imposed monthly tweet targets.

But problems came up in December and January when Tweetstats stopped working reliably. It became so inconsistent that it was unusable. When I learned that the Tweetstats creator was trying to sell the site, I basically gave up on the service. I reluctantly looked at other free tools that might offer a similar display of tweets per month, and was quickly reminded that the world of third party Twitter analytics sites is pretty unappealing.

The obvious solution was to build the functionality I needed myself. It would be a good programming challenge, and we’d end up with a home-grown analytics tool. Our library has built tools on the Twitter API before, so I didn’t need to start from scratch.

Creating a list of tweet dates from the API was not too difficult; what proved more challenging was producing a visual representation of this data. I imported pandas, numpy and matplotlib, all of which were unfamiliar to me. I spent a lot of time messing with pandas dataframes. In the end, the result was a visualization that looks like this:

[Figure: bar chart of our tweets per month]
It’s not pretty, but it is exactly what I needed.
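The core of the script is short. Here is a minimal sketch of the approach, with made-up sample dates standing in for the timestamps pulled from the Twitter API:

    import pandas as pd
    import matplotlib.pyplot as plt

    # stand-ins for the created_at strings that come back from the API
    tweet_dates = ["Mon Feb 02 15:30:00 +0000 2015",
                   "Tue Feb 10 09:12:00 +0000 2015",
                   "Wed Mar 04 18:45:00 +0000 2015"]

    dates = pd.to_datetime(pd.Series(tweet_dates),
                           format="%a %b %d %H:%M:%S %z %Y")
    # count tweets per (year, month) and draw a bar chart
    by_month = dates.groupby([dates.dt.year, dates.dt.month]).size()
    by_month.plot(kind="bar")
    plt.ylabel("tweets")
    plt.show()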

Posted in twitter, visualization

On trying (and failing) to learn shell scripting

I tried to learn shell scripting in the summer of 2000. It seemed doable: isn’t it basically executing a bunch of shell commands in a row? I got some books, which I read half-heartedly, tried a few things, and then gave up. Shell scripting, which I had hoped would be the easiest entrée into programming, was too hard.

I think part of the problem was the undeveloped state of learn-to-code resources in the early 2000s. I wanted the hand-holding of resources like Codecademy and Treehouse to make those first steps, but if something like that existed in the summer of 2000, I never found it.

Also, I think my approach was conceptually unhelpful. Stringing a bunch of shell commands together does not make a programmer. There were a lot of core programming concepts that I was ignoring entirely. In hindsight, I think it makes more sense to learn core ideas – like variables, loops, functions, booleans, and so on – and bring those concepts back to shell scripting.

With a bit of perspective from time spent learning things like python and javascript in the past year, shell scripting recently began to make much more sense. I now have some ksh scripts automating library processes: restarting certain programs when needed, or clearing out logs periodically. A shell script, triggered by cron, is much more reliable at doing this than I am. Our library projects benefit from this reliability. Unfortunately, I just took the very long road to finally being able to write those scripts.

Posted in ksh, shell

Visualizing library data

Using Twarc-Report, a tool made by Peter Binkley at the University of Alberta Libraries, I made some visualizations of our library’s archive of twitter data. Here’s one of them:

[Figure: network graph of related hashtags in tweets about Kingsborough]
This shows how the hashtags in various tweets about Kingsborough are related. You can see the full interactive version of that visualization here.

Neat, right? Twarc-Report builds visualizations based on data captured by Twarc. It uses d3.js, a javascript library for data-driven manipulation of the DOM. Twarc-Report uses it to good effect, and it prompted me to try something similar with some other library data.

The APIs for Primo, CUNY’s discovery layer, provide interesting data and metadata about searches. Using d3.js and Flask, a Python framework, I made a web tool to visualize some of this information. This tool takes the user’s search terms and parameters, makes an API call to Primo, and passes the resulting data to a d3.js script (adapted from here) to make the visualization. The whole thing produces something that looks like this:

[Figure: visualization of where search results fall in the Library of Congress classification]
This is a visual rendering of where Kingsborough books with the keyword “president” appear in the Library of Congress classification. You can try the tool yourself here; the code is also on GitHub.
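For anyone curious about the plumbing, the Flask side is thin: one route relays the search to Primo and hands the response to a template containing the d3.js script. A stripped-down sketch (the Primo URL here is a placeholder, real X-Services calls need institution-specific parameters, and this is not the tool’s exact code):

    import requests
    from flask import Flask, render_template, request

    app = Flask(__name__)

    # placeholder -- every Primo installation has its own X-Services URL
    PRIMO_API = "http://primo.example.edu/PrimoWebServices/xservice/search/brief"

    @app.route("/viz")
    def viz():
        # relay the user's search terms to the Primo API
        r = requests.get(PRIMO_API, params={
            "query": "any,contains," + request.args.get("q", ""),
            "json": "true"})  # real calls also need institution/location params
        # the template's d3.js script renders the returned data
        return render_template("viz.html", results=r.json())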

Posted in d3, twarc, visualization

The many uses of Git

Git is version control and collaboration software. It’s initially unintuitive and takes some time to learn (command line!), but it’s also powerful, broadly useful and generally awesome. I wish more librarians used Git because of the benefits it could bring to our collaborations.

Git is closely related to GitHub, which makes it possible to share Git repositories much more broadly. Git and GitHub are mostly used for coding projects, but librarians have used them to share lesson plans and to write peer-reviewed articles. (Stephen Zweibel helpfully pointed out to me that academics can get free private GitHub repositories.)

This past semester, I used Git to keep track of my lesson plans. This was useful because Git can divide projects into distinct “branches”, which allow you to work on different variations of the project separately from the “master” branch. I created a “master” lesson plan for my library instruction sessions at the start of this semester, and divided and sub-divided it into individual branches for each class that I taught. Git kept track of all of the changes and variations.
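A typical session looked something like this (the branch names are hypothetical):

    git checkout master               # start from the master lesson plan
    git checkout -b eng24             # a branch for one course...
    git checkout -b eng24-section3    # ...sub-divided for one section
    git log --all --oneline           # review every variation at once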

There are a number of places to learn Git. Here in New York City, METRO and the LACUNY Emerging Technologies Committee have recently held workshops on Git. Sometimes groups on Meetup.com have sessions devoted to Git. The Atlassian tutorials are really useful for figuring out the nitty-gritty. And of course you can learn Git on GitHub itself, with step-by-step tutorials here and here.

Posted in git

Archiving tweets about Kingsborough

Last year, I heard about a python program called Twarc, which was developed by Ed Summers, a software developer at the University of Maryland, to capture and archive tweets. Back in August, Ed demonstrated the effectiveness of this tool by capturing over 13 million tweets about events in Ferguson, MO as they unfolded over the course of 17 days. He blogged about the process here. Twarc brought an archivist’s collecting impulse to events that were happening very quickly, which was not only a brilliant idea, but captured valuable data as well.

The value I saw in Ed’s tool, however, did not have much to do with current events, but rather with my own college. Kingsborough has thousands of students, who are tweeting all the time about their classes, their commutes and the food in the cafeteria, among many, many other things. The immediately evident value of Twarc for me is that it can systematically and continuously archive tweets about Kingsborough. There are obvious benefits to having this kind of archive. Twarc can listen to all of Twitter, all of the time, in a way that no single librarian can, no matter how enthusiastically they use Twitter’s advanced search.

Twarc is a python command line tool that requires Twitter API keys. Registering for the Twitter API is fairly easy. Moreover, Ed has documented Twarc quite well, so that not much more is needed to use it than basic knowledge of the command line. After spending some time trying out different Twarc searches, I settled on:

    twarc.py --stream 'kbcc kingsborough',\#kbcc,\#kingsborough >> results.json

Basically this will listen to Twitter continuously for tweets that mention both the words “kingsborough” and “kbcc”, or that contain the hashtags “#kbcc” or “#kingsborough”. The results are continuously appended to the end of the results.json file. The JSON output is not particularly human-readable, but when run through a JSON visualizer, or through some of the utility scripts provided with Twarc, it turns out to be quite detailed and interesting.
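One nice property: Twarc writes one JSON object per line, so the archive is easy to pick apart with a few lines of Python (a minimal sketch):

    import json

    with open("results.json") as f:
        for line in f:
            tweet = json.loads(line)  # one full tweet object per line
            print(tweet["created_at"], tweet["text"])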

For our library’s purposes, I want Twarc to be running all of the time, so I run it in tmux on a server where I have a shell account. I also wrote a shell script (periodically triggered by cron) that re-starts Twarc if it is stopped for some reason, such as the server being rebooted. As a result, since I got everything working in February, I have basically been able to leave Twarc running unattended, while it continues to archive tweets about my college.
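The watchdog itself is tiny. Something along these lines captures the idea (a sketch, not my exact script; paths and session names will vary):

    #!/bin/ksh
    # if twarc isn't running (e.g. after a reboot), restart it inside tmux;
    # cron runs this check every few minutes
    if ! pgrep -f twarc.py > /dev/null; then
        tmux new-session -d -s twarc \
            "twarc.py --stream 'kbcc kingsborough',\#kbcc,\#kingsborough >> results.json"
    fi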

The tweets archived by Twarc will hopefully end up in the Kingsborough archive, but I think they have other value as well. I hope to post soon about some of the other things we’ve done with the twitter data that we’ve gathered.

Posted in archives, twarc, twitter