At our DH 101 session, we had the great pleasure of learning from Miriam Posner, Coordinator and Core Faculty, Digital Humanities Program, University of California, Los Angeles. This workshop turned out to be a particularly reflective, even philosophical one. Miriam is interested in uncovering the typically unexamined actions, practices, assumptions, and decisions made over the course of a digital humanities project. She urged us to be more open and reflective when we talk and write about our projects, to explain the assumptions in our work and help our readers/users understand how and why decisions were made.

Here is Miriam’s DH101: A Highly Opinionated Resource Guide with links to all the resources discussed today and then some.

“What is DH?”

Miriam’s own preferred definition is “the use of digital tools to explore humanities questions.” She says “explore” rather than “answer” because she doesn’t want to be overly positivist and claim that digital methods give us one single interpretation of any humanities question. Miriam shared a list of project types—exhibit, digital edition, map, data visualization, text analysis, 3D imaging, multimedia narrative, timeline—and said that once you have a data set these can also be combined or layered.

When you’re considering a digital project, think about “sources, processes, and presentation.”

  • Sources: files, images, text, numbers, artifacts, etc.
  • Processes: what you do to the sources, for example organize, edit, enhance, digitize, quantify, etc.
  • Presentation: visualized, mapped, made searchable or interactive, made web-accessible, etc.
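As a toy illustration of that pipeline, a few lines of Python can carry some “sources” through a “process” step to a simple text “presentation.” (The catalog entries and their layout below are invented for illustration, not drawn from any real collection.)

```python
from collections import Counter

# Sources: a few raw catalog entries (invented for illustration).
sources = [
    "Letter, 1892, New York",
    "Letter, 1901, Boston",
    "Photograph, 1905, Boston",
    "Diary, 1898, New York",
]

# Process: quantify -- extract the decade from each entry's date field.
decades = Counter(entry.split(", ")[1][:3] + "0s" for entry in sources)

# Presentation: a tiny text "visualization" of items per decade.
for decade, n in sorted(decades.items()):
    print(f"{decade} {'#' * n}")
```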

We looked at examples of completed digital humanities projects, which can seem like “black boxes,” and asked, “How did they make that?” Miriam showed us how to read about and investigate a project to understand how it was constructed, emphasizing the importance of making decisions thoughtfully. Miriam created How Did They Make That? to expose and explain the methods and technologies behind the digital humanities projects presented on the site.


Data categorization is reductive and may not reflect the lived experience of the people the data describes. Miriam gave as an example National Geographic’s The Changing Face of America, which presents photographs of people who self-identify as multiracial. The flexibility with which these individuals describe their own multiracial identity conflicts with the rigid and limited choices offered by the US Census categories.

To illustrate how reductive metadata can be, we downloaded the metadata for the photographs in the Charles W. Cushman Photograph Collection at Indiana University. We then looked at the photos themselves and reflected on what the metadata can’t capture and what assumptions or perspectives are encoded in it.

We then uploaded the Cushman metadata to Google Fusion Tables and explored many of the visualization options (maps, charts, etc.) to look at the data. (Note: staff at NYU Libraries Data Services can help you clean and visualize your data).
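If you’d rather explore collection metadata programmatically, here is a minimal Python sketch of the same kind of first look. The field names and rows are invented stand-ins, not the Cushman collection’s actual schema.

```python
import csv
import io
from collections import Counter

# Hypothetical excerpt of photo metadata; the columns and values are
# illustrative only, not the real Cushman fields.
sample = io.StringIO(
    "Description,City,State,Date\n"
    "Street scene,Chicago,Illinois,1941-10-03\n"
    "Harbor view,San Francisco,California,1952-06-17\n"
    "Street scene,Chicago,Illinois,1949-05-21\n"
)

rows = list(csv.DictReader(sample))

# A first exploratory question: how many photos per city?
by_city = Counter(row["City"] for row in rows)
for city, count in by_city.most_common():
    print(f"{city}: {count}")
```

For a real collection you would point `csv.DictReader` at the downloaded metadata file instead of the in-memory sample.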

Text Analysis

As an introduction to text analysis, we explored the sample texts and tools available in Voyant Tools. Voyant includes a word-cloud tool, keyword-in-context views, word-frequency visualizations, a customizable stopword list, the ability to load and compare multiple data sets, and more. For output, you can create a link to your data within the tool, export your data to another analysis or visualization tool, download your analyzed data, etc. If you like this tool but want more control over the environment and your texts, you can download Voyant and run it on your own computer.
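Two of Voyant’s core operations, word frequencies filtered through a stopword list and keyword-in-context, are easy to sketch in plain Python. The toy text and stopword list below are our own, chosen just for the demonstration.

```python
from collections import Counter

# A toy corpus; in practice you would load your own text files.
text = """It was the best of times, it was the worst of times,
it was the age of wisdom, it was the age of foolishness"""

# A customizable stopword list, like Voyant's.
stopwords = {"it", "was", "the", "of"}

tokens = [w.strip(",.").lower() for w in text.split()]

# Word frequencies, with stopwords filtered out.
freq = Counter(w for w in tokens if w not in stopwords)
print(freq.most_common(3))

def kwic(tokens, keyword, window=2):
    """Keyword in context: each hit with `window` words on either side."""
    return [
        " ".join(tokens[max(0, i - window):i + window + 1])
        for i, t in enumerate(tokens)
        if t == keyword
    ]

print(kwic(tokens, "age"))
```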

We touched on topic modeling but didn’t get any hands-on experience. Instead we discussed our qualms about topic modeling, which seemed to some of us an opaque process. Miriam suggested giving the aptly named Topic Modeling Tool a try.

Network Analysis

The basic process for a network analysis is to specify a question, find data that captures the relations you want to depict, specify your nodes and edges, explore and analyze your data, and interpret your results. Like all data analysis, this is a highly iterative activity.
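Those steps can be sketched in a few lines of Python. The survey relationships below are invented for illustration; real tools like Gephi compute many richer measures, but even a simple degree count follows the same question–data–nodes–analysis–interpretation arc.

```python
from collections import Counter

# Question: who is most central in a (made-up) survey of who talks to whom?
# Data: each pair is an undirected edge between two respondents (the nodes).
edges = [
    ("Ada", "Ben"), ("Ada", "Cleo"), ("Ben", "Cleo"),
    ("Cleo", "Dev"), ("Dev", "Elif"),
]

nodes = {person for edge in edges for person in edge}

# Analyze: degree = number of connections, a simple first measure.
degree = Counter()
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

# Interpret: Cleo bridges the Ada-Ben cluster and the Dev-Elif pair.
print(sorted(nodes))
print(degree.most_common())
```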

We downloaded sample data from a survey and used Raw to visualize the relationships among the people surveyed. We then used Gephi, which Miriam warned us is a bit buggy, especially on a Mac. In fact, some of us couldn’t even get it to open on our Macs! If you are having this problem, this blog post might help: How to fix Gephi on Mac OS & Windows.

We wound down the day by sharing what we plan to do with our new knowledge and skills.

To learn more about what notable scholars are doing in digital humanities, attend one of our upcoming public events:

♦ Miriam Posner on Head-and-Shoulder-Hunting in the Americas: Lobotomy Photographs and the Visual Culture of Psychiatry

Date: Thursday, May 28, 2015
Time: 1:00pm – 2:30pm
Location: Avery Fisher Center, Avery Room, 2nd Floor, Bobst Library

♦ Mark Algee-Hewett on Data and the Critical Process: Knowledge Creation in the Digital Humanities

Date: Thursday, June 4, 2015
Time: 1:00pm – 2:30pm
Location: Avery Fisher Center, Avery Room, 2nd Floor, Bobst Library

♦ Jennifer Guiliano on Humanities Infrastructure versus the Digital Humanities: Confronting the Legacies of Intellectual Property, Resources, and Labor in the Academy

Date: Tuesday, June 9, 2015
Time: 1:00pm – 2:30pm
Location: Avery Fisher Center, Avery Room, 2nd Floor, Bobst Library


This workshop was part of the spring 2015 Polonsky Foundation Graduate Student Workshops in Digital Humanities: Tools and Methods. Visit the NYU Libraries Digital Scholarship Services website and blog to learn about our services. To contact us, fill out our appointment request form or email us. We look forward to helping you with your digital projects.