Digital Humanities Abu Dhabi (DHAD) Conference
New York University Abu Dhabi
10-12 April 2017
#DHAD2017

Abstracts 

(in alphabetical order by first listed author)

 

From Alley to Landfill: Challenges of and Design Opportunities for Cleaning Dhaka’s Communal Trash
Abouzeid, Azza

NYU Abu Dhabi

Garbage is an endemic problem in developing cities due to the continual influx of migrants from rural areas coupled with deficient municipal capacity planning. In cities like Dhaka, open waste dumps contribute to the prevalence of disease, environmental contamination, catastrophic flooding, and deadly fires. Recent interest in the garbage problem has prompted cursory proposals to introduce technology solutions for mapping and fundraising. Yet, the role of technology and its potential benefits are unexplored in this large-scale problem. In this paper, we contribute to the understanding of the waste ecology in Dhaka and how the various actors acquire, perform, negotiate, and coordinate their roles. Within this context, we explore design opportunities for using computing technologies to support collaboration between waste pickers and residents of these communities. We find opportunities in the presence of technology and the absence of mechanisms to facilitate coordination of community funding and crowd work.

 

Spatial UAE
Sam Ball, Lucas Olscamp, Connor Pearce, Matt Sumner, Brittany Trilford, David Wrisley
NYU Abu Dhabi

This collective paper was written by the students enrolled in AHC AD 141 “Spatial Humanities” in the spring term at NYU Abu Dhabi.  It stems from a sequence of exercises we did in our seminar, including digitizing cartographic depictions of the cities of Dubai and Abu Dhabi in the 1960s and 1980s found in Frauke Heard-Bey’s From Trucial States to United Arab Emirates, as well as comparing them with other NYUAD special collections, a collection of 1974-78 Soviet maps of the UAE held in NYU’s Spatial Data Repository, and contemporary satellite data.  The purpose of the exercise was both to compare spatial representations across the sources and to analyze change over time in this relatively young, and quickly changing, country before, during, and after the federal unification of the seven Emirates in 1971.  In the paper, the students will briefly describe their results.

 

ض : Avenues for Digital Research Projects on Morocco
Ahmed, Sumayya (@likeimnothere)

UCL-Qatar 

Sample (2010) wrote that the “true power” of the digital humanities has nothing to do with either the production of tools or research. Instead, he argued that the “heart of the digital humanities is not the production of knowledge; it’s the reproduction of knowledge.” What we now call the Digital Humanities changes the ways in which we share knowledge. As an inherently interdisciplinary field, it changes its course as it flows through various humanities disciplines. When it makes its way to Middle East and North African Studies, it has (too) often stayed near the Eastern (mashriqī) bank, with the most notable projects focusing on the “central lands.” This paper, focusing on the content aspect of DH projects, draws on years of research on book history and manuscript culture in Morocco to propose some possible Maghribi (Moroccan, Andalusī, and general North African) material for future digital humanities projects.

 

Not a Single Bit in Common: Issues in Collating Digital Transcriptions of Ibn Rušd’s Writings in Multiple Languages (Arabic, Hebrew and Latin)
Barabucci, Gioele 
Cologne Center for eHumanities

Can computers help us collate hundreds of witnesses of texts written in Arabic, Hebrew and Latin? Collating witnesses is a fundamental task in any edition. For the Averroes edition, hundreds of transcriptions of Ibn Rušd’s writings will be collated using a computer-based workflow developed at the Cologne Center for eHumanities and based on CollateX. There is a catch, however. None of the existing collation algorithms can compare witnesses in different languages, let alone in different scripts. Finding a way to compare, collate and display witnesses of “almost the same text” in different languages is an open research question in computer science that can be addressed only by delving into profound philological issues. This talk will show and discuss the state of the art, the current limitations, as well as the future research directions in the field of computer-based interlingual collation.
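Monolingual collation tools such as CollateX work by aligning witnesses token by token. A minimal sketch of that alignment step (using Python’s standard-library difflib rather than CollateX itself; the function name and sample witnesses are illustrative) shows both how same-language witnesses are collated and why the approach breaks down across languages and scripts: with no shared tokens, the entire text becomes one undifferentiated variant.

```python
from difflib import SequenceMatcher

def collate(base, witness):
    """Align two witnesses token by token and classify each span as an
    agreement ('equal') or a variant ('replace'/'insert'/'delete')."""
    a, b = base.split(), witness.split()
    rows = []
    for op, i1, i2, j1, j2 in SequenceMatcher(None, a, b).get_opcodes():
        rows.append((op, " ".join(a[i1:i2]), " ".join(b[j1:j2])))
    return rows

# Two Latin witnesses align cleanly; a Latin and an Arabic witness of
# "almost the same text" would share no tokens at all, so every span
# would come back as a variant.
print(collate("in principio erat verbum", "in principio erat sermo"))
```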

 

Digital Humanities and Libraries in a Global Context: An Integrated Approach to Supporting Our Advanced Scholars
Coble, Zach and Beth Russell
NYU New York and NYU Abu Dhabi

Establishing support structures for digital scholarship services in academic libraries is becoming increasingly common. NYU Libraries has followed this movement by launching a Digital Scholarship Services (DSS) unit at Bobst Library in New York, and the Center for Digital Scholarship (CDS) at NYU Abu Dhabi Library. Both units have had to define what it means for them to help scholars explore the integration of digital tools and methodologies into their research and teaching. This presentation examines how these units were developed at their respective campuses; what questions they asked their community in order to shape services, staff and support structures; and how these departments have shifted and progressed in direct response to the needs of their scholars and the changing landscape of digital humanities research.

 

Arabic Collections Online
Danielson, Virginia and Beth Russell
NYU Abu Dhabi

Arabic Collections Online (ACO) is a groundbreaking digital library created and made available to the public worldwide by New York University Abu Dhabi. The collection will eventually comprise 25,000 digitized volumes drawn from distinguished research library collections in the U.S. and Middle East; to date we have digitized and posted over 5,000 of these volumes online, and made them freely available. Other than small, focused collections of scholarly Arabic literature and theology, there is very little Arabic language content openly available to the public. Originally conceived as an online library for readers and scholars, the potential for ACO to play a role in digital humanities research in Arabic has also become clear. To realize its potential, the rather nasty problem of OCR for Arabic will need to be solved. This presentation introduces ACO and outlines its present state and plans for its future development, including our recent conversations around Arabic OCR initiatives in order to allow scholars to work more closely with the texts.

 

Computational Linguistics for Identifying the Vorlage of Anciently Translated Texts into Arabic
Dannaoui, Elie
University of Balamand

Language changes over time and varies according to place and social setting. In the case of Arabic, we can observe grammatical variation, such as differences in the structure of words, phrases or sentences, by comparing the same translated text taken from different manuscripts. One of the major problems standing in the way of studying old Arabic translations is determining the original language from which a text was translated. This issue gains additional importance when the same text exists in more than one translation taken from multiple languages: Greek, Syriac, Coptic and Latin… Accordingly, the Arabic translation of a text could have reached us through a previous translation from another language. This paper presents the methodology adopted for a research project at the Digital Humanities Centre at the University of Balamand that aims at automating this process and enabling researchers to trace the original language from which old Arabic translations were made.  The project offers automated linguistic corpus processing features. All transcribed texts are subject to morphosyntactic annotation. Lexical, grammatical and inflectional properties (tense, grammatical mood, grammatical voice, aspect, person, number, gender and case) are associated with the annotated text. These linguistic properties allow the system to perform complex searches based on abstract representations of a specific word, sentence, paragraph, syntactic structure or occurrence.
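The kind of query such a morphosyntactically annotated corpus supports can be sketched as a filter over tokens annotated with property-value pairs. The property names below follow those listed in the abstract; the data, transliterations and function are illustrative, not the project’s actual pipeline.

```python
def search(tokens, **constraints):
    """Return the annotated tokens matching every given property,
    e.g. search(tokens, tense="perfect", voice="passive")."""
    return [t for t in tokens
            if all(t.get(k) == v for k, v in constraints.items())]

# Toy annotated tokens; real annotations would come from the project's
# morphosyntactic annotation stage.
tokens = [
    {"form": "kutiba", "lemma": "k-t-b", "tense": "perfect", "voice": "passive"},
    {"form": "kataba", "lemma": "k-t-b", "tense": "perfect", "voice": "active"},
]
print(search(tokens, voice="passive"))  # matches only the 'kutiba' token
```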

 

Practical Named Entity Recognition
Erdmann, Alexander
Ohio State U / NYU Abu Dhabi 

Named Entity Recognition (NER) involves automatically identifying certain classes of proper names in raw text, traditionally, persons, groups/organizations, and places. An upstream task facilitating downstream applications from machine translation to digital historiography, NER is typically treated as a black box and its output assumed to be sufficiently accurate. However, the digital historiographer who frequently deals with challenging, non-standard texts (in terms of domain, style, etc.) cannot rely on this assumption. Furthermore, he/she may not be well versed in Natural Language Processing technologies. In this work, I discuss a novel active learning strategy for adapting an NER model to a non-standard domain as well as the applications it was designed to facilitate, namely, inducing social networks of ancient peoples and tracking semantic change/drift in named entities mentioned in ancient texts. Crucially, this strategy requires no technical expertise from the user while allowing him/her to specify the desired level of accuracy before adapting the model. It then iterates through the active learning process, at each step predicting how accurately the updated model would be able to identify named entities in the non-standard domain until reaching the pre-specified level of accuracy, thus meeting the practical NER needs of the digital historiographer.
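The overall shape of such a loop can be sketched as generic uncertainty sampling: label the examples the current model is least confident about, retrain, and stop once the user’s pre-specified accuracy target is reached. This is a standard active-learning skeleton, not Erdmann’s specific strategy, and all function names here are illustrative.

```python
def active_learning_loop(pool, oracle, train, evaluate, target_acc, batch=10):
    """Uncertainty-sampling sketch. `oracle` stands in for the human
    annotator, `train` builds a model (with a .confidence method) from
    labeled data, and `evaluate` estimates held-out accuracy."""
    labeled = []
    model = train(labeled)
    while evaluate(model) < target_acc and pool:
        pool = sorted(pool, key=model.confidence)     # least confident first
        to_label, pool = pool[:batch], pool[batch:]
        labeled += [(x, oracle(x)) for x in to_label]  # ask the annotator
        model = train(labeled)                         # retrain on all labels
    return model, labeled
```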

 

 

The Contemporary Wayang Archive: Javanese Theatre as Data
Escobar Varela, Miguel (@miguelJogja)
National University of Singapore

The Contemporary Wayang Archive is a digital archive of contemporary versions of Javanese Wayang Kulit, one of the oldest performance traditions of Southeast Asia. The archive includes video recordings, transcriptions and scholarly translations with notes that can be watched and read online. However, the archive also treats its documents as data that can be re-purposed for other kinds of projects: stylometric analysis of the transcripts, network analysis of characters, Arduino-powered tangible interfaces, interactive scholarship platforms and video-processing for quantitative close analysis. This talk describes the creation of the archive and discusses the problems and opportunities of considering theatre as data.

On the Visual Complexity of Medieval Altarpieces
Falkenburg, Reindert, Godfried T. Toussaint, and Daniel Watson
NYU Abu Dhabi

Late-medieval sculpted altarpieces, some of them more than 12 meters high, and still in situ in central and northern-European churches, show a dense variety of architectural and vegetative motifs that form large, visually highly complex patterns framing and intersecting with the central (religious) figure. The structure of these altarpieces suggests that their complexity varies as a function of elevation. This paper describes computational experiments using various objective measures of visual image complexity performed with images of several medieval altarpieces to determine if and how their complexity depends on the height with respect to the ground. This work is part of a general digital humanities project to determine whether these altarpieces possess structural properties that may cause a viewer to experience motion illusions from static stimuli, and the resulting ramifications.

An Arabic Fiction Corpus to Develop Graded Readers
Familiar, Laila
NYU Abu Dhabi

Corpus-based approaches are emerging as essential paths towards improving the materials available for Teaching Arabic as a Foreign Language, and especially towards the design of efficient Graded Readers. One important step towards this end is building a multimillion-word corpus of contemporary Arabic fiction. With carefully designed criteria, the literary corpus currently under construction can provide reliable lexical frequency data and enable curriculum designers to develop effective Graded Readers and extensive online reading repositories. Well-designed literary corpora will also serve to build bridges with other disciplines such as lexicography, sociolinguistics and literary theory, opening the door to investigating unexplored terrains in the Humanities.
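The lexical frequency data behind grading can be sketched as corpus-wide counts; this toy version is illustrative only (in Python 3, `\w` matches Arabic letters, so it runs on Arabic text, though a real Arabic corpus would also need tokenization and lemmatization to cope with Arabic’s rich morphology).

```python
import re
from collections import Counter

def frequency_list(texts):
    """Corpus-wide lexical frequencies: the raw data used to decide which
    vocabulary belongs at which level of a graded reader."""
    counts = Counter()
    for text in texts:
        counts.update(re.findall(r"\w+", text.lower()))
    return counts.most_common()
```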

 

Computer Vision for Historical Document Image Analysis
Fornes, Alicia (@AliciaFornes)
Computer Vision Center, Autonomous University of Barcelona

Document Image Analysis and Recognition (DIAR) is an important field in Computer Vision whose aim is the automatic analysis of contents of document images (either printed or handwritten, textual or graphical), towards their recognition and understanding. Traditionally, DIAR has focused on the recognition of scanned document images, and has been instrumental in the development of key technologies such as Optical Character Recognition. In the last decades, the discipline has become a cornerstone technology for the preservation of cultural heritage. This paper will overview the typical DIAR processes (e.g. handwriting recognition, document classification, information spotting, graphics recognition, writer identification, etc.) and show how they have been applied to several Digital Humanities Projects.

Open Arabic Periodical Editions (OpenArabicPE) establishes a framework for and produces open, collaborative, and scholarly digital editions of early Arabic periodicals. Beginning with editions of Muḥammad Kurd ʿAlī’s Majallat al-Muqtabas and ʿAbd al-Qādir al-Iskandarānī’s al-Ḥaqāʾiq, OpenArabicPE aims to show that, by re-purposing well-established open software and by bridging the gap between popular, but non-academic online libraries of volunteers and academic scanning efforts as well as editorial expertise, one can produce scholarly editions that offer solutions for some of the problems pertinent to the preservation of and access to the early periodical press in the region: active destruction by war and cuts in funding for cultural heritage institutions; focus on digital imagery due to the absence of reliable OCR technologies for Arabic fonts; absence of reliable bibliographic metadata on the issue and article level; anonymous transcriptions of unknown quality; and slow and unreliable internet connections and old hardware. This paper will discuss the ideas behind and our experiences implementing OpenArabicPE.

In this presentation I will discuss the ongoing project Sounds of Sir Bani Yas, which consists in collecting sounds from different points of the island, the largest wildlife reserve in the Arabian Gulf.  The project started as a location recording exercise with the class “Designing Sound for Scene and Screen” and has gradually evolved into a soundscape monitoring project. The current goals of the project are to record, analyze, and preserve the soundscape of this protected site while developing innovative methodologies for environmental monitoring and soundscape analysis using current web streaming technologies and inexpensive location recording devices. After briefly describing the two forms the project has assumed, as a website and as an interactive installation, I will discuss some ideas about what can be learned from the evolution of the soundscape at a location and whether sound could, or should, be considered a form of heritage.

 

Computational Processing of Arabic for Digital Humanistic Research
Habash, Nizar
NYU Abu Dhabi

Arabic poses a number of challenges for researchers in the digital humanities.  Its orthography is highly ambiguous; its morphology is rich and complex; and it has a large number of dialects with very important differences from Standard Arabic.  Furthermore, many of the tools developed for Arabic focus on Modern Standard Arabic, and specifically on the news genre, which lowers the quality of these tools on other varieties of text and on the dialects.  We present a review of the challenges, as well as existing and ongoing research on solutions.

 

Analysis Beyond Analytics: Expanding the Digital Humanities through Cinema & Media Studies
Hassapopoulou, Marina  (@ephemeralmedia & @conferenceDH)
NYU New York

This talk approaches the Digital Humanities historically to analyze productive points of convergence between developments in process-oriented film theory/practice and computational tools. This historical grounding, inspired by the methodologies of early film theorist-practitioners such as Kuleshov, Vertov, and Eisenstein, aims to propose frameworks for prioritizing the “so what?” questions that sometimes become obscured by an over-emphasis on computational aspects. Advocating for both close and distant reading, the work of pre-digital (and early digital) scholar-practitioners from the field of Film and Media Studies offers productive insights into approaches that fuse conventional methods of studying time-based media with new modes of inquiry.

 

Beirut Publishes: Macroanalysis of a Century of Lebanese Publishing
Hawat, Mario (@kyraneth) and David Joseph Wrisley (@DJWrisley)
American University of Beirut
There is a received idea associated with Arab world cultural production: “Cairo writes, Beirut publishes (prints) and Baghdad reads.” This paper uses the metadata from various library collections around the world about books published in Beirut, Lebanon over the long twentieth century to trouble the mono-directional nature of historical print culture in the region. It also reflects on a corpus of books that shifts over time in both topic and language. National libraries around the world have not collected books published in Lebanon in a uniform fashion, nor have they tagged these books in the same ways. A macroanalysis of this data serves, therefore, as an index for the ways in which divergent discourses about multiconfessional Lebanon have been crafted. The uneven nature of the data from different collections, as well as the limitations of the metadata as an object of study, will be discussed as we attempt to paint a picture of this complex industry with broad strokes. The paper will present some significant results and anticipate future avenues for data collection, allowing us to tell a richer story about the industry.

 

Hughes, Lorna (@lornamhughes)
University of Glasgow
A significant investment has been made in digital collections by cultural heritage organisations. What are the key lessons for the digital humanities from the ecosystem of creation, curation, and use of digital heritage? Can we build on existing collaborations to bring curation and research practices closer together? How can digital heritage shape emerging areas for research and engagement in the digital humanities? This talk will draw on examples of a number of digital heritage projects and explore the potential of new collaborations with cultural heritage organisations as the basis for sustained engagement around an emerging critical framework for the digital humanities.

 

It has been almost thirty years since the launch of the World Wide Web. A little under twenty years since we started Google-ing for information. And a little more than ten years since Facebook put social media firmly on the map. During these swells of innovation that are the hallmarks of the fast-paced Information Age, society often finds itself both swept up in the euphoric and utopic possibility of glorious new technology and bristling with anxiety and uncertainty from the rapidity of change and electric pace of daily life. So, what do we do about it? How do we matter as scholars in this new era? How do we respond when the Jekyll and Hyde of our utopia and anxiety is further battered by the Mephistophelean intervention of media manipulation, data-veillance, and malicious hacking? We double down on our pedagogy, we preach the good word of digital fluency, we train our students to be impactful tomorrow not acquiescent to yesterday, and we choose projects that are righteous, meaningful, and engaged with the too oft-forgotten notion of the public good. And at the core is the digital medium. A mythical flow of ones and zeroes that is in fact real, material, and within our grasp to engage with, master, and deploy. This talk will be about thinking about all of this and thinking about the worth of the Digital Humanities in the year 2017.

 

NYU Abu Dhabi
Arabic suffers from a near complete lack of graded fiction readers based on a clear and systematic standard of simplification. Much is left to the opinions and experiences of the simplifier, which naturally differ from one person to another. The main objective of the Simplification of Arabic Masterpieces for Extensive Reading (SAMER) project is therefore to create a standard for the simplification of modern fiction in Arabic and to use this standard to simplify a number of Arabic fiction masterpieces for developing readers. To achieve this goal, the researchers have to design a Graded Reader Scale (GRS) on which to build their computational tools, a scale that weds textual readability to a fairly universally accepted staging scheme of the development of a learner’s reading ability. This naturally opens up a plethora of technical and educational questions relating to how to build the GRS, the definition of readability, how to measure it, the targeted learners, and the balance between art and science in the simplification process.

Staging Archival Affinities and Recombinant Performance in Scalar
Levine, Debra
NYU Abu Dhabi

Choreographer Trajal Harrell has created a repertoire of experimental archival performances that refer to his deep engagement with historical archives of performance. As a scholar of Harrell’s work, I look to the realm of the digital for its potential to represent, critically analyze and extend a somatic experience of this performative methodology, which I am calling “recombinant archival performance.”  In this talk, I will discuss my ongoing project to build a digital humanities scholarly platform using Scalar that has the capacity to “stage” the rich media materials (photographs, documentary footage, scripts, choreographic notes, production texts, programs, journals, sound recordings) produced by performing artists who articulate their work as an engagement with performance archives and as a methodology of archiving.  My intent is to work with Scalar’s capacity to visualize, exhibit and critically reflect upon the relationship between the contemporary performances that Harrell is making now, his methodologies of archival engagement, and the archives that have prompted his new work. For the past decade, Harrell and a number of performing artists have insisted that the labor of performing allows them to become archivists and embodied archives, not merely performers who revise and extend past repertoires.  This project explores the ethical, aesthetic, and political stakes in this claim and how DH platforms like Scalar, whose technology has been conceived to exhibit the interdependence between critical scholarship and the archives upon which the scholarship depends, can also performatively simulate those methodologies. I will be talking about my efforts to make Scalar serve as a virtual site for a collaborative and dialectical engagement between the artist, her or his artistic production, the archives from which performance research is drawn, and the critical reflection of performance scholarship.

 

Muehlhaeusler, Mark
American University of Cairo
Traditional tools for indexing — and searching — source materials in libraries are the catalogue, and its derivatives: OPACs, shelflists, finding aids, and (more recently) discovery layers. What all these tools have in common is that they are built on descriptions of discrete objects (books, images, letters, etc.), which are represented individually, or as lists in response to queries. Attributes selected from controlled vocabularies may imply linkages between related collection items, but the nature of these links is often not made explicit.  It is possible, however, to represent collections as a whole, in such a way as to highlight particular types of linkages between objects. For example, one can represent a group of images on a map to illustrate their spatial relation, or a historical development over time. One can also represent archival collections as social networks, in order to emphasize links between people. Examples of such representations already exist in various forms, but they are typically not set up to function as access tools. This talk presents a couple of prototypes that show how libraries can provide alternative means to search for objects in collections, and considers what this may mean for the research process.

 

Murray, Padmini
Srishti Institute of Art, Design and Technology
While digital humanities as a discipline comes of age in the Anglo-American academic establishment, it is still a relatively unfamiliar conceptual and disciplinary framework in India. Despite much excellent work in the region that falls under the rubric of the discipline, many of these projects do not define themselves as such, and my talk will discuss how embracing the term unreservedly might be problematic in our specific regional context. This assertion will build the foundation for a discussion of what we need to prioritise locally in order to ensure a contextually relevant and culturally specific digital humanities praxis for India.

 

Digital Libraries in Arabic Countries: Digitization Workflows
Nashed, Michael
Bibliotheca Alexandrina

In order to establish digital libraries in Arab countries, a great deal of initial investment is required. Developing online Arabic materials, as well as a wide range of universal information in Arabic, and facilitating their accessibility for Arab users may be challenging, but the beneficial outcomes are outstanding and will show significant results on the educational, economic, and social levels within the region. This talk will present the development phases of digitizing Arabic materials in the region, in addition to milestones and challenges for the Arabic library.

 

Digital Heritage Experiments: New Spaces, Old Sites
Parthesius, Robert 
NYU Abu Dhabi

Emerging research is challenging ideas surrounding what heritage is and how it is perceived. Heritage is no longer seen as a collection of “things” passed from one generation to the next but as a nuanced and dynamic process that reflects changing visions, values and understandings of the past. Heritage is a mutable phenomenon ascribed with value and meaning by contemporary societies. The Dhakira Heritage Centre’s HeritageLab at NYUAD is experimenting with technologies to provide a digital platform that empowers civil societies to become curators of their own heritage and custodians of their cultural memory.  Building on the experience of the rapidly developing field of Digital Humanities, the Heritage Lab envisions the use of both big data as well as more local and specific stories to allow people to tell the stories they want, experimenting further with opening up data collections on a local and global level and permitting perspectives using multiple scales of reference.

 

Digital tools allow for the combination of various layers of information referring to the same source, artefact, site or region; multiplex visualisations emerging from such overlays can already provide new insights into the complex interplay of factors. Tools of quantitative and network analysis, however, enable us not only to visualise, but also to create models to determine possible correlations and weigh potential impacts between socio-economic, cultural and environmental dynamics in a specific region across specific timespans.  This will be demonstrated with examples from the transition zone between ancient and medieval Afro-Eurasia.

 

Pue, Sean
Michigan State University

Hindi/Urdu began as two names for a common language written in two scripts. Over the nineteenth and twentieth centuries, it became attached to different religious and ultimately national communities. Many proponents of Hindi aimed to solidify the language’s relationship with Sanskrit; certain writers emphasized Urdu’s connections with Arabic and Persian; others valued colloquial usages and vocabularies shared with other regional languages, such as Panjabi. Through all these changes, the languages retained a measure of mutual intelligibility, particularly when spoken and heard. This paper considers how digital textual encoding can enable new discoveries and analyses of Hindi/Urdu poetry, both as text and as performance, from the disciplinary perspectives of natural language processing, informational retrieval, linguistics, literary criticism, sound studies, and public digital humanities.

 

 

In many ways, the 18th century can be seen as one of the last in a long line of “commonplace cultures” extending from Antiquity through the Renaissance and Early Modern periods. Recent scholarship has demonstrated that the various rhetorical, mnemonic, and authorial practices associated with commonplacing – the thematic organization of quotations and other passages for later recall and reuse – were highly effective strategies for dealing with the perceived “information overload” of the period. This paper describes recent efforts at identifying commonplaces and commonplace practices over the entirety of the Gale-Cengage Eighteenth Century Collections Online (ECCO) database, representing some 32 million pages of text. Given the size of this collection, as well as the state of the data in terms of its OCR output, identifying shared passages that exhibit the textual characteristics of commonplaces – e.g., relatively short, repeated, and rhetorically significant passages – has proven to be a non-trivial computational task, the challenges of which we will explore and discuss.
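One standard way to detect shared passages at this scale is word n-gram “shingling”: index every n-word window per document and keep the windows that recur across documents. The sketch below is illustrative only; the project’s actual pipeline must additionally cope with OCR noise and tens of millions of pages, which this naive in-memory index would not.

```python
import re
from collections import defaultdict

def shared_passages(docs, n=6):
    """Index word n-grams ('shingles') per document and return those that
    occur in more than one document: candidate commonplaces."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        words = re.findall(r"[a-z]+", text.lower())  # light OCR-tolerant cleanup
        for i in range(len(words) - n + 1):
            index[" ".join(words[i:i + n])].add(doc_id)
    return {shingle: ids for shingle, ids in index.items() if len(ids) > 1}

docs = {
    "a": "To be, or not to be, that is the question.",
    "b": "He copied 'to be or not to be' into his commonplace book.",
}
print(shared_passages(docs))  # the quotation surfaces as a shared shingle
```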

 

Arabic biographical collections constitute one of the most voluminous and unexplored genres in the Arabic literary tradition. They are particularly valuable as a source for the social history of the Islamic world, especially up until 1500 CE, before which we are often poorly served by documentary evidence. Numbered in the hundreds, biographical collections include hundreds to tens of thousands of biographies and thus are ideal for prosopographical research of any kind. Scholars have recognized the value of these texts for decades, but their sheer volume has posed a formidable challenge, so their potential has remained untapped. This paper offers an efficient method for studying these texts through algorithmic analysis, which is here understood as a step-by-step reduction of texts written in a natural language to machine-readable data, and exploratory techniques that rely heavily on the use of graphs, cartograms and networks to identify and interpret chronological, geographical and social patterns from these texts. While the method so far has been applied only to a small number of our texts, I will also lay out current work in progress aimed at the development of maintainable infrastructure that will facilitate the analysis not only of individual surviving texts, but of all of them taken together.

 

A View of Social Knowledge, Localised and at Scale
Siemens, Ray (@RayS6)
University of Victoria
My talk considers the notion of open social scholarship, and the way in which its framing of issues related to the production, accumulation, organization, retrieval, and navigation of knowledge encourages building knowledge to scale in Humanistic contexts.  Open social scholarship involves creating and disseminating research and research technologies to a broad audience of specialists and active non-specialists in ways that are accessible and significant. As a concept, it has grown from roots in open access and open scholarship movements, the digital humanities’ methodological commons and community of practice, contemporary online practices, and public-facing “citizen scholarship” to include i) developing, sharing, and implementing research in ways that consider the needs and interests of both academic specialists and communities beyond academia; ii) providing opportunities to co-create, interact with, and experience openly-available cultural data; iii) exploring, developing, and making public tools and technologies under open licenses to promote wide access, education, use, and repurposing; and iv) enabling productive dialogue between academics and non-academics.

 

 

Siskin, Clifford
NYU New York
The first part of my title is a phrase from David’s conference bullet points–now in the plural.  The mirror metaphor was Marshall McLuhan’s way of describing how “we march backwards into the future.”  I’ll be engaging the historicity of knowledge (before the humanities) and the digital (before the digital)–including the current shift from classical to quantum computation–to ask whether we are re-answering old questions (per Moretti) or posing new ones.  To describe my own sense of the new, I’ll discuss my work with the Cambridge Concept Lab and on a new taxonomy of “information” and “knowledge” with the physicists David Deutsch and Chiara Marletto.

 

 

Fully Automatic Algorithmic Generation of Musical Rhythms and its Applications
Toussaint, Godfried T. 
NYU Abu Dhabi

The most economical representation of a musical rhythm is as a binary sequence of symbols that represent sounds and silences, each of which has a duration of one unit of time. Such a representation is eminently suited to objective mathematical and computational analysis, while at the same time, and perhaps surprisingly, providing a rich enough structure to inform both music theory and music practice. A musical rhythm is considered “good” if it belongs to the repertoire of the musical tradition of some culture in the world, is used frequently as an ostinato or timeline, and has withstood the test of time. In this talk a very simple deterministic mathematical algorithm for generating musical rhythms is shown to generate almost all the traditional rhythms used in cultures all over the world, and thus has the capability to capture “goodness.” Its applications to music composition, as well as to other areas such as visual pattern design and analysis and the planning of leap years in calendar design, are outlined.

Further Reading: G. T. Toussaint, The Geometry of Musical Rhythm, Chapman & Hall/CRC Press, 2013.
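The abstract does not name the generating algorithm, but the book cited above treats the Euclidean rhythm generator, which distributes k onsets as evenly as possible over n time units; the sketch below assumes that algorithm, using a Bresenham-style formulation that produces Euclidean rhythms up to rotation:

```python
def euclidean_rhythm(k, n):
    """Distribute k onsets as evenly as possible over n time units.

    Returns a binary sequence as a list: 1 = onset (sound), 0 = rest
    (silence), each lasting one unit of time. An onset falls at step i
    exactly when the running product i*k crosses a multiple of n.
    """
    return [1 if (i * k) % n < k else 0 for i in range(n)]

# E(3, 8): the Cuban tresillo pattern, x..x..x.
print(euclidean_rhythm(3, 8))  # [1, 0, 0, 1, 0, 0, 1, 0]
```

Varying k and n reproduces many traditional timelines (up to rotation), which is the sense in which such a simple deterministic procedure can capture rhythmic "goodness."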

 

Developing a Digital Infrastructure for Chinese and other East Asian Languages
De Weerdt, Hilde (@hild_de)
Leiden University 

I will introduce the thinking behind and the functionality of MARKUS, a digital text analysis platform designed for the customized automated annotation of Chinese texts and the analysis and visualization of the resulting data. I will first discuss the main functionality of the MARKUS platform including the automated and manual mark-up of default named entities and user-generated tags, keyword discovery, genre-specific mark-up, close reading tools including linked reference materials, data export, and data exploration in the associated data visualization interface VISUS. We will also explore more recent features including text import from external text databases, an automated learning module in which MARKUS can be trained to obtain higher accuracy and greater recall, and a forthcoming module on text comparison called COMPARATIVUS. The presentation will also cover the ways in which we are continuing to develop MARKUS to facilitate research and teaching in a variety of Asian languages and across disciplines. MARKUS and VISUS were designed by Brent Ho and Hilde De Weerdt at Leiden University.

 

Why Use the TEI Framework for Linguistic Annotation?
Witt, Andreas
Institut für Deutsche Sprache/University of Cologne

From early on, the TEI Guidelines have included an inventory of encoding options for linguistic applications. Indeed, the Association for Computational Linguistics was one of the three founding organizations of the TEI enterprise, and the well-known British National Corpus is an early example that demonstrates the usefulness of the TEI for linguistic purposes. The TEI makes it possible to annotate basic grammatical and semantic information, either in a simple way (inline) or by using more complex mechanisms, such as feature structures. Its applicability is not limited to written language, since modules exist for the transcription of spoken language and even for computer-mediated communication. However, within linguistics, and especially within computational linguistics, the TEI is in many cases not the encoding scheme of choice. The TEI special interest group “TEI for Linguists” (LingSIG), co-convened by the speaker, aims to make the TEI even more competitive in the area of linguistic annotation frameworks.
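As a minimal illustration of the simple inline option, a sentence can carry word-level grammatical annotation directly on TEI elements; the sketch below assumes the part-of-speech and lemma attributes of the TEI att.linguistic attribute class on `<w>`, and the sentence and tag values are invented for illustration:

```xml
<s>
  <w pos="DET" lemma="the">The</w>
  <w pos="NN"  lemma="corpus">corpus</w>
  <w pos="VBZ" lemma="grow">grows</w>
  <pc>.</pc>
</s>
```

Richer analyses, such as layered morphosyntactic descriptions, would instead use the feature-structure mechanism mentioned above.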

 

NYU Abu Dhabi
This paper investigates how we might use text reuse algorithms to think about intertextuality in medieval poetic narrative. The problem of intertextuality is particularly complex in medieval text traditions, since variance is simultaneously linguistic, literary, and performative, a phenomenon that Zumthor named mouvance. In this paper, I describe an experiment carried out with a researcher in data visualization to create dynamic synoptic readings of computationally aligned texts from different genres of medieval French (the chanson de geste, the saint’s life, romance, and the fabliau). I discuss what these readings taught us about the behavior of medieval verse across manuscript witnesses, and I argue that visualization is more than a product or an output: as a medium of user-driven discovery, it can contribute to rethinking old problems in literary history.
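Computational alignment of variant witnesses can be sketched with a standard sequence-alignment utility; the two witness lines below are adapted from the opening of the Chanson de Roland purely for illustration, and this is not necessarily the alignment method used in the experiment described:

```python
from difflib import SequenceMatcher

# Two hypothetical manuscript witnesses of the same verse line, tokenized.
witness_a = "Carles li reis nostre emperere magnes".split()
witness_b = "Carles li reis l emperere magnes".split()

# get_opcodes() labels each aligned span of tokens as 'equal', 'replace',
# 'insert', or 'delete' -- the raw material for a synoptic display.
matcher = SequenceMatcher(a=witness_a, b=witness_b)
for op, a0, a1, b0, b1 in matcher.get_opcodes():
    print(op, witness_a[a0:a1], witness_b[b0:b1])
```

A synoptic visualization would then render the 'equal' spans in a shared column and set the 'replace', 'insert', and 'delete' spans side by side, making mouvance visible at a glance.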

 

Akkasah (IG, FB) is a Center for Photography at New York University Abu Dhabi dedicated to establishing a research-active archive of photography from the Middle East and North Africa. The Center focuses in particular on historical vernacular and documentary photography. The presentation will describe the processes of collecting, archiving, and preservation at the Center and outline the Center’s broader project.