Funding the Digital Humanities: A conversation with Mr. Brett Bobley

“adicel20”, a student enrolled in this semester’s ENG 5998 reading group, reflects on the types of advice offered by the Director of the Office of Digital Humanities:

On April 8, 2015, the Digital Scholars Group at Florida State University had the opportunity to converse with Mr. Brett Bobley, Chief Information Officer of the National Endowment for the Humanities (NEH) and Director of the Office of Digital Humanities. Mr. Bobley joined us via teleconference and gave us a few tips on the most important criteria the NEH uses when deciding whom to award grants. The discussion centered on the Digital Humanities grant programs, which include the DH Start-Up Grants, the DH Implementation Grants, the Humanities Open Book Program, and the Institutes for Advanced Topics in the Digital Humanities. The DH Start-Up Grants program, Mr. Bobley told us, typically rewards “new, innovative, and interesting ideas,” while the DH Implementation Grants support projects that have already gone through a successful start-up phase and proved their feasibility. The Humanities Open Book Program, Mr. Bobley explained, is more of a “digitization program”: it seeks to find a larger audience for scholarly, out-of-print humanities books by providing open, online access to them under a Creative Commons license. The summer training grants are a very popular option for scholars and graduate students interested in advancing their knowledge of the digital humanities; they offer the opportunity to study for a few days or for several weeks at the various locations that host the Institutes for Advanced Topics in the Digital Humanities.

Answering questions from the audience, Mr. Bobley remarked that the NEH generally looks for two main components in each application: the intellectual aspect of the proposal (“the right idea” or “the great idea”) and the technological support for the project. In discussing the second aspect, he emphasized the importance of having the “right team,” the appropriate “human infrastructure” for the project, even if that requires teaming up with a scholar from another institution. He also recommends that proposals be written for a general audience and avoid highly specific jargon and terminology, since the peer review panel is a diverse team of humanities scholars, computer scientists, information scientists, and librarians. And although the NEH typically looks for projects that have “an immediate impact” on society and the general public, it also supports ideas “that have potential down the road,” innovative approaches that lead to interesting discoveries over time.

We thank Mr. Bobley for his very informative talk, and we hope his tips and suggestions will help many scholars write successful grant applications to the National Endowment for the Humanities (NEH).

Money for Medievalists?: Questions about Digital Humanities Funding

The scholarly method or scholarship is the body of principles and practices used by scholars to make their claims about the world as valid and trustworthy as possible, and to make them known to the scholarly public. It is the methods that systemically advance the teaching, research, and practice of a given scholarly or academic field of study through rigorous inquiry. Scholarship is noted by its significance to its particular profession, and is creative, can be documented, can be replicated or elaborated, and can be and is peer-reviewed through various methods. (Wikipedia)

Given the above definition of scholarship, is it really necessary to question whether work in the digital humanities qualifies? In the first section of Stephen Ramsay and Geoffrey Rockwell’s “Developing Things: Notes toward an Epistemology of Building in the Digital Humanities,” they tackle this question anyway, comparing digital humanities – by its attributes – to other activities:

“People in DH will sometimes point to the high level of technical skill required, to the intellectual nature of the pursuit, and to the plain fact that technical projects usually entail an enormous amount of work. But even as these arguments are advanced, the detractions seem obvious. Repairing cars requires a high level of technical skill; the intellectual nature of chess is beyond dispute; mining coal is backbreaking work. No one confuses these activities with scholarship.”

I think one of the most important aspects of the provided definition of scholarship, and one that is overlooked by Ramsay and Rockwell’s comparisons, is that of “rigorous inquiry.” The collaboration so common to digital humanities projects points to this factor, as efforts to pose and answer ever bigger questions call for the expertise of individuals in other fields and with other abilities.

Some might wonder why these questions are important; scholars have been creating new modes and methods of research for centuries, collaboration and the incorporation of technology are nothing new to academia, and questions about what “is” or “isn’t” scholarship simply distract from more important topics. But these questions become all-important when it comes to one issue: funding.

Yes, funding: That impossible possibility, the dreamy realm of investigation in which the project is all-important rather than side work, the opportunity of full investment in your ideas. And someone is responsible for deciding which projects warrant this academic bliss. In the case of tomorrow’s meeting, we’ll hear from Mr. Brett Bobley, Chief Information Officer of the NEH and Director of the Office of Digital Humanities. This videoconferenced interaction will permit members of our group to learn about funded projects and the NEH’s approach to digital humanities when it comes to determining fundability.

As a medievalist, I must say that this opportunity is quite appealing. If I can’t convince my parents that the study of medieval texts and culture is worthwhile, how will I ever persuade any funding entity that my work can fulfill a pressing need of contemporary society? Reading the articles associated with this meeting did little to allay my fears, as I read that the NEH has an agency-wide initiative – related to its efforts to digitize scholarly works in the humanities – targeting “associated projects [which] will frame the contemporary study of humanities through a series of questions on such matters as technology, security, biomedical issues, recent wars and conflicts, the country’s changing demographics, and increasing political polarization” (Peet). Is there space for my scholarship here?

Of course, as an academic in need of project funding, I could point you to recent discoveries about medieval medical treatments that have been applicable to today (MRSA, anyone?), but I’m inclined to push against this utilitarian approach to scholarship. Must a project be immediately and tangibly useful for it to be valuable?

It might be this emphasis on usefulness that allows certain projects to thrive. For example, inquiries into the history of medicine appear to warrant funding, as they demonstrate how responses to outbreaks or other public health issues impact the resolution of those issues. One project – funded by the NEH – traces the publications and documents surrounding the 1918 pandemic of Spanish Influenza in the United States, especially as they relate to Royal S. Copeland, New York City’s health commissioner at the time.

The one reading that resolved this anxiety about the usefulness of my own line of study was the interview of Mr. Bobley by Michael Gavin and Kathleen Marie Smith, published in 2009. In the interview, Bobley’s description of the types of digital humanities projects that they’re looking for felt a little bit closer to home: “In all seriousness, though, we’re looking for innovative projects that demonstrate how technology can be brought to bear on a humanities problem and, ultimately, yield great scholarship for use by a variety of audiences, whether it be scholars, students in a formal classroom setting, or the interested public” (Gavin and Smith). This is a very different kind of emphasis on usefulness, one that acknowledges that use may not have an outcome beyond altering scholarly approaches to teaching, thinking, or writing about a particular issue. But those alterations, useful to us at least, can be groundbreaking and field-changing work.

And as many of the articles reminded us, Classics departments have been incorporating digital tools with great success.

RCSDuke


Gavin, Michael, and Kathleen M. Smith. “An Interview with Brett Bobley.” Debates in the Digital Humanities (2012): n. pag. Web. 04 Apr. 2015. <http://dhdebates.gc.cuny.edu/debates/text/49>.

Howard, Jennifer. “Big-Data Project on 1918 Flu Reflects Key Role of Humanists.” The Chronicle of Higher Education. The Chronicle of Higher Education, 27 Feb. 2015. Web. 04 Apr. 2015. <http://chronicle.com/article/Big-Data-Project-on-1918-Flu/190457/>.

Peet, Lisa. “NEH, Mellon Foundation’s Humanities Open Book Program to Revive Backlist Work.” Library Journal. Library Journal, 4 Mar. 2015. Web. 04 Apr. 2015. <http://lj.libraryjournal.com/2015/03/digital-resources/neh-mellon-foundations-humanities-open-book-program-to-revive-backlist-work/#_>.

Ramsay, Stephen, and Geoffrey Rockwell. “Developing Things: Notes toward an Epistemology of Building in the Digital Humanities.” Debates in the Digital Humanities (2012): n. pag. Web. 04 Apr. 2015. <http://dhdebates.gc.cuny.edu/debates/text/11>.

Funding the Digital Humanities

Wednesday, April 8, 2:00-3:15 pm
Williams Building 013 (English Common Room, Basement Level)

“Because democracy demands wisdom”: Funding the Digital Humanities

Among its many functions, the National Endowment for the Humanities (NEH) sponsors 38 award types as part of its prestigious annual grants program, at least 6 of which explicitly accommodate work in the digital humanities, many of them intended to develop digital projects from prototype to proof of concept. Now in its 50th year of funding proposals that promote excellence in the humanities, the NEH continues to offer new programs at the convergence of curating, constructing, and critiquing – three activities or postures that the digital humanities value. (See, for example, the new Humanities Open Book Program, which uses low-cost “ebook” technology to digitize and make available scholarly works that are not currently in the public domain.)

For our final Digital Scholars meeting of the year, we will be joined via videoconference by Mr. Brett Bobley, Chief Information Officer of the NEH and Director of the Office of Digital Humanities. Mr. Bobley will discuss the importance of his office to the NEH’s public mission, share some of the unique projects the ODH has funded at various intersections of history and technology, and give us an opportunity to ask questions about the benefits of claiming digital disciplinarity and the challenges of identifying projects at the broad intersection of “digital” and other fields.

We may also consider differences between large-scale big-data projects and small-scale boutique projects, all of which help further the NEH’s mission to address important cultural changes underlying the work that humanities scholars do on their own, and in collaboration with scientists, librarians, museum staff, and members of the public. Finally, we may consider various paradigms that drive NEH funding or public-stream grant programs in general — including those ideas that move funding from object-oriented preservation toward open-access initiatives.

Participants are invited to read the following:

And to review:

  • Stephen Ramsay and Geoffrey Rockwell, “Developing Things: Notes toward an Epistemology of Building in the Digital Humanities,” Debates in the Digital Humanities, ed. Matthew K. Gold (online version: http://dhdebates.gc.cuny.edu/debates/text/11).

We hope you can join us,

-TSG

Cat Not Found: The Wonderful World of APIs

Rachel Stuart, a student enrolled in this semester’s ENG 5998 reading group, describes the function of Owen Mundy’s “Cat” site and responds to some assumptions about Data Visualization and Graphics Scripting:

In the era of Edward Snowden and “dickpics,” are any of us really surprised to hear that privacy no longer exists? Tech advances make it possible for enormous amounts of information to be collected, stored, and combed through for various purposes, including marketing and – in Owen Mundy’s case – art. We were fortunate to have Professor Mundy in person to talk us through the fundamental functionality of one of his creations: the web site I Know Where Your Cat Lives.

One of the first questions addressed by Mundy was how to define and categorize “big data,” a term that gets bandied about without any real limitation on its qualifications. He asked questions that got us thinking about this quandary: Is it big data when you have so much that it won’t fit on just one machine? Is it big data when it is incomprehensible? Is it big data when you need a tool to sift through it in order for any meaning to be extracted from it? It’s possible that the answer to all of these questions is YES, but Mundy pointed out that this is a relative concern; as time passes and our storage devices advance, big data will be redefined again and again.

But why is big data – or its accessibility – so scary and fascinating? Mundy started down this line of thought when he noticed that a picture of his daughter on Instagram could be “mapped” with the simple click of a button. He discovered that location data for Instagram pictures is collected whether or not you choose to have your image mapped, and that this location data is readily available to the average person. (While I say average person, I personally had no idea this was possible until I attended Mundy’s talk. Perhaps the average technical person, or anyone with enough curiosity?)
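Mundy didn’t show code for this, but the underlying principle is easy to demonstrate. Here is a minimal Python sketch – assuming a recent version of the Pillow library and a hypothetical photo file – that reads GPS coordinates straight out of a picture’s EXIF metadata (Instagram exposed location through its API rather than EXIF, but the idea is the same):

```python
# A minimal sketch (not from Mundy's talk) of how location hides in a photo:
# reading GPS coordinates from EXIF metadata with Pillow.
# "cat_photo.jpg" is a hypothetical file.
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def to_degrees(dms, ref):
    """Convert EXIF degrees/minutes/seconds to a signed decimal degree."""
    deg = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
    return -deg if ref in ("S", "W") else deg

exif = Image.open("cat_photo.jpg")._getexif() or {}
named = {TAGS.get(tag, tag): value for tag, value in exif.items()}
gps = {GPSTAGS.get(tag, tag): value
       for tag, value in named.get("GPSInfo", {}).items()}

if gps:
    lat = to_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"])
    lng = to_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"])
    print(f"This photo was taken at {lat:.5f}, {lng:.5f}")
else:
    print("No GPS data found.")
```

A few lines, and a cat picture becomes a street address.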

The web site that resulted went viral, and while the subject matter is interesting on its own, two of Mundy’s theoretical approaches to this instructive piece of art likely had a lot to do with its success. First, Mundy insists, art should be able to compete with television (at least his art should). Even though he is pointing to a very real, very scary problem in our society, the likelihood that the message will get through to an audience is lessened if the message is couched in a – and I quote – “duh, duh, DUH” – terrifying web page. Second, all people need is one button. Initially, Mundy set up a web page that allowed the user to sort through the cats categorically, but eventually he realized that only one button was necessary, as the site really had only one goal.

Another aspect of Mundy’s work that reveals his interest in the web page’s social impact is that he allows users to remove their cats from the map, even providing a link for the user to up their privacy settings on Instagram and make this sort of breach less likely in the future. This must be working, as over 40,000 cats have been removed. When they are removed, the picture is no longer there, but a signpost is left behind, reminding cat pic surfers that there is a point to all of this. Mundy provokes action with entertainment.


An Application Programming Interface (API) is a tool that provides data, functionality, or both – usually both. If you are as confused as I was when I first heard of an API: this is the comb or filter that digs through that big data for you, sorting out what you want to see from the stuff that doesn’t matter. APIs provide access to big data, but in some cases they also limit access. Facebook, for example, gives access to some data but restricts much of it (unless you are trying to sell something and want to pay, but that’s a whole other blog post).
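To make the “comb or filter” idea concrete, here is a hedged sketch of a query against the DPLA API mentioned in the list below. The API key is a placeholder you would request from the DPLA, and the `requests` library is assumed:

```python
# A minimal sketch of asking an API to comb big data for you, using the
# DPLA's v2 items endpoint. The API key is a hypothetical placeholder.
import requests

API_KEY = "your-dpla-api-key"  # request a real key from the DPLA

response = requests.get(
    "https://api.dp.la/v2/items",
    params={"q": "cats", "page_size": 5, "api_key": API_KEY},
)
response.raise_for_status()
results = response.json()

# The API has already filtered millions of records down to what we asked for.
print(f"{results['count']} matching items; showing {len(results['docs'])}:")
for doc in results["docs"]:
    print("-", doc.get("sourceResource", {}).get("title"))
```

The point is the shape of the exchange: you send a tiny question, and the API hands back only the slice of the big data you asked for.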

Some tools Mundy shared with us:

DPLA – Digital Public Library of America – Check out the apps page, where there are tons of funky API tools for sorting through the big data of the DPLA. You can create your own queries to serve your own needs.

Sunlight Foundation – An amazing effort to use APIs and public databases to encourage government transparency.

Give Me My Data – An application (built by Mundy himself on Facebook’s API) that pulls all of the information Facebook has collected about you, allowing you to keep it and, if you’d like, delete it.

Wikipedia CongressEdits – Uses an API to monitor Wikipedia’s most recent anonymous edits made from IP addresses assigned to the U.S. Congress and tweets the results. It is hilarious and also pretty damning.

I, unfortunately, had to step out to go teach before Professor Mundy really got into using JSFiddle, so if you attended and want to add that information in the comments, I’d really appreciate it. Thank you to Professor Mundy for this amazing learning experience.

RCSDuke

Animating the Data of Online Lives

Rachel Stuart, a student enrolled in this semester’s ENG 5998 reading group, reflects on some of the readings provided by Professor Mundy for our upcoming discussion on Data Visualization and Graphics Scripting:

This week, FSU’s Digital Scholars group has access to a participant in the kinds of projects that engage digital data’s proliferation in society. The linkage between data, information, culture, and art is made visible in the research and works created by individuals like Professor Owen Mundy of FSU’s Department of Art. Our speaker this week is the only person I know of who has had his work covered by entities like NPR and Vice, and can simultaneously boast that his name in a Google search bar is automatically paired with the word “cat.”


As technology has changed, two factors that greatly contribute to the need for thinkers like Mundy have grown: the proliferation of data, created and collected through digital tools and resources, and the ubiquity of our online lives, which borders on oversharing. Sepandar D. Kamvar and Jonathan Harris explore the connection between these factors, considering the ways that society at large records emotion via publicly posted social or blog media. Their project is called We Feel Fine, and these efforts go beyond creating an artistic representation of emotion as it exists online.

The tool that resulted is an emotional search engine – what Kamvar and Harris call “Experiential Data Visualization,” which provides “immersive item-level interaction with data” (1). Ultimately, We Feel Fine operates with an interface that invites users to play with data, to learn from universal experience, and to think about their own emotions within the context of this larger data sampling of emotion. It is simultaneously instructive and fun, which might be linked back to what it is doing to begin with: this tool takes data (objective and measurable counts of emotional mentions) and translates it into art (far more subjective and interactive, even hypothetical).
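Kamvar and Harris describe harvesting blog sentences that contain the words “feel” or “feeling”; a toy sketch of that step might look like the following (the emotion list and sample text here are invented for illustration):

```python
# A toy sketch of We Feel Fine's harvesting step as the paper describes it:
# scan text for "I feel" / "I am feeling" and pull out the named emotion.
import re

EMOTIONS = {"fine", "lonely", "happy", "anxious", "loved"}  # tiny sample list

text = ("I feel lonely tonight. The rain kept falling. "
        "Somehow I am feeling happy anyway.")

pattern = re.compile(r"\bI (?:feel|am feeling) (\w+)", re.IGNORECASE)

hits = [m.group(1).lower() for m in pattern.finditer(text)]
found = [w for w in hits if w in EMOTIONS]
print(found)  # ['lonely', 'happy']
```

Scaled up across millions of blog posts, counts of matches like these become the raw material the interface turns into swirling, colored bubbles.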

There is also a divide between the source material (data) and the end result of their efforts (the work) in their mobility; Kamvar and Harris even call the different approaches that a user can take to the information “movements.” The data does truly move, swirling and growing, trembling and falling as the user delineates how they want to experience the data. This “animation of data” relates back to a point made by Mitchell Whitelaw of the University of Canberra, in his article “Art Against Information: Case Studies in Data Practice.” According to Whitelaw, data becomes information when it is granted contextualization and organization – some might argue, when it is granted meaning. This “transubstantiation” of sorts collapses the gap between the data set and the data referent. In a beautiful moment of linguistic serendipity, the animation (from Latin animus, animi, “mind, soul, life force”) of data by Kamvar and Harris takes us beyond the numbers of individuals feeling anger or sympathy or ennui and connects us back to the soul of the individual behind the numbers.

What we don’t always recognize is that while data appears to be lifeless, objective, and harmless, the streams of data that occur online carry information useful to many individuals besides artists. Professor Mundy points to the accessibility of personal data in his “I Know Where Your Cat Lives” project, where images of cats are linked to the geographic information embedded when the image is uploaded. We create data. We create online trails of our lives that are trackable and mappable and that, in contrast to the social media records we curate, often are an accurate history of our lives both online and off. Mundy’s map of cats makes it clear how little privacy we have online, no matter how we may try to erase traces of our true selves.


In a sense, then, while these digital data projects often incorporate art as a means of communicating the informative aspect of data, there is an attempt to avoid artifice in the data communicated. In fact, Kamvar and Harris considered how to map emotions without granting positive or negative associations via the tool. They were careful not to rate these emotions, and built an interface that gives the same treatment to anger as it does to joy or embarrassment. In order to differentiate, however, they did color-code the emotions. (This does, in my opinion, imbue them with some kind of status. A bubble that is a sunny yellow is obviously preferable to one that is a muddy puce. Perhaps that’s just me?)


Kamvar, Sepandar D. and Jonathan Harris. “We Feel Fine and Searching the Emotional Web.” Web Search and Data Mining 2011. Hong Kong, China. 9-12 Feb. 2011.

Whitelaw, Mitchell. “Art Against Information: Case Studies in Data Practice.” The Fibreculture Journal 11 (2008): n. pag. Web. 16 May 2015.

Willis, Derek. “What the Internet Can See From Your Cat Pictures.” The New York Times. The New York Times, 22 July 2014. Web. 14 Mar. 2015.

Colors, Shapes, and Information: Finding “Meaning” in Large-Scale Digital Data Presentations

“yesstairway”, a student enrolled in this semester’s ENG 5998 reading group, reflects on some of the readings provided by Professor Mundy for our upcoming discussion on Data Visualization and Graphics Scripting:

The readings Professor Mundy has provided to orient our discussion next week are illuminating and thought-provoking. Two main trends dominate the overall through-line: how to organize and present large-scale information in a meaningful way for the “average” user, and (especially considering Prof. Mundy’s project) how the access to and use of said information will affect the way we live and view ourselves on a macro scale.

The most striking realization about the We Feel Fine and The Dumpster projects is their similarity in format. Both projects archive statements from blogs, online articles, and other websites that mention feelings or breakups, respectively. Both organize the data using balls whose color and size depend on the prevalence of their corresponding data. Finally, both attempt to present a representation of the state of mind of certain demographics (depending on how the user searches the database) or of humankind in general. In their overview, Kamvar and Harris stressed how users were emotionally impacted by We Feel Fine because they saw how many other people in the world experienced the same emotions and thoughts as they did. The projects represent an overall interest in finding meaning in the intangible aspects of the human experience (such as feelings and interpersonal relationships); they assume that chronicling and archiving relevant information is a step toward this higher comprehension.

However, as Whitelaw points out, the assumption is not totally correct:

Both aim to visualize and portray not merely data, but the personal, emotional reality the dataset refers to. […] This approach begs a dull (but necessary) critique: that these works do not provide an interface to feelings, or breakups, but texts that refer – or seem to refer – to them. […] These works rely on a long chain of signification: (reality); blog; data harvesting; data analysis; visualization; interface. Yet they maintain a strangely naive sense of unmediated presentation.

Of course it is not the feelings themselves being represented, but the texts which speak about them. In this sense, programs such as these are great tools not only for contemporary sociological data pooling, but for historical analysis and archiving as well. Take, for example, Dr. Hanley’s presentation to our group two weeks ago, in which he discussed his difficulties in (among other things) finding a way to effectively record and consider census information on 19th-century immigrants to the Mediterranean region. A visualization tool such as that used by We Feel Fine would be an interesting way for him to look at disparate data and locate trends. And, just as these modern programs do, uploading historical information would allow us to think about historical persons’ deeper socio-cultural mindset rather than simple quantifiable data. As the other projects discussed in Whitelaw’s article demonstrate, forming abstract art by feeding data into an algorithm has the potential to yield meaning deeper than what the data literally says.

Mundy’s rumination on Johannes Osterhoff’s Google showcase, in which individuals’ search queries are archived, brought the issue of privacy into the equation. He made the point that, especially online, we make decisions based on how we want to be perceived – yet in studying this desired perception, we inadvertently learn a little bit about how we really are. Perhaps that is the benefit of our information, with all of its insecurities and imperfections, becoming available for the world to see. Through showcasing the aggregate of a population’s online presence and production, a broader community can be formed. We sacrifice privacy for emotional security and empathy.

Works Cited:

Kamvar, Sepandar D. and Jonathan Harris. “We Feel Fine and Searching the Emotional Web.” Web Search and Data Mining 2011. Hong Kong, China. 9-12 Feb. 2011.

Mundy, Owen. “The Unconscious Performance of Identity: A Review of Johannes P. Osterhoff’s ‘Google.’” Rhizome. 22 Aug. 2012. Web. 16 May 2015.

Whitelaw, Mitchell. “Art Against Information: Case Studies in Data Practice.” The Fibreculture Journal 11 (2008): n. pag. Web. 16 May 2015.

Data Visualization and Graphics Scripting

Wednesday, March 25, 2:00-3:30 pm
Fine Arts Building (FAB) 320A [530 W. Call St. map]

“I Know Where Your Cat Lives”: The Process of Mapping Big Data for Inconspicuous Trends

Big Data culture has its supporters and its skeptics, but it can have critical or aesthetic value even for those who are ambivalent. How is it possible, for example, to consider data as more than information — as the performance of particular behaviors, the practice of communal ideals, and the ethic motivating new media displays — as both subject and material? Professor Owen Mundy from FSU’s College of Fine Arts invites us to take up these questions in a guided exploration of works of art that will highlight what he calls “inconspicuous trends.” Using the “I Know Where Your Cat Lives” project as a starting point, Professor Mundy will introduce us to the technical and design process for mapping big data in projects such as this one, showing us the various APIs (Application Programming Interfaces) that are constructed to support them and considering the various ways we might want to visualize their results.

This session offers a hands-on demonstration and is designed with a low barrier of entry in mind. For those completely unfamiliar with APIs, this session will serve as a useful introduction, as Professor Mundy will walk us through the process of connecting to and retrieving live social media data from the Instagram API and rendering it using the Google Maps API. Participants should not worry if they do not have expertise in big data projects or are still learning the associated vocabulary. We come together to learn together, and all levels of skill will be accommodated, as will all attitudes and leanings. Desktop computers are installed in FAB 320A, but participants are welcome to bring their own laptops and wireless devices.
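The session’s actual demo isn’t reproduced here, but a rough stand-in for the “plot it on a map” half of that pipeline might look like the Python sketch below. The coordinates are fabricated, and the folium library (which renders OpenStreetMap tiles) stands in for the Google Maps API step of Professor Mundy’s walkthrough:

```python
# Not the session's demo: a minimal stand-in for rendering fetched points on
# a map. Assume `points` came back from a social media API; the coordinates
# and captions below are invented sample data.
import folium

points = [
    (30.4383, -84.2807, "cat in Tallahassee"),
    (40.7128, -74.0060, "cat in New York"),
]

cat_map = folium.Map(location=[35.0, -80.0], zoom_start=4)
for lat, lng, caption in points:
    folium.Marker([lat, lng], popup=caption).add_to(cat_map)

cat_map.save("cat_map.html")  # open in a browser to pan and zoom
```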

Participants are encouraged to read the following in advance of the meeting:

and to browse the following resources for press on Mundy’s project:

For further (future) reading:

We hope you can join us,

-TSG

The Complexities of Quantifying the Human Experience

“yesstairway”, a student enrolled in this semester’s ENG 5998 reading group, offers a reflection on the possibilities and limitations of historical tools meant to quantify experience.

In Dr. Hanley’s discussion last Wednesday (3/4), one underlying factor became apparent: humans are far more complex and difficult to categorize than standard data or physical artifacts. As part of his presentation on digital archives and the challenges that come with organizing information, he showed us his chart for displaying census records of 19th-century immigrants in the Mediterranean region. The chart, based on consulate records from the time period, contained a myriad of information: the dates people arrived, where they emigrated from, occupations and titles, as well as personal ephemera (one man was said to “take afternoon naps” and “enjoyed candy”). The collective data was complicated by the fact that some bits of information were missing for individuals. Furthermore, in what way could the chart best be organized? The legal information was varied, the job titles and occupations were as wide-ranging as they are today, and the random tidbits on different characters were haphazard at best. What can be done with this information?

The answer seems to depend on what the information is going to be used for. If it is sorted for use in a research project or other larger piece of work, then using a program such as OpenRefine (http://openrefine.org/) to chart data, as Dr. Hanley is doing, seems appropriate. It provides an amazing tool for sorting and comparing quantifiable data sets of otherwise unwieldy information (such as Dr. Hanley’s project PROSOP). However, this method is not practical for a long-term reference resource like the one the creators of SNAC seem to have in mind. A visual network detailing the connections between historical people of note certainly has its benefits, but the system is slightly opaque. The large web of people held up as an example in Lynch’s article can quickly become overwhelming for someone who does not have a very specific research goal.

In both situations, as mentioned earlier, it is imperative to have a specific question and to use the technology as a resource to home in on it. Take OpenRefine. One can order items in a data set by number of appearances, allowing for the location of patterns. Then the user can conflate categories that, depending on the information and the query, seem to be related. For example, the occupations of immigrants can be compared to their countries of origin over a given time period, perhaps to determine why people moved from a certain nation (say, England) to the Mediterranean. It is up to the user to sort the information into meaningful categorizations based on the subject being pursued – there is simply too much information for an archivist to form it into a meaningful base and please everyone. Preserving the original way the information was recorded is another priority, one that seems to run counter to the streamlining these programs undertake.
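As a hedged illustration of that kind of conflation – in pandas rather than OpenRefine, and with invented records rather than Dr. Hanley’s actual data – one might tabulate occupations against countries of origin like so:

```python
# A sketch of ordering categories by frequency and cross-tabulating them,
# in the spirit of OpenRefine's facets. The records are invented.
import pandas as pd

records = pd.DataFrame({
    "origin":     ["England", "England", "France", "England", "France"],
    "occupation": ["merchant", "clerk", "merchant", "merchant", "sailor"],
    "arrived":    [1851, 1853, 1852, 1860, 1861],
})

# Order items by most appearances, allowing patterns to surface.
print(records["occupation"].value_counts())

# Cross-tabulate occupation against origin for a given time period.
decade = records[records["arrived"].between(1850, 1859)]
print(pd.crosstab(decade["origin"], decade["occupation"]))
```

The user, not the tool, still decides which categories are meaningful to merge; the software only makes the comparison cheap.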

Ultimately, I find it exciting that the breadth of the human experience cannot be quantified into a tidy table. The final question for consideration is one that seems obvious but is becoming increasingly relevant as we move archiving and cataloging into the digital realm: how do we effectively archive information records in a way that is easily accessible and orderly for academic use without corrupting the original format and its historical context? It would be great to develop an archive program that made the raw data accessible and allowed each user to create his or her own “file” that could be manipulated and changed as necessary without affecting the original data. That way, people could toggle settings and organization methods to suit their own research needs. The ultimate answer is beyond me right now, but Dr. Hanley provided an interesting insight into the complexities of this transition.

THATCamp UCF: A taste of what Digital Humanists talk about


Refractorymuse, a student in the ENG 5998 Reading/Discussion group, posts from THATCamp Florida, locally hosted at UCF.

Greetings from Orlando. As rain steadily drenched the city, librarians, English faculty, History faculty, and grad students sat snugly and warmed themselves with informal, welcoming, and open-minded discussion. Below I bring a sampling of the various sessions.

Barry Mauer of UCF presented his “Citizen Curator” project: he wants to encourage non-academics to curate “public history.” Though there is lots of content, there’s not a lot of participation. And though creating an exhibit involves both archiving (collecting and processing material) and curating (exhibiting the material), Mauer’s expertise lies with curating. He asks: if curating is a type of writing, how do we use digital media with digital objects to generate this writing? He posits that this writing is similar to academic writing, but it is also an inventive process. And it involves partnerships.

Mauer would love to see curation capabilities move from PhDs to undergrads to communities outside UCF. At present, he is working on a guidebook for citizen curators.

In no way does Mauer decree a single ideal curation standard. You can curate materials multiple times, as there are many perspectives as to which materials are culturally important. There are conventional and unconventional approaches. An unconventional approach would be the way artists have curated: they practice a kind of “disrespect for the integrity of the object.” Sometimes that approach will trigger critical thinking. As an example, Mauer cites Lyotard’s exhibit in the 1980s, which juxtaposed a visual path with an audio path without making their relationships explicit. People had to infer the connections.

Mauer delineates 3 types of exhibits: Educational, Rhetorical (which Mauer favors for public history because it requires making a case with the project), and Experimental (or artistic).

What Mauer’s team has been working on is the Carol Mundy Digital Archive. He argues for the need for mediation in an exhibit. His archive has racist material that you cannot present without contextualization because it is too inflammatory. Other curating problems include multiple overlooked perspectives, archival illiteracy, adapting to new technology, inaccessible documents, and emergent crises.

Mauer is not just curating objects, but curating relationships (people to people, and/or people to objects).

The Charles Brockden Brown Archive – Mark L. Kamrath (UCF) and a code-savvy grad student: The Charles Brockden Brown Archive is a big local and global team originating out of UCF. They are working with approximately 900 Charles Brockden Brown (CBB) texts. They used the XTF platform (code suited to displaying digital objects).

Their archive was recently peer reviewed by NINES (a “hub” for nineteenth-century digital projects), and they’ve been asked to revise. One of the main reasons was a copyright matter: at first they wanted to be the “one-stop shop” for all CBB needs, but they could not publish the full texts of secondary sources because of copyright regulations. Nevertheless, they had access to many PDFs of the scholars’ articles.

XSLT is what they use to globally search for texts as XML documents. Transcription standards, in conjunction with TEI markup protocol, were created and applied. Different transcriptions were made and then compared to find the most precise one. They chose to do both an XML version and an “as-is” version side by side. When dealing with handwritten items, they coded for gaps, strike-throughs, and underlines.

All this description of the project is meant to show that making an editing protocol is a dialectical process: they create and revise. They mentioned that their markup works with structures and is not interpretive, though they have added to the DTD of TEI. They used TEI P5 for the markup rules, as well as a cloud drive for public sharing and storage. They use the Library of Congress Subject List, to which they suggested (and added) their own subjects. The subject list operates like a bibliographical index, and also as a way to find themes in the materials.
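None of the archive’s actual XSLT was shared, but a small Python sketch (using lxml and an invented TEI fragment) shows how encoded gaps, strike-throughs, and underlines like the ones they describe become machine-searchable:

```python
# A sketch of querying TEI-encoded text: find the <gap/>, <del>, and <hi>
# elements conventionally used for gaps, strike-throughs, and underlines.
# The XML fragment is invented, not from the CBB archive.
from lxml import etree

TEI_NS = {"tei": "http://www.tei-c.org/ns/1.0"}

xml = b"""<text xmlns="http://www.tei-c.org/ns/1.0">
  <p>My dear <del rend="strikethrough">friend</del> colleague,
     I write in <hi rend="underline">haste</hi> <gap reason="illegible"/>.</p>
</text>"""

tree = etree.fromstring(xml)
for tag in ("gap", "del", "hi"):
    for el in tree.iterfind(f".//tei:{tag}", namespaces=TEI_NS):
        print(tag, dict(el.attrib), (el.text or "").strip())
```

Once features of the handwriting are encoded this way, global searches across hundreds of documents become a matter of a query rather than a rereading.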

Their search engine uses a PHP script to look up XML files. For images, they used TIFs that they converted into JPEGs.
A question they raised was how the site will be maintained, for example, ten years from now when the original creative team has moved on to other things.
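The TIF-to-JPEG conversion step mentioned above is simple; a minimal sketch with Pillow and hypothetical filenames:

```python
# A sketch of the derivative-image step: the TIF stays as the preservation
# master, while a smaller JPEG is generated for the web. Filenames invented.
from PIL import Image

with Image.open("manuscript_page.tif") as master:
    master.convert("RGB").save("manuscript_page.jpg", quality=85)
```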

Kacy Tillman (University of Tampa) – How to use Genius in the classroom: Kacy Tillman’s web site has Genius resources. Genius is an annotation site that evolved from Rap Genius. Originally Genius was designed for K-12 students, but now you can see transcripts of texts from all disciplines. You can even annotate Genius’s own text. You need to know some basic HTML to create clean annotations.
But Tillman argues that this program fosters critical thinking about interpreting fiction or poetry, for example, and it invites conversation about ethical research practices. You can have three-tiered conversations – students can annotate another student’s annotation.

You can also make pages in Genius; it’s in a blog-like format, though you do need experience points to acquire permission to do it. You can also communicate with the builders of Genius (they respond).

Tillman uses Genius to get her students to make digital anthologies. Other developments include Multimodal Timelines. Genius can be embedded into an LMS (Learning Management System).

As of today, image annotation is possible as well. Genius is open to everyone, so it’s Wikipedia-style in the sense of crowdsourced editing, but there is an administrator. Daily, the administrator consolidates annotations with similar ideas and weeds out annotations made by trolls.

Soon, Genius will have access to select JSTOR articles for linking purposes.

“Inclusion and Digital Media” – Haven Hawley (formerly of the Immigration History Research Center at the University of Minnesota)

Understanding the complexity of cultural identity is important when you’re trying to become an ally of a cultural group. There are privacy issues when you’re archiving cultural history, especially online. An example is the Sheeko project, developed by undergraduates.

What does inclusion in digital media mean? You can look at it as the gulf that separates the digitally savvy from the have-nots; or you can look at access to technology, local knowledge vs. power users, sustaining relationships between the project and the community, problems of exploitation (“rip-n-strip”), universal design (designing the project from the beginning to be as accessible as possible), the inclusion of as many renditions as possible, the inclusion of the physically disabled, and the issue of authenticity and ownership.

For a university to develop trust with communities, it should put staff into place who are sensitive to and knowledgeable about the community. You can try to get trustworthy institutions to support your archiving – churches, local historical societies, local artists, people who listen, public libraries. Hawley cautions that the academy cannot always assume it is the center of, or the authority on, archiving.

“The Hard Problems of Digital Humanities” – Bruce Janz (UCF): Janz used this session to examine unanswered and complicated DH questions.

In 2017, HASTAC (Humanities, Arts, Science, and Technology Alliance and Collaboratory) wants to hold a conference in Florida. DH has made much progress in establishing itself as a field. For example, it has done much to facilitate stronger (and more visible) interaction between the artist and the critic/historian. However, there are still unanswered questions that prevent it from being perceived as a discipline. Most pressing is how DH figures philosophically.

It is feasible to do a DH project without having an understanding of its own ontology. A method laid out in the meeting is represented below:
1. Pre-Research (prepping the data for studying by way of tagging)
2. Research (asking the focused question and sifting the data for answers)
3. Creative Work
4. Post Research

One prevailing issue within the DH community, according to Janz, is that it does not take into account that humans live digital as well as analog lives. Ushahidi, a program that tracks global crises, responses, and the locations of resources, is an example of the output of people living digitally. In Africa, there are “born African” digital programs created by Africans to counter African problems. A non-digital example of Africans using digital practice is isicathamiya, a practice of a cappella singing among men that is actually used to communicate with and respond to other communities.

Another prevailing problematic concept is that, roughly speaking, DH should not be analogous to “missionary work,” in which one power center spreads its ideology over places that “need it.” Instead, DH projects should be seen as more egalitarian, a give-and-take of ideas and tools.

A third problem is that there is a scarcity of DHers who are actually making DH or born-digital objects a focus of study.

A fourth problem is finding a way to make a scholarly (peer-reviewed) process publicly available without jeopardizing the credibility of peer-reviewed scholarship.

The final problem has to do with opposition to DH stemming from how strong “confirmation bias” is. Digital humanities projects are risky in that the project team is often inventing the mode of research as they are researching. An unfulfilled promise is an outcome not considered productive by those who distribute funding for such projects. Also, peer review is tricky to accomplish on not-overtly-bibliographical inventions, and (still) doesn’t carry as much clout as a monograph.

How does one promote an institution(?) that appears as if you have to overhaul your cultural values and belief systems to engage it? Janz asks “How do you sell ugly?”

Spaces for Critically Questioning and Analyzing Digital History

Megan Keaton, a student enrolled in this semester’s ENG 5998 reading group, uses this week’s suggested readings to discuss the ways in which the tools we use affect what we can see and the knowledge we can make. 

In preparation for Professor Hanley’s visit, Dr. Graban introduces us to “The (un)Certainty of Digital History and Social Network,” writing that “while databases often serve as tools for gathering and curating data, they can also serve as spaces for critically questioning and analyzing the motives that guide our conceptions of what it means to do digital history with any certainty.” We can see this theme running throughout the suggested readings; each scholar pushes us to recognize that the tools we use (a) shape what we can(not) discover and (b) can help us acknowledge and make explicit our assumptions.

Ansley T. Erickson points directly to uncertainty in “Historical Research and the Problem of Categories: Reflections on 10,000 Digital Note Cards”: “much of our work happens while our research questions are still in formation. Uncertainty is, therefore, a core attribute of our research process.” This uncertainty is beneficial when we allow ourselves to search for, identify, and entertain connections we had not originally intended to find. This potential for unintended connections is at least partially dependent on “the challenge of information management…[because] where, when, and how…we organize and interact with information from our sources can affect what we discover in them.” Because print databases – such as Erickson’s note cards – are not easily searchable, reorganizing them around newly identified categories may seem too cumbersome, stopping researchers from exploring possibilities they are not sure will be fruitful. Digital databases, on the other hand, allow us to search by term, which enables the researcher to quickly re-categorize information under newly found connections. Erickson recommends that we utilize digital databases as they

offer a kind of flexibility that can allow us to create and re-create categories as we work with notes, to adjust as we know more about our sources, about how they relate to one another and how they relate to the silences we are finding. That flexibility means that we can evaluate particular ways of categorizing what we know and then adapt if we realize that these categories are not satisfactory. In doing so, we are made more aware of the work of categorization and are reminded to take stock of how our ways of organizing help and what they leave out.

In addition to helping us see outside the categories with which we begin our research, Erickson argues, thinking about the mechanics of our databases and our categorization systems can help us reflect on our “implicit categories or habits of thought that might shape our analysis,” our assumptions about which historical stories should be prioritized.
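Erickson’s own database isn’t shown in her essay, but a toy sketch makes the point: digital notes can be searched by term across every existing category and re-tagged in one pass, where paper cards would have to be re-sorted by hand. (The cards and categories below are invented for illustration.)

```python
# A toy sketch of searchable, re-categorizable digital note cards.
notes = [
    {"id": 1, "text": "School board debates busing routes", "tags": {"policy"}},
    {"id": 2, "text": "Parent letter on school zoning",      "tags": {"policy"}},
    {"id": 3, "text": "Editorial on busing and zoning",      "tags": {"press"}},
]

def search(term):
    """Full-text search across every card, regardless of current category."""
    return [n for n in notes if term.lower() in n["text"].lower()]

# A search surfaces an unplanned connection across categories...
for note in search("zoning"):
    # ...so we add a new category on the spot, without re-sorting anything.
    note["tags"].add("zoning-dispute")

print([(n["id"], sorted(n["tags"])) for n in notes])
```

The old categories survive alongside the new one, which is exactly the flexibility Erickson describes: we can evaluate a categorization, adapt it, and keep a record of how our thinking changed.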

Similarly, in “Social Networks and Archival Context Project: A Case Study of Emerging Cyberinfrastructure,” Tom J. Lynch shows how print finding aids and Encoded Archival Content – Corporate bodies, Persons, and Families (EAC-CPF) affect the kinds of connections we can make among parts, persons, and places in archives. He defines a finding aid as “a printed document of all the records left in an archive with a common creator or source. A finding aid contains a description of the creator, functions performed by the creator, and the records generated by the creator through the performance of those functions.” Lynch explains, “Reading finding aids and collecting names found therein is a method for building up a list of leads to new sources.” However, the print finding aid is “inflexible and inefficient when dealing with complex, interrelated records” because “[a]rchival records are often of mixed provenance or the records of the same provenance can be dispersed over numerous archives”; this issue is being solved by the EAC-CPF, which

enabl[es] the separation of creator description from record description. Maintaining a unique, centralized creator record not only reduced redundancy and duplication of effort, but also provides an efficient means of linking creators to the functions and activities carried out by them to the dispersed records they created or to which they are related, and to other related creators.

The different archival and categorization tools, then, allow different links – different connections, different sources – in ways similar to Erickson’s note cards and database. As new digital tools enable less redundancy in collecting and sorting data and save researchers time, we can entertain more connections more easily.
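A schematic sketch, in plain Python rather than EAC-CPF’s actual XML and with an invented creator and records, conveys the separation Lynch describes:

```python
# A schematic sketch (not real EAC-CPF) of separating creator description
# from record description: one centralized creator record, with dispersed
# archival records pointing to it by identifier instead of re-describing it.
creators = {
    "c001": {"name": "Jane Addams", "functions": ["social reform", "writing"]},
}

records = [  # record descriptions held by different archives
    {"title": "Correspondence, 1889-1910", "archive": "A", "creator": "c001"},
    {"title": "Speeches and drafts",       "archive": "B", "creator": "c001"},
]

# Linking rather than duplicating: every record resolves to the same creator
# description, so a correction there propagates to every linked record.
for rec in records:
    who = creators[rec["creator"]]
    print(f'{rec["title"]} ({rec["archive"]}) -> {who["name"]}')
```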

Lynch “defin[es] a set of variables to consider when approaching the design of a new tool”: (1) collaborations between humanists and non-humanists, including “librarians, archivists, programmers, and computer scientists;” (2) a balanced scope of audience and goals; and (3) a balance between traditional and new infrastructures/methodologies so that “new technologies…push the boundaries of scholarly activities, yet remain accessible and meet real needs.” We can utilize these variables as a heuristic – analyzing (a) our relationships with other scholars, (b) our intended audiences, (c) which goals we deem beneficial, (d) which methodologies and infrastructures we find valuable, and (e) the ways in which (a)-(d) affect the knowledge we can and do produce – to gain a better understanding of the tools we create and the assumptions that guide our research of and with these tools. The variables within the heuristic are also interconnected, as one variable can shine light on another. For instance, Lynch writes that “collaboration itself is a challenge that requires careful resolution of methodological differences and regular communication about each collaborators’ perspective.” In other words, our collaboration with other fields and other scholars can push us to consider the effectiveness of our methodologies.

Finally, Claire Lemercier, in “Formal Network Methods in History: Why and How?,” speaks to the connections (or ties, as she puts it) we can identify among nodes in a network. “The interest of formal network methods in history is…not limited to inter-individual ties. Networks of firms, towns or regions can also be consider[ed].” Lemercier points to ties between places, individuals, and organizations. As we look to different ties within different circumferences of networks (from individuals to organizations), we can see different “patterns.” Because each circumference shows us different things, toggling between circumferences lets us determine whether patterns are due to a particular cause, to multiple causes, or to “pure chance.” Without this toggle, we see less, perhaps assuming causes that are not there.
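As a minimal illustration of formal network methods in this spirit – the nodes and ties below are invented, and the networkx library is assumed – one can build a graph mixing firms and towns and ask which nodes are structurally central:

```python
# A sketch of a small historical network of firms and ports, and one simple
# formal measure over it. All nodes and ties are invented examples.
import networkx as nx

g = nx.Graph()
g.add_edges_from([
    ("Firm A", "Marseille"), ("Firm A", "Genoa"),
    ("Firm B", "Marseille"), ("Firm B", "Tunis"),
    ("Firm C", "Genoa"),
])

# Degree centrality: which node sits on the most ties?
for node, score in sorted(nx.degree_centrality(g).items(),
                          key=lambda kv: -kv[1]):
    print(f"{node}: {score:.2f}")
```

Note how the measure depends entirely on which nodes and ties we chose to observe in the first place, which is Lemercier’s “boundary specification” point.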

She also points us to the metaphors we use in relation to our tools. She suggests that historians tend to use the metaphor of a map when analyzing networks. Fleckenstein et al. acknowledge that “the metaphors by which researchers orient themselves to the object of study affect the research methods they choose and the nature of the knowledge they create” (389). The map metaphor, Lemercier suggests, implies that we can map all of the relationships within a particular network. However, she writes,

Social network analysis does not allow [us] to “draw a map” of an individual’s network or of all the relationships inside a community, to describe the network of this person or the social structure of this group…It is in fact possible to “draw maps” of networks, but only if we remember that the map is not the territory: it concentrates on some precisely defined phenomenon, momentarily forgetting everything else.

She encourages us, then, to use our metaphors as well as the methodology of social network analysis to reflect on our “boundary specification” choices – “whom do we observe? which ties? when?” – and how these choices “constrain” the questions we can ask and answer. These metaphors link to our implicit theories and, Lemercier argues, “[w]ell-conducted qualitative research often helps to make them more explicit, as the researcher has to define which factors she takes into account, how she defines them, which are the dependent and independent variables, etc.”

A final note: During our last meeting, Dr. Fife stated that digital replication/reproduction is an addition to, rather than a replacement of, non-digital spaces. Erickson emphasizes the same about the tools we use: “Digital note taking may add to but does not of necessity replace varied encounters between researcher and sources” (emphasis mine). This suggests that we need to be critical of the tools we use, considering which tools we can use as additions rather than replacements and what we may gain or lose by looking at tools as additions.

Works Cited

Erickson, Ansley T. “Historical Research and the Problem of Categories: Reflections on 10,000 Digital Note Cards.” Writing History in the Digital Age. Eds. Kristen Nawrotzki and Jack Dougherty. University of Michigan, 2013.

Fleckenstein, Kristie, et al. “The Importance of Harmony: An Ecological Metaphor for Writing Research.” College Composition and Communication 60 (2008): 388-419.

Lemercier, Claire. “Formal Network Methods in History: Why and How?” Social Networks, Political Institutions, and Rural Societies. Ed. Georg Fertig. Turnhout: Brepols, 2011.

Lynch, Tom J. “Social Networks and Archival Context Project: A Case Study of Emerging Cyberinfrastructure.” Digital Humanities Quarterly 8.3 (2014).