Funding the Digital Humanities

Wednesday, April 8, 2:00-3:15 pm
Williams Building 013 (English Common Room, Basement Level)

“Because democracy demands wisdom”: Funding the Digital Humanities

Among its many functions, the National Endowment for the Humanities (NEH) sponsors 38 award types as part of its prestigious annual grants program, at least 6 of which explicitly accommodate work in the digital humanities, many of them intended to develop digital projects from prototype to proof-of-concept. Now in its 50th year of funding proposals that promote excellence in the humanities, the NEH continues to offer new programs at the convergence of curating, constructing, and critiquing – three activities or postures that the digital humanities value. (See, for example, the new Humanities Open Book Program, which uses low-cost “ebook” technology to digitize and make available scholarly works that are not currently in the public domain.)

For our final Digital Scholars meeting of the year, we will be joined via videoconference by Mr. Brett Bobley, Chief Information Officer of the NEH and Director of the Office of Digital Humanities. Mr. Bobley will discuss the importance of his office to the NEH’s public mission, share some of the unique projects the ODH has funded at various intersections of history and technology, and give us an opportunity to ask questions about the benefits of claiming digital disciplinarity and the challenges of identifying projects at the broad intersection of “digital” and other fields.

We may also consider differences between large-scale big-data projects and small-scale boutique projects, all of which help further the NEH’s mission to address important cultural changes underlying the work that humanities scholars do on their own, and in collaboration with scientists, librarians, museum staff, and members of the public. Finally, we may consider various paradigms that drive NEH funding or public-stream grant programs in general — including those ideas that move funding from object-oriented preservation toward open-access initiatives.

Participants are invited to read the following:

And to review:

  • Stephen Ramsay and Geoffrey Rockwell, “Developing Things: Notes toward an Epistemology of Building in the Digital Humanities,” Debates in the Digital Humanities, ed. Matthew K. Gold (online version: http://dhdebates.gc.cuny.edu/debates/text/11).

We hope you can join us,

-TSG

Animating the Data of Online Lives

Rachel Stuart, a student enrolled in this semester’s ENG 5998 reading group, reflects on some of the readings provided by Professor Mundy for our upcoming discussion on Data Visualization and Graphics Scripting:

This week, FSU’s Digital Scholars group has access to a participant in the kinds of projects that engage digital data’s proliferation in society. The linkage between data, information, culture, and art is made visible in the research and works created by individuals like Professor Owen Mundy of FSU’s Department of Art. Our speaker this week is the only person I know of who has had his work covered by outlets like NPR and Vice and who can simultaneously boast that typing his name into a Google search bar automatically pairs it with the word “cat.”


As technology has changed, two factors that greatly contribute to the need for thinkers like Mundy have increased: the proliferation of data, created and collected through digital tools and resources, and the ubiquity of our online lives, bordering on oversharing. Sepandar D. Kamvar and Jonathan Harris explore the connection between these factors, considering the ways that society at large records emotion via publicly posted social media and blogs. Their project is called We Feel Fine, and its efforts go beyond creating an artistic representation of emotion as it exists online.

The resulting tool is an emotional search engine, an instance of what Kamvar and Harris call “Experiential Data Visualization,” which provides “immersive item-level interaction with data” (1). Ultimately, We Feel Fine operates with an interface that invites users to play with data, to learn from universal experience, and to think about their own emotions within the context of this larger data sampling of emotion. It is simultaneously instructive and fun, which might be linked back to what it does in the first place: this tool takes data (objective and measurable counts of emotional mentions) and translates it into art (far more subjective and interactive, even hypothetical).
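The harvesting idea behind such an emotional search engine can be sketched in a few lines. The snippet below is an illustration only, not Kamvar and Harris’s actual pipeline (which also recorded the author’s age, gender, location, and even local weather): it scans public text for sentences containing “feel” or “feeling” and records the word that follows. The sample posts are invented.

```python
import re

# A sketch of the harvesting step behind an "emotional search engine":
# scan public text for feeling statements and record the expressed emotion.
posts = [
    "Today I feel happy about the rain.",
    "I was feeling lonely last night.",
    "The meeting ran long.",  # no feeling statement: ignored
]

# Match "feel" or "feeling" followed by a single word (the emotion)
pattern = re.compile(r"\bfeel(?:ing)?\s+(\w+)", re.IGNORECASE)
feelings = [m.group(1).lower() for p in posts for m in pattern.finditer(p)]
print(feelings)  # ['happy', 'lonely']
```

A real system would of course need sentence segmentation, spam filtering, and a curated emotion lexicon; the point here is only how little machinery separates raw blog text from a countable stream of emotions.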

The source material (the data) and the end result of their efforts (the work) also differ in their mobility; Kamvar and Harris even call the different approaches that a user can take to the information “movements.” The data does truly move, swirling and growing, trembling and falling as the user decides how to experience it. This “animation of data” relates back to a point made by Mitchell Whitelaw of the University of Canberra in his article “Art Against Information: Case Studies in Data Practice.” According to Whitelaw, data becomes information when it is granted contextualization and organization – some might argue, when it is granted meaning. This “transubstantiation” of sorts collapses the gap between the data set and the data referent. In a beautiful moment of linguistic serendipity, the animation (from Latin anima, “breath, soul, life force”) of data by Kamvar and Harris takes us beyond the numbers of individuals feeling anger or sympathy or ennui and connects us back to the soul of the individual behind the numbers.

What we don’t always recognize is that while data appears to be lifeless, objective, and harmless, the streams of data that flow online carry information useful to many people besides artists. Professor Mundy points to the accessibility of personal data in his “I Know Where Your Cat Lives” project, which links images of cats to the geographic data embedded when their owners upload them. We create data. We create online trails of our lives that are trackable and mappable and, in contrast to the social media records we curate, are often an accurate history of our lives both online and off. Mundy’s map of cats makes it clear how little privacy we have online, no matter how we may try to erase traces of our true selves.


In a sense, then, while these digital data projects often incorporate art as a means of communicating the informative aspect of data, there is an attempt to avoid artifice in the data communicated. In fact, Kamvar and Harris considered how to map emotions without granting positive or negative associations via the tool. They were careful not to rank these emotions, and built an interface that treats anger the same way it treats joy or embarrassment. In order to differentiate them, however, they did color-code the emotions. (This does, in my opinion, imbue them with some kind of status. A bubble that is a sunny yellow is obviously preferable to one that is a muddy puce. Perhaps that’s just me?)
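One way to make such neutrality concrete is to derive each emotion’s display color from its name alone, so that the encoding itself privileges no emotion. This is a hypothetical sketch of the design principle, not We Feel Fine’s actual palette, which was hand-assigned:

```python
import hashlib

def emotion_color(emotion):
    """Map an emotion name to a hex color derived only from the name,
    so that the encoding ranks no emotion above another."""
    digest = hashlib.md5(emotion.lower().encode("utf-8")).hexdigest()
    return "#" + digest[:6]  # first three bytes of the hash as RGB hex

for e in ["anger", "joy", "embarrassment"]:
    print(e, emotion_color(e))
```

The trade-off the post describes is visible even in this toy: a deterministic mapping is scrupulously fair, but it can still land one emotion on a sunny yellow and another on a muddy puce, and viewers will read status into that anyway.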


Works Cited:

Kamvar, Sepandar D. and Jonathan Harris. “We Feel Fine and Searching the Emotional Web.” Web Search and Data Mining 2011. Hong Kong, China. 9-12 Feb. 2011.

Whitelaw, Mitchell. “Art Against Information: Case Studies in Data Practice.” The Fibreculture Journal 11 (2008): n. pag. Web. 16 May 2015.

Willis, Derek. “What the Internet Can See From Your Cat Pictures.” The New York Times. The New York Times, 22 July 2014. Web. 14 Mar. 2015.

Colors, Shapes, and Information: Finding “Meaning” in Large-Scale Digital Data Presentations

“yesstairway”, a student enrolled in this semester’s ENG 5998 reading group, reflects on some of the readings provided by Professor Mundy for our upcoming discussion on Data Visualization and Graphics Scripting:

The readings Professor Mundy has provided to orient our discussion next week are illuminating and thought-provoking. Two main trends dominate the overall through-line: how to organize and present large-scale information in a meaningful way to the “average” user, and (especially considering Prof. Mundy’s project) how the access to and use of said information will affect the way we live and view ourselves on a macro scale.

The most striking realization about the We Feel Fine and The Dumpster projects is their similarity in format. Both projects archive statements from blogs, online articles, and other websites that mention feelings or breakups, respectively. Both organize the data using balls whose color and size depend on the prevalence of their corresponding data. Finally, both attempt to present a representation of the state of mind of certain demographics (depending on how the user searches the database) or of humankind in general. In their overview, Kamvar and Harris stressed how users were emotionally impacted by using We Feel Fine because they saw how many other people in the world experienced the same emotions and thoughts as they did. The projects represent an overall interest in finding meaning in the intangible aspects of the human experience (such as feelings and interpersonal relationships); they assume that chronicling and archiving relevant information is a step toward this higher comprehension.

However, as Whitelaw points out, the assumption is not totally correct:

Both aim to visualize and portray not merely data, but the personal, emotional reality the dataset refers to. […] This approach begs a dull (but necessary) critique: that these works do not provide an interface to feelings, or breakups, but texts that refer – or seem to refer – to them. […] These works rely on a long chain of signification: (reality); blog; data harvesting; data analysis; visualization; interface. Yet they maintain a strangely naive sense of unmediated presentation.

Of course it is not the feelings themselves being represented, but the texts which speak about them. In this sense, programs such as these are great tools not only for contemporary sociological data pooling, but for historical analysis and archiving as well. Take, for example, Dr. Hanley’s presentation to our group two weeks ago. In it he discussed his difficulties in (among other things) finding a way to effectively record and consider census information of 19th-century immigrants to the Mediterranean region. A visualization tool such as that used by We Feel Fine would be an interesting way for him to look at disparate data and locate trends. And, just as these modern programs do, uploading historical information would allow us to think about historical persons’ deeper socio-cultural mindsets rather than simply quantifiable data. As the other projects discussed in Whitelaw’s article demonstrated, forming abstract art by feeding data into an algorithm has the potential to yield meaning deeper than what the data literally says.

Mundy’s rumination on Johannes Osterhoff’s Google showcase, in which individuals’ search queries are archived, brought the issue of privacy into the equation. He made the point that, especially online, we make decisions based on how we want to be perceived – yet in studying this desired perception, we inadvertently learn a little bit about how we really are. Perhaps that is the benefit of our information, with all of its insecurities and imperfections, becoming available for the world to see. Through showcasing the aggregate of a population’s online presence and production, a broader community can be formed. We sacrifice privacy for emotional security and empathy.

Works Cited:

Kamvar, Sepandar D. and Jonathan Harris. “We Feel Fine and Searching the Emotional Web.” Web Search and Data Mining 2011. Hong Kong, China. 9-12 Feb. 2011.

Mundy, Owen. “The Unconscious Performance of Identity: A Review of Johannes P. Osterhoff’s ‘Google.’” Rhizome. 22 Aug. 2012. Web. 16 May 2015.

Whitelaw, Mitchell. “Art Against Information: Case Studies in Data Practice.” The Fibreculture Journal 11 (2008): n. pag. Web. 16 May 2015.

Data Visualization and Graphics Scripting

Wednesday, March 25, 2:00-3:30 pm
Fine Arts Building (FAB) 320A [530 W. Call St. map]

“I Know Where Your Cat Lives”: The Process of Mapping Big Data for Inconspicuous Trends

Big Data culture has its supporters and its skeptics, but it can have critical or aesthetic value even for those who are ambivalent. How is it possible, for example, to consider data as more than information — as the performance of particular behaviors, the practice of communal ideals, and the ethic motivating new media displays — as both subject and material? Professor Owen Mundy from FSU’s College of Fine Arts invites us to take up these questions in a guided exploration of works of art that will highlight what he calls “inconspicuous trends.” Using the “I Know Where Your Cat Lives” project as a starting point, Professor Mundy will introduce us to the technical and design process for mapping big data in projects such as this one, showing us the various APIs (Application Programming Interfaces) that are constructed to support them and considering the various ways we might want to visualize their results.

This session offers a hands-on demonstration and is designed with a low barrier to entry in mind. For those completely unfamiliar with APIs, the session will serve as a useful introduction, as Professor Mundy walks us through the process of connecting to and retrieving live social media data from the Instagram API and rendering it using the Google Maps API. Participants should not worry if they lack expertise in big data projects or are still learning the associated vocabulary. We come together to learn together, and all levels of skill will be accommodated, as will all attitudes and leanings. Desktop computers are installed in FAB 320A, but participants are welcome to bring their own laptops and wireless devices.
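To give a rough idea of the kind of transformation such a demonstration involves, the sketch below converts geotagged media records (shaped loosely like the JSON the Instagram API returned at the time) into a GeoJSON FeatureCollection, a standard format that a mapping layer such as the Google Maps API can render. The record shape and URLs are hypothetical, and no live API call is made:

```python
import json

def records_to_geojson(records):
    """Turn geotagged media records into a GeoJSON FeatureCollection
    suitable for plotting on a web map."""
    features = []
    for rec in records:
        loc = rec.get("location") or {}
        if "latitude" not in loc or "longitude" not in loc:
            continue  # skip media posted without geodata
        features.append({
            "type": "Feature",
            "geometry": {
                "type": "Point",
                # GeoJSON orders coordinates longitude-first
                "coordinates": [loc["longitude"], loc["latitude"]],
            },
            "properties": {"image_url": rec.get("url", "")},
        })
    return {"type": "FeatureCollection", "features": features}

# Hypothetical records standing in for an API response
sample = [
    {"url": "http://example.com/cat1.jpg",
     "location": {"latitude": 30.44, "longitude": -84.29}},
    {"url": "http://example.com/cat2.jpg"},  # no location: dropped
]
print(json.dumps(records_to_geojson(sample), indent=2))
```

The interesting design work in a project like “I Know Where Your Cat Lives” happens after this step: deciding how densely clustered points should be drawn, and what a viewer should be able to learn (or not learn) from each one.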

Participants are encouraged to read the following in advance of the meeting:

and to browse the following resources for press on Mundy’s project:

For further (future) reading:

We hope you can join us,

-TSG

The Complexities of Quantifying the Human Experience

“yesstairway”, a student enrolled in this semester’s ENG 5998 reading group, offers a reflection on the possibilities and limitations of historical tools meant to quantify experience.

In Dr. Hanley’s discussion last Wednesday (3/4), one underlying factor became apparent: humans are far more complex and difficult to categorize than standard data or physical artifacts. As part of his presentation on digital archives and the challenges that come with organizing information, he showed us his chart for displaying census records of 19th-century immigrants in the Mediterranean region. The chart, based on consulate records from the period, contained a myriad of information: dates that people arrived, where they emigrated from, occupations and titles, as well as personal ephemera (one man was said to “take afternoon naps” and “enjoyed candy”). The collective data was complicated by the fact that some bits of information were missing for individuals. Furthermore, in what way could the chart best be organized? The legal information was varied, job titles and occupations were as wide-ranging as they are today, and the random tidbits on different characters were haphazard at best. What can be done with this information?

The answer seems to depend on what the information is going to be used for. If sorted for use in a research project or other larger piece of work, then using a program such as OpenRefine (http://openrefine.org/) to chart data as Dr. Hanley does seems appropriate. It provides an amazing tool to sort and compare quantifiable data sets of otherwise unwieldy information (such as Dr. Hanley’s project PROSOP). However, this method is not practical for a long-term reference resource like the one the creators of SNAC seem to have in mind. A visual network detailing the connections between historical people of note certainly has its benefits, but the system is slightly obtuse. The large web of people held up as an example in Lynch’s article can quickly become overwhelming for someone who does not have a very specific research goal.

In both situations, as mentioned earlier, it is imperative to have a specific question and use the technology as a resource to home in on it. Take OpenRefine. One can order items in a data set by frequency of appearance, allowing patterns to surface. Then the user can conflate categories that, depending on the information and query, seem to be related. For example, the occupations of immigrants can be compared to their countries of origin over a given time period to perhaps determine why people moved from a certain nation (say, England) to the Mediterranean. It is up to the user to sort the information into meaningful categorizations based on the subject being pursued; there is simply too much information for an archivist to form it into one meaningful base that pleases everyone. Preserving the original way the information was recorded is another priority, and one that seems counter to the streamlining these programs undertake.
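The facet-and-conflate workflow described above can be approximated in plain Python. The rows and category mappings below are hypothetical stand-ins for Dr. Hanley’s census data, and the steps mirror what a researcher would do interactively in OpenRefine:

```python
from collections import Counter

# Hypothetical consulate-census rows of the kind Dr. Hanley describes
rows = [
    {"origin": "England", "occupation": "Merchant"},
    {"origin": "England", "occupation": "merchant "},   # messy variant
    {"origin": "England", "occupation": "Trader"},
    {"origin": "France",  "occupation": "Clerk"},
    {"origin": "France",  "occupation": None},          # missing data
]

# Step 1: facet -- order normalized values by frequency to spot patterns
counts = Counter((r["occupation"] or "").strip().lower() for r in rows)
print(counts.most_common(1))  # [('merchant', 2)]

# Step 2: conflate -- merge categories the researcher judges related
conflate = {"merchant": "commerce", "trader": "commerce", "clerk": "clerical"}
for r in rows:
    key = (r["occupation"] or "").strip().lower()
    r["occ_category"] = conflate.get(key, "unknown")

# Step 3: cross-tabulate the new category against country of origin
crosstab = Counter((r["origin"], r["occ_category"]) for r in rows)
print(crosstab[("England", "commerce")])  # 3
```

Note that the conflation table is an interpretive act, not a neutral one: deciding that “merchant” and “trader” belong together is exactly the kind of research-question-dependent judgment the paragraph above describes, which is why the original recorded values should be kept alongside the derived categories.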

Ultimately, I find it exciting that the breadth of the human experience cannot be quantified into a tidy table. The final question for consideration is one that seems obvious but is becoming increasingly relevant as we move archiving and cataloging into the digital realm: how do we effectively archive information records in a way that is easily accessible and orderly for academic use without corrupting the original format for historical context? It would be great to develop an archive program that made the raw data accessible and allowed each user to create his or her own “file” that could be manipulated and changed as necessary without affecting the original data. That way people could toggle settings and organization methods to suit their own research needs. The ultimate answer is beyond me right now, but Dr. Hanley provided an interesting insight into the complexities of this transition.

THATCamp UCF: A taste of what digital humanists talk about


Refractorymuse, a student in the ENG 5998 Reading/Discussion group, posts from THATCamp Florida, locally hosted at UCF.

Greetings from Orlando. As rain steadily drenched the city, librarians, English faculty, History faculty, and grad students sat snugly and warmed themselves with informal, welcoming, and open-minded discussion. Below I bring a sampling of the various sessions.

Barry Mauer of UCF presented his “Citizen Curator” project: he wants to encourage non-academics to curate “public history.” Though there is lots of content, there is not a lot of participation. Though creating an exhibit involves both archiving (collecting and processing material) and curating (exhibiting the material), Mauer’s expertise lies with curating. He asks: if curating is a type of writing, how do we use digital media with digital objects to generate this writing? He posits that this writing is similar to academic writing, but it is also an inventive process. And it involves partnerships.

Mauer would love to see curation capabilities move from PhDs to undergraduates to communities outside UCF. At present, he is working on a guidebook for citizen curators.

In no way does Mauer decree a single ideal curation standard. You can curate materials multiple times, as there are many perspectives on which materials are culturally important. There are conventional and unconventional approaches. An unconventional approach would be the way artists have curated: they engage in a kind of “disrespect for the integrity of the object.” Sometimes that approach will trigger critical thinking. As an example, Mauer cites Lyotard’s exhibit in the 1980s, in which he juxtaposed a visual path with an audio path without making their relationships explicit. People have to infer the connections.

Mauer delineates 3 types of exhibits: Educational, Rhetorical (which Mauer favors for public history because it requires making a case with the project), and Experimental (or artistic).

Mauer’s team has been working on the Carol Mundy Digital Archive. He argues for the need for mediation in an exhibit. His archive contains racist material which cannot be presented without contextualization because it is too inflammatory. Other curating problems include multiple overlooked perspectives, archival illiteracy, adapting to new technology, inaccessible documents, and emergent crises.

Mauer is not just curating objects, but curating relationships (people to people, and/or people to objects).

The Charles Brockden Brown Archive – Mark L. Kamrath (UCF) and a code-savvy grad student: The Charles Brockden Brown Archive is a big local and global team originating out of UCF. They are working with approximately 900 Charles Brockden Brown (CBB) texts. They used the XTF platform (the eXtensible Text Framework, software suited to displaying digital objects).

Their archive has recently been peer reviewed by NINES (a “hub” for nineteenth-century digital projects), and they have been asked to revise. One of the main reasons was a copyright matter. At first they wanted to be the “one-stop shop” for all CBB needs, but they could not publish full texts of secondary sources because of copyright regulation. Nevertheless, they had access to many PDFs of the scholars’ articles.

They use XSLT to search globally across the texts as XML documents. Transcription standards, in conjunction with the TEI markup protocol, were created and applied. Different transcriptions were made and then compared to find the most precise one. They chose to present both an XML version and an “as-is” version side by side. When dealing with handwritten items, they coded for gaps, strike-throughs, and underlines.
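A minimal sketch of what such coding looks like in practice, using the standard TEI P5 element names for handwritten sources (<gap/> for illegible spans, <del> for strike-throughs, <hi rend="underline"> for underlining) on an invented fragment. The TEI namespace is omitted for brevity, and this example is not drawn from the CBB archive itself:

```python
import xml.etree.ElementTree as ET

# A simplified TEI-style fragment encoding features of a handwritten page
fragment = """
<p>My dear <hi rend="underline">friend</hi>, I have
<del rend="strikethrough">never</del> often thought of
<gap reason="illegible" extent="2 words"/> since we parted.</p>
"""

root = ET.fromstring(fragment)
gaps = root.findall(".//gap")                       # illegible spans
deletions = [d.text for d in root.findall(".//del")]  # struck-out words
underlined = [h.text for h in root.findall(".//hi[@rend='underline']")]
print(len(gaps), deletions, underlined)  # 1 ['never'] ['friend']
```

Because these features live in the markup rather than in a cleaned-up reading text, an XSLT stylesheet can later render either view the team describes: the encoded XML version or the “as-is” presentation with strike-throughs and gaps shown.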

All this description of the project is meant to show that making an editing protocol is a dialectical process: they create and revise. They mentioned that their markup encodes structure and is not interpretive, but they have added to the DTD of TEI. They used TEI P5 for the markup rules, as well as a cloud drive for public sharing and storage. They use the Library of Congress Subject List, to which they suggested (and added) their own subjects. The subject list operates like a bibliographical index, and also as a way to find themes in the materials.

Their search engine is a PHP script that looks up XML files. For images, they used TIFFs that they converted into JPEGs.
A question they had was how the site would be maintained, for example, ten years from now when the original creative team moves on to other things.

Kacy Tillman (University of Tampa) – How to use Genius in the classroom: Kacy Tillman’s website has Genius resources. Genius is an annotation site that evolved from Rap Genius. Originally Genius was designed for K-12 students, but now you can see transcripts of texts from all disciplines. You can even annotate the text on Genius. You need to know some basic HTML to create clean annotations.
But Tillman argues that this program fosters critical thinking about interpreting fiction or poetry, for example, and it invites conversation about ethical research practices. You can have three-tiered conversations – students can annotate another student’s annotation.

You can also make pages in Genius; it’s in a blog-like format. You do need experience points to acquire permission to do so. You can also communicate with the builders of Genius (they respond).

Tillman uses Genius to get her students to make digital anthologies. Other developments include Multimodal Timelines. Genius can be embedded into an LMS (Learning Management System).

As of today, image annotation is possible as well. Genius is open to everyone, so it’s Wikipedia style in the sense of crowdsourced editing, but there is an administrator. Daily, the administrator consolidates annotations with similar ideas, and weeds out annotations made by trolls.

Soon, Genius will have access to select JSTOR articles for linking purposes.

“Inclusion and Digital Media” – Haven Hawley (formerly worked at the Immigration and History Research Center of the University of Minnesota)

Understanding the complexity of cultural identity is important when you’re trying to become an ally of a cultural group.
There are privacy issues when you’re archiving cultural history, especially online. An example is the Sheeko project, developed by undergraduates.

What does inclusion in digital media mean? You can look at it as the gulf that separates the digitally savvy from those who are not. Or you can look at access to technology, local knowledge vs. power users, sustaining relationships between the project and the community, problems of exploitation (“rip-n-strip”), universal design (designing the project from the beginning to be as accessible as possible), the inclusion of as many renditions as possible, the inclusion of the physically disabled, and the issue of authenticity and ownership.

For a university to develop trust with communities, it should put staff in place who are sensitive to and knowledgeable about the community. You can try to get trustworthy institutions to support your archiving – churches, local historical societies, local artists, people who listen, public libraries. Hawley cautions that the academy cannot always assume it is the center of, or the authority on, archiving.

“The Hard Problems of Digital Humanities” – Bruce Janz (UCF): Janz used this session to examine unanswered and complicated DH questions.

In 2017 HASTAC (Humanities, Arts, Science, and Technology Alliance and Collaboratory) wants to hold a conference in Florida. DH has made much progress in establishing itself as a field. For example, DH has done much to facilitate stronger (and more visible) interaction between the artist and the critic/historian. However, there are still unanswered questions that prevent it from being perceived as a discipline. Most pressing is how DH figures philosophically.

It is feasible to do a DH project without an understanding of its own ontology. A method laid out in the meeting is represented below:
1. Pre-Research (prepping the data for studying by way of tagging)
2. Research (asking the focused question and sifting the data for answers)
3. Creative Work
4. Post Research

One prevailing collective issue in the DH community is that it does not take into account that, according to Janz, humans live digital as well as analog lives. Ushahidi, a program that tracks global crises, responses, and locations of resources, is an example of the output of people living digitally. In Africa, there are “born African” digital programs that were created by Africans to address African problems. A non-digital example of Africans engaging in digital-style practice is isicathamiya, a tradition of a cappella singing among men that is actually used to communicate with and respond to other communities.

Another prevailing problematic concept is that, roughly speaking, DH should not be analogous to “missionary work,” in which one power center spreads its ideology over places that “need it.” Instead, DH projects should be seen as more egalitarian: a give and take of ideas and tools.

A third problem is that there is a scarcity of DHers who are actually making DH or born-digital objects a focus of study.

A fourth problem is finding a way to make a scholarly (peer-reviewed) process publicly available without jeopardizing the credibility of peer-reviewed scholarship.

The final problem has to do with opposition to DH stemming from the strength of “confirmation bias.” Digital Humanities projects are risky in that the project team is often inventing the mode of research as they research. An unfulfilled promise is an outcome not considered productive by those who distribute funding for such projects. Also, peer review is tricky to accomplish on not-overtly-bibliographical inventions, and (still) doesn’t carry as much clout as a monograph.

How does one promote an institution(?) that appears to require overhauling your cultural values and belief systems in order to engage it? Janz asks, “How do you sell ugly?”

Spaces for Critically Questioning and Analyzing Digital History

Megan Keaton, a student enrolled in this semester’s ENG 5998 reading group, uses this week’s suggested readings to discuss the ways in which the tools we use affect what we can see and the knowledge we can make. 

In preparation for Professor Hanley’s visit, Dr. Graban introduces us to “The (un)Certainty of Digital History and Social Network,” writing that “while databases often serve as tools for gathering and curating data, they can also serve as spaces for critically questioning and analyzing the motives that guide our conceptions of what it means to do digital history with any certainty.” We can see this theme running throughout the suggested readings; each scholar pushes us to recognize that the tools we use (a) shape what we can(not) discover and (b) can help us acknowledge and make explicit our assumptions.

Ansley T. Erickson points directly to uncertainty in “Historical Research and the Problem of Categories: Reflections on 10,000 Digital Note Cards”: “much of our work happens while our research questions are still in formation. Uncertainty is, therefore, a core attribute of our research process.” This uncertainty is beneficial when we allow ourselves to search for, identify, and entertain connections we had not originally intended to find. This potential for unintended connections is at least partially dependent on “the challenge of information management…[because] where, when, and how…we organize and interact with information from our sources can affect what we discover in them.” Because print databases – such as Erickson’s note cards – are not easily searchable, reorganizing them around newly identified categories may seem too cumbersome, stopping researchers from exploring possibilities they are not sure will be fruitful. Digital databases, on the other hand, allow us to search by term, which enables the researcher to quickly re-categorize information under newly found connections. Erickson recommends that we utilize digital databases as they

offer a kind of flexibility that can allow us to create and re-create categories as we work with notes, to adjust as we know more about our sources, about how they relate to one another and how they relate to the silences we are finding. That flexibility means that we can evaluate particular ways of categorizing what we know and then adapt if we realize that these categories are not satisfactory. In doing so, we are made more aware of the work of categorization and are reminded to take stock of how our ways of organizing help and what they leave out.

In addition to helping us see outside the categories with which we begin our research, Erickson argues, thinking about the mechanics of our databases and our categorization systems can help us reflect on our “implicit categories or habits of thought that might shape our analysis,” our assumptions about which historical stories should be prioritized.
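Erickson’s point about cheap re-categorization can be made concrete with a toy note database: because the notes are searchable by term, a connection noticed mid-research can be gathered and given a new category without re-filing anything. The notes, tags, and category name below are invented for illustration:

```python
# Hypothetical note records, in the spirit of Erickson's digital note cards
notes = [
    {"id": 1, "text": "School board debates busing routes", "tags": {"policy"}},
    {"id": 2, "text": "Parent letter on school zoning", "tags": {"correspondence"}},
    {"id": 3, "text": "Budget memo on new construction", "tags": {"finance"}},
]

def search(notes, term):
    """Full-text search: the operation that makes re-categorization cheap."""
    return [n for n in notes if term.lower() in n["text"].lower()]

# A connection noticed mid-research: everything touching "school" can be
# collected and tagged with a new category, leaving the old tags intact.
for n in search(notes, "school"):
    n["tags"].add("schooling")

print([n["id"] for n in search(notes, "school")])  # [1, 2]
```

The print equivalent of that last loop is physically re-sorting (or duplicating) note cards, which is exactly the friction Erickson argues deters researchers from testing categories they are unsure about.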

Similarly, in “Social Networks and Archival Context Project: A Case Study of Emerging Cyberinfrastructure,” Tom J. Lynch shows how print finding aids and Encoded Archival Content – Corporate bodies, Persons, and Families (EAC-CPF) affect the kinds of connections we can make among parts, persons, and places in archives. He defines a finding aid as “a printed document of all the records left in an archive with a common creator or source. A finding aid contains a description of the creator, functions performed by the creator, and the records generated by the creator through the performance of those functions.” Lynch explains, “Reading finding aids and collecting names found therein is a method for building up a list of leads to new sources.” However, the print finding aid is “inflexible and inefficient when dealing with complex, interrelated records” because “[a]rchival records are often of mixed provenance or the records of the same provenance can be dispersed over numerous archives”; this issue is being solved by the EAC-CPF, which

enabl[es] the separation of creator description from record description. Maintaining a unique, centralized creator record not only reduced redundancy and duplication of effort, but also provides an efficient means of linking creators to the functions and activities carried out by them to the dispersed records they created or to which they are related, and to other related creators.

The different archival and categorization tools, then, allow different links – different connections, different sources – in ways similar to Erickson’s note cards and database. As new digital tools reduce redundancy in collecting and sorting data and save researchers time, we can entertain more connections more easily.

Lynch “defin[es] a set of variables to consider when approaching the design of a new tool”: (1) collaborations between humanists and non-humanists, including “librarians, archivists, programmers, and computer scientists;” (2) a balanced scope of audience and goals; and (3) a balance between traditional and new infrastructures/methodologies so that “new technologies…push the boundaries of scholarly activities, yet remain accessible and meet real needs.” We can utilize these variables as a heuristic – analyzing (a) our relationships with other scholars, (b) our intended audiences, (c) which goals we deem beneficial, (d) which methodologies and infrastructures we find valuable, and (e) the ways in which (a)-(d) affect the knowledge we can and do produce – to gain a better understanding of the tools we create and the assumptions that guide our research of and with these tools. The variables within the heuristic are also interconnected, as one variable can shed light on another. For instance, Lynch writes that “collaboration itself is a challenge that requires careful resolution of methodological differences and regular communication about each collaborators’ perspective.” In other words, our collaboration with other fields and other scholars can push us to consider the effectiveness of our methodologies.

Finally, Claire Lemercier, in “Formal Network Methods in History: Why and How?,” speaks to connections (or ties, as she puts it) we can identify among nodes in a network. “The interest of formal network methods in history is…not limited to inter-individual ties. Networks of firms, towns or regions can also be considered.” Lemercier points to ties between places, individuals, and organizations. As we look to different ties within different circumferences of networks (from individuals to organizations), we can see different “patterns.” Because each circumference shows us different things, by toggling between circumferences we can determine whether patterns are due to a particular cause, to multiple causes, or to “pure chance.” Without this toggle, we see less, and may assume causes that are not there.

She also points us to the metaphors we use in relation to our tools. She suggests that historians tend to use the metaphor of a map when analyzing networks. Fleckenstein et al. acknowledge that “the metaphors by which researchers orient themselves to the object of study affect the research methods they choose and the nature of the knowledge they create” (389). The map metaphor, Lemercier suggests, implies that we can map all of the relationships within a particular network. However, she writes,

Social network analysis does not allow [us] to “draw a map” of an individual’s network or of all the relationships inside a community, to describe the network of this person or the social structure of this group…It is in fact possible to “draw maps” of networks, but only if we remember that the map is not the territory: it concentrates on some precisely defined phenomenon, momentarily forgetting everything else.

She encourages us, then, to use our metaphors as well as the methodology of social network analysis to reflect on our “boundary specification” choices – “whom do we observe? which ties? when?” – and how these choices “constrain” the questions we can ask and answer. These metaphors link to our implicit theories and, Lemercier argues, “[w]ell-conducted qualitative research often helps to make them more explicit, as the researcher has to define which factors she takes into account, how she defines them, which are the dependent and independent variables, etc.”

A final note: During our last meeting, Dr. Fyfe stated that digital replication/reproduction is an addition to, rather than a replacement of, non-digital spaces. Erickson emphasizes the same about the tools we use: “Digital note taking may add to but does not of necessity replace varied encounters between researcher and sources” (emphasis mine). This suggests that we need to be critical of the tools we use, considering which tools we can use as additions rather than replacements and what we may gain or lose by looking at tools as additions.

Works Cited

Erickson, Ansley T. “Historical Research and the Problem of Categories: Reflections on 10,000 Digital Note Cards.” Writing History in the Digital Age. Eds. Kristen Nawrotzki and Jack Dougherty. Ann Arbor: University of Michigan Press, 2013.

Fleckenstein, Kristie, et al. “The Importance of Harmony: An Ecological Metaphor for Writing Research.” College Composition and Communication 60 (2008): 388-419.

Lemercier, Claire. “Formal Network Methods in History: Why and How?” Social Networks, Political Institutions, and Rural Societies. Ed. Georg Fertig. Turnhout: Brepols, 2011.

Lynch, Tom J. “Social Networks and Archival Context Project: A Case Study of Emerging Cyberinfrastructure.” Digital Humanities Quarterly 8.3 (2014).

Digital History and Social Networking

Wednesday, March 4, 2:00-3:30 pm
Williams Building 454

The (un)Certainty of Digital History and Social Networking

For microhistorians investigating how people conduct their lives within particular groups or units of culture, local and transnational methodologies can operate in a complementary way, simultaneously reducing and broadening historians’ scales of observation, allowing them to notice both outliers and patterns. Databases of historical individuals are one kind of methodology (or tool) that emerges from this work, and while databases often serve as tools for gathering and curating data, they can also serve as spaces for critically questioning and analyzing the motives that guide our conceptions of what it means to do digital history with any certainty. For the third meeting of this semester’s Digital Scholars reading and discussion group, Professor Will Hanley from FSU’s Department of History will lead us in a discussion of these concerns.

Drawing on his experiences with Prosop, a graph database of persons appearing in eastern Mediterranean archives, Professor Hanley will explore the particular challenges of recording, sharing, and serializing historical person-data of uncertain form and meaning. He will consider this category problem in the context of SNAC and other historical social network databases, most of which do not face similar problems of uncertainty. Veterans of Digital Scholars may remember Professor Hanley’s 2011 presentation on Prosop, and we look forward to engaging him again and learning how the project has grown from its original conception as an ontology into a crowd-sourced tool for sharing and aggregating historical data.

Participants are invited to read the following:


We hope you can join us,

-TSG

3D Thinking

Jie Liu, a student enrolled in this semester’s ENG 5998 reading group, offers a starting point for conversation about ways to locate sites for collaboration via “3D thinking.”

The project Victoria’s Lost Pavilion is a great example of the promise of Henry Jenkins’s convergence culture. While Jenkins focuses particularly on fan culture in his book, this project, which aims to digitally reconstruct an intriguing historical building (Queen Victoria’s Buckingham Palace garden pavilion, 1842-1928), is probably what some scholars are excited to see, and it illustrates how far we can go. Such a project not only depends on collective intelligence but also touches on a critical question about the digital humanities, namely, how scholars can participate. It seems that turning digital 3D representation into interpretation and knowledge production requires more than a fundamental understanding of technology; it also asks for a different mindset, which I venture to call “3D thinking” from an English major’s perspective. While close reading and individual projects remain important, an additional dimension demands more attention in this digital age. Scholarship now needs a different type of collaboration and interaction.

First, Victoria’s Lost Pavilion’s progress shows that, in digital reconstructions of the past, a nexus connecting different discourses effectively serves as a place for scholars to participate. In “Beyond the Big Tent,” Patrik Svensson views the digital humanities as “a meeting place, innovation hub, and trading zone,” which is based on “interdisciplinary work and deep collaboration.” Such a model, he believes, can “attract individuals both inside and outside the tent.” His endeavor to revise the theoretical frame is undoubtedly significant. However, Alan Liu also sees the challenges digital scholars face and shares his experience of “working… at the seams between existing literary fields, periods, personnel levels, management structures, and so on” in “Digital Humanities and Academic Change” (24).

The digital humanities may not exist as an ideal space where scholars meet and trade ideas; instead, Liu finds seeds in various projects that faculty and students can work on together. In other words, to be practical, digital scholars may need to locate particular sites where they are able to collaborate. A project can be small at first and gradually grow into a meeting place that invites more individuals in. The Pavilion project, then, interestingly demonstrates how to work at a seam and develop it into a trading zone that attracts scholars from different areas.

Because of the pavilion’s own complexity, its digital reconstruction becomes an important intersection of disciplines (e.g. literature, architecture, archaeology, art, and computer science), and this convergence supports multiplicity and shared interests, laying a solid foundation for interdisciplinary collaboration. As Professor Fyfe mentions, “the conversation came to include several more participants in the department and across campus, each of whom saw opportunities to engage different field conversations and disciplinary problems” (“Is this a DH project?”). Hence, finding a nexus and turning it into a trading zone is a critical step digital scholars need to consider, one that requires thinking from the perspectives of other disciplines.

Moreover, the Pavilion project urges us to rethink the process of collaboration. “For me, this has valuably complicated my naive thinking about projects beyond the spectrum of proposal -> process -> product” (Fyfe, “Elegant Scaffolding”). Here collaboration itself involves active interrogation, competing interpretations, contested assumptions, and further exploration. Conversations between scholars from different disciplines can raise more questions and open new possibilities. In a certain sense, the dynamic of collaboration reveals a way of knowledge production. Diane Favro also tends to see digital reconstruction as a process, not simply a final product: “the real value of historical simulations lies not in the representations themselves, but in the process of their creation and in the subsequent experiments now possible to be conducted within the simulated environments” (276). (Hence archiving also plays a key role and requires a new model.)

As the process itself becomes more important in a digital context, collaboration can be productive even before a digital reconstruction is finished. Therefore, apart from a new model for a digital reconstruction, digital scholars also need to create a different model for their collaboration, one that foregrounds knowledge production in the process and effectively communicates the new knowledge to academia. (Considering that some projects may not last long because of limited time and funding, such a model appears all the more important.) Recognizing this new challenge, Professor Fyfe offers an interesting scaffolding theory: “Projects can equally possess an ‘elegant scaffolding’ to deliver useful structures throughout their life cycle” (“Elegant Scaffolding”). Even “a half-built virtual model of a historical building” (“Elegant Scaffolding”) may give us valuable lessons; this is another thing we need to think about differently.

But designing a new model for a digital reconstruction is not that simple either, because of the emphasis on interaction. It seems that from the beginning the purpose is twofold. Digital collaborators not only need to investigate the veracity of a digital representation and agree on a particular edition of a historical simulation, but also have to take into account its users’ participation, playing “the role of choreographer” (Favro 274). Projects like Victoria’s Lost Pavilion and the Virtual Paul’s Cross Project aim to create “a more immersive experience of gallery space” (Fyfe, “What Victoria Saw”), one that does not rely primarily on sight but becomes more polysensory. This hopefully will inspire visitors to examine historical environments in different ways and lead to new, insightful discoveries. However, it seems that a user’s experience is not the only form of interaction digital collaborators need to worry about. If we see the model for a digital reconstruction as a research platform, other possibilities also emerge. While a digital reconstruction as a nexus is complicated enough in itself, its extensibility remains an important aspect. Can it be easily repurposed for other digital projects? For instance, “a well-made 3-D model can be ported to an acoustical program for a study of the relationship between the architectural design and music performances” (Favro 275). Is there a platform to help digital scholars trade and transform such models? Though different software programs and copyright issues already muddy the waters, when building a particular model, digital collaborators may still need to think about the circulation of their digital reconstruction and its possible future uses.

On the other hand, a digital reconstruction as a platform may also lead to more research methods, especially for English majors. For example, while Franco Moretti’s maps and diagrams look interesting, his distant reading can be time-consuming and impractical for those who are not familiar with geography and the relevant technologies. However, suppose we had a digital reconstruction of Old London (or just a part of it): visitors would not only obtain a deeper understanding of the historical buildings and the structure of the city, but also find an easier way to observe a novel’s characters’ interactions and movements in such a space, and possibly produce more maps (or more than maps). In this sense, digital representations also set the stage for 3D reading and new, exciting ways to analyze literary texts. The possibilities are definitely there.

References

Favro, Diane. “Se non è vero, è ben trovato (If Not True, It Is Well Conceived): Digital Immersive Reconstructions of Historical Environments.” Journal of the Society of Architectural Historians 71.3 (Sep. 2012): 273-77.

Fyfe, Paul. “Is this a DH project?,” “Elegant Scaffolding,” and “What Victoria Saw.” The Pavilion project blog.

Liu, Alan. “Digital Humanities and Academic Change.” ELN 47.1 (Spring 2009): 17-35.

Svensson, Patrik. “Beyond the Big Tent.” Debates in the Digital Humanities. Ed. Matthew K. Gold. 2013 open-access edition.

Neo-Antiquarianism: The Walter Scott Phenomenon in Virtual Building-Building

To English Department scholars skeptical of virtualizing historical structures in three dimensions and wondering whether there is literary merit here: there is. In digital humanities projects like Victoria’s Lost Pavilion, What Jane Saw, and the Virtual Paul’s Cross Project emerge enthusiasm, research methods, and collaboration not unlike those of the Walter Scott novels. We can look in particular to Waverley, something of a kickstarter event.

Place consumed Scott. He wanted to inhabit as much of it as possible at once, especially when he was confined by illness as a child, which motivated his appetite for “desultory” reading, and for building in his mind. Besides fiction, he read “histories, memoirs, voyages and travels,” and the truths he derived from them were as “wonderful” as the fiction. From the memories of his readings, and the memory of how (disorderly) he studied in his young adulthood in the Edinburgh library, he researched Waverley. Part of the fiction, Scott also claims, comes from an excavation of his own past writing. But much of it was collaboration. He wandered the Highlands. Veterans told him war stories. He immersed himself in a culturally preserved society. What came out was a bumbling protagonist whose wanderings “permitted [him] to introduce some descriptions of scenery and manners, to which the reality gave an interest which the powers of the Author might have otherwise failed to attain for them.”

In the 21st century, the bumbling protagonists are ourselves as we navigate the virtual terrains of the internet. But they are also, for example, Fyfe and his collaborators. From a conference paper, primary architectural data, the actual remains of the structures on the earth, some art historians, some research assistants, some English literature scholars, some money, and some management skills, the Pavilion Project emerges as a dynamic project, an object very Scott-like. The Waverley novel becomes the Waverley series, and the Waverley series grows into a fictional oeuvre.

Victoria’s Lost Pavilion and similar projects are operating together to form a flexible, living, rhizomatic genre, very much like what was happening in Britain in the early nineteenth century. Though Scott embraced the romance qualities of Waverley – romance meaning the idealized, the legendary, the incredible, the emotionally charged – he took pains, in his general preface and in voluminous, meticulous description, to justify the credibility of the history in this work. And after about 185 years and countless prefatory appeals like Scott’s, we now retroactively (perhaps also conveniently) call Waverley and other like objects a “novel.”

Perhaps the credibility of virtual structure-building projects and other such geospatial outgrowths as an emerging genre comes in embracing the romance of it, in the same way the novel, on its way to being established as “literature,” utilized the tactics of the romance.