Theorizing Models in the Digital Age

The articles that prepare us for Dr. Richard Urban’s talk on Friday, November 6, ask what a model is and what it can do. While we may think of our models as transparent reflections of what is being modeled, Julia Flanders and Fotis Jannidis observe that it is not enough for the database or model to be a theory in itself, that is, a practice that fully justifies and explains itself through its use. While this pragmatic approach can be sufficient up to a point, theories of modeling help us reflect on our praxis. As Flanders and Jannidis write, “Theory is usually the theory of something, trying to spell out the basic concepts relevant in the praxis of doing something . . . a theory of digital humanities cannot simply coincide with its praxis” (2-3).

To this end, I found Willard McCarty’s essay, “Modeling: A Study in Words and Meanings,” particularly helpful in thinking about the different facets of our understanding of models. The core of McCarty’s essay, I think, lies in his introduction of the synonyms for “model” that he goes on to consider (analogy, representation, experiment, etc.): “But perhaps the most important lesson we learn from seeing the word in the context of its synonym set is not the range and variety of its meanings; rather, again, its strongly dynamic potential.” Theorizing modeling as a dynamic tool in digital humanities helps us avoid some of the blind spots we might otherwise fall into.

Arianna Ciula and Øyvind Eide point toward this dynamic quality as well, stating, “In digital humanities we do not only create models as fixed structures of knowledge, but also as a way to investigate a series of temporary states in a process of coming to know. The point of this kind of a modeling exercise lies in the process, not in the model as a product” (37). For me, this emphasizes two points with regard to TEI coding.

First, the modeling of a text is an ongoing process that requires interpretation, judgment, and perspective, all of which are partially subjective elements of textual coding. For example, if there are typos in a manuscript, a coder can make a judgment and mark a spelling or typing error even where the manuscript author gave no indication of one (did not cross out the word or correct it in any way). The XML tags the coder uses let the reader know that the coder perceives a spelling error, so that the correction is not mistaken for one made by the manuscript author. We can easily imagine a case of mistaken judgment, however, if the coder flags as an error something that was fully intentional; perhaps the author meant to spell the word that way.
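
As a minimal sketch, this is how such a judgment might be encoded using TEI’s standard <choice>, <sic>, and <corr> elements; the misspelled word and the “#coder” identifier are invented for illustration:

```xml
<!-- The manuscript reads "recieve" with no authorial correction.
     <sic> preserves what is on the page; <corr> records the coder's
     proposed reading; @resp attributes the judgment to the coder
     ("#coder" is a hypothetical identifier), not to the author. -->
<p>I did not
  <choice>
    <sic>recieve</sic>
    <corr resp="#coder">receive</corr>
  </choice>
  your letter.</p>
```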

Another example would be a coder overlooking something that someone else deems important to code. In the Beckett Digital Manuscript Project, you can search the coded manuscripts for both gaps in the text and the doodles Beckett often drew on his pages. These are features of the page that we may at first be inclined to ignore because they are not part of the “text” as we traditionally conceive it. But they are of course part of the manuscript and have been shown to be significant in relation to the other parts of the page.
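
TEI does offer elements for exactly these features. The following is a hypothetical encoding, not necessarily the BDMP’s actual markup, using <gap> for an untranscribed stretch and <figure>/<figDesc> for a doodle; the sample text and attribute values are invented:

```xml
<!-- A stretch the coder cannot transcribe, with the reason and
     size recorded via @reason, @quantity, and @unit. -->
<p>the sentence breaks off at
  <gap reason="illegible" quantity="2" unit="word"/>
  and then resumes</p>

<!-- A doodle described rather than transcribed; <figDesc> gives a
     prose account so the drawing remains findable in searches. -->
<figure>
  <figDesc>Doodle of a face in the lower left margin of the page.</figDesc>
</figure>
```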

I give these examples to show why coding, as a form of modeling, cannot be seen as a cut-and-dried process that ends in a finished product as soon as a competent coder has marked up the text. How well a model represents its object is often open to dispute and revision. Ideally, a digital archive would allow feedback and suggestions to improve or revise the model, remaining open to new ways of representing the original manuscript. This openness to revision keeps both the original and its model incomplete as objects of knowledge. The “temporary states in a process of coming to know” generate “structures of knowledge,” but these structures are not fixed. They are tentative wholes that help us understand the heterogeneous parts of a given text.

This brings me to my promised second point, which concerns the modeling tool itself, i.e., TEI. Similar to what I said above, TEI as a standard is also not fixed; while it may not change as frequently as some would like, the guidelines have not stayed the same since the standard’s inception decades ago. It’s true that, as Ciula and Eide say, “Even by abstracting away the text itself, the stripped out XML tree constitutes basically a model of one way of seeing the text structure: the place name is part of a sentence which is part of a paragraph which is part of a chapter and so on” (40). Thus, our model intrinsically comes with a set of assumptions about the originals we’re trying to represent.
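
Stripped to its skeleton, the hierarchy Ciula and Eide describe looks something like this (the sentence content is invented for illustration):

```xml
<!-- One way of seeing the text: a strict nesting of place name
     within sentence, within paragraph, within chapter. -->
<div type="chapter">
  <p>
    <s>They left <placeName>Dublin</placeName> before dawn.</s>
  </p>
</div>
```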

Nevertheless, and I say this with the limited knowledge of XML coding I currently have, there seems to be a good amount of flexibility in XML to adapt it to particular cases. In other words, there will never be a “universal grammar” of modeling in which the particular cases are always subordinate to, and fully represented by, the current modeling tools. As we continue to experiment and think about new and better ways to represent data, likely on a case-by-case basis, our tools will continue to change and adapt.
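
One small illustration of that flexibility: many TEI elements carry open-valued attributes such as @type, so a project can invent the categories its own materials demand. The vocabulary below is hypothetical, not prescribed by the guidelines:

```xml
<!-- Generic TEI elements specialized with project-defined @type
     values; "notebook-page" and "marginal-addition" are invented
     here to suit an imagined project's materials. -->
<div type="notebook-page">
  <p>
    The main text,
    <seg type="marginal-addition">with a phrase added in the margin</seg>,
    continues here.
  </p>
</div>
```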
