200 Fingers Listening: The New Discipline and Matters of Alterity

Last Friday, we had the privilege of hearing Alex Gil speak, and while participants expected he would speak to the race and alterity issues present in our readings, I was a little surprised when he did not. I kept thinking, “He’s going to talk about race and alterity now.” He mentioned them a few times, and when our conversation via Google Hangouts ended, I was scratching my head. Retrospectively, though, much of what he said relates to race and alterity, and I hope to tease out some of those connections through the course of this blog post.

During his talk on April 21, 2017, Gil advocated for a new discipline, existing outside of digital humanities, whose purpose would be to “understand the creation, distribution, and location of all scholarship in all languages at all times.” He also noted that such a discipline would be deeply materialist, based on empirical observation, and that the documents such disciplinarians deal with are finite in number and can exist either in analog form (a pottery shard, a paper manuscript) or as bytes (an electronic journal article). We are still unsure of what this discipline will be called, but for the sake of this blog post, I will refer to it as the new discipline.

Such a discipline has, according to Gil, three notable characteristics: knowledge architecture, cultural analytics, and a jack-of-all-trades workforce in which each member has multiple analytical and technological literacies. Such workers are capable not only of gathering and creating knowledge but also of building the databases and platforms that host that information, storing it on non-institutional hardware like jump drives, publishing the data in journals, and analyzing it there. Gil’s discussion dealt heavily with the survivability of materials and how we should be thinking about this materially; he gave the example of a database small enough to be placed on a jump drive.

Among all of this talk of disciplinarity, materiality, and access, I see faint but definite connections to alterity woven through these diverse issues. In her 1999 CCCC address, Cynthia Selfe notes that marginalized groups, specifically African Americans and lower-class citizens, generally lack access to technology and therefore cannot gain the literacies needed to succeed. In his talk, Gil noted that community colleges and smaller institutions would likely be the root of this new discipline. In relation to the readings we did for this week, Sayers advocates for minimal barriers to access, and McGrail notes that universities and community colleges have power differences that are “mask[ed]” by their “share[d] disciplinary affinities.” Finally, Gallon discusses a “technology of recovery” through which African Americans and other marginalized groups can reclaim certain knowledges and epistemologies.

In a way, the new discipline seeks to do just that: it seeks to reclaim from those in power the rights to data (freely given) and to move away from data (which must be taken). While the goal of understanding all knowledges in all languages is somewhat idealistic, and despite doubts about its success, the new discipline’s greatest asset might be diversity. If the 200 fingers listening that Gil spoke of are diverse in backgrounds, skillsets, and demographics, then perhaps that diversity would further contribute to the bodies of knowledge, to the new discipline’s ability to know everything in every language.

In Which We Talked about Systems, Publishing, and Open Access: Or, the Theme Was Alterity and Race in the Digital Humanities

Alex Gil was kind enough to join us for an afternoon in a Google Hangout to talk about data, publishing, systems, open access, and creating more accessible digital programs and scholarship. His talk focused on the need to create a digital revolution, one that used the humanities’ unique abilities to be good distant readers, knowledge architects, and cultural analyzers, combined with a digital consciousness toward access and labor, to create and encourage new epistemologies. He argued for a new paradigm of thought that is technologically literate, resource conscious, open, and aware of labor. He positioned the Digital Humanities as a place where this sort of translingual and transnational work can be cultivated.

I found his call for a new discipline for the 21st-century academy intriguing. This discipline would work to “understand the creation, distribution, and location of all scholarship, in all languages, at all times” (Gil). There is a sense of speaking truth to power through questioning and unsettling the underlying assumptions of the labor and material practices of modern scholarship within the academy and its associated businesses. Gil detailed the pervasive issues with the ways in which knowledge, in its various material forms, is housed, sold, distributed, and used. There are clear needs to look not only at what we produce, but at how, where, and for whom. Underlying these issues are concerns with labor practices, intellectual property, ownership, and expertise for digital circulation. Between Gil’s talk, our questions, and the readings, it seems there is a need to develop within the digital humanities a critical mass that can lead to a more fair, global, translingual, and socially just academy. We need to encourage the development of critical technical ideologies and practices in our work, scholarship, writing, and teaching.

These all bring important rhetorical concerns: how does work circulate, who owns it, how does or can an audience interact with it, and what effects does this have on the ultimate labor that knowledge does? In an open access world, we would do our best to account for the variety of technological barriers and literacies needed for effective and just knowledge making and sharing. His positioning of work, from within and without digital humanities and libraries, serves as a potential rallying cause for accessing new epistemologies and circulating them through new, digitally aware, texts and compositions. The de-centralized, data conscious, and knowledge centric views that he discussed were, in many ways, both inspiring and daunting.

Alex Gil’s various projects, Around DH in 80 Days, “The Digital Library of Babel,” and “The User, the Learner and the Machines We Make,” along with his talk, all provide us with potentials for the sort of scholarship that an access-conscious Digital Humanities can or could do. They also bring attention to the knowledge, linguistic, and access gaps that exist in scholarship, gaps that the Digital Humanities perpetuate through unquestioned assumptions and differences in resources. I found the readings on Global Outlook::Digital Humanities (GO::DH), and their work to utilize the translingual, transnational, and technological abilities of the digital humanities, to describe a vital and important project. Additionally, tools like Jekyll, along with a call to become proficient (and perhaps even expert teachers) in the technologies we use, can help us be more critical and purposeful in our choices and better able to describe our reasoning. This would allow us to exert resistance on the hegemonic structures that so often control scholarship.

I was worried by how logocentric this work tended to be, at least as presented to us. I know that text, in the material sense, is cheap: it is easy to produce and flows very quickly with very little bandwidth. This is also true if you think of the labor, cost, and time differences between black-and-white, text-only layout and producing something in color, or, historically, between typesetting and engraving. However, I think that this focus misses many of the epistemologies that the digital humanities can bring attention to and study. There is much to be gained from the study and production of visual, digital, and/or multimodal texts. There needs to continue to be a careful consideration of the many different ways in which we intersect and interact with technologies and knowledge.

Ultimately, I would like to join Alex Gil’s rebel force. I see value in open access, and in allowing epistemologies to negotiate and work with one another. This also gets to important questions of access to technology, social justice, and the connections between knowledge and production. I also believe that we can become more aware of the rhetorical nature of not only our compositions, but also the webs in which they circulate. Thinking about our work in a broader networked and material sense is important to engaging with the Digital Humanities. There is also value in thinking about the material, embodied, and situated nature of the tools and technologies we use, and how to best and most equitably engage with them.

I found his talk and our discussion productive, and I look forward to seeing where else we can draw from, and what new knowledge can arise out of these technological confluences.

DH, Alterity, & Cybercrud

The “rules” of UNIX (carried even further in Richard Stallman’s free-software stewardship of GNU, begun in 1983) are modularity, clarity, transparency, and simplicity; together they work to establish a unity in programming and a connectedness in work to come. There’s an inherent body of collaboration, maybe out of necessity and maybe out of spirit, depending on who is framing it, that runs through this UNIX narrative. Others will need to see, understand, and most importantly access this information later if the core of UNIX is going to segue into more usable programs in the future.

To extend Tara McPherson’s brilliant analogy of the UNIX timeline and the cultural movements of the ’60s, and please forgive the reductive nature of my poor generalization, I can’t help but focus on the zeitgeist of both branches of her timeline. As McPherson notes, the nature of UNIX’s establishment was both a response to the widening field of what programmers could work with and also an establishment of what they thought would soon be possible. There was a recognized kairos, and we see optimism and ingenuity emerge in response.

In 1972, Ted Nelson coined the term “cybercrud” (the veil of confusion, unnecessary jargon, and complex framing programmers purposefully use to keep computers as inaccessible to the ordinary user as possible) and fought against this kind of thinking. We see in the genius of Nelson an optimistic foresight of how computers would shape the world and what that movement would look like. The often-mythologized social movements of the so-called “post-war era” share this hope and eye toward the future. More than anything else, we see both of these movements crescendo in the official narrative of Steve Jobs: counterculture and computing work within capitalism to formulate a new kind of product that “thinks differently” and breaks free of the chains of oppression.

If I can deviate slightly from McPherson’s analogy, I think this is also the moment where everything begins to fall apart for both tracings. We see the Western rise of neoliberalism and the proprietary computer arms race shatter the original zeitgeist of both these movements. It’s not so much a modularity mentality as it is a capitalistic one: whoever can gain financially within their given sphere will also use that gain to suppress the advancements of others. Apple, after borrowing heavily from UNIX and others, quickly stymies anyone from borrowing from it. The post-war social movements lose traction and fall beneath the expanding powers of globalization and neoliberalism. Collaboration no longer guides the digital, and marginalized voices remain, despite our better intentions, marginalized.

I make this overly-simplified metaphor only to highlight the importance of how some of these readings are working against both of these established frameworks. Sayers’s, McGrail’s, and Gil‘s essays in Minimal Computing all centralize open source and accessibility, expanding upon the nature of how things should work and what we can create when we function in collaboration (and please forgive me for lumping these three distinct works as one—each should really be examined on their own merits). The usefulness of re-tooling our tools with minimalist approaches in order to increase access works to correct the consumerist takeover that shaped the rise of the personal computer and bore the spine of neoliberalism, even in the almost ironic (but not really, you know?) framing of advancing technology by removing some of the superfluous tools of technology.

This is a purposeful scaleback that aims to work against established systems of power and recontextualize creative thought while still maintaining the core of what constitutes the humanities. We see a reiteration of Nelson’s original concept of an open, learning, and growing digital community of scholarship that allows access to anyone who wants to contribute.

In the transformative value of re-shaping our view(s) of the humanities through the lens of digital scholarship, we see the unique creativity and connectedness in these works. In more than just a cursory nod to alterity, we see real, applicable ways of inclusive and collaborative learning that openly work to stretch beyond the hegemonic and create open learning spaces. For those of us who occupy Composition and Rhetoric, the implications of this are especially exciting, as our digital practices intersect in every way with the work of this presentation.

Some questions in advance of our meeting:

  1. There’s no dearth of innovation in the humanities, and this is especially so (at least I like to think) in the digital. Even with digital works, we see scholars and makers move around in the academic or digital world or shift focus to other projects. When we look at works like GO::DH, Ed, and The Open Syllabus Project, what kind of sustainability can we see or hope to see once the initial excitement has dissipated a little? Once a project like this has moved beyond the stage of the original creators? How could these projects maintain or re-purpose their roles in order to generate more diversity?
  2. As we’ve seen within the humanities, alterity is a priority in scholarship but often not in reality within the actual voices of the scholars. Beyond collaboration, (re)introducing erased historical texts into the canon, and increasing access of marginalized voices to places of conversation, how else do we counter the traditional thought of white hegemonic scholarship that still makes up the backbone of the humanities? As Gallon notes, even with black issues at the forefront of humanist conversation, there’s still a framework of “black voices vs. the hegemonic,” or black voices included as a footnote to the canon. [On second pass of this question: I know this is impossible to answer, but I’d be interested in any insight at all]

Finally, this is amazing.


In each article preceding the discussion of DH, Race, and Alterity, I found one major theme peeping through:  Accessibility.  Not just accessibility of reading content, but accessibility in understanding and creating content.  If DH is going to be universal and accomplish the goals of delivering high quality resources to all, DH needs to give more than just access to viewing content.

Is DH making its material accessible to as many people as possible, and if so, how is this being done? In The User, the Learner, and the Machines We Make, Alex Gil forwards the idea that minimal computing is a way toward accessibility. He cites Google’s search box, which appears quite minimal until one looks at the massive amount of code used to run this one box. Yet is minimal computing a good starting point for accessibility? More precisely, is everyone speaking the same minimal computing, given that Sayers’s article expands the minimal computing definition? McGrail’s Open Source in Open Access Environments touches on this question of overcomplication in community college settings. Is minimal computing helping community college students, or is it creating a difficult entry point to DH? How can DH ideas reach more people and truly be accessible without collapsing the integrity of the work studied?

Central to this issue of accessibility are race, gender, and international DH work.  In The (Digital) Library of Babel Alex Gil states, “a humanities gone digital brings not the future, but a new past.”  Digital Humanities can create new understandings by bringing together populations from culturally and socially disparate backgrounds to create new and interesting discussions about the world.  Yet Alex Gil states we need to take care of our own tents first.  The United States has its own struggles representing both gender and race equally within Digital Humanities.  Focusing on how to support our local tent is necessary to developing DH both at home and internationally.  Perhaps this local approach can be developed within collegiate frameworks of DH.  Yet still to be answered is the question:  How do we make DH accessible to all?

Houston Symphony Orchestra has a massive mission statement:  In 2025, the Houston Symphony will be America’s most relevant and accessible top-ten orchestra.  Yet when Mark Hanson became executive director of the symphony in 2010, he noticed that the majority of the symphony audience was white.  In Houston, which is 33% Anglo, 41% Hispanic, 18% African American, and 8% Asian, not engaging with multiple cultures means not being relevant or accessible.  In The Houston Symphony Diversity and Inclusion Case Study Mark Hanson states, “The Symphony can be as welcoming and as open as humanly possible but without intentional and deliberate strategies that address this feeling experienced by many from the African-American community, our organization and more importantly our art form will continue to remain unintentionally exclusive.” To become more inclusive they went straight to the source and developed three leadership councils filled with people from the communities they were trying to reach. The Houston Symphony Orchestra has since developed bilingual concerts, an African American chorus to perform for orchestra concerts, a Spanish composer series, free community tango concerts, and more to engage with their community.  Though it is solely a musical organization, the Houston Symphony Orchestra is dealing with the same issues as Digital Humanities.  The Houston Symphony Orchestra believes searching for answers at a community level will help them succeed in becoming one of the top 10 nationally recognized orchestras, but for DH, which is often thought of as a more national/international endeavor, would a local focus be acceptable?  Since the potential for DH is so expansive, should inclusivity in DH begin at a local or meta level?

DH, Race, and Alterity

Friday, April 21, 1:30-2:45 pm
Williams 013 (“Common Room,” basement level)

Building A “Republic of Letters” Beyond Anglocentrism: A Conversation with Alex Gil

Digital Scholars is pleased to welcome Alex Gil for its final meeting of the semester. Gil joins us via videoconference from Columbia University, where he is Digital Scholarship Coordinator for the Butler Humanities and History Division of the University Libraries (with affiliate status in the Department of English and Comparative Literature, and in the Department of Latin American and Iberian Cultures). Informed by his specializations in twentieth-century Caribbean literature and textual studies, Gil’s own postcolonialist fantasies have spawned large-scale projects that attempt to re/discover the multilingual and multinational scope of DH work, including the Global Outlook::Digital Humanities (GO::DH) initiative, and “Around DH in 80 Days,” launched in 2014 to “address[] the challenge of multi-directional and reciprocal visibility in an asymmetric field.”

“Around DH …” began as a Scalar-based, crowd-sourced mapping project, and ultimately featured hundreds of submissions from scholars around the globe. These and other of Gil’s projects simultaneously stem from and support three goals: (1) building digital platforms that support “minimal” editions of literary texts; (2) fostering open-source platforms to support postcolonial translation and pedagogy; and (3) making pathways for digital humanists to contend with a diverse intellectual kósmos.

Participants are invited to read the following in advance of our meeting:

and to browse the following projects:

For additional context or related conversations, participants are also invited to read:

All are welcome! We hope you can join us,


Knowing, Being, Mapping: Dr. Craft and GIS

The digital scholars meeting this month with guest lecturer and classics fellow Dr. Sarah Craft brought up fascinating questions on how we engage with traditional humanities methodologies when using digital technologies like geographic information systems (GIS). In this post, I’d like to briefly list the four major questions I saw arising from Craft’s research and the related readings and then address how Craft responds to these.

Perhaps the most straightforward question was about the nature of GIS: Is GIS a tool or a technology? In Knowles et al.’s article “Inductive Visualization: A Humanistic Alternative to GIS,” the authors share how GIS has been considered both a tool for analysis and a technology worthy of academic study in its own right. Each perspective brings with it underlying assumptions about the relationship between researcher and program. Craft addressed these questions within her own research in Serbia, where GIS is used as a “compilation tool.” She and her undergraduate research assistant used the program to “iteratively explore and visualize” the landscape before their upcoming field research. It allowed them to “integrate different data sets” and pinpoint areas for further fieldwork by filtering on factors like proximity to water and elevation. Such high-resolution data allows Craft to understand the physical terrain at a macro scale and run broad analyses that may not have been possible without GIS.
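The filtering step described above, intersecting criteria like elevation and proximity to water to flag cells worth ground-truthing, can be sketched in a few lines. This is only an illustration with fabricated arrays and invented thresholds, not Craft’s actual workflow or data; a real project would load georeferenced rasters through GIS software rather than hand-coded grids.

```python
import numpy as np

# Hypothetical rasters over the same 3x3 grid: elevation in meters and
# distance to the nearest water source in meters. These values are
# invented purely for illustration.
elevation = np.array([
    [120, 150, 300],
    [110, 140, 280],
    [ 90, 130, 260],
])
dist_to_water = np.array([
    [ 50, 400, 900],
    [ 60, 350, 800],
    [ 40, 300, 700],
])

def candidate_sites(elevation, dist_to_water, max_elev=200, max_dist=500):
    """Boolean mask of grid cells that satisfy every criterion at once."""
    return (elevation <= max_elev) & (dist_to_water <= max_dist)

mask = candidate_sites(elevation, dist_to_water)
rows, cols = np.nonzero(mask)  # grid indices to prioritize for fieldwork
```

The point of the sketch is simply that layered criteria reduce a large landscape to a short list of candidate locations, which is the kind of narrowing Craft describes before heading into the field.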

However, it is not without challenges; Craft describes the challenges that arise when working with published data. In his post, Jim asks a great question related to this problem and to the increasingly economized nature of GIS and similar digital programs: “Are we limiting our data collection to the immediate research or are we collecting enough data so that future scholars can ask new questions?” In her presentation, Craft describes the limitations that accompany using data not specifically gathered for her ends. The ways in which GIS is built might better facilitate specific kinds of analysis over others.

The second question arising from these readings and research is more epistemological. How do we “know” as researchers? Llobera argues against the traditional perspectives in archaeology where “the source of knowledge about prehistoric landscapes can only be obtained through the body of the archeologist” (499). He finds this perspective limiting and an unnecessary privileging of so-called “passive records” (499). Considering static images as being free from the “technological determinism” that troubles some archeologists about GIS is ultimately a fallacy. He questions whether the claims made from a physical study are intrinsically different from similar insights gleaned from digital mapping software. Instead of supposing physical experience as direct knowledge versus the mediated knowledge gleaned from representations, this perspective understands knowledge as being formed through embodied experiences and digital mapping programs.

Dr. Craft takes a similar approach to the issue; when she describes her work with GIS and her field surveys, she portrays them as complementary components of forming knowledge. Her lecture on her use of GIS reflects Knowles et al.’s claim that “cartography is a form of semiotics” (237). The mapping allows her to come to new places of insight; it is generative and symbolic. Her perspective reflects Llobera’s description of the “agential capacity of landscapes” and the way meaning is co-constituted through interaction between researcher, technology, and material world.

The third question I saw as integrally connected with Craft’s work and our discussion was ontological. How might we internalize concepts from theories to develop methodologies and interpretive frameworks? If one of the arguments against GIS is its tendency to shape our methods for research, then this question is of critical importance. I think our discussion following Craft’s presentation hit on this issue the most, but it’s difficult to tease out the implications of such dialogue.

The fourth and final question(s) in my understanding is about the relationship between GIS software and the material world when composing analytical maps. In what ways does GIS affirm, break, or problematize the perceived “direct correspondence” between software and material world? What happens when researchers try to map affective realities as in Knowles et al? What about when time is mapped as in Craft’s diachronic project on pilgrimage?  Craft described how she layered the chronological and spatial progression of her dissertation project, but also described her work as contradicting the move towards geographic visualization proposed by Gupta et al and Knowles et al.

Ultimately, the questions accompanying this research are not necessarily new questions for researchers in the digital humanities, but they do represent new possibilities — what Guldi describes as the spatial turn’s “impulse to position these new tools against old questions” (n.p.).

GIS, Dr. Craft’s Work, and Future Discovery

GIS research and development was a proud selling point for the Arts & Sciences department at the university where I previously worked, so I have had a little exposure to geovisual analytics. Interestingly, I have also seen 3D virtual reality visualizations of excavation sites the university was a part of, though I didn’t quite make the connection to their utility until our readings for this week. I’ve also had trouble mentally expanding the traditional concept of geographic mapping into what GIS adds. Moving into Dr. Craft’s presentation, I was curious about some of the implications for utilizing it and the different systems or uses it could be applied to, especially in light of her work with ancient settlements.

In discussing the implications of GIS in her work, Dr. Craft noted that each project site called for unique purposes, as “GIS lends itself to different data sets.” We saw this in how her first project, focused on Byzantine antiquity, explored scale and landscapes as they related to how people migrated in the area. She noted she underused GIS in this work, treating it only as a visual mapping tool. We saw this again in her next project in Romuliana, Serbia, where GIS was used more for “landscape analysis” as she searched for “what came before, after, and during” the existence of the palace. Focusing on how the landscape was shaped, she used existing records to (for lack of a better word) triangulate the activity surrounding roads, settlement locations that predated the Roman expansion, and mineral deposit records with GIS data. Instead of having GIS present visualizations for discovery, she used it to create connections between the previous data and build toward more meaning-making. Dr. Craft noted that this kind of discovery was particularly encouraged when more data was present.

Part of her process involves formulating research goals and seeing how those develop into more areas of research as projects advance. As Dr. Craft and our readings referred to “the spatial humanities” in this instance of discovery, I originally struggled with how this site of visualization differed from how we traditionally approach research. When she discussed her issues with access to some of these areas, I made the connection to the roadblocks this type of historical study may face. Instead of how we normally think of “access” in the humanities, Dr. Craft literally meant physical access to sites. In making this connection, I realized the extent of what her work represents and how she was rewriting traditional historical implications. This kind of discovery, more so than the actual data derived from GIS, is what makes Dr. Craft’s work so exciting. She raises questions about how we conceptualize historical data and what limits we’ve falsely assumed. Paired with the optimistic pessimism of Gupta and Devillers’s claims about scholars’ tendency to work down to “inadequate” tools, we may be near a tipping point in how we formulate our historic conceptions of humanity.

One implication that Dr. Craft only briefly discussed was how her work with data and GIS can support “predictive modeling” with both mapping and “spatial representations.” This opens up so many questions about where research like this can lead. How can we use it to predict or better understand global weather patterns and the migration of species and humans in the wake of climate change? What data can we cultivate to suggest future farming or agricultural spaces in the wake of swelling populations and national border disputes? What behavioral patterns of our past are predictive of movements in a globalist age?

Reflections on GIS, Archaeology, and the Spatial

Thinking ahead to our meeting and conversation with Dr. Craft, I’d like to consider a theme across the readings and some resulting questions that occurred while I tried to take in and make sense of an immense amount of research on a topic that I admittedly know little about.

The prominent theme woven across most of the texts was the notion that GIS, as traditionally conceived, constrains the options of those representing not only the data, but also the lived realities behind that data. The prominent response seems to be that those in archeology and geography should reconsider the role and means of visualization, whether that be related to virtual reality, geovisual analytics systems, or an approach such as inductive visualization.

In terms of dealing with questions and representations of space, GIS is useful, and Jo Guldi provides a list of concerns common to the “softer” disciplines, including “spatial questions about nations and their boundaries, states and surveillance, private property, and the perception of landscape, all of which fell into contestation during the nineteenth century.” While GIS offers itself for aggregating data and analyzing it, it contains limitations when attempting to address the embodied nature of places.

Gary Lock discusses a shift beyond the phenomenological in VR, noting that there has been a shift “from observational representation toward a representation of inhabitation, a dissolving of the subject/object dichotomy”—yet VR hasn’t been able to fully productively embrace the affordances of phenomenology because the technology still reifies the “detached gaze” of the observer (98). Instead, he argues for theories that continue to push at what it means to represent, theories in which “[t]he focus falls on how life takes shape and gains expression in shared experiences, everyday routines, fleeting encounters, embodied movements, precognitive triggers, practical skills, affective intensities, enduring urges, unexceptional interactions and sensuous dispositions” (Lorimer qtd. 99).

Lock highlights approaches to non-representational mapping using digital technologies that allow for the layering of data and images and improved searchability as well as for increased collaboration among users—these tools allow for inhabitation, for human experience and activity, in maps. Regarding collaboration, too, I found a post on the Antiquity À-la-carte blog noteworthy for its emphasis on the Creative Commons license and commercial uses of content without additional purposes, which seems to illustrate, in part, Lock’s point.

Gupta and Devillers are sensitive to a similar problem. Although GIS has allowed researchers to bring together data, analysis, and representation, its information-centric nature stifles the messiness intrinsic to depicting places where humans have lived and continue to live. The authors note that GIS fails in this regard “because [the] tools are often inadequate in facilitating an understanding of complex real-world processes and events. […] [Consequently,] archeologists too often reduce phenomena in size and complexity to match the capabilities of existing tools.”

To combat this effect, they turn their attention to advanced geovisual, or geographic visualization, analytical approaches that foreground researchers’ own “cognitive abilities (rather than equations and algorithms) to process information and generate new knowledge.” Interestingly, these efforts don’t only help return lived experience to maps—they also return human cognition and experience to research methods. Is this also a benefit of a shift towards more human-oriented methodologies and more embodied methods?

Anne Knowles et al.’s inductive visualization serves a similar purpose to Gupta and Devillers’ advanced geovisual methods. Considering the inherent shortcomings of the methodologies underpinning technologies such as GIS, they find fault with GIS for “the loss of meaning or the invention of meaning” when representational approaches have to contend with qualitative data (236). As such, they offer an approach that they term inductive visualization, in which researchers’ perceptions and intuitions suggest the most productive method for sorting through, analyzing, and representing data: it is a “creative, experimental exploration of the structure, content, and meaning of source material” (244).

Among their visuals in the article, one that seemed especially helpful to me in reflecting what the authors are arguing for is figure five, Erik Steiner’s graphic representation using grouped letters to show “relatively how much the women said in relation to the places and stages of their traumatic journey” (246). Yet I wonder then about the role of such visualizations in research (and I’m thinking here of scholars like Johanna Drucker and N. Katherine Hayles): what kind of weight do we give to more innovative representations when they appear, for instance, in work being reviewed for tenure? In rhet/comp, online publications are often valued less than print publications—is this a comparable phenomenon that we see in light of digital technologies’ effects on research?

Each of these approaches seems, to a degree, to answer Llobera’s assertion that an interpretive methodology makes room for rich, messy relationships and situations more fruitfully than the less flexible GIS-based methodologies, and I think we see the results of more integrative, “expressive” approaches in examples like Pleiades and ORBIS. Pleiades I find interesting for how it distinguishes between “places” and “locations” (even though the links direct to the same page, intentionally or accidentally), which might speak to how those in archeology, geography, etc. differentiate between cultural sites and movement and physical spaces.

Yet I have to wonder about the interactivity of the site, as it struck me as less user-friendly than I would have liked. When considering the design of humanist GIS efforts, how important is ease or intuitiveness of use? And do we ask that specialists like archeologists also function as programmers and designers? While Pleiades is intriguing, ORBIS seems more capacious in its features. One feature of ORBIS that surprised me and, the more I worked with it, struck me as exceedingly useful is the ability to define the month or season of the route. Thinking of layering, searchability, and inhabitation over representation, I think ORBIS best exemplifies an effective humanist map—the consideration for how expensive a route would be, for example, gestures to the humans and culture of the time and place.

A fun post-script: Sunday night, the History Channel aired two one-hour shows back-to-back about the mystery of the lost Roanoke colony. At one point, one of the specialists consulted was a geospatial archeologist who (very briefly) demonstrated how he used satellite topographical data in combination with data about the native tribes and data about copper mines to speculate about possible locations for copper ore—it seemed a useful and relevant example of what a humanist mapping of historical human activity looks like.

GIS – Through the Archaeological Lens

Spatial analysis in archaeology today encompasses a wide range of experiential, fieldwork-based, and deterministic approaches that vary considerably in their intended purpose and theoretical underpinnings.  The rapid uptake of computational methods such as Geographical Information Systems (GIS) and related methods in archaeology from the late 1980s and early 1990s marks a disciplinary change, for enthusiasts and critics alike, increasing by an order of magnitude the quantity of spatial data that could be managed and analyzed, especially for those working at the scale of entire archaeological landscapes.

Over the past few decades there has been a long-running debate between two strands of archaeological theory.  A very brief summary of the argument follows.  Proponents of post-processual, qualitative, experientialist, or phenomenological landscape theory in archaeology have argued that quantitative or empirical techniques, which include GIS-based mapping methods and predictive techniques, effectively dehumanize and distort the past through an ethnocentric gaze.  In response, strong criticisms have been raised about the validity of evidence presented within the qualitative, experiential, or phenomenological frameworks, especially research methods that are characterized as highly subjective attempts to empathize with the lives of long-dead human beings.

Several of these readings illustrate the gap between these theories.  As I reflect upon this divergence, GIS becomes collateral damage.  The underlying argument is not about GIS; it is about the interpretation of the data that is evaluated.  GIS, by definition, is a system that keeps track of where events happen or exist and when.  It is a platform for creating and maintaining maps and a tool for querying, editing, and analyzing spatial data.  I sense, in these arguments and in the articles leading up to our discussion, that the keyword “analyze” is what makes GIS bear the responsibility of interpretation.  While analysis is the groundwork for interpretation, data collection is the groundwork for analysis.  Any visualization of data is always dependent on the underlying data collected.
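To make this definition concrete—GIS as a system that tracks where and when things exist, plus tools for querying that data—here is a deliberately minimal sketch in plain Python. The finds, coordinates, and bounding box are all invented for illustration; a real GIS platform does vastly more, but the skeleton of a spatial-temporal query looks like this:

```python
from dataclasses import dataclass

@dataclass
class Find:
    """A minimal 'GIS record': what was found, where, and when."""
    label: str
    lat: float   # decimal degrees
    lon: float
    year: int    # year recorded

# Hypothetical data for illustration only.
finds = [
    Find("pottery shard", 35.17, 33.36, 1992),
    Find("copper slag",   35.20, 33.40, 2004),
    Find("coin hoard",    34.90, 33.62, 2011),
]

def in_region(f, lat_min, lat_max, lon_min, lon_max):
    """Spatial query: is this find inside a bounding box?"""
    return lat_min <= f.lat <= lat_max and lon_min <= f.lon <= lon_max

# "Where and when": finds within a region, recorded after 1990.
hits = [f.label for f in finds
        if in_region(f, 35.0, 35.5, 33.0, 33.5) and f.year >= 1990]
print(hits)  # ['pottery shard', 'copper slag']
```

The point made above holds even in this toy version: the query can only return what was collected, so the interpretive weight rests entirely on the adequacy of the underlying data, not on the querying tool itself.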

Virtually every attempt to economize a process—with GIS or not—presents certain challenges to interpretation and knowledge production, and thus all such attempts should be analyzed critically in terms of their methodological or interpretive efficacy.  As scholars we must ask: are we getting the right data; are we asking the right questions?  As archaeologists, we must not only consider our immediate questions but also be mindful of the entirety of the data collection process.  Are we limiting our data collection to the immediate research, or are we collecting enough data so that future scholars can ask new questions?  This is especially true of excavation sites: once a site is excavated, it has been changed and cannot be restored to its undisturbed state.

I look forward to the chance to learn about Dr. Craft’s project.

GIS and Archaeology

Thursday, March 30, 2:30-3:45 pm
Strozier 107K [map]

Spatial Patterns, Spatial Evidence: GIS and Archaeology

With the promotion of what spatial humanists call “deep maps,” historians are provided tools for charting what is amenable to and what is excluded from any geographic purview, allowing them to look beyond what is memorable and concrete (Bodenhamer et al. 2015; Bodenhamer et al. 2013; Guldi 2014). Advanced spatial technologies afforded by multilayered geographic information systems (GIS) are growing in popularity, not only enabling the animated reproduction of ancient sites but also allowing complex maps to show cultural reflexivity through the representation of “personalities, emotions, values, and poetics, the visible and invisible aspects of a place” (Bodenhamer et al. 2013, 172). Ideally, what results are historical narratives that are more fluid than finite, reflecting complex events or actions at any scale.

Yet the convergence of GIS with specific kinds of historical activities creates a representational challenge of humanistic proportions. Beyond the questions of cultural precision and representational accuracy, how can using certain GIS technologies do more than validate a single research agenda? How does geovisualization enable or constrain our ability to interrogate its appropriateness for intellectual work? What assumptions does GIS-enabled archaeology make about the viability of locational data, and about how historians should access or interpret it? Digital Scholars is pleased to welcome Dr. Sarah Craft, postdoctoral fellow in Classics at FSU, to facilitate discussion on these questions and to present on her work. Since 2013, Dr. Craft has been actively proposing and developing landscape archaeology projects in different regions of the world, with a special eye toward methodological critique.

Participants are invited to read the following in advance of our meeting:

and to browse the following projects:

All are welcome! We hope you can join us,