The Privileges of Data in DH

Last semester I complained (lightly, in self-deprecation) about the nebulousness of the common threads among our Digital Humanities readings, without yet fully appreciating how diverse writing about the Digital Humanities is as a body of work, or how difficult a meaningful inspection of intersectionality can be. I was obviously wrong in both the substance of that complaint and in the act of complaining in the face of a nuanced critical analysis; I will not make that mistake this week. Still, my first thought upon reading these selections was very similar to my reaction last semester: what a diverse selection!

Filtering the readings through McPherson’s text probably creates the best lens for approaching this theme as a unifier: how are we roadblocking alterity?

As this week’s selections teach us, attempting to narrow our perspective (as I do with filtering these readings through the lens of alterity in the Digital Humanities) also demonstrates our privilege.

Through Tsing’s discussion of scalability, we see the common hegemonic practice of simplifying for the sake of clarity in expression: a lot gets ignored once inconvenient data is swept aside, never considered, or misplaced for the sake of convenience. As Tsing notes, when we contextualize data, we apply a very focused filter to what we can understand and compare. Data that falls out of our view is reduced in importance or ignored. We systematically process information in ways we’ve been taught or shown; often, this happens without regard for how those larger frameworks compartmentalize and exclude.

As my ignorance last semester demonstrated, comprehending only those knowledge systems works to falsely reinforce our (mis)understandings of data. As Rawson and Muñoz succinctly note, “this reductiveness can feel intellectually impoverishing to scholars who have spent their careers working through particular kinds of historical and cultural complexity.” Though not always apparent within our own work, we should be aware of how our own perspectives can be intellectual dampeners that reinforce our own privileges; what we sometimes see as clarity also creates adjacent distortions (as we see with ULAN’s database not recognizing gender as a spectrum, and with short-handed representations of visual spaces). As developing scholars, we have to actively practice awareness of the daunting ignorance these unseen and non-representative knowledges sustain.

In discussing UNIX last semester (a personal scholarly interest), I used McPherson as a springboard to link the ideology of UNIX’s original community to that of access and representation. Until now, I never considered what this mythologized narrative flattens: What non-Western approaches to digital access were steamrolled by the language or system barriers established by UNIX-running systems? Whose work disappeared or went uncredited in establishing open-source databases? What aspects of UNIX coding fit into frameworks that favor masculine input and privilege hegemonic processes?

Even in projects with altruistic intentions, the majority of now-recognized pre-Silicon Valley programmers were white males.

Some questions to consider ahead of Stanley’s presentation:

  1. Thinking about our specific fields, interests, and research, what are the frameworks and taxonomies we deal with but rarely consider alternative approaches to? What do they downplay, hide, or misrepresent? What knowledges do they frustrate? More importantly, how can we respond to this?
  2. Even within the context of these articles (and my post), binary framings are centerpoints (i.e., voices of alterity v. hegemonic ones; flat v. widened views; close v. distant readings; inclusive databases v. exclusive ones). What are these articles also missing in their representations, and how can we respond to what they do not discuss?

In/Visibility and Exclusion in Creating DH Taxonomies

Friday, September 22, 1:30-2:45 pm
Strozier Library TADS Commons (ground level, past the quiet study area)

Aspects of Visibility: Reckoning with the Taxonomizing Impulse of the Digital Humanities

Digital Scholars is pleased to welcome Sarah Stanley, DH Specialist and Librarian at Florida State University, to help usher in this semester’s discussions of what can occur at the methodological intersections of DH, race, and alterity. Stanley asks us to consider and interrogate various attitudes toward building taxonomies that undergird a majority of DH projects, whether those taxonomies seek to render multiple phenomena in “same-as” relationships rather than critically distant ones (Drucker, 2011), or whether they seek to articulate phenomena as a hierarchical ordering of relationships that function on a measurable scale (Tsing, 2012).

For those who work in and around network, digital, or visual studies, such a call to rethink taxonomies seems not unfamiliar. In her prologue to Graphesis (2014), for example, Johanna Drucker differentiates between a diagrammatic image that “produces the knowledge it draws” and a digitally rendered image of Web traffic that “only displays information” (1, italics original), arguing that our rendered images—like our networks and queries—are situated and thus in need of nuanced distinctions between those visualized representations that construct information vs. those visualizations that merely re-present. For those who work with data—especially with the mining, construction, or interpretation of indigenous or culturally sensitive data sets—such a call to rethink taxonomies is especially salient to avoid recreating ontological dilemmas that flatten or erase difference.

Yet, what practices (or impulses) might we put in their place? Moreover, with what aspects of visibility should we be willing to contend? Finally, at what cost to particular notions of the “digital” or the “humanities” should these contentions occur? To help us work through these questions, participants are invited to read the following in advance of our meeting:

For additional context or related conversations, participants are also invited to browse, skim, or reread any of the following:

All are welcome! We hope you can join us,


Issues and Debates at the Intersection(s) of DH, Race, and Alterity

Friday, September 8, 12:00-1:15 pm [cancelled due to FSU closures]
WMS 317 (Williams Building, 3rd floor, L off elevators)

Alex Gil’s virtual visit to our reading group and collaboratory last April was memorable, not only for his dogged persistence in modeling ways of de-colonizing the digital humanities, but also for his honest admission of the challenges we face when starting and sustaining epistemic collaborations among cultural groups. Our meetings during the Fall 2017 semester will continue, if not heighten, discussions of these challenges and collaborations.

This semester, most of our discussions will occur at various intersection(s) of DH, race, and alterity — some of them approaching the intersection via aesthetics and critique, others looking humanistically at mechanisms or methodologies, and still others interrogating morality and access.

While the September 8 meeting is primarily for graduate students enrolled in or regularly attending the group, all Digital Scholars participants are welcome to read and join us for conversation on the following:

Looking forward to it,


200 Fingers Listening: The New Discipline and Matters of Alterity

Last Friday, we had the privilege of hearing Alex Gil speak, and while participants expected he would be speaking to the race and alterity issues present in our readings, I was a little surprised when he did not. I kept thinking, “He’s going to talk about race and alterity now.” He mentioned them a few times, and when our conversation via Google Hangouts ended, I was scratching my head. Retrospectively, though, much of what he said relates to race and alterity, and I hope to tease out some of those connections through the course of this blog post.

During his talk on April 21, 2017, Gil advocated for a new discipline existing outside of the digital humanities, whose purpose would be to “understand the creation, distribution, and location of all scholarship in all languages at all times.” He also noted that such a discipline would be very materialist, based on empirical observation, and that the documents its disciplinarians deal with are finite in number and can exist in analog form (a pottery shard, a paper manuscript) or as bytes (an electronic journal article). We are still unsure what this discipline will be called, but for the sake of this blog post, I will refer to it as the new discipline.

Such a discipline has, according to Gil, three notable characteristics: knowledge architecture, cultural analytics, and a jack-of-all-trades workforce. Each member of that workforce holds multiple analytical and technological literacies, making such workers capable not only of gathering and creating knowledge but also of building the databases and platforms that host it, storing it on non-institutional hardware like jump drives, publishing it in journals, and analyzing it. Gil’s discussion dealt heavily with the survivability of materials and how we should think about this materially; he gave the example of a database small enough to be placed on a jump drive.

Among all of this talk of disciplinarity, materiality, and access, I see faint but definite connections to alterity woven through these diverse issues. In her 1999 CCCC address, Cynthia Selfe notes that marginalized groups, specifically African Americans and lower-class citizens, generally lack access to technology and therefore cannot gain the literacies needed to succeed. In his talk, Gil noted that community colleges and smaller institutions would likely be the root of this new discipline. In relation to this week’s readings, Sayers advocates for minimal barriers to access, and McGrail notes that universities and community colleges have power differences that are “mask[ed]” by their “share[d] disciplinary affinities.” Finally, Gallon discusses a “technology of recovery” through which African Americans and other marginalized groups can reclaim certain knowledges and epistemologies.

In a way, the new discipline seeks to do just that: to reclaim from those in power the rights to data (freely given) and to move away from data (which must be taken). While somewhat idealistic in its constructs, and despite doubts of its success, in its aim to understand all knowledges in all languages the new discipline’s greatest asset might be diversity. If the 200 fingers listening that Gil spoke of are all diverse in background, skillsets, and demographics, then perhaps that diversity would further contribute to the bodies of knowledge, to the new discipline’s ability to know everything in every language.

In Which We Talked about Systems, Publishing, and Open Access: Or, the Theme Was Alterity and Race in the Digital Humanities

Alex Gil was kind enough to join us for an afternoon in a Google Hangout to talk about data, publishing, systems, open access, and creating more accessible digital programs and scholarship. His talk focused on the need to create a digital revolution, one that uses the humanities’ unique abilities as good distant readers, knowledge architects, and cultural analyzers, combined with a digital consciousness of access and labor, to create and encourage new epistemologies. He argued for a new paradigm of thought that is technologically literate, resource conscious, open, and aware of labor. He positioned the Digital Humanities as a place where this sort of translingual and transnational work can be cultivated.

I found his call for a new discipline for the 21st-century academy intriguing. This discipline would work to “understand the creation, distribution, and location of all scholarship, in all languages, at all times” (Gil). There is a sense of speaking truth to power in its questioning and unsettling of the underlying assumptions behind the labor and material practices of modern scholarship within the academy and its associated businesses. Gil detailed the pervasive issues with the ways in which knowledge, in its various material forms, is housed, sold, distributed, and used. There are clear needs to look not only at what we produce, but at how, where, and for whom. Underlying these issues are concerns with labor practices, intellectual property, ownership, and expertise for digital circulation. Between Gil’s talk, our questions, and the readings, it seems there is a need to develop within the digital humanities a critical mass that could lead to a fairer, more global, translingual, and socially just academy. We need to encourage the development of critical technical ideologies and practices in our work, scholarship, writing, and teaching.

These all raise important rhetorical concerns: how does work circulate, who owns it, how can an audience interact with it, and what effects do these have on the ultimate labor that knowledge does? In an open-access world, we would do our best to account for the variety of technological barriers and literacies needed for effective and just knowledge making and sharing. His positioning of work from within and without the digital humanities and libraries serves as a potential rallying cause for accessing new epistemologies and circulating them through new, digitally aware texts and compositions. The de-centralized, data-conscious, and knowledge-centric views he discussed were, in many ways, both inspiring and daunting.

Alex Gil’s various projects—Around DH in 80 Days, “The (Digital) Library of Babel,” and “The User, the Learner and the Machines We Make”—together with his talk all show the sort of scholarship that an access-conscious Digital Humanities can or could do. They also draw attention to the knowledge, linguistic, and access gaps that exist in scholarship, gaps that the Digital Humanities perpetuates through unquestioned assumptions and differences in resources. I found the readings on Global Outlook::Digital Humanities (GO::DH), and its work to harness the translingual, transnational, and technological abilities of the digital humanities, to be a vital and important project. Also important are tools like Jekyll, and the call to become proficient (and perhaps even expert teachers) in the technologies we use, so that we can be more critical and purposeful in our choices and better able to describe our reasoning. This would allow us to exert resistance against the hegemonic structures that so often control scholarship.

I was worried by how logocentric this work tended to be, at least as presented to us. I know that text, in the material sense, is cheap: it is easy, and it flows very quickly with very little bandwidth. This is also true if you think of the labor, cost, and time differences between a black-and-white, text-only layout and producing something in color, or historically between typesetting and engraving. However, I think this focus misses many of the epistemologies that the digital humanities can bring attention to and study. There is much to be gained from the study and production of visual, digital, and/or multimodal texts. We need to continue carefully considering the many different ways in which we intersect and interact with technologies and knowledge.

Ultimately, I would like to join Alex Gil’s rebel force. I see value in open access, and in allowing epistemologies to negotiate and work with one another. This also raises important questions of access to technology, social justice, and the connections between knowledge and production. I also believe we can become more aware of the rhetorical nature not only of our compositions, but also of the webs in which they circulate. Thinking about our work in a broader networked and material sense is important to engaging with the Digital Humanities. There is also value in thinking about the material, embodied, and situated nature of the tools and technologies we use, and how to engage with them best and most equitably.

I found his talk and our discussion productive, and I look forward to seeing where else we can draw from, and what new knowledge can arise out of these technological confluences.

DH, Alterity, & Cybercrud

The “rules” of UNIX (carried even further in Richard Stallman’s free-software stewardship of GNU, begun in 1983) are modularity, clarity, transparency, and simplicity, all of which work to establish a unity in programming and a connectedness in the work to come. There’s an inherent body of collaboration—maybe out of necessity and maybe out of spirit, depending on who is framing it—that runs through this UNIX narrative. Others will need to see, understand, and most importantly access this information later if the core of UNIX is going to segue into more usable programs in the future.

To extend Tara McPherson’s brilliant analogy between the UNIX timeline and the cultural movements of the ’60s (and please forgive the reductive nature of my poor generalization), I can’t help but focus on the zeitgeist of both branches of her timeline. As McPherson notes, the establishment of UNIX was both a response to the widening field of what programmers could work with and a declaration of what they thought would soon be possible. There was a recognized kairos, and we see optimism and ingenuity emerge in response.

In 1972, Ted Nelson coined the term “cybercrud” for the veil of confusion, unnecessary jargon, and complex framing programmers purposefully use to keep computers as inaccessible to the ordinary user as possible, and he fought against this kind of thinking. In Nelson’s genius we see an optimistic foresight of how computers would shape the world and what that movement would look like. The often-mythologized social movements of the so-called “postwar era” share this hope and eye toward the future. More than anywhere else, we see both of these movements crescendo in the official narrative of Steve Jobs: counterculture and computing work within capitalism to formulate a new kind of product that “thinks different” and breaks free the chains of oppression.

If I can deviate slightly from McPherson’s analogy, I think this is also the moment where everything begins to fall apart for both tracings. We see the Western rise of neoliberalism and the proprietary computer arms race shatter the original zeitgeist of both movements. It’s not so much a modularity mentality as a capitalistic one: whoever can gain financially within a given sphere will also use that gain to suppress the advancements of others. Apple, after borrowing heavily from UNIX and others, quickly stymies anyone from borrowing from it. The post-war social movements lose traction and fall within the expanding powers of globalized neoliberalism. Collaboration no longer guides the digital, and marginalized voices remain, despite our better intentions, marginalized.

I make this overly simplified metaphor only to highlight how some of these readings work against both of these established frameworks. Sayers’s, McGrail’s, and Gil’s essays in Minimal Computing all centralize open source and accessibility, expanding on how things should work and what we can create when we function in collaboration (and please forgive me for lumping these three distinct works as one; each should really be examined on its own merits). Re-tooling our tools with minimalist approaches in order to increase access works to correct the consumerist takeover that shaped the rise of the personal computer and bore the spine of neoliberalism, even in the almost ironic (but not really, you know?) framing of advancing technology by removing some of its superfluous tools.

This is a purposeful scaleback that aims to work against established systems of power and to recontextualize creative thought while still maintaining the core of what constitutes the humanities. We see a reiteration of Nelson’s original concept of an open, learning, and growing digital community of scholarship that allows access to anyone who wants to contribute.

In the transformative value of re-shaping our view(s) of the humanities through the lens of digital scholarship, we see the unique creativity and connectedness of these works. In more than just a cursory nod to alterity, we see real, applicable forms of inclusive and collaborative learning that openly work to stretch beyond the hegemonic and create open learning spaces. For those of us in Composition and Rhetoric, the implications are especially exciting, as our digital practices intersect in every way with the work of this presentation.

Some questions in advance of our meeting:

  1. There’s no dearth of innovation in the humanities, and this is especially so (at least I like to think) in the digital. Even with digital works, we see scholars and makers move around in the academic or digital world or shift focus to other projects. When we look at works like GO::DH, Ed, and The Open Syllabus Project, what kind of sustainability can we see, or hope to see, once the initial excitement has dissipated a little? Once a project like this has moved beyond the stage of its original creators? How could these projects maintain or re-purpose their roles in order to generate more diversity?
  2. As we’ve seen within the humanities, alterity is a priority in scholarship but often not in reality, within the actual voices of the scholars. Beyond collaboration, (re)introducing erased historical texts into the canon, and increasing the access of marginalized voices to places of conversation, how else do we counter the tradition-bound thought of white hegemonic scholarship that still makes up the backbone of the humanities? As Gallon notes, even with black issues at the forefront of humanist conversation, there’s still a framework of “black voices vs. the hegemonic,” or black voices included as a footnote to the canon. [On a second pass of this question: I know this is impossible to answer, but I’d be interested in any insight at all]

Finally, this is amazing.


In each article preceding the discussion of DH, Race, and Alterity, I found one major theme peeking through: accessibility. Not just accessibility of reading content, but accessibility in understanding and creating content. If DH is going to be universal and accomplish the goal of delivering high-quality resources to all, it needs to give more than just access to viewing content.

Is DH making its material accessible to as many people as possible, and if so, how is this being done? In “The User, the Learner, and the Machines We Make,” Alex Gil forwards the idea that minimal computing is a way toward accessibility, citing Google’s search box, which looks quite minimal until one examines the massive amount of code used to run this one box. Yet is minimal computing a good starting point for accessibility? More precisely, is everyone speaking the same minimal computing, given that Sayers’s article expands the definition of minimal computing? McGrail’s “Open Source in Open Access Environments” touches on this question of overcomplication in community college settings. Is minimal computing helping community college students, or is it creating a difficult entry point to DH? How can DH ideas reach more people and truly be accessible without collapsing the integrity of the work studied?

Central to this issue of accessibility are race, gender, and international DH work. In “The (Digital) Library of Babel,” Alex Gil states, “a humanities gone digital brings not the future, but a new past.” Digital Humanities can create new understandings by bringing together populations from culturally and socially disparate backgrounds to create new and interesting discussions about the world. Yet Gil also says we need to take care of our own tents first. The United States has its own struggles representing both gender and race equally within the Digital Humanities. Focusing on how to support our local tent is necessary to developing DH both at home and internationally. Perhaps this local approach can be developed within collegiate frameworks of DH. Yet one question remains to be answered: how do we make DH accessible to all?

Houston Symphony Orchestra has a massive mission statement:  In 2025, the Houston Symphony will be America’s most relevant and accessible top-ten orchestra.  Yet when Mark Hanson became executive director of the symphony in 2010, he noticed that the majority of the symphony audience was white.  In Houston, which is 33% Anglo, 41% Hispanic, 18% African American, and 8% Asian, not engaging with multiple cultures means not being relevant or accessible.  In The Houston Symphony Diversity and Inclusion Case Study Mark Hanson states, “The Symphony can be as welcoming and as open as humanly possible but without intentional and deliberate strategies that address this feeling experienced by many from the African-American community, our organization and more importantly our art form will continue to remain unintentionally exclusive.” To become more inclusive they went straight to the source and developed three leadership councils filled with people from the communities they were trying to reach. The Houston Symphony Orchestra has since developed bilingual concerts, an African American chorus to perform for orchestra concerts, a Spanish composer series, free community tango concerts, and more to engage with their community.  Though it is solely a musical organization, the Houston Symphony Orchestra is dealing with the same issues as Digital Humanities.  The Houston Symphony Orchestra believes searching for answers at a community level will help them succeed in becoming one of the top 10 nationally recognized orchestras, but for DH, which is often thought of as a more national/international endeavor, would a local focus be acceptable?  Since the potential for DH is so expansive, should inclusivity in DH begin at a local or meta level?

DH, Race, and Alterity

Friday, April 21, 1:30-2:45 pm
Williams 013 (“Common Room,” basement level)

Building A “Republic of Letters” Beyond Anglocentrism: A Conversation with Alex Gil

Digital Scholars is pleased to welcome Alex Gil for its final meeting of the semester. Gil joins us via videoconference from Columbia University, where he is Digital Scholarship Coordinator for the Butler Humanities and History Division of the University Libraries (with affiliate status in the Department of English and Comparative Literature, and in the Department of Latin American and Iberian Cultures). Informed by his specializations in twentieth-century Caribbean literature and textual studies, Gil’s own postcolonialist fantasies have spawned large-scale projects that attempt to re/discover the multilingual and multinational scope of DH work, including the Global Outlook::Digital Humanities (GO::DH) initiative, and “Around DH in 80 Days,” launched in 2014 to “address[] the challenge of multi-directional and reciprocal visibility in an asymmetric field.”

“Around DH …” began as a Scalar-based, crowd-sourced mapping project, and ultimately featured hundreds of submissions from scholars around the globe. These and Gil’s other projects simultaneously stem from and support three goals: (1) building digital platforms that support “minimal” editions of literary texts; (2) fostering open-source platforms to support postcolonial translation and pedagogy; and (3) making pathways for digital humanists to contend with a diverse intellectual kósmos.

Participants are invited to read the following in advance of our meeting:

and to browse the following projects:

For additional context or related conversations, participants are also invited to read:

All are welcome! We hope you can join us,


Knowing, Being, Mapping: Dr. Craft and GIS

This month’s Digital Scholars meeting, with guest lecturer and Classics fellow Dr. Sarah Craft, brought up fascinating questions about how we engage traditional humanities methodologies when using digital technologies like geographic information systems (GIS). In this post, I’d like to briefly list the four major questions I saw arising from Craft’s research and the related readings, and then address how Craft responds to them.

Perhaps the most straightforward question was about the nature of GIS: is GIS a tool or a technology? In Knowles et al.’s article “Inductive Visualization: A Humanistic Alternative to GIS,” the authors share how GIS has been considered both a tool for analysis and a technology worthy of academic study in its own right. Each perspective brings with it underlying assumptions about the relationship between researcher and program. Craft addressed these questions within her own research in Serbia, where GIS is used as a “compilation tool.” She and her undergraduate research assistant used the program to “iteratively explore and visualize” the landscape before their upcoming field research. It allowed them to “integrate different data sets” and pinpoint areas for further fieldwork by filtering on factors like proximity to water and elevation. Such high-resolution data allows Craft to understand the physical terrain at a macro scale and run broad analyses that may not have been possible without GIS.
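To make the kind of query described above concrete, here is a minimal sketch of a GIS-style attribute filter that combines an elevation threshold with proximity to water. All site names, coordinates, and thresholds are invented for illustration; this is not Craft’s actual Serbian data set or software.

```python
import math

# Hypothetical candidate locations: (name, x_km, y_km, elevation_m).
sites = [
    ("ridge_a", 2.0, 3.0, 420),
    ("valley_b", 1.0, 1.0, 180),
    ("plateau_c", 5.0, 5.0, 300),
]

# Hypothetical water features as points on the same (x, y) grid.
water = [(1.5, 1.2), (4.8, 5.3)]

def near_water(x, y, max_km=1.0):
    """True if any water feature lies within max_km of (x, y)."""
    return any(math.dist((x, y), w) <= max_km for w in water)

def candidate_sites(sites, max_elev_m=350, max_water_km=1.0):
    """Filter sites the way a GIS query might: low elevation, close to water."""
    return [
        name for (name, x, y, elev) in sites
        if elev <= max_elev_m and near_water(x, y, max_water_km)
    ]

print(candidate_sites(sites))  # → ['valley_b', 'plateau_c']
```

Real GIS packages perform this same logic over raster and vector layers rather than point lists, but the principle — narrowing a landscape to fieldwork candidates by stacking simple criteria — is the same.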

However, GIS is not without challenges; Craft describes the difficulties that arise when working with published data. In his post, Jim asks a great question related to this problem and to the increasingly economized nature of GIS and similar digital programs: “Are we limiting our data collection to the immediate research or are we collecting enough data so that future scholars can ask new questions?” In her presentation, Craft described the limitations that accompany using data not specifically gathered for her ends. The way GIS is built may facilitate certain kinds of analysis over others.

The second question arising from these readings and research is more epistemological. How do we “know” as researchers? Llobera argues against the traditional perspectives in archaeology where “the source of knowledge about prehistoric landscapes can only be obtained through the body of the archeologist” (499). He finds this perspective limiting and an unnecessary privileging of so-called “passive records” (499). Considering static images as being free from the “technological determinism” that troubles some archeologists about GIS is ultimately a fallacy. He questions whether the claims made from a physical study are intrinsically different from similar insights gleaned from digital mapping software. Instead of supposing physical experience as direct knowledge versus the mediated knowledge gleaned from representations, this perspective understands knowledge as being formed through embodied experiences and digital mapping programs.

Dr. Craft takes a similar approach to the issue; when she describes her work with GIS and her field surveys, she portrays them as complementary components of forming knowledge. Her lecture on her use of GIS reflects Knowles et al.’s claim that “cartography is a form of semiotics” (237). The mapping allows her to come to new places of insight; it is generative and symbolic. Her perspective reflects Llobera’s description of the “agential capacity of landscapes” and the way meaning is co-constituted through interaction between researcher, technology, and material world.

The third question I saw as integrally connected with Craft’s work and our discussion was ontological. How might we internalize concepts from theories to develop methodologies and interpretive frameworks? If one of the arguments against GIS is its tendency to shape our methods for research, then this question is of critical importance. I think our discussion following Craft’s presentation hit on this issue the most, but it’s difficult to tease out the implications of such dialogue.

The fourth and final question(s) in my understanding is about the relationship between GIS software and the material world when composing analytical maps. In what ways does GIS affirm, break, or problematize the perceived “direct correspondence” between software and material world? What happens when researchers try to map affective realities, as in Knowles et al.? What about when time is mapped, as in Craft’s diachronic project on pilgrimage? Craft described how she layered the chronological and spatial progression of her dissertation project, but also described her work as contradicting the move towards geographic visualization proposed by Gupta et al. and Knowles et al.

Ultimately, the questions accompanying this research are not necessarily new questions for researchers in the digital humanities, but they do represent new possibilities — what Guldi describes as the spatial turn’s “impulse to position these new tools against old questions” (n.p.).

GIS, Dr. Craft’s Work, and Future Discovery

GIS research and development was a proud selling point for the Arts & Sciences department at the university where I previously worked, so I have had a little exposure to geovisual analytics. Interestingly, I have also seen 3D virtual reality visualizations of excavation sites the university participated in, though I didn’t quite make the connection to their utility until our readings for this week. I’ve also had trouble mentally expanding the traditional concept of geographic mapping to encompass what GIS adds. Moving into Dr. Craft’s presentation, I was curious about some of the implications of utilizing it and the different systems or uses it could be applied to, especially in light of her work with ancient settlements.

In discussing the implications of GIS in her work, Dr. Craft noted that each project site called for unique purposes, as “GIS lends itself to different data sets.” We saw this in how her first project, focused on Byzantine antiquity, explored scale and landscapes as they related to how people migrated in the area. She noted that she had underused GIS in this work, treating it only as a visual mapping tool. We saw a different approach in her next project, where GIS was used more for “landscape analysis” in Romuliana, Serbia, as she searched for “what came before, after, and during” the existence of the palace. Focusing on how the landscape was shaped, she used existing records to (for lack of a better word) triangulate the activity surrounding roads, settlement locations that predated the Roman expansion, and mineral deposit records with GIS data. Instead of having GIS present visualizations for discovery, she used it to create connections among the previous data and build toward more meaning-making. Dr. Craft noted that the act of discovery was particularly encouraged when more data was present.

Part of her process involves formulating research goals and seeing how those develop into more areas of research as projects advance. As Dr. Craft and our readings referred to “the spatial humanities” in this instance of discovery, I originally struggled with how this site of visualization differed from how we traditionally approach research. When she discussed her issues with access to some of these areas, I made the connection to the roadblocks this type of historical study may face. Unlike how we normally think of “access” in the humanities, Dr. Craft literally meant physical access to sites. In making this connection, I realized the extent of what her work represents and how she was rewriting traditional historical implications. This kind of discovery, more so than the actual data derived from GIS, is what makes Dr. Craft’s work so exciting. She raises questions about how we conceptualize historical data and what limits we’ve falsely assumed. Paired with the optimistic pessimism of Gupta and Devillers’s claims about scholars’ tendency to scale their work down to “inadequate” tools, we may be near a tipping point in how we formulate our historical conceptions of humanity.

One of the implications that Dr. Craft only briefly discussed was how her work with data and GIS can create “predictive modeling” through both mapping and “spatial representations.” This opens up so many questions about where research like this can lead. How can we use it to predict or better understand global weather patterns and the migration of species and humans in the wake of climate change? What data can we cultivate to suggest future farming or agricultural spaces in the wake of swelling populations and national border disputes? What behavioral patterns of our past are predictive of movements in a globalist age?