Making as Breaking

In his recent talk (“From Lab to Classroom: Live Methods and Prototyping in the Arts and Humanities”), Jentery Sayers traces the contextual origination, methodological orientation, and pedagogical thrust of his work at the University of Victoria’s Maker Lab in the Humanities (or MLab). While MLab does represent an academic variation upon the “makerspace model,” it also works against prescriptive teleologies/ontologies of “making” as popularly conceived within tech culture. Attempting to balance “immersion with critical distance,” Sayers proposes an ethic of speculative iteration upon the practice of communal “artefacting” in pursuit of a central research question: “what can people learn from prototyping technologies that are broken, lost, missing, or no longer in circulation?” Through the application of the prototype as heuristic/hermeneutic, Sayers hopes (and, indeed, has already begun) to break down reductive distinctions that exist within DH and related disciplines, ultimately collapsing (or at least troubling) the difference between “making” and “breaking.”

This semantic matter is taken up—explicitly or implicitly—in much of the surrounding literature. Instead of attempting to reclaim the designation at hand, Debbie Chachra rejects it altogether in her 2015 article “Why I Am Not a Maker,” teasing out the assumptions inherent in the label. While the maker movement is often portrayed as countercultural, Chachra contends that it is merely reinscribing “familiar values, in slightly different form: that artifacts are important, and people are not.”

In “The Author Function,” Chan also challenges these problematic binaries, advocating greater recognition of and appreciation for “indeterminacy, contradictions, and possibilities”; however, she here focuses more upon technological affordances and applications than terminological self-identification. Like Sayers, Chan pushes for speculative engagement with the plausibility/preferability of certain technological futures, specifically as it relates to neural networking and corpus linguistics.

Theorizing this notion of “Speculative Computing” more generally, Drucker identifies the fundamental premise: “a work is constituted in an interpretation enacted by an interpreter.” Once again, we see this same notion reflected in Sayers’s work at MLab—a shift from the “procedural and mechanistic” toward the “dynamic and constitutive.” Beyond computing, Elliott et al. posit in “New Old Things” that matter itself represents a “new medium” for historical research and humanistic endeavor. They point to hacker, maker, and DIY communities as potential models for new kinds of experimental, experiential projects across “digitized and materialized forms.”

Before this sort of synthesis can be achieved, however, it can be useful to isolate domains, as Sayers points out in “Dropping the Digital.” Promoting a temporary “ruination,” or procedural de-rhetorizing, of digital humanities toward identification of that which “makes them compelling in the first place,” Sayers argues that there are “computational skews” and exclusions within the general economy of DH that go unnoticed without careful examination of underlying metrics and terminologies.

In “Prototyping the Past,” he proceeds to explain how speculative crafting can be instrumentalized toward inclusionary ends, offering potential routes of inquiry/advancement into “entanglements of culture, materials, and design.” Eschewing historical fetishization in favor of conjectural contingency, this ethos of prototype accommodates and embraces potentially anachronistic “breaking” within a communal-conjectural frame of “making” whereby the liminal spaces “between bits and atoms” can be explored, interfaced, and perhaps even—in a sense—closed.

Editorial Practice in Digital Spaces

In anticipation of Cheryl Ball’s upcoming talk (“Rigorous Peer-Review in OA Publishing Environments”), this post explores the formal, developmental, and technological implications/complications of editorial practice across digital spaces. As Director of the Digital Publishing Collaborative at Wayne State University Libraries, project director for Vega, and editor of Kairos: A Journal of Rhetoric, Technology, and Pedagogy, Ball stands at the forefront of the open-access (OA) movement, offering unique insights into the production, evaluation, and dissemination of webtextual scholarship.

Although principles of transparency and cooperation are ostensible cornerstones of the humanistic tradition, this “ethos of openness” has arguably been obscured and impeded by an outdated and cumbersome infrastructure (Fitzpatrick and Santo 4). Proposing OA not as “radically new” but as potentially restorative, advocates gesture toward a future of cross-disciplinary engagement, collaborative dynamism, and academic diversity (Fitzpatrick and Santo 4).

Amidst this lofty rhetoric, Ball acknowledges the many practical challenges associated with systemic implementation. For example, the prevalence of “voiceovers or headshots” in scholarly multimedia renders “double-blind or anonymous review”—an important part of print-based editorial practice—virtually impossible (Ball and Eyman). Quoting Kuhn, she notes the difficulty of “strik[ing] a balance between convention and innovation,” especially in the context of digital proliferation and recombination. Given the ever-shifting nature of the field, there are often terminological or definitional misunderstandings. What characterizes/constitutes “webtext”? Is it dynamic delivery, multimodal augmentation, or something more?

To assist in the cultivation and evaluation of “multimedia-rich, digital, screen-based” artifacts that not only supplement but also enact “an author’s scholarly argument” (Ball and Eyman), Ball suggests the use of flexible assessment frameworks—loose “rubrics” that can be adapted to the needs of the material at hand. On a basic level, she expects online journal submissions to exhibit topical suitability, technological functionality/interoperability, and critical development (“Assessing” 75). Beneath these self-evident criteria, however, lie the fundamental characteristics of all well-formed critical products: a conceptual core, a research component, form/content, creative realization, a clear audience, and timeliness (Ball “Assessing” 75).

Of particular importance here is the matter of form and/as content: “The trick of [this] category is that it cannot be assessed separately from the purpose, or conceptual core, of a piece” (Ball “Assessing” 68). Failing to recognize this connection, newcomers will sometimes attempt to expedite the editorial process by plucking “written content from its design,” and, in doing so, “introduce hundreds of small errors that must be undone” (Ball and Eyman). Digital editors/makers are thus encouraged to approach/articulate “design choices (form=content relationship) as rhetorical, aesthetic, technological, and other choices” (Ball “Assessing” 70).

With the constraints and affordances of digitality in mind, it becomes clear that webtextual scholarship requires a set of individualized, adaptive workflows. The anonymity, asynchronicity, and linearity of the traditional process (submission, independent review, acceptance/rejection, copy-editing, layout, printing/distribution) fail—at points—to accommodate the need for editorial iteration and real-time collaboration. Google Hangouts, Skype, and similar tools may occasionally prove useful here; however, they lack the infrastructural scalability and extensibility that a larger entity like Kairos might demand. How then does one systematize a dynamic process?

Enter Vega, a soon-to-be-released academic publishing system “made from a series of application programming interfaces (APIs)—modular and reusable programming tools that specify how software components should interact when combined” (Ball “Building” 109). With features ranging from development tools to peer review tracking, Vega is designed to “guide authors, editors, and publishers through a set of best-practice processes for publishing scholarly multimedia” (Ball “Building” 110).
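
The modular logic Ball describes can be sketched in miniature. The following Python example is purely illustrative—the class and function names are hypothetical inventions, not Vega’s actual code—but it shows how components that honor a shared, narrow interface can be recombined into different editorial pipelines:

```python
from dataclasses import dataclass, field
from typing import Protocol

@dataclass
class Submission:
    """A hypothetical record for a webtext moving through review."""
    title: str
    status: str = "submitted"
    notes: list = field(default_factory=list)

class ReviewStep(Protocol):
    # Every stage exposes the same minimal contract: take a
    # submission, return a (possibly transformed) submission.
    def process(self, submission: Submission) -> Submission: ...

class EditorialReview:
    def process(self, submission: Submission) -> Submission:
        submission.notes.append("editorial pass complete")
        submission.status = "in review"
        return submission

class CopyEdit:
    def process(self, submission: Submission) -> Submission:
        submission.status = "copyedited"
        return submission

def run_pipeline(submission: Submission, steps: list) -> Submission:
    # Because each component honors the same interface, stages can be
    # reordered or swapped per journal without rewriting the system.
    for step in steps:
        submission = step.process(submission)
    return submission

webtext = run_pipeline(Submission("A Webtext"), [EditorialReview(), CopyEdit()])
print(webtext.status)  # copyedited
```

The design point, not the particulars, is what matters: specifying how components “should interact when combined” is precisely what lets a publishing platform remain reusable across very different editorial workflows.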

Exciting as these new technology-enabled spaces for theorization and application may be, Ball cautions against wholesale adoption of any one methodology, framework, or toolset: “my values system for assessing webtexts may not, cannot, will not necessarily be yours” (“Assessing” 68). One must engage in constant evaluation of “community goals and needs,” maintain awareness of “unintended consequences,” and—of course—strive toward transparency throughout the ongoing process of implementation and remediation (Fitzpatrick and Santo 4; Ansolabehere et al. 4). It is only at this juncture of elasticity and structure, rigor and play, self-reflection and collective action, that editorial practice emerges as truly OA-compliant.

Works Consulted

Ansolabehere, Karina; Ball, Cheryl [lead author]; Devare, Medha; Guidotti, Tee; Priedhorsky, Bill; van der Stelt, Wim; Taylor, Mike; Veldsman, Susan; & Willinsky, John. (2016). The moral dimensions of open [access/scholarship/data]. Open Scholarship Initiative Proceedings, Vol. 1.

Ball, Cheryl E. (2012). Assessing scholarly multimedia: A rhetorical genre studies approach. Technical Communication Quarterly, 21 [Special issue: Making the implicit explicit in assessing multimodal composition], 61–77.

Ball, Cheryl E. (2017). Building a scholarly multimedia publishing infrastructure. Journal of Scholarly Publishing, 48(2), 99–115. DOI: 10.3138/jsp.48.2.99

Ball, Cheryl E., & Eyman, Douglas. (2015). Editorial workflows for multimedia-rich scholarship. Journal of Electronic Publishing, 18(4).

Ball, Cheryl E. (2013, January 28). The kairotic nature of online scholarly community building. MediaCommons: A digital scholarly network.

Fitzpatrick, Kathleen, & Santo, Avi. (2012). Open Review: A Study of Contexts and Practices [white paper].

Exclusionary Logics in the Digital Humanities

In preparation for our upcoming discussion around Mercado’s Black at Bryn Mawr, we explored the aforementioned project alongside a number of similar efforts, all of which strive toward the preservation, publication, and promotion of suppressed narratives. While many hail the rise of DH as an emancipatory force that facilitates these sorts of cross-cultural collaborations, others contend that the institutional parameters of the field (digital though they may be) still exist along exclusionary lines. As this week’s assigned readings suggest, only through the acknowledgement and deconstruction of gendered, racialized logics can we hope to truly achieve the diversity and inclusivity long touted by proponents of our discipline.

Although few would disagree that certain internalized biases remain embedded within prevailing infrastructures, there seems to be some debate over the exact nature and scope of the central problem at hand. In “Black Studies and Digital Humanities: Perils and Promise,” Terman posits that the primary challenge facing scholars within Black Studies is determining “how to produce quality content that is centralized enough to provide a cumulative critical apparatus.” Preoccupied less with content and more with material conditions surrounding its production, Earhart and Taylor suggest that attention be turned to “inequitable distributions of digital humanities resources and labor.” For Wernimont, however, attempts to impose a strictly economic or consumptive framework upon DH undermine its fundamentally qualitative concerns: “a celebration of plentitude reproduces certain commercial metrics — notably production as value and information as capital — of which there is significant feminist critique.” Finally, Gallon sidesteps the structural conversation around academic racialization, focusing instead on the perpetuative source of these hegemonic formations, namely, our collective refusal to engage them.

As you might expect, proposed responses to these issues vary just as widely as articulations of the issues themselves. While Terman advocates the celebration and centralization of scholarly work occurring at the intersection of DH and Black Studies, Earhart and Taylor promote a model of dispersal, “one designed to decenter traditional power structures by shifting power centers, eliminating funding needs, and reducing the necessity for advanced technical knowledge.” Working toward a feminist critique of the digital archive specifically (and DH more broadly), Wernimont calls for an indexical, rather than enumerative, approach to the “proliferation of projects” that Earhart and Taylor envision. As for Gallon, she believes that the answer lies not in the projection of a digital future but in recovery of a human past: “‘How can digital tools and processes such as text mining and distant reading be justified when there is so much to do in reconstructing what it means to be human?’”

Demonstrating a potential synthesis of digital functionality and humanistic restoration, Black at Bryn Mawr is an extensive research project that “explores the experiences of Black students, faculty, and staff at the College,” thereby raising awareness of “racial power dynamics inside and outside of the classroom” (“About the Project”). The result of student collaboration under the guidance of Monica Mercado and Sharon Ullman, Black at Bryn Mawr encompasses a series of blog posts, walking campus tours, and an interactive digital map, all of which support the project’s ultimate objective: “to build institutional memory of the College’s engagement with race and racism, enabling future students to hold both themselves and the College community to higher standards of…accountability” (“About the Project”). The Amistad Digital Resource for Teaching African American History and the Florida Memory Black History photograph exhibit are two related projects, similarly awareness-driven. Looking forward, these efforts can serve as inspiration—if not definitive models—for future attempts to break down exclusionary logics, using our digital tools not to suppress, but rather to promote difference within DH.

Ontologizing the Taxonomy

During her talk on Friday, FSU Digital Humanities Librarian Sarah Stanley put forward several challenging questions about (and responses to) the ontological assumptions and infrastructural formations that underlie taxonomic methodology. As humanists tasked with the preservation, publication, and evaluation of artifacts in an increasingly heterogeneous environment, can we—in good conscience—continue to work within traditional academic architectures? Are these systems capable of repair or must they be rebuilt entirely? Joining other thinkers and practitioners in the field, Stanley proposes renewed interrogation of prevailing methods with an eye toward new and more complex descriptive practice.

Early in her talk, Stanley acknowledged that any attempt at analysis necessarily entails a degree of reduction, particularly when large amounts of data are involved. This “flattening” can be (and has been) productively employed toward the recovery of erased narratives and the filling of archival silences. However, some have asserted that the taxonomic impulse is fundamentally colonial, thus tainting even the most well-intentioned of projects. By this logic, many of the current channels of academic communication—libraries, conferences, journals—are compromised.

A core principle of many data-driven efforts in DH is scalability, the capacity for stable interchange between units of analysis across a given research frame. In theory, a scalable system can accommodate indefinite growth without loss of semantic functionality. While this sort of framework can be useful for macroscopic research, the lack of granularity poses serious issues for qualitative, humanistic endeavors. According to Tsing, professor of anthropology at the University of California, Santa Cruz, scalability “disguises such divisions by blocking our ability to notice the heterogeneity of the world; by its design, scalability allows us to see only uniform blocks, ready for further expansion.”

Similarly, data cleaning—a common practice in the “hard” sciences—facilitates manipulation of information on a large scale. But once again, important differences are lost in the process of sanitizing datasets, differences which distinguish, and perhaps even define, our discipline: “the humanities cannot import paradigms and practices whole from other fields, whether from ‘technoscience’ or the nearer ‘social’ sciences, without risking the foreclosure of specific and valuable humanistic modes of producing knowledge” (Rawson and Muñoz).

Beneath these various critiques of the taxonomic impulse lies a recognition of subjectivity as essential to and inherent in all methodology. Subjectivity is not simply, according to Drucker, “individual inflection or mere idiosyncracy,” but rather “codependent relations of observer and phenomena,” relations which shape and inform even our descriptions thereof. The taxonomy, then, is not a neutral apparatus; we must “stop acting as though the data models for identity are containers to be filled in order to produce meaning, and understand that these structures themselves constitute data” (Posner).

Charting a path forward, Stanley pointed to three DH projects—TaDiRAH, the Orlando Project, and the Early Caribbean Digital Archive—that have successfully resisted (or at least attempted to resist) the allure of hierarchical models in favor of data ontologies, which are better suited to the representation of “awkward, fuzzy translations and disjunctures” (Tsing). Several practices have been proposed and applied in the effort to “ontologize” digital taxonomies. In place of scalable nesting, for instance, we can embrace the principles of nonscalability theory. Rather than cleaning data, we might use indexing as an alternate information structure “designed to serve as a system of pointers between two bodies of information” (Rawson and Muñoz). It will take time to imagine and implement a robust, ontology-driven infrastructure, but through continued critique of old knowledge architectures and iterative engagement with new methodologies, we can hope to improve our collective effectiveness as humanists in a digital age.
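
The indexing alternative can be made concrete with a small sketch. The Python example below uses invented sample records (not data from Rawson and Muñoz’s actual project) to show the basic move: an index points from normalized keys back into the original records, so that both bodies of information—the messy originals and the analytical grouping—survive intact:

```python
from collections import defaultdict

# Hypothetical "raw" records: original spellings are left untouched.
menu_items = [
    (0, "Potatoes, mashed"),
    (1, "Mash'd Potatoes"),
    (2, "Potatoes au gratin"),
    (3, "Green Peas"),
]

def normalize(label: str) -> str:
    # A deliberately simple normalization rule, for illustration only.
    return label.lower().replace("'", "").replace(",", "")

# The index maps a normalized key to pointers (record ids) into the
# untouched records, rather than overwriting them: nothing is "cleaned
# away," and every analytical grouping remains reversible.
index = defaultdict(list)
for record_id, label in menu_items:
    key = "potatoes" if "potato" in normalize(label) else normalize(label)
    index[key].append(record_id)

print(index["potatoes"])  # [0, 1, 2]
```

Unlike cleaning, which replaces variant spellings with a single canonical form and discards the difference, the index preserves the heterogeneity of the source while still enabling aggregate queries across it.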

Works Cited

  • Drucker, Johanna. “Humanities Approaches to Graphical Display.” Digital Humanities Quarterly (DHQ) 5.1 (2011).
  • Posner, Miriam. “What’s Next: The Radical, Unrealized Potential of Digital Humanities.” Miriam Posner’s Blog, 27 July 2015.
  • Rawson, Katie, and Trevor Muñoz. “Against Cleaning.” Curating Menus, 6 July 2016.
  • Tsing, Anna Lowenhaupt. “On Nonscalability: The Living World Is Not Amenable to Precision-Nested Scales.” Common Knowledge 18.3 (2012): 505–24.