How Private is Private?

By Ellie Marvin

Today, I opened the Twitter app and was greeted with a small banner notifying me of upcoming changes to Twitter’s Terms and Conditions. An updated version of the terms will go into effect on January 1, 2020. I quickly dismissed the banner, swiping it away to get to the content I had opened the app for. After watching the most recent Digital Scholars webinar, however, I decided to investigate further.

During the webinar, Yuwei Lin discussed a recent project in which she asked her students to record themselves asking people whether they had read the Terms and Conditions for the many apps and devices they use every day. Unsurprisingly, most people confessed they had not read these often long and jargon-filled documents. Anaïs Nony later brought up the idea of the ubiquitous and deceptive “feeling of consent” in which we tend to engage as a society. We allow ourselves to feel as if we’ve consented to certain kinds of surveillance without fully considering the consequences and how far-reaching that surveillance may be. This blind and blissful ignorance lulls us into a false sense of control over our data, despite our rarely looking into where it goes and who owns it.

Twitter has historically been an important social media platform for the growth and development of the digital humanities. In a digital humanities context, it is often used to spread academic information and to rapidly and collaboratively create and disseminate knowledge. Since Twitter is such an important tool in my field, I feel compelled to use it, even if only to browse other users’ tweets, and so I should understand what data the app is tracking.

Thus, I decided to read Twitter’s new Terms and Conditions. The terms were easy to find and displayed in large text. There’s an air of openness to Twitter’s Terms and Conditions and its Privacy Policy. Twitter’s Privacy Policy boasts in a large font, “We believe you should always know what data we collect from you and how we use it, and that you should have meaningful control over both.” However, when one delves a bit deeper, it seems clear that there is, in fact, no real privacy on Twitter—which, I suppose, should not come as a shock.

I was a bit upset (yet, still not surprised) to learn about how much data Twitter takes from me and all of its users. I do not like that it claims absolutely no responsibility for content its users post or any fallout from that content. I also do not appreciate the fact that, while Twitter takes no responsibility for this content, it is also able to remove content. Not only that, but Twitter retains a “worldwide, non-exclusive, royalty-free license (with the right to sublicense) to use, copy, reproduce, process, adapt, modify, publish, transmit, display and distribute” any content posted on their site. This is a scary thought and an unpleasant one to have to consider.

One nice thing about Twitter, I will say, is its openness about advertising and the data it receives. I discovered a page which each logged-in user can access. The page shows users what data Twitter has gathered from them and what kinds of advertisements have been tailored to them. The best part about this feature is that users have the option to turn it off. At any point, I can decide I would not like targeted ads and can simply see the same ads every other generic Twitter user sees.

It seems obvious to me, having now read through Twitter’s rules, terms and conditions, and privacy policy that nothing on Twitter is either private or protected. Therefore, should digital humanists migrate to a new social media platform? Should we refrain from Twitter altogether in the search for something more private? Or is privacy simply a right which we have to allow ourselves to give up in order to engage with a global community?

Using the Humanist’s Tools: Spring 2020 Digital Scholars


Dear Friends of Digital Scholars,

I’m pleased to announce our schedule of topics and speakers for the culminating semester of Digital Scholars, on “using the humanist’s tools,” with all sessions inviting hands-on participation or offering a look into the architecture of particular projects. Please mark your calendars for the following dates:

Friday, Jan. 31, 2020
Medieval translation app, “The Tremulator,” with David Johnson
12:00-1:15 p.m. (location TBA)

Wednesday, Feb. 12, 2020
Digitized newspaper corpora and networks, “Oceanic Exchanges,” with Jana Keck and Paul Fyfe [via Zoom]
12:00-1:15 p.m. (location TBA)

Wednesday, Mar. 11, 2020
Data cleaning for the humanities, “Dirty” OCR Analysis, with Allen Romano
12:00-1:15 p.m. (location TBA)

Friday, Apr. 3, 2020
Crowd-sourcing cultural citings/sightings, “Dante Today,” with Beth Coggeshall
12:00-1:15 p.m. (location TBA)

More announcements will follow. We hope you can join us for one or more of these discussions in the spring.

Webinar: Data Surveillance

With an upsurge in attention toward veillance and transparency practices since Edward Snowden’s 2013 interviews published by The Guardian, public conversations of data surveillance have lately centered on racial and cultural critique. Please join us for our final webinar in the continuing series on “People in Data II,” open to any members of the FSU, FAMU, and TCC communities, as well as greater Tallahassee, the state of Florida, and beyond. This discussion will focus on several aspects of surveillance, from sousveillance alternatives (Steve Mann, 2005) to technological supremacy.


WEBINAR: Friday, November 22 – 12:00-1:30 p.m. EST
“Data Surveillance” featuring

  • Yuwei Lin, University of Roehampton [website; blog]
  • Anaïs Nony, University of Fort Hare [website]

Advanced Reading or Browsing
Participants are invited to read the following:

and to browse the following in advance:

All participants are requested to register at

Attending and Connecting
Webinar participants in Tallahassee are welcome to join us in person in the R&D Commons, basement level of Strozier Library, or to connect remotely via Livestorm. Through the interactive features of our Livestorm platform, all participants will have the opportunity to submit questions and participate in group chat.

Connection Requirements
Remote participants should ensure or secure the following:

  • Web browser (Edge, Chrome, Firefox, Safari version 10 or greater)
  • Adobe Flash Player version 10.1 or greater
  • Internal or external speaker
  • (recommended: headsets or earbuds for optimum sound)

Connection Troubleshooting
If your email host runs Proofpoint, you may experience some difficulty with the email-based link/button that Livestorm sends you to access the webinar. Should this happen, you can still access the webinar by copying and pasting the webinar URL into your web browser rather than clicking the link/button.

This webinar is made possible through the generous support of FSU’s Office of Research.

We hope you can join us,
— Tarez Graban

(Augmented) Reality

[Ellie Marvin is a master’s student enrolled in the Digital Scholars reading group this semester.]

In their chapter entitled “Augmented Realities,” Casey Boyle and Nathaniel A. Rivers write about their definition of the term ‘augmented’: “The language of “augmented realities” reflects the very etymology of the word augment (augmentare), which suggests an increase, not an addition. To augment, then, does not simply entail supplementing some base—a priori ontological substrate—but rather increasing, as in elaborating the real, increasing its dis/connectivity” (88). They place this definition in relation to their conception of augmented publics. They go on to write, “Augmentation is not simply more, but instead the qualitative activity of tuning, of activating certain channels, certain broadcasts” (89) and “How can we understand, or better yet, come to know such qualitative change that augmentation (as an increasing activity) provides?” (90), further complicating their definition of augmented reality.

During Friday’s meeting, we split into groups again to discuss the reading. Our group was very concerned with the definition of augmented, the definition of reality, and where those two terms meet to create augmented reality as a digital tool. We grappled with the idea that Boyle and Rivers present of increasing reality. How is it done? In what context? We did not have enough time to come to concrete conclusions, so I would like to explore this idea more in this blog post.

Does augmented reality offer an increased reality? Boyle and Rivers also write, “We often think of the augmentation of physical space via digital overlays or augmented reality (AR) as supplements or additions to that physical space. For example, in widely available online dictionaries, augmentation (in the augmented reality definitions) often refers to “technology” that “’augments’ ( = adds to) that real-world image with extra layers of digital information” (“Augmented Reality” 2010)” (88). These ‘extra layers’ then provide more information—but is that an increased reality?

Boyle and Rivers used three case studies of locative augmented reality tools, including Pokémon GO and Google Maps. Pokémon GO is a popular app which allows users to catch Pokémon, 3D digitally rendered creatures, in an AR environment. Users play on maps which reflect their own real spaces, and the app uses access to the camera to situate Pokémon in the real environments around users. Google Maps is a frequently used location tool which offers users maps of places and businesses. Google Maps has three modes: map view, satellite view, and street view (Fig. 1). Map view displays a typical cartographic view of the surrounding area; satellite view shows the same map but enhanced with satellite imagery; and street view places the user on the street and shows them the area around them from the perspective of a person walking along the street.

Fig. 1: Screencaps of map view, satellite view, and street view, respectively

Our group was unable to reach a consensus on which of these three modes of Google Maps is augmented reality, which is reality, and which is a representation of reality. In terms of the definition which the authors deny, that augmented reality simply adds to reality, all three modes of Google Maps offer an augmented reality in that they all offer information about places and businesses in the area, something which is invisible without the aid of technology. (Pokémon GO also fits into this definition with its addition of Pokémon, Pokéstops, Pokéballs, and other features.) However, in terms of the definition which the authors offer (reiterated in the first paragraph of this post), it is unclear which of these technologies, if any, truly “increase” reality.

Some members of the group argued that the only mode of Google Maps which attempts to be augmented reality is the street view, as it places the user into what is typically viewed as an augmented reality environment. Yet both street view and satellite view, some argued, present reality more clearly because of their inclusion of photographs to create their digital landscape. Some claimed that map view is the only mode of Google Maps which does not augment reality, and that it is not even a representation of reality because it does not attempt to replicate the natural surroundings of an area in the same way that street view and satellite view do. I disagree with this stance. All three modes of Google Maps, I believe, augment reality, and all three are representations of reality. None of them attempt to replicate reality exactly, not in the way that many augmented reality and virtual reality environments and technologies do.

Fundamentally, it is difficult to come to a deep understanding of Boyle and Rivers’ definition of augmented reality because they offer only an (albeit substantial) definition of augment, but not of reality. I feel their case studies of Google Maps, Pokémon GO, and Ingress would have benefited from a clearer definition, though I understand that their primary focus was on augmented publics and not necessarily on defining augmented reality. Nevertheless, their working definition of augmented reality is hindered by their lack of an attempt to define reality and to thoroughly explain how it can be augmented in their terms of increasing and “elaborating the real.” Our group would have been much better equipped to come to a conclusion if the authors had been clearer on some of their terminology.

What’s At Stake In Privacy?

[Ellie Marvin is a master’s student enrolled in the Digital Scholars reading group this semester.]

I wrote my last blog post about threats to data privacy within data capitalism. This week, I want to take an in-depth look at what exactly it means to have privacy in the 21st century. I think it’s important to recognize that the value of privacy has dramatically changed since the advent of the smartphone, the ubiquitous device which is constantly listening to, watching, and tracking a great majority of its users.

Whenever a conversation turns towards privacy and protecting our data, I always get a bit uncomfortable. I feel as if I have already relinquished my right to privacy on so many platforms that I can never have privacy again. Google has been tracking my web browsing history for years. The Amazon Echo Dot in my living room is constantly listening to what is happening in my home, even if it’s not necessarily recording what it hears. I have been submitting papers through Turnitin since I began my college career. My iPhone has several apps that are constantly tracking my location or have access to my camera and/or microphone.

I am not convinced that I need to end my usage of services like those listed above. I like communicating with apps that send pictures or videos. I enjoy the convenience of asking Alexa what time my favorite hockey team will play. I feel safe knowing that, if something terrible happened, certain trusted friends and family members would be able to track my phone to find out where I am. I enjoy these modern luxuries and comforts, and I am not wholly ready to give them up and (attempt to) pull all of my information off of the Internet.

However, even I have certain information I would like to keep private. Recently, I learned that Square card readers have access to users’ email addresses just from swiping a credit card. I was displeased to find my inbox flooded with digital receipts despite never having given out my email address. I also dislike the relentless targeted ads on Facebook for everything from engagement rings to clothes I viewed on Amazon earlier to concerts in my area. (Admittedly, some of these ads have been effective.)

My biggest question in conversations of privacy is: what’s at stake? What exactly am I giving up in order to communicate with friends and family members on Facebook, for instance? At what point should I no longer be willing to give up my privacy for certain affordances? Further, is the damage already done? I have had a Facebook profile for years. Is my information already out there, unprotected? Is there anything I can do to “get it back”?

It’s difficult for me to reconcile my position as a consumer who enjoys privacy with my position as a digital humanist who would like to take full advantage of all of the attractive features that the Internet has to offer. I would like to use (and hopefully create) augmented reality and virtual reality, but I also know that those technologies require access to cameras and locations. As a teacher, as well, I would like to use these technologies with my students, but I am unsure of the ethical implications of asking them to potentially give up their privacy. I assume that this is just something I will have to grapple with over the course of my life, and right now all I can do is be more aware of this issue moving forward and thoroughly consider my digital actions.

The Participatory Turn

Friday, November 1, 12:00-1:30 pm
PIH Digital Humanities Lab (Diffenbaugh 421)

On “The Participatory Turn”

In the opening pages to The Participatory Condition, Barney et al. invoke Louis Althusser’s concept of “interpellation” to describe the various acts of “hailing and hearing” in which we — in the contemporary West — willingly participate through our interaction with media systems, both on- and offline. They further invoke Bernard Stiegler’s pharmakon to align this participation with “both [the] poison and [the] remedy, … [the] promise of emancipation as well as a form of subjection” that they understand as consequent to all mediated activity (x). At the next Digital Scholars meeting, we hope to consider the strength of these metaphors — weighing the viability of their arguments for a liberal democratic society, and looking more closely at what they understand to be the historical preconditions for such large-scale media liberalism. When did the era of technical media necessarily become an era of passive consent, dividuation, or domination? What are the opportunities for mediated participation beyond propagandized involvement? Where might we make room for alternative views? And how do answers to these questions invoke, in turn, salient discussions of people in data? Participants are welcome to read and join us for conversation on any of the following:

  • Boyle, Casey, and Nathaniel A. Rivers. “Augmented Publics.” In Writing, Rhetoric, Circulation, edited by Laurie E. Gries and Collin Gifford Brooke. Utah State UP, 2018, pp. 83-101. [stable copy in Canvas]
  • The Participatory Condition, edited by Darin Barney, Gabriella Coleman, Christine Ross, Jonathan Sterne, and Tamar Tembeck. Editors’ “Introduction” (pp. vii-xxxix), and Cohen’s chapter on “The Surveillance-Innovation Complex” (pp. 207-226). [stable copy in Canvas]

and to browse any of the following projects or tools in advance:

Participants are encouraged to bring laptops or tablets. We hope you can join us.

Internet-Mediated Mutual Cooperation Practices through the Lens of Digital (Re)productive Labor

[Gabriela Diaz Guerrero is a master’s student enrolled in the Digital Scholars reading group this semester.]

In our last meeting on digital reproductive labor, we discussed both Bart Cammaerts’ “Internet Mediated Mutual Cooperation Practices: The Sharing of Material and Immaterial Resources” and Karen Dewart McEwen’s “Self-Tracking Practices and Digital Reproductive Labor.” Toward the end of our meeting, we started to (try to) consider the takeaways these articles should leave with us: what did the authors intend for us, as readers, to do and know at the end of the day? We now know in much more detail, even if we weren’t already nebulously aware, just how much of our data is retained through our use of self-tracking apps and services like Fitbits, moodPanda, and even menstrual tracking apps like Clue. We now know that mutual cooperation mediated by Internet tech and networks, while sometimes indeed geared toward more collective goals rather than individualistic motivations, still “all operate[s] squarely within capitalism and its rules of engagement” (Cammaerts 163). But where do we go from here?

In our last meeting, as Ellie has also discussed here, we started to consider this very question in terms of what, if anything, we should be doing with what we know about our data’s privacy. Briefly mentioned was the suggestion of a kind of crowdsourced, collective resource for understanding the fine print of data privacy regulations and offering instructions for opting out of data collection where possible, for maintaining conscious and cognizant control over one’s data. Dr. Romano suggested going even further: what is needed is not just instruction and more knowledge, but “building tools to expose the black box” of data that so many companies keep of and from their users.

The imagery, and the ultimate desired effect, would ideally be more active. Education alone will obviously not be enough, and expertise should be deployed toward making tools that actively protect the privacy and data of users/producers. Anything less might leave us in some of the same position these articles did: we gain knowledge but remain unsure of ways to act on it, or overwhelmed by the prospect of what we could do, if only we had the time and energy to uncheck every one of more than 300 sliders, knowing that missing even one would force us to repeat the whole process, and that all of this would stop data collection on only a single website at a time.

When McEwen defines digital reproductive labor as “residing in both the private and public realms—and, indeed, as troubling the boundary between the two” (237), and as a clear iteration of reproductive labor research’s insight that “paid labor always requires unpaid labor to support and reproduce it…the exploitation of unpaid labor is legitimized through social roles and relationships” (239), it is easy to see how this might also apply, in part, to our tentative solutions. Just as self-tracking supports the fabric of the social factory, exposing the black box, so to speak, may allow us to dismantle some of the unpaid labor that supports paid labor; yet it is itself unpaid labor, performed so that we might maintain a clearer distinction between our “work” and “private” lives.

Exposure, and even a sort of crowdsourced, freely usable blocking tool, would not dismantle or significantly disrupt the underlying structures; it would only manage them in more humanist ways. The reproductive labor might not look, at this point, like taking ten minutes each day to remember that we are one person and to be mindful of ourselves for a fleeting moment in order to manage our day-to-day stresses, but it still makes us feel better about our work lives by assuring us that the divide is maintainable, even though we are the ones working so hard to rebuild that divide as it is constantly eroded. And when we work to maintain it, we are still, as Cammaerts discusses, shaping and reshaping squarely in the sandbox of the dominant systems at play.

Cammaerts’ examples of mutual cooperation, and specifically his consideration of how such cooperation is being “reduc[ed]…to alternative forms of market relations, plus a bit of charity” (163), mesh particularly well with McEwen’s discussion of digital reproductive labor helping to maintain paid labor structures. File sharing, for example, may often be motivated primarily by a desire to save or not spend money, as Cammaerts mentions; the idea of subverting hegemonies is wholly secondary to the concern of accessing something you want as an individual but do not have the resources to pay for at the moment. The system is not being subverted out of a belief in greater collective ideals and open access but supplanted out of necessity: the idea of working within the system (actually buying something) is not thrown out, it is simply not as individually advantageous at a given moment.

But this does not mean there isn’t value in Cammaerts’ assertion of real potential, even if that assertion seemed a little weakened to me after an extensive rundown of the many ways mutual cooperation is actually not so mutual and collaborative, and the few ways it is more mutual but still works under capitalism. Take Broadway bootleggers, for example. Their primary motivation is, in part, not being able to pay to watch a Broadway show in person. But another prominent and often-discussed motivation is the idea that Broadway, and live theatre by extension, will always be a rather exclusive and inaccessible experience for many people; that professional recordings provided to larger audiences meet a real need and, after a certain amount of time, should be made free to the public for the purpose of sharing art with the world; and that for this reason moral arguments against bootlegging are weak or null (as are, they maintain, arguments against bootleggers making some kind of modest profit before offering up recordings on a trade basis, since the risk is so high and the labor so intensive that it should be compensated in some way). Their motivations are clearly tied up in capitalism. Yet many of their protocols and ideals sway more in the direction of Cammaerts’ idea of sharing and collaborating for communal goals rather than merely working within capitalist systems: joining trading communities rather than simply buying and/or downloading; trading within such communities (after a set time to recoup some of the costs and risks of recording a bootleg); and constantly pushing for professional recordings, for more shows made freely available in mainstream ways, and against the inherent inaccessibility of Broadway.

If our last webinar was reaffirming in some of the best ways about hope in digital and archival activist endeavors, this discussion of digital reproductive labor has been, to me, largely about ways to see that momentum carry through into our other discussions of digital humanities work and concerns. This week, though it is a bit challenging, I think the “real potential” that Cammaerts highlights is real and perhaps important to latch onto, both as we continue to work through potential solutions to data privacy incursions and as we consider mutual cooperation in the digital age. Activism and activation, as we have discussed, are recursive, constantly engaging and evolving practices that require continuous attention to be effective; why, by contrast, would data privacy solutions, or discussions of pushing mutual cooperation on the Internet toward more collaborative goals, need to be all-encompassing or fully subverting to have value? Reframing the ways our agency might work in these scenarios may be a necessary, and more productive, reorientation as we work through our takeaways in an ongoing fashion.