Webinar: Data Surveillance

With an upsurge in attention toward veillance and transparency practices since Edward Snowden’s 2013 interviews published by The Guardian, public conversations about data surveillance have lately centered on racial and cultural critique. Please join us for our final webinar in the continuing series on “People in Data II,” open to any members of the FSU, FAMU, and TCC communities, as well as greater Tallahassee, the state of Florida, and beyond. This discussion will focus on several aspects of surveillance, from sousveillance alternatives (Steve Mann, 2005) to technological supremacy.

********

WEBINAR: Friday, November 22 – 12:00-1:30 p.m. EST
“Data Surveillance” featuring

  • Yuwei Lin, University of Roehampton [website; blog]
  • Anaïs Nony, University of Fort Hare [website]

Advance Reading or Browsing
Participants are invited to read the following:

and to browse the following in advance:

Registration
All participants are requested to register at https://app.livestorm.co/florida-state-university-2.

Attending and Connecting
Webinar participants in Tallahassee are welcome to join us in person in the R&D Commons, basement level of Strozier Library, or to connect remotely via Livestorm. Through the interactive features of our Livestorm platform, all participants will have the opportunity to submit questions and participate in group chat.

Connection Requirements
Remote participants should ensure or secure the following:

  • Web browser (Edge, Chrome, Firefox, Safari version 10 or greater)
  • Adobe Flash Player version 10.1 or greater
  • Internal or external speakers
  • (recommended: headsets or earbuds for optimum sound)

Connection Troubleshooting
If your email host runs Proofpoint, you may experience difficulty with the email link/button that Livestorm sends you to access the webinar. Should this happen, you can still access the webinar by copying and pasting the webinar URL into your web browser rather than clicking the link/button.

This webinar is made possible through the generous support of FSU’s Office of Research.

We hope you can join us,
— Tarez Graban

(Augmented) Reality

[Ellie Marvin is a master’s student enrolled in the Digital Scholars reading group this semester.]

In their chapter entitled “Augmented Realities,” Casey Boyle and Nathaniel A. Rivers offer their definition of the term ‘augmented’: “The language of ‘augmented realities’ reflects the very etymology of the word augment (augmentare), which suggests an increase, not an addition. To augment, then, does not simply entail supplementing some base—a priori ontological substrate—but rather increasing, as in elaborating the real, increasing its dis/connectivity” (88). They place this definition in relation to their conception of augmented publics. They go on to write, “Augmentation is not simply more, but instead the qualitative activity of tuning, of activating certain channels, certain broadcasts” (89) and “How can we understand, or better yet, come to know such qualitative change that augmentation (as an increasing activity) provides?” (90), further complicating their definition of augmented reality.

During Friday’s meeting, we split into groups again to discuss the reading. Our group was very concerned with the definition of ‘augmented,’ the definition of ‘reality,’ and where those two terms meet to create augmented reality as a digital tool. We grappled with Boyle and Rivers’ idea of increasing reality. How is it done? In what context? We did not have enough time to come to concrete conclusions, so I would like to explore this idea further in this blog post.

Does augmented reality offer an increased reality? Boyle and Rivers also write, “We often think of the augmentation of physical space via digital overlays or augmented reality (AR) as supplements or additions to that physical space. For example, in widely available online dictionaries, augmentation (in the augmented reality definitions) often refers to ‘technology’ that ‘“augments” (= adds to) that real-world image with extra layers of digital information’ (‘Augmented Reality’ 2010)” (88). These ‘extra layers’ then provide more information—but is that an increased reality?

Boyle and Rivers use three case studies of locative augmented reality tools, including Pokémon GO and Google Maps. Pokémon GO is a popular app that allows users to catch Pokémon, digitally rendered 3D creatures, in an AR environment. Users play on maps that reflect their real surroundings, and the app requests camera access in order to situate Pokémon in the real environments around them. Google Maps is a widely used location tool that offers users maps of places and businesses. It has three modes: map view, satellite view, and street view (Fig. 1). Map view displays a typical cartographic view of the surrounding area; satellite view shows the same map enhanced with satellite imagery; and street view places users on the street, showing the area around them from the perspective of a person walking along it.

Fig. 1: Screencaps from http://google.com/maps of map view, satellite view, and street view, respectively

Our group was unable to reach a consensus on which of these three modes of Google Maps is augmented reality, which is reality, and which is a representation of reality. In terms of the definition the authors reject, that augmented reality simply adds to reality, all three modes of Google Maps offer an augmented reality: each provides information about places and businesses in the area, information that is invisible without the aid of technology. (Pokémon GO also fits this definition with its addition of Pokémon, PokéStops, Poké Balls, and other features.) However, in terms of the definition the authors offer (reiterated in the first paragraph of this post), it is unclear which of these technologies, if any, truly “increase” reality.

Some members of the group argued that the only mode of Google Maps that attempts augmented reality is street view, as it places users into what is typically regarded as an augmented reality environment. Yet both street view and satellite view, some argued, present reality more clearly because they use photographs to build their digital landscapes. Some claimed that map view is the only mode of Google Maps that does not augment reality, and that it is not even a representation of reality, because it does not attempt to replicate the natural surroundings of an area in the way that street view and satellite view do. I disagree with this stance. All three modes of Google Maps, I believe, augment reality, and all three are representations of reality. None of them attempts to replicate reality exactly, not in the way that many augmented reality and virtual reality environments and technologies do.

Fundamentally, it is difficult to come to a deep understanding of Boyle and Rivers’ definition of augmented reality because they offer only an (albeit substantial) definition of ‘augment,’ not of ‘reality.’ I feel their case studies of Google Maps, Pokémon GO, and Ingress would have benefited from a clearer definition, though I understand that their primary focus was on augmented publics and not necessarily on defining augmented reality. Nevertheless, their working definition of augmented reality is hindered by the absence of any attempt to define reality and to explain thoroughly how it can be augmented in their terms of increasing and “elaborating the real.” Our group would have been much better equipped to come to a conclusion had the authors been clearer about some of their terminology.

What’s At Stake In Privacy?

[Ellie Marvin is a master’s student enrolled in the Digital Scholars reading group this semester.]

I wrote my last blog post about threats to data privacy within data capitalism. This week, I want to take an in-depth look at what exactly it means to have privacy in the 21st century. I think it’s important to recognize that the value of privacy has changed dramatically since the advent of the smartphone, the ubiquitous device that is constantly listening to, watching, and tracking a great majority of its users.

Whenever a conversation turns towards privacy and protecting our data, I always get a bit uncomfortable. I feel as if I have already relinquished my right to privacy on so many platforms that I can never have privacy again. Google has been tracking my web browsing history for years. The Amazon Echo Dot in my living room is constantly listening to what is happening in my home, even if it’s not necessarily recording what it hears. I have been submitting papers through Turnitin since I began my college career. My iPhone has several apps that are constantly tracking my location or have access to my camera and/or microphone.

I am not convinced that I need to end my usage of services like those listed above. I like communicating with apps that send pictures or videos. I enjoy the convenience of asking Alexa what time my favorite hockey team will play. I feel safe knowing that, if something terrible happened, certain trusted friends and family members would be able to track my phone to find out where I am. I enjoy these modern luxuries and comforts, and I am not wholly ready to give them up and (attempt to) pull all of my information off of the Internet.

However, even I have certain information I would like to keep private. Recently, I learned that Square card readers have access to users’ email addresses just from swiping a credit card. I was displeased to find my inbox flooded with digital receipts despite never having given out my email address. I also dislike the relentless targeted ads on Facebook for everything from engagement rings to clothes I viewed on Amazon earlier to concerts in my area. (Admittedly, some of these ads have been effective.)

My biggest question in conversations of privacy is: what’s at stake? What exactly am I giving up in order to communicate with friends and family members on Facebook, for instance? At what point should I no longer be willing to give up my privacy for certain affordances? Further, is the damage already done? I have had a Facebook profile for years. Is my information already out there, unprotected? Is there anything I can do to “get it back”?

It’s difficult for me to reconcile my position as a consumer who enjoys privacy with my position as a digital humanist who would like to take full advantage of all of the attractive features the Internet has to offer. I would like to use (and hopefully create) augmented reality and virtual reality, but I also know that those technologies require access to cameras and locations. As a teacher, too, I would like to use these technologies with my students, but I am unsure of the ethical implications of asking them to potentially give up their privacy. I assume this is something I will have to grapple with over the course of my life; for now, all I can do is stay aware of the issue and consider my digital actions carefully moving forward.

The Participatory Turn

Friday, November 1, 12:00-1:30 pm
PIH Digital Humanities Lab (Diffenbaugh 421)

On “The Participatory Turn”

In the opening pages of The Participatory Condition, Barney et al. invoke Louis Althusser’s concept of “interpellation” to describe the various acts of “hailing and hearing” in which we — in the contemporary West — willingly participate through our interaction with media systems, both on- and offline. They further invoke Bernard Stiegler’s pharmakon to align this participation with “both [the] poison and [the] remedy, … [the] promise of emancipation as well as a form of subjection” that they understand as consequent to all mediated activity (x). At the next Digital Scholars meeting, we hope to consider the strength of these metaphors — weighing the viability of their arguments for a liberal democratic society, and looking more closely at what they understand to be the historical preconditions for such large-scale media liberalism. When did the era of technical media necessarily become an era of passive consent, dividuation, or domination? What are the opportunities for mediated participation beyond propagandized involvement? Where might we make room for alternative views? And how do answers to these questions invoke, in turn, salient discussions of people in data? Participants are welcome to read and join us for conversation on any of the following:

  • Boyle, Casey, and Nathaniel A. Rivers. “Augmented Publics.” In Writing, Rhetoric, Circulation, edited by Laurie E. Gries and Collin Gifford Brooke. Utah State UP, 2018, pp. 83-101. [stable copy in Canvas]
  • The Participatory Condition, edited by Darin Barney, Gabriella Coleman, Christine Ross, Jonathan Sterne, and Tamar Tembeck. Editors’ “Introduction” (pp. vii-xxxix), and Cohen’s chapter on “The Surveillance-Innovation Complex” (pp. 207-226). [stable copy in Canvas]

and to browse any of the following projects or tools in advance:

Participants are encouraged to bring laptops or tablets. We hope you can join us.
-TSG

Internet-Mediated Mutual Cooperation Practices through the Lens of Digital (Re)productive Labor

[Gabriela Diaz Guerrero is a master’s student enrolled in the Digital Scholars reading group this semester.]

In our last discussion meeting on digital reproductive labor, we discussed both Bart Cammaerts’ “Internet-Mediated Mutual Cooperation Practices: The Sharing of Material and Immaterial Resources” and Karen Dewart McEwen’s “Self-Tracking Practices and Digital Reproductive Labor.” Towards the end of our meeting, we started to (try to) consider the takeaways these articles should leave us with: what did the authors intend for us, as readers, to do and know at the end of the day? We now know in much more detail, even if we were already nebulously aware, just how much of our data is retained through our use of self-tracking apps and services like Fitbit, moodPanda, and even menstrual-tracking apps like Clue. We now know that mutual cooperation mediated by Internet technologies and networks, while sometimes indeed geared toward more collective goals rather than individualistic motivations, still “all operate[s] squarely within capitalism and its rules of engagement” (Cammaerts 163). But where do we go from here?

In our last meeting, as Ellie has also discussed here, we started to consider this very question in terms of what, if anything, we should be doing with what we know about our data’s privacy. Someone briefly suggested a kind of crowdsourced, collective resource for understanding the fine print of data privacy regulations and offering instructions for opting out of data collection where possible, so that users could maintain conscious and cognizant control over their data. Dr. Romano suggested going even further: what was needed was not just instruction and more knowledge, but “building tools to expose the black box” of data that so many companies keep of and from their users.

The imagery, and the ultimate desired effect, would ideally be more active, clearly. Education alone will obviously not be enough, and expertise should be deployed toward making tools that actively protect the privacy and data of users/producers. Anything less might leave us in some of the same position these articles did: we gain knowledge but remain unsure of how to act on it, or overwhelmed by the prospect of what we could do if only we had the time and energy to uncheck every one of more than 300 sliders, knowing that missing even one means repeating the whole process of trying to stop data collection, one website at a time.

When McEwen defines digital reproductive labor as “residing in both the private and public realms—and, indeed, as troubling the boundary between the two” (237), and as a clear iteration of reproductive labor research’s insight that “paid labor always requires unpaid labor to support and reproduce it…the exploitation of unpaid labor is legitimized through social roles and relationships” (239), it is easy to see how this might also begin to apply to our tentative solutions. Where self-tracking supports the fabric of the social factory, exposing the black box, so to speak, may allow us to dismantle some of the unpaid labor that supports paid labor; but it is still unpaid labor itself, performed in the attempt to maintain a clearer distinction between our “work” and “private” lives.

Exposure, and even a sort of crowdsourced, freely usable blocking tool, would still not dismantle or significantly disrupt the structures underlying these practices; it would only manage them in more humanist ways. The reproductive labor might not, at this point, look like taking ten minutes each day to remember that we are one person and to be mindful of ourselves for a fleeting moment so as to manage our day-to-day stresses, but it still makes us feel better about our work lives by assuring us that the divide is maintainable, even though we are the ones working so hard to rebuild that divide as it is constantly eroded. And when we work to maintain it, we are still, as Cammaerts discusses, shaping and reshaping squarely within the sandbox of the dominant systems at play.

Cammaerts’ examples of mutual cooperation, and specifically his consideration of how such cooperation is being “reduc[ed]…to alternative forms of market relations, plus a bit of charity” (163), mesh particularly well with McEwen’s discussion of digital reproductive labor helping to maintain paid labor structures. Peer-to-peer file sharing, for example, may often be motivated primarily by a desire to save or not spend money, as Cammaerts mentions; the idea of subverting hegemonies is wholly secondary to the primary concern of accessing something you as an individual want but do not have the resources to pay for at the moment. The system is not being subverted out of a belief in greater collective ideals and open access but sidestepped out of necessity: the idea of working within the system (actually buying something) is not thrown out, it is simply not as individually advantageous at a given moment.

But this does not mean there isn’t value in Cammaerts’ assertion of real potential, even if that assertion seemed a little weakened to me after his extensive rundown of the many ways mutual cooperation is actually not so mutual and collaborative, and the few ways it is more mutual but still operates under capitalism. Take Broadway bootleggers, for example. The primary motivation is, in part, not being able to pay to watch a Broadway show in person. But a very prominent and often-discussed motivation is also the idea that Broadway, and live theatre by extension, will always be a rather exclusive and inaccessible experience for many people; that professional recordings provided to larger audiences are a need and, after a certain amount of time, should be made free to the public for the purpose of sharing art with the world; and that for this reason moral arguments against bootlegging are weak or null (as, many bootleggers argue, are the arguments against trying to make some kind of modest profit before offering up recordings on a trade basis: the risk is so high, and the labor put in so intensive, that it should be compensated in some way). It is clear that their motivations are tied up in capitalism. But many of their protocols and ideals, such as joining trading communities rather than just trying to buy and/or download, trading within such communities (after a set time to recoup some costs and profit from the risky operation of recording a bootleg), and constantly pushing for professional recordings, for making more shows freely available in mainstream ways, and against the inherent inaccessibility of Broadway, seem to sway more in the direction of Cammaerts’ idea of sharing and collaborating for communal goals rather than merely working within capitalist systems.

If our last webinar was reaffirming in some of the best ways about hope in digital and archival activist endeavors, this discussion of digital reproductive labor has been, to me, largely about ways to see that momentum carry through into our other discussions of digital humanities work and concerns. And this week, though it is a bit challenging, I think the “real potential” that Cammaerts highlights is real and perhaps important to latch onto, both as we continue to work through potential solutions to data privacy incursions and as we consider mutual cooperation in the digital age. Activism and activation, as we have discussed, are recursive, constantly engaging and evolving practices that require continuous attention to be effective. Why, by contrast, would data privacy solutions, or discussions of pushing mutual cooperation on the Internet toward more collaborative goals, need to be all-encompassing or fully subversive to have value? Reframing the ways our agency might work in these scenarios might be a necessary reorientation, and a more productive one, as we work through our takeaways in an ongoing fashion.

Privacy within Data Capitalism

By Ellie Marvin

In our Digital Scholars meeting this week, we discussed two readings, “Self-Tracking Practices and Digital (Re)productive Labour” by Karen Dewart McEwen and “Internet-Mediated Mutual Cooperation Practices: The Sharing of Material and Immaterial Resources” by Bart Cammaerts. We split into two groups. I was in the group which discussed the McEwen article, which analyzes the various (and often nefarious) ways that self-tracking apps steal unpaid labor and personal data from their users.

In our group we discussed (the severe lack of) data privacy and the role of the consumer/producer in modern data capitalism. We tackled several large issues, some of which were presented in the article and some of which we theorized independently. Our group was very concerned with the rapidly declining state of data privacy, which stands in stark contrast to the massive value of data, be it monetary, social, or otherwise. This problem seems almost too big to tackle. How do you protect your data when nearly every app, device, or website today requires its users to give up their rights and privacy in order to access it?

There are some options. Of course, there is the most obvious answer: to completely unplug yourself from modern society, to live “off the grid.” We could also take the route of extremism in the form of anarchy. The best way to eliminate data capitalism is surely to eliminate capitalism altogether, no? This is an interesting idea, but perhaps not a wise one on which to linger.

But McEwen surely isn’t calling for everyone to radically change their lifestyles in order to preserve their data privacy. I believe the more available call to action here is simply to better understand where your data is going, who will be collecting it, how they will collect it, what they stand to gain from collecting it, and why they need to collect it at all. Furthermore, if, for instance, App X collects your data, to whom might they be interested in selling it? Why would Company Y and Company Z be in a bidding war for your data? It’s important to answer all of these questions with every new site or app that requires you to give up personal information and your right to keep such information private.

The major obstacle to answering these questions is transparency. Many companies are forthcoming about what they do with the data they gain from their users and the steps they take to guard their users’ privacy; many, however, are not. Take, for instance, the great scandal of Facebook and Cambridge Analytica. Essentially, Facebook confronted Cambridge Analytica about its misuse of users’ private data, and Cambridge Analytica said it would no longer use the improperly obtained data. In a shocking turn of events, the firm continued to use the data and made an incredible amount of money doing so. When the scandal broke, people around the world were shocked at just how public their data really is. When Netflix made a documentary about the case, an even larger audience became aware of the malevolent use of data, leading millions of Internet users to suddenly question where their data goes and who may already have hold of it. Since this case, and even before it, Facebook has been widely viewed as just one of many “evil” media conglomerates that hungrily suck as much data as they can out of their users and greedily sell it to whoever pays the most, regardless of intention.

Perhaps the most useful and most immediate solution to this huge problem of data privacy is for us as digital humanists to create a kind of data privacy guide identifying which sites/apps use data responsibly and which ones do not. If we assume that the root of the problem is that users do not know where their data is going, and companies are withholding that information, then this seems like the best solution. If we can create a generation of more informed consumers, we can better protect our data moving forward. If we can become data activists rather than passive users, we could even take back our data.
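To make the idea concrete, here is a minimal, purely illustrative sketch (in Python) of how one entry in such a crowdsourced data privacy guide might be structured. Every field name and example value below is hypothetical, invented for illustration rather than drawn from any existing guide or any real company’s practices.

    from dataclasses import dataclass

    @dataclass
    class PrivacyGuideEntry:
        service: str                     # site or app being documented
        data_collected: list[str]        # categories of data the service gathers
        shared_with: list[str]           # known recipients or buyers of that data
        opt_out_url: str | None = None   # where users can limit collection, if anywhere
        notes: str = ""                  # contributor commentary on caveats

    # A hypothetical entry a contributor might submit:
    entry = PrivacyGuideEntry(
        service="ExampleFitnessApp",
        data_collected=["location", "heart rate", "sleep patterns"],
        shared_with=["advertising networks"],
        opt_out_url="https://example.com/privacy/opt-out",
        notes="Opt-out covers ads only; activity data is still retained.",
    )
    print(f"{entry.service} collects: {', '.join(entry.data_collected)}")

A collection of entries like this could be searched, sorted by how responsibly each service handles data, and extended by new contributors — exactly the kind of informed-consumer resource proposed above.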

“From the Margins to the Mainstream”: History as Activation & Activism

[Ashleah Wimberly is a doctoral student enrolled in the Digital Scholars reading group this semester.]

Last week we had the pleasure of meeting with Hamilton Budaza and Gordon Metz. What struck me most in this discussion was the idea of moving voices that have been lost or suppressed in history from the margins of museum exhibits to the mainstream. Engaging in this project allows historians such as Budaza and Metz to bring forward voices that better represent the whole of South Africa; it creates a space where voices that have been repressed can speak and be heard. Metz pointed out that one of the challenges of this project is that museums and the historical record are associated with the institution, which South African peoples are justifiably wary of and tend to distrust. However, through their work with itinerant museums, which move through different environments and are constantly in flux and being enriched, they are able to actively make room for new voices and perspectives to be represented. A large part of their project is to create archives of previously unheard voices.

Metz argues for memory as a powerful term when working with traumatic legacies: everyone has memories, and those memories and lived experiences are valid. Collecting the memories of those who suffered through Apartheid becomes a way for Metz and Budaza to validate the experiences of a multitude of people, and through their history-as-activism work, they have made people more aware of how the traumatic legacies of the past continue to shape our present and our futures. Budaza added that by collecting all of these memories, they are creating an archive that will allow future generations to understand what happened in South Africa while simultaneously validating and celebrating the lives of those who had previously been forgotten or intentionally erased.

Technology brings a wealth of possibilities to their projects, but as Metz pointed out, it is not without its problems, and we should be wary of seeing technology as an answer in itself. In particular, he remains concerned about the application of VR technology, which he argues divorces people from the emotional impact a physical place can hold; he worries that people may become too disconnected from traumatic histories and the mistakes of the past that still resonate in our present. His concern resonated with me as I considered the way I felt the first time I entered a concentration camp in real life. I am not sure that entering one via VR technology would convey the heaviness of the air or the intense sorrow felt all around the space. I think that perhaps much of my personal concern stems from VR technology currently being a very individualized experience, whereas when you physically visit a space, the others around you are each feeling their own responses to that space and to the violence that happened there.