On “Bitstreams: Locating the Literary in the Media Archive”

The Digital Scholars group’s most recent session, built around Matthew Kirschenbaum’s talk “Bitstreams: Locating the Literary in the Media Archive”, covered a wide variety of subjects within a clear guiding thread: on the one hand, the materiality and specificity of new information storage formats and codes (with the notion of newness properly problematized); on the other, the need to better understand the potential of such media to complement and enrich our approach to the ways we generate, consume, and store information.

The notion of the archive, with its edges, fevers, and anxieties, monopolized a great part of the session. We live in a moment when data management is omnipresent in public discourse, through sensitive and burning issues such as the right to be forgotten and mass surveillance. There is thus an underlying anxiety about dealing with the enormous repository of information being generated; hence the importance of making data available through adequate metadata. Reflecting on the topic of media simulation, the talk moved towards access and interfaces and the ways in which the relationship between users and the archive materializes: what users are invited to do, and what they are not.

Regarding Wendy Chun’s article, “The Enduring Ephemeral, or the Future Is a Memory,” and its distinction between memory and storage, the internet very often places the focus on the preservation of memory. Kirschenbaum brought up the example of the Internet Archive, a digital library conceived as a repository of cultural artifacts and dedicated to preventing the so-called “digital dark age.” The Internet Archive, however, provides the user with surrogates such as emulators of earlier formats and devices, which fail to replicate an otherwise irreproducible gaming experience. On a separate issue, regarding the key question of storage, processing, and access to born-digital texts, it was noted that an interdisciplinary approach is much needed, one built on collaboration between digital archivists and humanists.

One of the ideas that Kirschenbaum set out to contest was precisely the notion of electronic records as surrogates of physical, paper records, which are frequently considered primary. It is clear that at this moment most electronic records are born digital: there is an innumerable list of cultural artifacts that lack a continuous, physical version preceding the discrete, electronic one. At an instrumental, operative level, Wolfgang Ernst’s chapter “Media Archaeography: Method and Machine versus the History and Narrative of Media” is relevant to these arguments because of its proposal of a media archaeology; paraphrasing the author, media archaeology is a kind of reverse engineering that does not seek to articulate a prehistory of the mass media (at least not in the historical sense), but to unravel their epistemological configuration. The idea is to explore mass media as the non-discursive entities that they are, understanding their belonging to a temporal regime other than the historical-narrative one, and to overcome “screen essentialism” by going beyond the mere interface in order to find out how the hardware works. For Kirschenbaum, the point of practical intersection between media archaeology and the digital humanities is precisely what he has termed digital forensics: securing and maintaining the digital cultural legacy through preservation, extraction, documentation, and interpretation.

In this same line of intersections, Kirschenbaum referred to the book Notebook, by Annesas Appel, “a project based on mapping the inside of a notebook [computer]”. The project proposes a sort of deconstruction of the device, together with a shift from a three-dimensional to a two-dimensional perception. In it, different components and pieces are presented separately and in series. One of the book’s keys is that these components become progressively less recognizable as computer parts; there is a detachment from their original function and a transformation towards a script, an isolation and atomization that Kirschenbaum described as media-archaeological splendor and that makes an archive fever evident. Through this inventory-like atomization and serial arrangement of computer components, and through their immediacy and simultaneity, we seem to enter an order alternative to the historical-narrative one: a kind of lost code that reveals itself.

To conclude, and returning to the title of the session, one of the categories necessarily shaken at this juncture is that of the literary: what were once physical manuscripts, the traces of the writing process, are now born-digital files, given the generalized use of the computer as the preferred writing tool. Again, the convergence of media into binary, ones and zeros, makes us wonder whether it still makes sense to distinguish what is and is not literary.

On “the archive” and “the ephemeral”: a follow-up to Matthew Kirschenbaum’s “Bitstreams”

In his talk titled “Bitstreams: Locating the Literary in the Media Archive,” Matthew Kirschenbaum interrogated the term “archive.” His talk responded in part to two pieces of digital humanities scholarship: Wendy Hui Kyong Chun’s “The Enduring Ephemeral, or the Future Is a Memory,” which discusses how that which was once ephemeral now endures because of media technologies, and Wolfgang Ernst’s “Media Archaeography: Method and Machine Versus the History and Narrative of Media,” which discusses how scholars can take an archaeological approach to technology, considering what cultural factors allow media to be transcribed and the significance of those transcriptions.

To return to Kirschenbaum, he began his talk by noting that Derrida used the term “archive” to point to origins and memory. On the pages of The American Archivist, it was also noted that archival practice is increasingly institutionalized, that it is growing exponentially in volume, that many records are simply missing, and that many archivists struggle with the sheer number of authors and the potential technological complications that come with contemporary archival work. What will be done with President Obama’s Blackberry, or with our own family photos on Instagram? There is anxiety that surrounds this sort of abundance. Whereas “archive” used to be a noun, it is now also a verb: “to archive” means to back up data, to move something from a more accessible state to a less accessible one, to be retrieved at another time.

In response to this abundance and these concerns, Kirschenbaum pointed toward the emerging discourse of “media archaeology.” For example, the Internet Archive is an archive of the internet on the internet, where bygone websites, games, and images can be explored. The IA also includes executable software that may not be available to experience elsewhere. The problem with viewing these .exe files on a modern computer is that the bitstreams are not fully compatible, and the browser may flatten the effect of these files.

Is any media processed through digital technology truly ephemeral? What is the changing nature of the archive in the face of widespread digitization and of the digital anxiety about data? The digital and the print aspects of the literary are combined in a contemporary context: a book is now created using digital tools. When we archive the works of literary figures, we can consider how digital artifacts will be combined with that archival process.

The questioning of the ephemeral in this context brought up questions of trauma. Are there things that one has the right to forget? Should Facebook have the right to showcase our memories? This questionable ephemerality also points to the “screenshot economy,” in which questionable media events become impossible to erase because users take screenshots. An example of this “economy” is when celebrities post inflammatory tweets and then attempt to remove them, but traces of the tweets persist because users screenshotted the offense, as in the recent episode of Donald Trump tweeting about Ted Cruz.

Additional discussion centered on the archive itself. What is an archive able to do? What is it expected to do? In the age of increased digitization, there is a desire for the archive to simulate an experience of the past as well as preserve the data from that past event. It may be that the experience of a medium in full authenticity is what is ephemeral. We can use bygone software, but can we recreate the experience of using that software on its original platform?

Kirschenbaum pointed us towards possible conversations that could be taking place among archival, archaeological, and literary communities. With increasing awareness of the significance of materiality and the increasing number of digital collections, more such discourse will certainly take place.

Preparing “Messy Data” with OpenRefine

Thursday, February 18, 12:30-1:45 pm
Strozier 107A (Main Floor Instructional Lab) [Map]

Preparing “Messy Data” with OpenRefine: A Workshop

The fourth meeting of Digital Scholars for Spring 2016 will be conducted as a workshop, led by Dr. Richard J. Urban of FSU’s School of Information, who will walk us through two tutorials on how to use this tool for digital humanities scholarship, both for gathering and for interpreting unread data sets. Formerly a Google tool for data management, OpenRefine has recently been optimized for understanding, manipulating, and transforming data of any kind, combining extant data sets (such as those that researchers have compiled in Excel spreadsheets) with open data attained through web services and other external links. From large-scale repositories and networks to small-scale archives and visualizations, many projects constructed or used by digital scholars have benefited from data management with OpenRefine or similar tools.

Participants are encouraged to browse the following resources in advance:

and to read the following for background:

Access to OpenRefine will be provided in the Strozier Library Learning Lab; thus, registration is helpful (though not required) so that we can gauge attendance. Participants are welcome to bring their own devices and install OpenRefine during the session. While Dr. Urban will focus mostly on these tutorials, participants are also welcome to bring datasets that they would like to discuss or explore.

We hope you can join us,

-TSG

Bitstreams: Locating the Literary in the Media Archive

Thursday, February 4, 3:30-4:45 pm
Williams Building 013 (English Common Room, basement level)

Bitstreams: Locating the Literary in the Media Archive

Please join us for the third meeting of the Digital Scholars reading and discussion group for Spring 2016, featuring Matthew Kirschenbaum, Associate Professor of English at the University of Maryland and Associate Director of the Maryland Institute for Technology in the Humanities (MITH) as well as teaching faculty at UVa’s Rare Book School, who will talk with us via videoconference about crossing over domains in digital work. While Kirschenbaum’s work ranges from looking materially at writing practices to looking historically at our media mindsets, this particular presentation will examine the condition of both the contemporary archive and what we construct as “the literary.”

Fundamentally, a “bitstream” acts as a conduit — a communication channel for bits, the units of information that express coordinates in terms of binary relationships. For Kirschenbaum, however, the more interesting critical information carried by a bitstream is expressed in its physical inscription, which in turn points to the multiple heritages characterizing a single data form. In many of his publications and through much of his blogging, Kirschenbaum argues persuasively for the need to perform digital forensics on archival documents as a vital preservation practice. In this presentation, however, he may ask us to make a reciprocal move by reading more from the data themselves. In light of emerging critical discourses around media archaeology, as well as practical techniques for preserving, accessing, and analyzing legacy data and obsolescent media formats, the reciprocal conversation may be overdue.

Participants are encouraged to read the following in advance:

  • Wolfgang Ernst. “Media Archaeography.” In Digital Memory and the Archive (ed. Jussi Parikka). U Minnesota P, 2013. 55-73. E-book link [stable copy in Bb org site]
  • Wendy Hui Kyong Chun. “The Enduring Ephemeral, or the Future Is a Memory.” Critical Inquiry 35 (Autumn 2008): 148-71. Electronic access [stable copy in Bb org site]

And to browse:

We hope you can join us,

-TSG

Digital tools: facilitating the analysis of glossed medieval manuscripts

The readings for our last session, “Visualizing Signs of Use in Medieval Manuscripts,” focused on the analysis of the handwriting, glosses, and annotations of a 13th-century scribe known as the Tremulous Hand of Worcester. In this case, digital technology—paleographic tools such as DigiPal and the Tremulator—facilitates the availability and handling of the manuscripts. The two articles try to answer questions coming from very different fields, based on the same object of study: the prolific work of this scribe.

Dr. David Johnson’s article, “Who Read Gregory’s Dialogues in Old English?”, is concerned with the reasons why the Dialogues, written by Pope Gregory I, were translated into Old English, and with the readership of that translation. Looking at some prominent Anglo-Saxon readers of the texts, such as King Alfred and Aelfric of Eynsham, Johnson illustrates their popularity and significance for a wide audience, as part of Alfred’s educational reform and particularly as a source for vernacular sermons. Close study of the annotations that the Tremulous Hand made on the Old English manuscripts offers more insight into the use and the readers of the Dialogues. Looking at how the Hand altered the punctuation—an interesting example is the punctus elevatus, used to indicate a pause when the sense is complete but the sentence is not—Johnson suggests that these passages were indeed intended for oral delivery (197). Although the scribe’s original interest in the texts was perhaps lexical, his work seems intended to make them available to others.

The medieval scribe is known as the Tremulous Hand because of a tremor, which is the object of study of the second article. It is surprising to learn that a medieval patient could be diagnosed by closely examining his annotations eight centuries later. Dr. Thorpe and Dr. Alty analyze his handwriting from a neurological and historical perspective: their aim is to determine what kind of tremor he suffered, and theirs is the first analysis of essential tremor in a medieval context. The authors look at the passages he marked, at how the tremor developed as shown in his handwriting, at the literature on diseases in the medieval period, and so on. They conclude that the evidence suggests he suffered from essential tremor.

The research needed for these studies can be exhausting and time-consuming. Luckily, digital technologies have been able to facilitate this endeavor. DigiPal, for example, provides photos of medieval handwriting, with information and several ways of examining the data. You can search for a particular letter or graph, the work of a scribe, a collection, and so on. Dr. Johnson and his son have developed another web-based tool, called the Tremulator, to analyze the Tremulous Hand’s writing, which has made this work more manageable. In our meeting we had the opportunity to see a demonstration of how this tool works. Each user can examine the manuscripts, record data, and share it. Once we click on the character or graph we want to work with, a menu shows up, allowing us to mark whether or not we consider it the Tremulous Hand’s work, its function, its characteristics, and—interestingly—the level of certainty we have in our decision. Speed is another benefit of the Tremulator: the menu saves the last settings, so it is quick to click another character and apply the same settings again. The user can also filter a search and configure the app according to particular specifications.
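To picture the kind of record such a session produces, here is a purely hypothetical sketch of a single annotation, using only the fields described above (graph, attribution, function, characteristics, certainty); the element and attribute names are invented for illustration and do not reflect the Tremulator’s actual data model.

    <!-- Hypothetical annotation record: names invented for illustration,
         not the Tremulator's actual format. -->
    <annotation folio="12r" x="412" y="266" annotator="user01">
      <graph>punctus elevatus</graph>
      <attribution hand="Tremulous Hand" certainty="high"/>
      <function>punctuation marking a pause for oral delivery</function>
      <characteristics>tremor visible in the upstroke</characteristics>
    </annotation>

A record along these lines would also support the color-coded tracking of contributions discussed below, since each annotation carries its annotator.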

We ended the meeting with a discussion of how the Tremulator can be improved and used for other projects. Participants suggested providing examples of the marks, with good images of them, to guide the user. The opportunity for collaboration was highlighted as one of its benefits: the work of each scholar could be identified with a color to track contributions. It could also be used to foster student scholarship, in which case a method for reviewing the work would need to be implemented.

Building upon the idea of collaboration, I think about the functionality e-books bring today and how we can take it further. Even though physical books have not lost their charm and many of us prefer to feel the pages in our hands, we cannot deny the increasing use of e-books and some of their benefits. When researching, I prefer the digital version of a book on theory, as it allows me to search for words easily, make comments I can later erase, and so on. A tool like the Tremulator could also be used to work with books and images that are not easily available (for example, in my field of specialization, contemporary Cuban literature) in ways that promote collaboration in classes, research projects, and dissertations, and facilitate transnational research.

Visualizing Signs of Use in Medieval Manuscripts

Friday, January 22, 12:00-1:15 pm
Strozier Library 107K [map]

From Concept to App: Visualizing Signs of Use in Medieval Manuscripts

Digital Scholars is pleased to welcome Dr. David Johnson to discuss a new web-based data collection tool that makes the forensic “layering” of glossed manuscripts (such as those produced by the “Tremulous Hand of Worcester” in 13th-century England) more visible. This tool, nicknamed “The Tremulator,” offers solutions both historical and historiographic. Firstly, readers of medieval manuscripts left all kinds of traces of their interest in the contents of the books they read, including marginal and interlinear annotations, glosses, translations, corrections, and various aids for readers who came after them. Yet keeping track of and making sense of this wide variety of signs has often proven difficult, until a collaboration between paleography and digital technology inspired this particular tool, which uses a touch-screen device. Secondly, whereas other digital paleographic tools (such as DigiPal) do facilitate the often tedious task of collecting data, “Tremulator” makes it possible to catalogue, visualize, and share that data in useful and interesting ways, making the inscription practices of medieval texts more viable for cross-disciplinary study in neurological science, computer informatics, and manuscript genetics, among other areas.

Dr. Johnson will discuss its inception and development from a concept to an app. Archivists, digital historians, and scholars and teachers of any period, practice, genre, or tradition should find this discussion useful, as it bears on other recent discussions about how much of a field’s technological identification can (or should) reasonably rest in perceptions about a manuscript’s “signs of use.”

Participants are encouraged to bring electronic tablets or laptops, and to browse the following resources in advance:

  • Johnson, David F. “Who Read Gregory’s Dialogues in Old English?” The Power of Words: Anglo-Saxon Studies Presented to Donald G. Scragg on his Seventieth Birthday, ed. Hugh Magennis and Jonathan Wilcox (Morgantown: 2006), 173-206. [in Bb org site]
  • Thorpe, Deborah E., and Jane E. Alty. “What type of tremor did the medieval ‘Tremulous Hand of Worcester’ have?” Brain: A Journal of Neurology 138.10 (Oct 2015): 3123-27. (open-access at Oxford Journals http://brain.oxfordjournals.org/content/138/10/3123)

Participants are especially encouraged to explore the home page for “DigiPal” [http://www.digipal.eu], as well as the “Introduction to DigiPal’s Framework” [http://www.digipal.eu/blog/a-quick-introduction-to-the-digipal-framework/], where they can find an intricate (and interactive) description of how some online tools model and read the outputs of England’s various 11th-century scribes.

We hope you can join us,

-TSG

Cybercolonialism, Collaboration and Private vs. Public Interests — Notes from the 1st meeting of FSU Digital Scholars, Spring 2016

During the first meeting of FSU Digital Scholars, we began by attempting to define the nature of cybercolonialism; we continued by trying to articulate the nature of collaboration in both private, product-based endeavors and public, project-based endeavors; and toward the tail end of our discussion, we used an evaluation of the private social network Academia.edu as a way to discuss the need for more effective tools for sharing scholarship across the disciplines.

To back up a little, the discussion of cybercolonialism was spurred by our reading of Mary Leigh Morbey’s “Killing a Culture Softly: Corporate partnership with a Russian museum,” published in Museum Management and Curatorship in 2006. In addition to Morbey’s article, group members had also read this post by Julianne Nyhan and Oliver Duke-Williams, this post by Domenico Fiormonte, and viewed this animation of 17th-century London by Josh Jones. Morbey describes a partnership between Global IBM and the State Hermitage Museum in St. Petersburg, Russia, wherein IBM offered information communications technology (ICT) services to the museum. Global IBM designed a sophisticated website for the Hermitage Museum, but, as Morbey argues, IBM controlled the design and the implementation of the website in a way that exemplifies “cybercolonialism.” Morbey proposes that “cyberglocalization,” that is, ICT practices that focus on incorporating local influences, be embraced going forward.

The question was posed as to what more specific examples of “cybercolonialism” might look like, or, in the case of Morbey’s article, what was decidedly “un-Russian” about what IBM was doing. Group members suggested that Morbey was pointing to control over access to and literacy in the technologies more than to smaller-scale design choices. The notion that IBM would create an ICT property and then charge users who are ill-equipped to use the technology could be problematic.

The conversation then turned to the nature of collaboration. Group members questioned the boundaries of collaboration in private, product-based endeavors and public, project-based endeavors. The question was raised, “How do we think about the labor index?”, especially when some projects require thousands of hours of tedious labor. Also, in terms of collaborative balance, “What is or should be our architecture?” One characteristic of digital humanities scholarship that was reiterated was the need for cross-disciplinary collaboration, both in terms of scholarship and in terms of labor. Collaboration is constrained by the need to pay collaborators across disciplines, who are often outside the pay scale of many publicly funded projects.

In regard to collaboration tools, a discussion of Academia.edu began, inspired by a post by Kathleen Fitzpatrick titled “Academia, Not Edu” and by a related forum discussion. The group established the need for tools that let DH scholars share information across disciplines, but voiced concerns about the private business interests behind Academia.edu, while also worrying that other tools might not be used widely enough to remain viable.

Time ran out before the group could get to discussions of the “bell curve” of hype, but it was clear that questions of cybercolonialism, collaboration, and private vs. public interests in the digital humanities will continue throughout the semester.

2015 Retrospective

Friday, January 15, 12:00-1:15 pm
Williams Building (WMS) 415

Opening Discussion: Academia.edu, Cybercolonialism, and the Bell Curve of Hype

The organizational meeting for Spring 2016 Digital Scholars will be dedicated to a brief retrospective of discussions featured in Digital Humanities Now in 2015 — including how to valuate (or why some wish to boycott) metric-driven academic social networking sites such as Academia.edu and ResearchGate; and how to delineate between useful, multicultural collaborations and DH approaches that may be at risk of putting subjects or laborers under erasure.

All are welcome for this discussion, and participants are invited to browse the following in advance:

 

We hope you can join us,

-TSG

Theorizing Models in the Digital Age

The articles that prepare us for Dr. Richard Urban’s talk on Friday, November 6, ask questions about what a model is and/or can do. While we may think of our models as transparent reflections of what is being modeled, Julia Flanders and Fotis Jannidis observe that it is not enough to have the database or model be a theory in itself—a practice that fully justifies and explains itself through its use. While this pragmatic approach can be sufficient up to a certain point, theories of modeling help us reflect on our praxis. As Flanders and Jannidis write, “Theory is usually the theory of something, trying to spell out the basic concepts relevant in the praxis of doing something . . . a theory of digital humanities cannot simply coincide with its praxis” (2-3).

To this end, I found Willard McCarty’s essay, “Modeling: A Study in Words and Meanings,” particularly helpful in thinking about the different sides to an understanding of models. The core of McCarty’s essay, I think, lies here, as he introduces the different synonyms for “model” that he’s going to consider (analogy, representation, experiment, etc.): “But perhaps the most important lesson we learn from seeing the word in the context of its synonym set is not the range and variety of its meanings; rather, again, its strongly dynamic potential.” Theorizing modeling as a dynamic tool in digital humanities helps us avoid some of the blind spots that might occur otherwise.

Arianna Ciula and Øyvind Eide point towards this dynamic quality as well, stating, “In digital humanities we do not only create models as fixed structures of knowledge, but also as a way to investigate a series of temporary states in a process of coming to know. The point of this kind of a modeling exercise lies in the process, not in the model as a product” (37). For me, this emphasizes two points with regard to TEI coding.

First, the modeling of a text is an ongoing process that requires interpretation, judgment, and observation/perspective—all partially subjective elements of textual coding. For example: if there are typos in a manuscript, a coder can make a judgment and indicate a spelling/typing error, even if the author of the manuscript gave no indication (did not cross out the word or correct it in any way). The XML tags the coder uses let the reader know that the coder perceived a spelling error, so that the correction is not mistaken for one made by the manuscript’s author. We can easily imagine a case of mistaken judgment, however, if the coder perceives an error that was fully intentional. Perhaps the author meant to spell the word that way for whatever reason.
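As a minimal sketch of how such a judgment can be recorded, standard TEI practice pairs the original reading with the proposed correction inside a choice element; the sentence below is invented, not drawn from any particular manuscript.

    <!-- The encoder keeps the original spelling and the proposed correction
         side by side, so the editorial judgment stays explicit and reversible. -->
    <p>She walked along the
      <choice>
        <sic>beech</sic>
        <corr resp="#encoder" cert="medium">beach</corr>
      </choice>
      at dusk.</p>

Note how the cert attribute lets the encoder register exactly the kind of partial confidence described above.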

Another example would be if the coder overlooked something that someone else deems important to code. In the Beckett Digital Manuscript Project, you can search the coded manuscript for both gaps in the text and the doodles that Beckett often drew on his pages. These are textual elements that we may at first be inclined to ignore because they are not part of the “text” as we traditionally conceive it. But of course these are part of the manuscript and have been shown to have significance in relation to the other parts of the page.
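TEI does provide elements for such non-verbal features; as a rough sketch (the Beckett project’s actual encoding choices may well differ), a gap and a marginal doodle might be marked up like this:

    <!-- Illustrative only: a lacuna recorded with <gap>, a doodle described
         with <figure>/<figDesc>; the BDMP's real markup may differ. -->
    <p>the sentence breaks off here
      <gap reason="illegible" quantity="2" unit="word"/>
      and resumes,
      <figure>
        <figDesc>small doodle of a face in the right margin</figDesc>
      </figure>
    </p>

Once such features are encoded, they become searchable alongside the words, which is precisely what makes them visible as part of the manuscript rather than noise around the “text.”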

I give these examples to show why coding, as a form of modeling, cannot be seen as a fully cut-and-dried process that ends in a product as soon as the text is coded by a competent coder. How well a model represents its object is often up for dispute and revision. Ideally, a digital archive would allow for feedback and suggestions to improve or revise the model, remaining open to new ways of representing the original manuscript. This openness to revision keeps both the original and its model incomplete in terms of knowledge. The “temporary states in a process of coming to know” generate “structures of knowledge,” but these structures are not fixed. They are tentative wholes that help us understand the heterogeneous parts of a given text.

This brings me to my promised second point, which concerns the modeling tool itself, i.e. TEI. Similar to what I said above, TEI as a standard is also not fixed; while it may not change as frequently as some would like, the guidelines have not stayed the same since the standard’s inception decades ago. It’s true that, as Ciula and Eide say, “Even by abstracting away the text itself, the stripped out XML tree constitutes basically a model of one way of seeing the text structure: the place name is part of a sentence which is part of a paragraph which is part of a chapter and so on” (40). Thus, our model intrinsically comes with a set of assumptions about the originals we’re trying to represent.
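The hierarchy Ciula and Eide describe can be pictured with a bare-bones sketch; the element names follow common TEI conventions, but the sentence itself is invented.

    <!-- One way of seeing text structure: the place name sits inside a
         sentence, inside a paragraph, inside a chapter-level division. -->
    <div type="chapter" n="1">
      <p>
        <s>The train left <placeName>Vienna</placeName> before dawn.</s>
      </p>
    </div>

Even this tiny tree embodies assumptions: that the text divides cleanly into chapters and paragraphs, and that a place name belongs to exactly one sentence.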

Nevertheless—and I say this with the small knowledge I currently have of XML coding—there seems to be a good amount of flexibility in XML coding to adapt it to particularities. In other words, there will never be a “universal grammar” of modeling in which the particular cases are always subordinate to and fully represented by the current modeling tools. As we continue to experiment and think about new and better ways to represent data, likely on a case-by-case basis, our tools will continue to change and adapt.

“What is a model, and what can it do?”

In preparation for Dr. Richard Urban’s discussion and lecture on Nov. 6, Julia Flanders asks a question most pertinent to setting the tone for this week: “What is a model and what can it do?” Reflections on two of the articles presented speak to this question.

Arianna Ciula and Øyvind Eide, in “Reflections on Cultural Heritage and Digital Humanities,” compare two modelling traditions. The comparison focuses on pragmatic concerns and is presented at an abstract level. The two standards selected are CIDOC-CRM and TEI: the former is used in cultural heritage programs and the latter in modelling in the digital humanities. The authors place strong emphasis on the dominant differences between the two modelling standards. TEI, used to represent textual features, grew out of research begun in 1987; the standard then evolved into the consortium structure established in 2001. CIDOC-CRM provides a semantic framework within which cultural heritage information is mapped by museums, universities, private companies, and research centers.

Ciula and Eide state that the strong epistemic value of the models resides in the fact that, while being dependent upon theory, the models themselves transcend theory. Modelling is characterized as dynamic and heuristic. The authors discuss the implementation of the CIDOC-CRM model in the Norwegian Documentation Project in the 1990s, and they reiterate the purpose of the two modelling systems: TEI encoding operates within a textual context, while CIDOC-CRM is based upon a specified model of the world. Though the two modelling standards differ considerably, the authors provide an in-depth comparison of their structures and uses. The purpose and utility promoted within the article are summed up succinctly by the authors:

While this exercise is interesting in itself as an investigation into modelling strategies, it also has a pragmatic aim of raising our own awareness about the choices made in certain modelling practices.  Rather than being seen as a divider between communities and traditions, such awareness enables a certain freedom. The different approaches combined can help envisaging imaginary constructs which can be used to model cultural artefacts and their interpretations in new ways.  (p. 40)

Offering another framework, J. Iivari’s “A Paradigmatic Analysis of Contemporary Schools of IS Development” outlines seven major schools: software engineering, decision support systems, infological approaches, database management, implementation research, management information systems, and the sociotechnical approach.  These schools also share seven similar assumptions:

  1. a view of data/information as descriptive facts
  2. information systems are technical artifacts with social implications
  3. technology is viewed as a matter of human choice
  4. a predominantly structural view
  5. values of IS research reflect organic and economic qualities
  6. a means-end-oriented view of IS science
  7. a positivistic epistemology

Three diagrams included in the article complement the text, covering the major contributions of contemporary schools of IS development, the framework of the paradigmatic analysis, and epistemological assumptions and research methods.  The findings and results reported in the article support the assumption that an identifiable ‘orthodoxy’ exists, with the caveat that there are certain variations among the schools discussed. The study provides positive support for the work of Hirschheim and Klein (1999).