On June 9, 2010, Wired.com ran a story announcing the intention of DARPA, the experimental research arm of the United States Department of Defense, to create “mission planning software” based on the popular tax-filing software TurboTax.
What fascinated the DoD was that TurboTax “encoded” a high level of expert knowledge into its software, allowing people with “limited knowledge of [the] tax code” to successfully navigate the complex tax-filing process that “would otherwise require an expert-level” of training (Shachtman). DARPA wanted to bring the power of complex “mission planning” to the average soldier, who might not have the time or expertise to make the best decision possible for the mission.
I start with this example to show that arcane realms of expertise, such as the U.S. Tax Code, can be made accessible to the general public through sound interface design and careful planning. This is especially pertinent to Digital Humanities scholars, who do not always have the computer-science training of colleagues in other disciplines but still rely on databases, repositories, and other computer-mediated environments to do their work. In practice, this means that humanities scholars either spend hours in awkward phone conversations with technical support or avoid computer-mediated environments altogether.
With the arrival of new fields like Periodical Studies, however, humanities scholars must rely on databases and repositories for taxonomy and study. As Robert Scholes and Clifford Wulfman note in Modernism in the Magazines, the field of periodical studies is so vast that editors of print editions have had to make difficult choices in the past as to what information to convey since it would be prohibitively expensive to document all information about a given periodical (especially since periodicals tended to change dramatically over the course of their runs). Online environments have no such limitations and thus provide an ideal way of collecting and presenting large amounts of information. Indeed, Scholes and Wulfman call for “a comprehensive set of data on magazines that can be searched in various ways and organized so as to allow grouping by common features, or sets of common features” (54).
What DARPA and TurboTax realize is that computer-mediated environments can force submission compliance with existing “best practices” in order to compensate for the uneven expertise of the general population. Scholes and Wulfman call for the creation of a modernist periodical database where modernist scholars can work together and map the field of periodical studies according to agreed-upon standards of scholarship. By designing a repository on a TurboTax model of submission compliance, the dream of a community-generated periodical database that conforms to shared bibliographic standards is readily attainable.
Because of the vastness of its subject matter, Periodical Studies is inherently a collaborative discipline—no one scholar has the capacity to know everything about every periodical (or everything about one magazine, for that matter). Thus, the creation of a periodical database is necessary to map the field and gather hard data about modernist periodical production. The problem is that not every periodical scholar has the computer expertise to create or even navigate the complexities of database/repository systems. Nor does every scholar know how to follow the best metadata and preservation practices of archival libraries. We are now at a point where we can harness the interests and expertise of humanists by creating a repository that forces proper “input” along the lines of TurboTax.
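To make the idea of “forced input” concrete, here is a toy sketch of how a repository front end might refuse an incomplete periodical record, much as tax software refuses an incomplete return. The field names (loosely modeled on Dublin Core elements) and the sample record are my own illustration, not any existing system’s schema:

```python
# Hypothetical required fields for a periodical record; a real
# repository would define its own metadata schema.
REQUIRED_FIELDS = {"title", "date", "publisher", "format"}

def missing_fields(record):
    """Return the required fields that are absent or empty, sorted."""
    return sorted(f for f in REQUIRED_FIELDS if not record.get(f))

# A contributor submits a partial record for a modernist magazine...
record = {"title": "The Little Review", "date": "1918-03", "publisher": ""}

# ...and the interface blocks ingestion until the gaps are filled.
print(missing_fields(record))  # the contributor must supply these
```

The point is not the ten lines of code but where the expertise lives: the standard is encoded once, in the interface, rather than re-learned by every contributor.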
Challenge
I use the example of periodical studies to challenge the greater field of Digital Humanities. Our discipline has now reached a mature age, and I think we can all agree that the battle between “Humanities Computing” and “Digital Humanities” should be put to rest as we move into the next phase of the field: designing user-friendly interfaces based on a TurboTax model of user input. For example, even at this stage of Digital Humanities, there doesn’t appear to be a web-based TEI editor that can link with open repositories like Fedora Commons. In fact, the best (and most stable) markup tool I’ve used thus far is Martin Holmes’s Image Markup Tool at the University of Victoria. Even this useful bit of software is tied to the Windows OS, and it operates independently of repository systems, which means a certain level of expertise is needed to export the IMT files to a project’s repository system. In other words, the markup process is not intuitive for a project that wishes to harness the power of the many in marking up texts (by far the most time-consuming part of creating a digital edition). Why not create a Digital Humanities environment that, once installed on a server, walks a user through the editing process, much like TurboTax walks a user through his/her taxes?

I used to work as an editor for the James Joyce Quarterly. I experienced many things there, but the most important thing I learned is that there is a large community of (slightly insane) people who are willing to dedicate hours of their time to dissecting and analyzing Joyce. Imagine what a user-generated Ulysses would look like with all of that input! (We would, of course, have to ban Stephen Joyce from using it–or at least not tell him about it.)
Digital Humanities Ecosystems
The story of Digital Humanities is littered with failed or incomplete tools. I suspect that, save for a few stalwarts like Martin Holmes and our colleagues in Virginia, Georgia, and elsewhere, tool development depends on stubborn coders with enough time to do the work. This is a very inefficient way of designing tools and a system too dependent on personalities. I know of a handful of projects right now attempting to design a web-based TEI editor, but I’m not holding my breath for any one of them to be finished soon (goals change, after all). Instead of thinking of Digital Humanities development in these piecemeal terms, I think we need to come together as a federation to design ECOSYSTEMS of DH work–much like TurboTax walks one through the entire process of filing taxes.
I think the closest thing we have to this right now is Omeka, whose user base grows daily. What if we took Omeka’s ease of use for publishing material online and made it into a full ingestion and publication engine? We don’t need to reinvent the wheel, after all: librarians have already shown us how to store our material according to open archival standards. There is even an open repository system in Fedora Commons. We even know what type of markup we should be using: TEI and maybe RDF. And Omeka has shown us how beautiful publication can be on the web.
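For readers who have never seen TEI, here is a minimal sketch of what such markup looks like. The fragment is my own hypothetical example, and the check uses only Python’s standard library; a real editor would validate against the full TEI schema, which this does not do:

```python
import xml.etree.ElementTree as ET

# A tiny, hypothetical TEI-style fragment: one paragraph with a single
# personal name tagged, the kind of encoding a web-based editor might emit.
tei_fragment = """
<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <text>
    <body>
      <p><persName>Buck Mulligan</persName> came from the stairhead,
         bearing a bowl of lather.</p>
    </body>
  </text>
</TEI>
"""

NS = {"tei": "http://www.tei-c.org/ns/1.0"}
root = ET.fromstring(tei_fragment)  # raises ParseError if not well-formed

# An ingestion front end could run a well-formedness check like this on
# every submission before anything reaches the repository.
names = [el.text for el in root.findall(".//tei:persName", NS)]
print(names)
```

Well-formedness is only the first gate; a production system would also validate against a TEI schema (for example, with a RELAX NG validator). But the principle is the same: the interface, not the contributor, enforces the standard.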
Now, Digital Humanists, it is our time to take this knowledge and create archives/databases based on the TurboTax model of doing DH work: we need to create living ecosystems where each step of digitizing a work is clearly provided by a front end to the repository. Discovery Garden is working on such an ecosystem right now with the Islandora framework (a Fedora Commons back end with a Drupal front end), and I hope it will truly provide the first “easy-to-use” system that, once installed on a server, will allow all members of a humanist community to partake in digital humanities work. If I’m training students to encode TEI, why can’t I do so online, actually encoding TEI for NINES or other projects? I’ve been in this business for years now, and even I get twitchy running shell scripts—my colleagues and students are even more nervous. So let’s build something for them, so they can participate in the digital humanities as well. Everyone has something to gain.
I am attempting to harness the power of the crowd with “the Database of Modernist Periodicals,” to be announced this summer. I’ll let you know how it goes.
I end with this caveat: We need to prepare for the day when the “digital” humanities will simply be “the humanities,” and that means democratizing the digital (especially in our tools). Even I was able to file my taxes this year.
I believe Islandora has a TEI editor; Islandora, of course, uses Fedora Commons as its back end.
Hello Matt,
This is an intriguing idea, especially in light of discussions that have been going on recently on Humanist, about the extent to which the computer can be conceptualized as “just a tool”. If a Turbotax-like front end has the power to change the way scholars work together, then it becomes part of the infrastructure that scaffolds critical practice and that shapes the way knowledge is constructed (which are things that are usually quite difficult to recuperate). So, another example of how far digital technology exceeds the idea of being “just a tool”. I can also imagine people objecting strenuously to–and deliberately resisting–the ideas of others about what constitutes “proper input” for a digital repository or digital edition, so the technical side of things might actually be simpler than getting the wetware to work …
(On Omeka: Is it possible to hope for a little demo in the EMiC strand at DHSI this June? I would love to use Omeka for student projects, and can imagine that it might lend itself nicely to showcasing various bits of EMiC data without creating impassable technical barriers to entry. But I realize there is only a limited amount of time in that course, so it may not be possible.)
Anouk
@Dorothea: Yes, there is a “work in progress” TEI editor in Islandora. I hear they are working on making it much more robust too. I’m really looking forward to seeing what the Islandora crew develops over the next year. I think they can become a major player in the “user-friendly” Digital Humanities realm.
@Anouk: Yes, I’ve been following this “just a tool” conversation on Humanist. The conversation seems misguided to me though. I do believe this type of theorization is important, but it must be based on practice. The problem right now is that not enough humanists know how to program, so we must first build them tools so we can observe what they do with them. I think the “Spatial Humanities” (http://spatial.scholarslab.org/) project at Scholars Lab is a great example of this.
The problem with the “just a tool” discussion is that we are human beings who have a tendency to become emotionally involved with our tools. It is the pathetic fallacy of toolship, so to speak. Of course the computer is just a tool–the complicated part is our emotional and intellectual involvement with it. Apple knows this. That is why they make such beautiful products: “tools” that make us forget we are actually using tools. They design things in which the hardware gets out of the way so that our imaginations may take over.
Your point about “proper input” is a good one. Librarians have already figured out storage methods and practices, so we do not need to reinvent the wheel there. What I mean by proper input is input that conforms to the archival standards widely accepted by librarians. We need to think of ourselves as cultural producers, and the only way our productions can last is if we practice the safest archival procedures possible. Using Fedora Commons instead of a closed, proprietary system is just one example I could give here. Thank you so much for your thoughts. I look forward to continuing this discussion here and at DHSI!