Editing Modernism in Canada

Archive for July, 2010


July 23, 2010


CFP: Scholarly Editing: The Annual of the Association for Documentary Editing

Just thought I’d pass along a relevant CFP from the Center for Digital Research in the Humanities:

See the original post at http://cdrh.unl.edu/opportunities/docediting_call.php

Background
Since 1979, Documentary Editing has been a premier journal in the field of documentary and textual editing. Beginning with the 2012 issue (to be published in late 2011), Documentary Editing will be renamed Scholarly Editing: The Annual of the Association for Documentary Editing and will become an open-access, digital publication. While retaining the familiar content of the print journal, including peer-reviewed essays about editorial theory and practice, the 2012 issue of Scholarly Editing will be the first to publish peer-reviewed editions.

CALL FOR EDITIONS
Even as interest in digital editing grows, potential editors have not found many opportunities to publish editions that fall outside the scope of a large scholarly edition. We believe that many scholars have discovered fascinating texts that deserve to be edited and published, and we offer a venue to turn these discoveries into sustainable, peer-reviewed publications that will enrich the digital record of our cultural heritage.

If you are interested in editing a small-scale digital edition of a single document or a collection of documents, we want to hear from you.

Proposals
We invite proposals for rigorously edited digital small-scale editions. Proposals should be approximately 1000 words long and should include the following information:

1) A description of content, scope, and approach. Please describe the materials you will edit and how you will approach editing and commenting on them. We anticipate that a well-researched apparatus (an introduction, annotations, etc.) will be key to most successful proposals.
2) A statement of significance. Please briefly explain how this edition will contribute to your field.
3) Approximate length.
4) Indication of technical proficiency. With only rare exceptions, any edition published by Scholarly Editing must be in XML (Extensible Markup Language) that complies with TEI (Text Encoding Initiative) Guidelines, which have been widely accepted as the de facto standard for digital textual editing. Please indicate your facility with TEI.
5) A brief description of how you imagine the materials should be visually represented. Scholarly Editing will provide support to display images and text in an attractive house style. If you wish to create a highly customized display, please describe it and indicate what technologies you plan to use to build it.

All contributors to Scholarly Editing are strongly encouraged to be members of the Association for Documentary Editing, an organization dedicated to the theory and practice of documentary and textual editing. To become a member, go to www.documentaryediting.org.

Please send proposals as Rich Text Format (RTF), MS Word, or PDF to the editors via email (agailey2@unlnotes.unl.edu, ajewell@unlnotes.unl.edu) no later than August 1, 2010 for consideration for the 2012 issue. After August 1, proposals will be considered for future issues. Feel free to contact us if you have questions.

CALL FOR ARTICLES
Scholarly Editing welcomes submissions of articles discussing any aspect of the theory or practice of editing, print or digital. Please send submissions via email to the editors (agailey2@unlnotes.unl.edu, ajewell@unlnotes.unl.edu) and include the following information in the body of your email:

Names, contact information, and institutional affiliations of all authors
Title of the article
Filename of article
Please omit all identifying information from the article itself. Send proposals as Rich Text Format (RTF), MS Word, or PDF. If you wish to include image files or other addenda, please send everything as a single zip archive. Submissions must be received by February 1, 2011 for consideration for the 2012 issue. Please, no simultaneous submissions.

Thank you,

Amanda Gailey
Department of English
Center for Digital Research in the Humanities
University of Nebraska-Lincoln
agailey2@unlnotes.unl.edu

Andrew Jewell
University Libraries
Center for Digital Research in the Humanities
University of Nebraska-Lincoln
ajewell@unlnotes.unl.edu


July 12, 2010


TEI @ Oxford Summer School: Intro to TEI

Thanks to the EMiC project, I am very fortunate to be at the TEI @ Oxford Summer School for the next three days, under the tutelage of TEI gurus including Lou Burnard, James Cummings, Sebastian Rahtz, and C. M. Sperberg-McQueen. While I’m here, I’ll be providing an overview of the course via the blog. The slides for the workshop are available on the TEI @ Oxford Summer School Website.

In the morning, we were welcomed to the workshop by Lou Burnard, who is clearly incredibly passionate about the Text Encoding Initiative and a joy to listen to. He started us off with a brief introduction to the TEI and its development from 1987 through to the present (his presentation material is available here). In particular, he discussed the relevance of the TEI to the digital humanities and its facilitation of the interchange, integration, and preservation of resources (between people and machines, and between different media types in different technical contexts). He argued that the TEI makes good “business sense” for the following reasons:

  • re-usability and repurposing of resources
  • modular software development
  • lower training costs
  • ‘frequently answered questions’ — common technical solutions for different application areas

As a learning exercise, we will be encoding for the Imaginary Punch Project, working through an issue of Punch magazine from 1914. We’ll be marking up both texts and images over the course of the 3-day workshop.

After Lou’s comprehensive summary of some of the most important aspects of the TEI, we moved into the first of the day’s exercises: an introduction to oXygen. While I’m already quite familiar with the software, it is always nice to have a refresher and to observe different encoding workflows. For example, when I encode a line of poetry, I almost always just highlight the line, press cmd-e, and type a lowercase “l”. It’s a quick and dirty way to breeze through the tedious task of marking up lines. In our exercise, we were asked to use the “split element” feature instead (Document –> XML Refactoring/Split Element). While I still find my way more efficient, the latter also works quite nicely, especially if you use the shortcut key (visible when you select XML Refactoring in the menu bar).
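To illustrate, here is a minimal sketch of the kind of markup that shortcut produces, using the TEI <lg> (line group) and <l> (line) elements; the verse itself is invented for the example:

<lg type="stanza">
  <!-- each verse line is wrapped in the TEI <l> element -->
  <l>A hypothetical first line of verse,</l>
  <l>and a hypothetical second line.</l>
</lg>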

Customizing the TEI
In the second half of the morning session, Sebastian provided an explanation of the TEI Guidelines and showed us how to create and customize schemas using the ROMA tool (see his presentation materials). Sebastian explained that TEI encoding schemes consist of a number of modules, and that each module contains element specifications. See the W3Schools definition of an XML element.

How to Use the TEI Guidelines
You can view any of these element specifications in the TEI Guidelines under “Appendix C: Elements”. The guidelines are very helpful once you know your way around them. Let’s look at the TEI element <author> as an example. If you look at the specification for <author>, you will see a table with a number of different headers, including:

<author>
the name and description of the element

Module
lists the module(s) in which the element is located

Used By
notes the parent element(s) in which you will find <author>, such as <analytic>:

<analytic>
<author>Chesnutt, David</author>
<title>Historical Editions in the States</title>
</analytic>

May contain
lists the child element(s) for <author>, such as <persName>:

<author><persName>Elizabeth Smart</persName></author>

Declaration
a list of classes to which the element belongs (see below for a description of classes)

Example and Notes
shows some accepted uses of the element in TEI and any pertinent notes on the element. On the bottom right-hand side of the Example box, you can click “show all” to see every example of the use of <author> in the guidelines. This can be particularly useful if you’re trying to decide whether or not to use a particular element.
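Putting the table to use, here is a small sketch of <author> inside a TEI <titleStmt>, with <persName> as its child element; the edition title is invented for the example:

<titleStmt>
  <title>A Hypothetical Edition</title>
  <!-- <author> may contain <persName>, per the "May contain" entry above -->
  <author><persName>Elizabeth Smart</persName></author>
</titleStmt>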

TEI Modules
Elements are contained within modules. The standard modules include tei, header, and core. You create a schema by selecting the modules suited to your purpose, using the ODD (One Document Does it all) source format. You can also customize modules by adding and removing elements. For EMiC, we will employ a customized—and standardized—schema, so you won’t have to worry too much about generating your own, but we will welcome suggestions during the process. If you’re interested in the inner workings of the TEI schema, I recommend playing around with the customization builder, ROMA. I won’t provide a tutorial here, but please email me if you have any questions.
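For a sense of what an ODD customization looks like, here is a minimal sketch; the schema name is invented, and the module selection is simply the standard set mentioned above plus the verse module:

<schemaSpec ident="my_hypothetical_schema" start="TEI">
  <!-- the infrastructure modules almost every TEI schema needs -->
  <moduleRef key="tei"/>
  <moduleRef key="header"/>
  <moduleRef key="core"/>
  <moduleRef key="textstructure"/>
  <!-- add the verse module for poetry markup -->
  <moduleRef key="verse"/>
</schemaSpec>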

TEI Classes
Sebastian also covered the TEI class system. For a good explanation of what is meant by a “class”, see this helpful tutorial on programming classes (from Oracle), as well as Sebastian’s presentation notes. The TEI contains over 500 elements, and its classes fall into two categories: attribute classes and model classes. The most important attribute class is att.global, which includes the following attributes, among others:

@xml:id
@xml:lang
@n
@rend

All new elements are members of att.global by default. Model classes group elements that can appear in the same places in a document and that are often semantically related (for example, the model.pPart class comprises elements that appear within paragraphs, and the model.pLike class comprises elements that “behave like” paragraphs).
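As an illustration, here is a small sketch of those global attributes applied to a line of verse; the values are invented:

<!-- @xml:id assigns a unique identifier, @n a number or label, @rend a rendering hint -->
<l xml:id="poem1-l1" n="1" xml:lang="en" rend="italic">A first line, rendered in italics</l>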

We ended with an exercise on creating a customized schema. In the afternoon, I attended a session on Document Modelling and Analysis.

If you’re interested in learning more about TEI, you should also check out the TEI by Example project.

Please email me or post to the comments if you have any questions.


July 7, 2010


THATCamp London: Day 2

**Cross-posted from my blog.**

We’re back up and running for day 2 of THATCamp London. After yesterday’s rather haphazard note-taking, I’ll try to be a bit more coherent today. That being said, I’m still writing “on-the-fly,” so please forgive any grammatical errors or shifts in verb tense.

I’m so excited that this is really still just the beginning of the Digital Humanities extravaganza! The main conference starts this afternoon, and the programme is jam-packed with interesting sessions. I’m presenting on Friday in a panel entitled “Understanding the ‘Capacity’ of the Digital Humanities: The Canadian Experience, Generalised” with Ray Siemens, Michael Eberle-Sinatra, Lynne Siemens, Stéfan Sinclair, Susan Brown, and Geoffrey Rockwell (you can view the abstract here).

But back to today’s festivities. I am very happy to be sitting in on two “social web” sessions.

Session 1: Critical Mass in Social DH Applications
A fascinating and stimulating discussion on how to achieve critical mass in social applications and how to build DH communities. We began by looking at some successful projects (namely Zotero, which has roughly 1.5 million users and 300,000 daily users, and, in the commercial realm, Facebook and Twitter). As academics, we don’t tend to think about marketing, but maybe it is something we need to learn. Our discussion covered finding an audience, getting people to use our applications, and getting input and encouraging participation from a large community. We discussed the importance of openness and the necessity of aggregating tools and services (which seems to be an ongoing theme at #thatcamp, at least in the sessions that I am attending).

Our main question was how to achieve critical mass in the Digital Humanities. We determined that there are a few important factors in even starting to build a community, including:
– small group sharing
– low barriers
– carefully choosing a platform that will support the community

I talked a bit about our experience with the EMiC Online Community. While we didn’t have a lot of success with the first iteration of the social network (using Drupal), we learned from our mistakes, and the new site with the WordPress back-end is working beautifully (thanks to @jcmeloni for all of her help getting things up and running, and to our participants who are blogging up a storm!).

See the Google Doc for the session.

Session 2: Outreach and Engagement
This session dove-tailed nicely with the previous one. Dan begins by discussing the 9/11 Digital Archive, a site for the cultural-social history of the day (~35,000 contributors) that includes digital photographs, stories, and video. He talks about the success of crowdsourcing as well as some interesting usage patterns for the project: its users included students who used the 9/11 archive for school projects, a general audience, and a scholarly audience (historians, unsurprisingly, but also linguists who were studying teenage slang in the year 2000). Sometimes your unanticipated audience becomes your most powerful user group.

The session focused mostly on the importance of being aware of your users and how one goes about establishing user needs. I provided the example of our EMiC group at the Digital Humanities Summer Institute (DHSI) this past June. On the Sunday before the DHSI, the EMiC participants gathered together for a pre-institute meeting. We set everyone up with user accounts and encouraged them to blog on the EMiC website and tweet (@emic_project; hashtag: #emic) as much as possible during the institute. On the final day, we held a lunch meeting and got feedback. We came out with some fantastic ideas about how to promote the community space, and I think that the EMiC user testing serves as a great example of how we might enable and empower, on a small scale at least, DH communities.

I think that usability testing is a crucial part of the outreach process, but as a group we agreed that it isn’t done as much as it should be. There are a number of ways to perform user testing, and if you don’t have a handful of testers at your fingertips, you can still get it done using professional services, such as those provided at usertesting.com or trymyui.com, or with the Silverback app. We also discussed the tension between a simple (or dumbed-down) interface and high-level functionality. Looking back to our previous session, it emerged that we should adhere to the principle of, as Dan said, “low walls, high ceilings.” I think that Google provides a great example of this (a topic for another post).

So, here are some of the take-home messages for building outreach and engagement into DH projects and applications:
– Interface: fast and simple, at least to start. A unified point of search is important. [Again: the Google model.] This links back to our discussion in the previous session.
– Laypeople may not understand the potential of the data for computational methods. Dan suggested that the provision of a “recipe book” (tutorial) might help users discover higher-level functionality.*
– Anticipate user needs (and build in a plan for unanticipated needs).
– Start small with an easy entry point. Build outwards.
– Be critical, be proactive: Why do we want to do outreach? Who is our audience? Remember there will be an intended and an unintended audience. The key is knowing what users need.
– Possible outreach solution: tie projects into public school curriculum objectives; provide lesson plans for teachers as part of your project.

We finished with a short discussion on the topic of training users and scholars. Again, I think this is something that EMiC does really well.

Other projects we discussed, at one point or another:
Library of Congress Flickr Stream
BAMBOO
DHSI

*I think the recipe book, in particular, is a great idea, and I invite EMiC participants, as well as other editors, to write their own research recipes (as blog posts to our EMiC Online Community; to sign up for a blog account, please email me).

Day 2 Wrap-Up:
I really enjoyed the discussions at THATCamp today. Now it’s time to move from the pre-conference conference to the conference proper. I’m very much looking forward to the next few days!

See my THATCamp: Day 1 Report


July 6, 2010


THATCamp London: Day 1

**Cross-posted from my blog.**

Today is the first day of THATCamp London, and I can already feel my inner geek singing with joy to be back with the DH crowd. In the pre-un-conference coffee room, I met up with some friends from DHSI (hello Anouk and Matteo!). Here are my “written on the fly” conference notes (to borrow from Geoffrey Rockwell’s methodology for his DHSI conference report):

– We begin in a beautiful lecture hall, the KCL Anatomy Theatre and Museum. I already feel the intellectual juices flowing.
– Dan Cohen provides introductions and a history of THATCamp. He notes that unstructured un-conferences can be incredibly productive (we are creating, synthesizing, thinking) and recalls that the first THATCamp was controlled chaos.
– Dan sets some ground rules. He is adamant: “It’s okay to have fun at THATCamp!” (Examples: a group at one THATCamp played an ARG with GPS; another created robotic clothing!)
– We are asked to provide a 30-second to 1-minute summary of the proposals before we vote. Other sessions are proposed as well. Looks like a great roundup.

Sessions related to my own research that I am interested in attending:
– social tools to bring researchers and practitioners together
– living digital archives
– Participatory, Interdisciplinary, and Digital
– critical mass in social DH applications
– visualization

In my mind, the winner for best topic/session title is “Herding Archivists.”

The beta schedule of the conference is now up: http://thatcamplondon.org/schedule/

Session 1: Data for Social Networking
The main questions and ideas we consider:
– What kinds of methods and tools are people using for analysing data?
– What are the ethical issues in data collection and gathering?
– How do you store ‘ephemeral’ digital content?
– What do we want to find out from our social network data?
– What tools exist for social network interrogation and visualization?
– Our wishlist for working with social network data …

You can also check out the comprehensive Google Doc for the session.

Session 2: Stories, Comics, Narratives
– Major issues: 1) Standards, 2) Annotation, 3) Visualization
– narratives and semantic technologies
– the difficulty of marking up complex texts such as comic books and TV shows
– Dan Cohen: how might we go about standardizing or making available different documents? Is markup always the answer?
– One participant asks: does it matter what format the document is in as long as the content is there?
– Once again, standardization is a key question. Once the data is collected, shouldn’t it be made available?
– The question of IP and copyright is also raised and generates some heated discussion.
– “Semantic Narratives” and the BBC’s Mythology Engine.

Session 3: Digital Scholarly Editions
– A productive round-table on the future of the digital scholarly edition.
– Major issues: standardization, resources, audience

For discussion notes, please see the Google Doc for the session.

Session 4: Using Social Tools for Participatory Research: Bringing Researchers and Practitioners Together
– A framework for academics to connect
– Finding connections, drawing on enthusiasm and community: http://en.logilogi.org/#/do/logis and http://www.londonlives.com
– We need tools that collate information and resources

See the Google Doc for the session.

All in all, it was a very productive day.

And just for fun: Doctor Who Subtitle Search (Thanks, Anouk!)