Archive for October, 2008

Participation Art Online

Wednesday, October 22nd, 2008

Last Friday’s studio talk by Amber Frid-Jimenez was both inspiring and informative. The lively question-and-answer session that followed ranged from surveillance to the commercial art world to the dilemma of ending an online community.

In the talk, Amber positioned her work, “participation art online,” as an intersection of performance art and early networked communication. She began by citing Ed Ruscha, whose collection of writing “Leave Any Noise at the Signal” inspired the title of this talk as well as her thesis (S.M. 2007) for the Media Lab. She covered ground from Fluxus and the Situationists to early BBS communities and Alternate Reality Games, to name just a few touchstones. Viewing her work in the light of these earlier artistic and cultural movements, one can see the implicit political and artistic power present in online interaction and collaborative creation.

There is so much to explore and think about! Below are some links to Amber’s work, as well as to some of the artists she mentioned. Please feel free to add links of your own in the comments.

Zones of Emergency: http://www.zonesofemergency.net/

Reflect Delay: http://plw.media.mit.edu/people/amber/public/mistydawn/
PLWire Telephone Tag: http://plw.media.mit.edu
Emma On Relationships Call-In Show http://www.amberfj.com/emma
OPENSTUDIO: http://www.amberfj.com/openstudio
Creative Browser: http://www.amberfj.com/highlights/browser.html

Frank, Ze. The Show. http://www.zefrank.com/theshow/
etoy Corporation: http://www.etoy.com
i love bees: http://www.ilovebees.com
Google Will Eat Itself: http://gwei.org

Social Media Classroom launches

Monday, October 20th, 2008

The Social Media Classroom and Collaboratory, a project spearheaded by Howard Rheingold and funded by the MacArthur Foundation, launches this month. I’ll let them explain:

The Social Media Classroom (we’ll call it SMC) includes a free and open-source (Drupal-based) web service that provides teachers and learners with an integrated set of social media that each course can use for its own purposes—integrated forum, blog, comment, wiki, chat, social bookmarking, RSS, microblogging, widgets, and video commenting are the first set of tools. The Classroom also includes curricular material: syllabi, lesson plans, resource repositories, screencasts and videos. The Collaboratory (or Colab) is what we call just the web service part of it. Educators are encouraged to use the Colab and SMC materials freely, and we host your Colab communities if you don’t want to install your own.

As Sarah Perez at ReadWriteWeb explains, “students need a classroom where learning is a more participatory experience and where the tools they use in their everyday lives — social networking, videos, chat — aren’t checked at the door.” In addition, students who aren’t familiar with these tools — and yes, as Siva Vaidhyanathan’s recent article on the myth of digital natives points out, many aren’t — can start to learn the new literacies required for living in a networked world (something MIT’s Project NML, another MacArthur-funded initiative, has been working on).

Metamedia, one of HyperStudio’s earliest platforms, was designed to serve a role similar to SMC’s. Teachers can upload videos, documents, images, or audio files to create collections, which students can then explore, comment on, or share with others. Students can also upload their own materials. While Metamedia has reached its conceptual end, it continues to be used in classrooms to share and discuss multimedia materials. At HyperStudio, we’ve discussed the idea of making Metamedia an open-source tool like SMC — who knows, perhaps some of SMC’s Web 2.0 functionality could then be easily integrated with Metamedia, giving the project new life.

(Thanks to Jess for alerting me to this! For more of Howard Rheingold on participatory learning, check out last month’s HASTAC Scholars discussion.)

Rethinking Interactivity in the Digital Archive

Friday, October 17th, 2008

As I’ve discussed at length elsewhere, I’m currently researching moving parts in books for my thesis on seventeenth-century volvelles, or spinning paper discs used to generate language. Unfortunately, digital archives have not been helpful in either identifying or studying these objects. Looking at images like this one –


– only reminds me how different these pages were in person, when I could use my thumbnail to gently turn the wheels against each other, or wiggle the surprisingly sturdy thread contraption holding it all together.

I don’t mean to fetishize the book. But, as our research increasingly relies on facsimiles — from fac simile, literally “to make similar” — it’s worth asking: what gets lost in the digital archive? What is flattened on the screen?

Despite persistent beliefs about so-called “print culture”, paper is not two-dimensional, and the codex does more than merely store and transmit text. Books are tactile objects, small sculptures designed to be folded, touched, torn, and written on, from the Old English writan, meaning to score a surface the way a stylus marks clay or papyrus. The most playful and imaginative authors understand this and exploit the expressive power of their medium, using the book to teach anatomy with paper flaps:

[From Thomas Gemini's English language version of Vesalius's anatomy (1543); various layers of flaps lift up to reveal different views of human anatomy.]


– or calculate the position of the stars with spinning paper discs:

[From Peter Apian's absolutely gorgeous Astronomicon Caesareum (1540); these layered volvelles and threads calculate positions of planets and stars]

– or simply depict certain beliefs about language, as Georg Philipp Harsdörffer does in his Fünffacher Denckring der Teutschen Sprache (1651, pictured above), used to automatically generate German words. We see digital archives, faceted browsing, and visualizations as having a certain depth — you can zoom in, we say, or drill down — yet, ironically, depth is precisely what is lost when we re-frame the printed page as a digital image. We should interrogate how the digital archive mediates our relationship to the objects we study with as much vigor as we argue over how writing transformed oral culture, or how print transformed scribal culture.
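Mechanically, a volvelle like the Denckring is a combinatorial engine: aligning the rings concatenates one segment from each ring into a candidate word. A minimal sketch of that principle (the rings below are invented for illustration, not Harsdörffer’s actual five-ring layout):

```python
import itertools

# Hypothetical syllable rings -- illustrative only, not the
# historical contents of the Denckring.
rings = [
    ["ge", "be", "ver"],         # prefixes
    ["sprech", "denk", "bind"],  # roots
    ["en", "ung", "bar"],        # endings
]

def spin(rings):
    """Yield every string the aligned rings can produce."""
    for combo in itertools.product(*rings):
        yield "".join(combo)

words = list(spin(rings))
print(len(words))   # 3 * 3 * 3 = 27 candidate strings
print(words[0])     # "gesprechen"
```

The device makes no distinction between real words and nonsense; like the paper original, it simply enumerates the combinatorial space and leaves judgment to the reader.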

Will the digital archive reinforce our often misguided notions about the fixity of print, or the flatness of the page? What would the study of volvelles or book flaps look like in a digital space?

Last year, HyperStudio sponsored a talk entitled “Harlequin Meets The Sims,” by Jacqueline Reid-Walsh. Reid-Walsh has done fascinating research on the history of children’s interactive narrative media, digging up paper doll games, puzzles, and flap books from the eighteenth and nineteenth centuries. Because the materials she works with are fragile and little-known, she’s turned to digital humanities labs like HyperStudio to digitize and display them.

The only problem: no one has come up with a good digital solution for capturing what it feels like to cut out and play with paper dolls, or flip the page of a pop-up book to reveal a small paper universe. And, of course, we never will. The British Library’s Turning Pages technology is neat, but paper is not a screen, and a screen is not paper. Instead of trying to “recreate” these experiences in a virtual space, thereby pretending there’s a one-to-one correspondence between the two technologies, we should build on the digital archive’s strengths (broader access to rare materials, smart searches, the ability to manipulate and annotate the facsimile without destroying the original), and be honest about its possible weaknesses — what it elides, and how it frames the book.

Upcoming Event: Bamboo Workshop

Wednesday, October 15th, 2008

This Friday, digital humanists from all over the world will gather in San Francisco to participate in a workshop organized by Project Bamboo. Kurt and Pete will be attending, and we here at HyperStudio look forward to seeing what develops!

In their own words,

Bamboo is a multi-institutional, interdisciplinary, and inter-organizational effort that brings together researchers in arts and humanities, computer scientists, information scientists, librarians, and campus information technologists to tackle the question:

How can we advance arts and humanities research through the development of shared technology services?

One of the most exciting aspects of Project Bamboo is that it is deeply collaborative. Bamboo offers a site of synergy across disciplines and across institutions. It remains to be seen what will ultimately be produced by Project Bamboo. At the very least, they are offering a valuable contribution to the evolving cyberinfrastructure of scholarship.

The first series of workshops was held last spring in Berkeley, Chicago, and Paris. The outpouring of enthusiasm was so great – and so many people and institutions wanted to participate – that a fourth workshop was added in Princeton. These workshops, part of an initial planning phase, aimed to begin mapping the field and imagining possibilities. Notes from the sessions can be seen on Project Bamboo’s wiki. The next series of workshops is meant to build upon these previous conversations and identify next steps and organizational principles for future work.

Once Kurt and Pete return from the workshop, we’ll post more on their experiences at the conference.

Tagging Art

Friday, October 3rd, 2008

Major art museums have embarked on a project to include social tagging – by people like you and me – to increase the findability of objects in their collections. The Steve Tagger makes it easy for people to describe works of art in their own words, which can then be used by others. Steve: the Museum Social Tagging Project offers a suite of open-source tagging tools, with participation from the Metropolitan Museum of Art, the Guggenheim, and the San Francisco MoMA, among others.
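Under the hood, social tagging of this kind amounts to building an inverted index from user-supplied terms to objects. A toy sketch of the idea (the names and data model here are hypothetical, not those of the actual Steve tools):

```python
from collections import defaultdict

# A toy inverted index from tags to artwork identifiers.
index = defaultdict(set)

def tag(artwork_id, term):
    """Record that a visitor described this artwork with this term."""
    index[term.lower()].add(artwork_id)

def find(term):
    """Return all artworks anyone has tagged with the term."""
    return sorted(index.get(term.lower(), set()))

# Hypothetical identifiers, just for illustration.
tag("met-435809", "sunflowers")
tag("met-435809", "yellow")
tag("sfmoma-102", "yellow")

print(find("yellow"))  # ['met-435809', 'sfmoma-102']
```

The findability gain comes precisely from the fact that visitors’ terms (“yellow,” “sad,” “dog”) rarely overlap with curatorial metadata, so each new tag opens another path to the object.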

Searching Images

Friday, October 3rd, 2008

I just watched the demo for Xcavator.net: Photo Search for Professionals, an image search engine that effectively uses the natural interplay between the left and right sides of the brain (linguistic processing and visual processing).

Recognize this scenario? I need an image or design to illustrate or symbolize an experience, product, or concept (something that represents the “thingness” of an idea). Criteria for success: I’ll know it when I see it.

Visual processing allows you to quickly and effortlessly pick out an image that matches something you can’t fully describe with words.

The way the demonstrator searched felt intuitive. First, you begin with a text keyword or tag, and a set of thumbnails appears as possible results. The search engine is designed to exploit the brain’s remarkable ability to scan a collection of thumbnails in seconds and zero in on the kind of image you want. Drag that thumbnail to an image search box. You can then refine the search by selecting the specific spot on the photograph with the pictorial elements the right side of your brain is responding to, such as overall color, a particular color in the photograph, the size of the object relative to the background, or whether you prefer a single object or multiple objects within a photo. Finally, you can refine the search further using associated keywords and/or an interactive color wheel.
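One plausible way to implement that “refine by color” step is to rank candidate images by how close their color is to the spot the user clicked. A toy sketch under that assumption (Xcavator’s actual matching algorithm is not public, and a real system would use far richer features than a single mean color):

```python
# Images are stood in for by flat lists of (r, g, b) pixel tuples.

def mean_color(pixels):
    """Average color of an image, as a crude stand-in for real features."""
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

def color_distance(c1, c2):
    """Euclidean distance in RGB space."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

def refine(picked_color, candidates):
    """Rank candidate images by color distance to the picked spot."""
    return sorted(candidates,
                  key=lambda name: color_distance(picked_color,
                                                  mean_color(candidates[name])))

# Hypothetical candidate images.
candidates = {
    "sunset.jpg": [(240, 120, 40), (250, 130, 50)],
    "forest.jpg": [(30, 140, 40), (40, 150, 60)],
    "ocean.jpg":  [(20, 80, 200), (30, 90, 210)],
}

picked = (245, 125, 45)  # an orange the user clicked on
print(refine(picked, candidates))  # ['sunset.jpg', 'forest.jpg', 'ocean.jpg']
```

The interesting design problem is everything this sketch leaves out: region selection, texture, object size, and the iterative back-and-forth between linguistic and visual refinement.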

Currently this photo search portal is used with stock photographs, but I’m curious if the technology would work with digitized image collections, perhaps from museums or public digital collections such as NYPL Digital Library Collection, or from private digitized collections.

Conference: Digital Humanities & the Disciplines

Friday, October 3rd, 2008

There is a Digital Humanities & the Disciplines Conference coming up at Rutgers this weekend, October 2-3. Topics include New Directions in Digital History (Dan Cohen, George Mason University) and Cyberinfrastructure and Cultural Heritage (Gregory Crane, Tufts).

Beautiful Functions

Friday, October 3rd, 2008

What makes a good visualization? Well, the NSF had some sort of answer last week, when they announced the winners of their 2008 Science and Engineering Visualization Challenge. For the most part these are scientific illustrations and, while interesting, only tangentially related to the field of digital humanities. (An aside: I was also wondering why the NSF used the term “visualization.” Is there a difference between a visualization and an illustration? A large question for another time.) Take a look; some of the work is stunning, and the entries got me thinking about why some visualizations are better than others.

One project in particular really caught my eye. Chris Harrison, a PhD student in the Human-Computer Interaction Institute at Carnegie Mellon University, collaborated with Christoph Römhild, a Lutheran pastor, to create a visualization of textual cross-references in the Bible.

Bible graph

Harrison writes,

The bar graph that runs along the bottom represents all of the chapters in the Bible. Books alternate in color between white and light gray. The length of each bar denotes the number of verses in the chapter. Each of the 63,779 cross references found in the Bible is depicted by a single arc – the color corresponds to the distance between the two chapters, creating a rainbow-like effect.
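The distance-to-color mapping Harrison describes could be sketched like this; the exact hue scale below is my own guess at one implementation, not his actual code:

```python
import colorsys

TOTAL_CHAPTERS = 1189  # chapters in the Protestant Bible

def arc_color(ch_a, ch_b):
    """Map the distance between two chapters (numbered 1..1189)
    to a rainbow hue: nearby references sit at the red end,
    distant ones toward the violet end."""
    d = abs(ch_a - ch_b) / (TOTAL_CHAPTERS - 1)       # normalized 0.0 .. 1.0
    r, g, b = colorsys.hsv_to_rgb(d * 0.8, 1.0, 1.0)  # cap hue to avoid wrapping back to red
    return tuple(round(x * 255) for x in (r, g, b))

print(arc_color(10, 10))    # zero distance -> red
print(arc_color(1, 1189))   # maximum distance -> violet
```

Drawing 63,779 such arcs, one per cross-reference, is what produces the rainbow-like effect he describes.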

Writing about his design decisions, Harrison continues,

As work progressed, it became clear that an interactive visualization would be needed to properly explore the data, where users could zoom in and prune down the information to manageable levels. However, this was less interesting to us, as several Bible-exploration programs existed that offered similar functionality (and much more). Instead we set our sights on the other end of the spectrum – something more beautiful than functional.

I think this choice is very interesting. When I look at this image, I want to go digging, metaphorically. I want to play with the rich data and learn. I think this is what a successful visualization does. It presents us with something interesting at first glance, while also suggesting a need to explore further. I would have liked it even better if I could have interacted with the data. Still, I respect Harrison’s decision to seek out an alternative to the usual route. (I also appreciate that he provides a hi-res image, so that if one liked, one could really look at the data, static though it may be.) With new fields, new technologies, new ideas, there is always something gained in exploration. Even though this visualization is not “usable,” it invites the viewer to understand a text in a new way. And, it really is beautiful.

Further Reading:
The winning entries for the NSF’s 2008 Science and Engineering Visualization Challenge will appear in this month’s Science. They can also be viewed in an online gallery.

Chris Harrison’s many other interesting projects can be found on his website.