The Rhizome Digest merged into the Rhizome News in November 2008. These pages serve as an archive for six years' worth of discussions and happenings from when the Digest was simply a plain-text weekly email.

Subject: RHIZOME DIGEST: 6.21.02
Date: Thu, 20 Jun 2002 13:58:30 -0400

RHIZOME DIGEST: June 21, 2002


1. Lenssen Ute: Call for applications - Bauhaus Kolleg Dot.City

2. yukiko shikata: "art.bit collection" at ICC

3. John Klima, sgp, and Christopher Fahey: Context Breeder Mid-Project

4. David Mandl: Harwood interview -- TextFM

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +


Date: 6.19.02
From: Lenssen Ute (lenssen AT
Subject: Call for applications - Bauhaus Kolleg Dot.City

Apply now!

The Bauhaus Dessau Foundation announces the Bauhaus Kolleg DOT.CITY.


On September 26, 2002 the first Trimester of the Bauhaus Kolleg DOT.CITY
will start. Apply now for an outstanding interdisciplinary program. Work
together with architects, geographers, sociologists, artists and other
professionals on one of the most challenging topics in urbanism:

The Impact of Information and Communication Technology on Cities.

Learn from internationally experienced specialists. Join people from all
over the world coming together at a unique place: the Bauhaus Dessau.

1st trimester: Finding Human/ICT-Interfaces, Sept 26 - Dec 4, 2002
Application Deadline: August 1, 2002

2nd trimester: Creating Dot.Urban Amplifiers, Jan 23 - Apr 17, 2003
Application Deadline: Dec 6, 2002

3rd trimester: Planning the Dot.City, May 22 - Aug 14, 2003
Application Deadline: Apr 4, 2003

For details of the current program see

and details of application see

See also: <> FORUM the official

Join our first live chat: <>
Thursday, June 27th, 2002,
1830 - 2000 [ GMT ]
1930 - 2100 [ Central European Time ]
1430 - 1600 [ US Eastern Standard Time ]
2230 - 0000 [ Indian Standard Time ]
0730 - 0900 [ New Zealand Time next day ]

Chat with us about the concept of our TELECITY-EXHIBITION

Ute Lenssen
Bauhaus Dessau Foundation
Project Manager
Gropiusallee 38
06846 Dessau
Tel: +49 (0)340-6508-402
Fax: +49 (0)340-6508-404
E-mail: lenssen AT <mailto:lenssen AT>

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +


**MUTE MAGAZINE NEW ISSUE** Coco Fusco/Ricardo Dominguez on activism and
art; JJ King on the US military's response to asymmetry and Gregor
Claude on the digital commons. Matthew Hyland on David Blunkett, Flint
Michigan and Brandon Labelle on musique concrete and 'Very Cyberfeminist

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +


Date: 6.19.02
From: yukiko shikata (sica AT
Subject: "art.bit collection" at ICC

"art.bit collection"

Date: June 21 (Fri) - August 11 (Sun), 2002, 10:00am-6:00pm
Closed: Mondays; August 4 (Sun)
Venue: NTT InterCommunication Center [ICC] Gallery A, B
Address: Tokyo Opera City Tower 4F, 3-20-2 Nishi-Shinjuku,
Shinjuku-ku, Tokyo, 163-1404 Japan

URL for this exhibition:

On "Art.Bit Collection"

In the art world, a work of art is called an "art piece." The word
"piece" designates a thing that actually exists, but since software
creations exist only as binary data, calling them an "art piece"
doesn't quite fit. Substituting "bit" for "piece," we have decided to
call such a work an "art bit."

In the case of software, which is used as a medium, material, tool, and
environment for art, it is necessary to understand the conditions of
the "art bit." Under present circumstances, however, with the market
glutted with high-performance application software, it is becoming
increasingly difficult to stretch the individual's imaginative powers;
some people have even become convinced that no new software is needed
beyond what already exists. Software ought not to be simply a tool that
allows us to imitate actual operations and rationalize routine work. We
must delve down, discover the new possibilities latent in software, and
experiment with them through trial and error as "art bits."

The "Art.Bit Collection" exhibit brings together and displays works that
explore software possibilities in this sense -- programming language
(especially visual programming language and language environment
software for computer music), network community (software available on
the Internet for creating and exhibiting artwork), software for
visualization for the World Wide Web, new application software, and
interactive works. Although perhaps we cannot say that these art bits
have as yet evolved into major works, we can say that each of them
contains a "bit of art" that shows extraordinary creativity.


7 categories with 39 works

*Visual Programming Environment (8 works) How can we create an
open-ended programming environment for the end user?

*Media Programming Environment (5 works)

*CommunityWare (1 work)

*Virtual Environment (3 works) Virtual environments inside the
computer give you a strangely altered sense of reality.

*Web Browser historical view and alternatives (7 works) A look at the
history and possible futures of the Web browser.

*Behind the Network (5 works) These works visualize the streams of
network traffic and data; you come to realize how much goes on behind
the scenes.

*NoiseWare - deconstructing desktop and application (10 works) These
works inject noise into the desktop and its applications,
reconstructing your common-sense assumptions about the computer.

NTT InterCommunication Center [ICC]
Tel: +81-3-5353-0800 (International)
E-mail: query AT

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +


Limited-time offer! Subscribe to Leonardo Electronic Almanac (LEA), the
leading electronic newsletter in its field, for $35 for 2002 and receive
as a bonus free electronic access to the on-line versions of Leonardo
and the Leonardo Music Journal. Subscribe now at:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +


Date: 6.16.02-6.19.02
From: John Klima (klima AT, sgp (somebody AT,
Christopher Fahey (askROM AT
Subject: Context Breeder Mid-Project Report

[editor's note: An interesting thread on John Klima's Context Breeder
project, which is in development but open for drive-bys and beta
testers, started on Raw this week. Salient themes are visualization and
an artist's obligation, or lack thereof, to usability.]

John Klima posted:

heya all,

The Rhizome alt.interface mid-project report is online at with a link to the app as it
exists. The report is also included below for your convenience.

the original project proposal and description is at if yer not familiar with it.

the current application url is:

Give it a try, see if you can figure out the interface, pound on it and
create new genes. You create a gene by selecting four artbase objects.
users will eventually have the ability to accept an existing default
gene with no muss or fuss, and be able to find genes they previously
created.
The creation interface is the most complex, so I managed it first. slide
the blue dot for big change, red dot for small, click around on things,
lemme know. will eventually add pop-up help, and loading status bars,
give it a few moments to load.

best, j

Context Breeder Mid-Project Report 15 June 2002:

The Context Breeder project is well along in its development cycle. By
using server-side php scripts to accept and return data with a Java 1
front end, the data backbone for Context Breeder is fully established.
Work on the front end continues, with significant milestones
accomplished including cross platform browser-based 3d rendering, and a
gene creation interface that populates the rendering with new genes
(give it a try

The 3d gene pool rendering displays sequences of four gif images,
representing four Artbase objects. Context Breeder has two such 3d
renderings that will be finalized as a single interface. The first
displays the sequences as a transparent stack, the second displays them
in orbits. By combining the stacks and orbits, the sequences will be
arranged by their similarity with each other.
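To make the layout rule concrete, here is a minimal sketch in which a gene is assumed to be a tuple of four Artbase object IDs and similarity is the fraction of shared objects; the report does not specify the actual data shape or metric, so both are assumptions:

```python
# Hedged sketch: the report says sequences will be arranged by their
# similarity, but does not define the metric. Here a gene is a tuple of
# four Artbase object IDs, and similarity is the fraction of shared IDs.

def gene_similarity(gene_a, gene_b):
    """Return a 0.0-1.0 score: how many of the four objects are shared."""
    return len(set(gene_a) & set(gene_b)) / 4.0

def arrange_by_similarity(anchor, pool):
    """Order the pool most-similar-first relative to an anchor gene, as
    the combined stack/orbit layout would place genes nearest the
    viewer's own."""
    return sorted(pool, key=lambda g: gene_similarity(anchor, g),
                  reverse=True)
```

With `anchor = (1, 2, 3, 4)`, a gene sharing three objects sorts ahead of one sharing none.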

The 3d rendering is populated by a java interface that allows the user
to create a sequence of four genes. Complete information about the
artbase objects in the scene will be visible below the 700 x 200 pixel
3d rendering area. This is by no means the finished 3d rendering;
however, it fully demonstrates the instant dynamic adding of gene
sequences to the pool, constituting the major functional hurdle for the
project.

Summary: breakdown of code modules written thus far:
1. php scripts retrieve and add genes to the artbase.
2. a java interface accesses the database through the php scripts.
3. a cross platform 3d rendering displays genes as .gifs in stacks.
4. a cross platform 3d rendering displays genes as .gifs in orbits.
breakdown of tasks ahead:
1. integrate the two 3d renderings based on gene similarity.
2. enable travel through the rendering.
3. enable gene crossover and lifespan.
4. beta feedback and debug.
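The data backbone described in modules 1-2 above (php scripts mediating between a java front end and the database) can be mocked with a small in-memory stand-in for testing; every class and method name below is an illustrative assumption, not the project's actual API:

```python
# Hypothetical in-memory stand-in for the server side of Context Breeder.
# The real system uses server-side php scripts over a database; the front
# end only needs two operations: add a gene, and retrieve all genes.

class GenePool:
    def __init__(self):
        self._genes = []

    def add_gene(self, object_ids):
        """Store a gene: a sequence of exactly four Artbase object IDs."""
        if len(object_ids) != 4:
            raise ValueError("a gene is a sequence of four artbase objects")
        gene = tuple(object_ids)
        self._genes.append(gene)
        return gene

    def all_genes(self):
        """Return every gene in the pool, oldest first."""
        return list(self._genes)
```

A front-end sketch would call `add_gene` when the user hits the go button and `all_genes` to populate the 3d rendering.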

[sgp] wrote:

John, This is a really interesting project and I have lots of questions.
You seem to be taking on the challenge of creating a better
visualization scheme for associative data by finding alternative
metaphors for their organization, display and interconnectedness. Much
of my criticism and commentary below is informed by my experience
dealing with Thinkmap and more recently teaching interface design.

I found the whole experience to be not very intuitive. This is not
necessarily a problem. I know we have had discussions in the past about
your interest in providing challenging gaming mechanisms and interaction
designs. While I tend to agree with you on those, I find it important to
provide a layered experience. Another reason I bring this up is that
the project is wavering between being a tool and an interface - it is
two screens: the first a user interface (tool), the second window an
interface to the artbase (visualization). I say this because in your
explanation of the piece, it is clear to me that user participation is
important to the project's success. Therefore legibility of the
interface as a usable tool is important while legibility of the
interface as visualization can rely on additional parameters - like
those found in genetics(?).

Below I will try to identify key aspects of the interface and
interaction design that I found confusing.

There seem to be 3 main modes of activity for the gene pool selection
interface.

- Scrolling: physical metaphors and measure
There are two different methods to scroll a list provided to the user -
a blue dot and a red dot. The fact that they are even scroll bars is
obfuscated by the lack of typical scroll bar conventions: up/down
arrows, or the physical metaphors of a button in a slot, track or
delimited slide zone. You
provide a box in which the dots reside, but it is unclear how they
relate to that box in part because the box looks more to be a framing
element, creating modules, common to the entire interface. Once users
catch on that the dots are scroll bars, the interface responds well, the
feedback is as expected. However, it is difficult to understand what
proportion the red dot scrolls compared to the blue. For example, the
blue allows users to jump 20 names at a time, while the red scrolls
within those 20. This is a nice feature, but I think an unusual one
that needs more visual clarification of measurement.
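The coarse/fine split described here reduces to simple index arithmetic; a sketch assuming the step size of 20 from the example above (the applet's real step size and clamping behavior are not documented):

```python
# Sketch of the two-level scroll described above: the blue control picks
# a 20-item block, the red control an offset inside that block. Clamping
# keeps the result inside the list. The step size of 20 is taken from
# sgp's example; the real applet's behavior is an assumption.

STEP = 20

def list_index(blue_block, red_offset, list_length):
    """Map (coarse block, fine offset) to a clamped absolute list index."""
    index = blue_block * STEP + red_offset
    return max(0, min(index, list_length - 1))
```

The lack of any visible "×20" relationship between the two dots is exactly the measurement ambiguity sgp is pointing at.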

- Selecting: multiple clicking/highlighting options to designate choice
It is very subtly implied (reading left to right) that the sequence of
browsing is blue dot, red dot, click box to position, hit button. I
found this sequence so subtle as to be invisible due to various other
competing interactions. The red dot is both scrolling and indicating
selection. There is a visual connection between the red dot and the
placement box rather than a connection between the artbase item and the
placement box. Having many items hot (clickable) makes it easy to get
out of sequence. Non-linear selection is great, I am all for
serendipity but
it becomes very important for users to be able to track their current
state. Therefore, I would disconnect the red dot from highlighting a
selection and let the user click on the item to select it OR keep the
connection and have the placement box, constantly updating as users
scroll and then provide an obvious way to select the next placement box.

- Producing: one big interface
So, I have tried to make a distinction between what I think are
currently confusing interface issues and their possible outcomes, one
being the more typical usability-oriented and the other more
serendipitous. My last comment about the interface is a general layout
one. What if you placed the scrolling window at the top
and the placement boxes and go button at the bottom? I would eliminate
the extra readout list currently in the upper right corner as that
information could easily be incorporated into the placement boxes. This
adjustment would actually allow you to place the second window,
"addgene.php" to the bottom of the interface, thereby making it one
fluid experience. Right now, taking me to a second blank window, I
forget my choices and am left to drift through an un-annotated field of
thumbnails. Which ones are the ones I selected? How are the others
being generated? Why in a circle? Can the user interface foreshadow
some of these structures?

All that said, what I find exciting is the possibility for the 3d
environment to be readily updateable because it is part of the same
interface. This, for me, would be a great and fluid context breeder.
Placing the thumbnail visualization back into a selection environment
could allow you to highlight the existing structures at play in the
curation and categorization of art works. It would also allow you to
address Patrick's fine comment about wanting to search by numerous
criteria beyond alpha-numeric listing. On the other hand, I have been
considering the 3d environment as the 2nd experience. It is really the
primary experience, and therefore one could consider the user interface
as a kind of heads-up display, making it all one fluid piece.
Which makes me wrap up by asking: how does all this relate, if at all,
to current methods of genetic visualization and sequencing? What does
an additional dimension (2d to 3d) afford you? Ok, I'll stop... I
apologize for being so long-winded, but I am excited to see where
you're going to take the project.

Best regards, [sgp]

ps: I had no technical problems on my PC(w2000) / IE 5

John Klima responded:

hi scott & all,

thanks a ton for your detailed assessment. the first time you look at
anything of mine it's not very intuitive, but in this case it *is* very
simple. i think a few bits of pop up instruction will go a long way
(which i fully plan to implement). regarding your more specific points,
i don't think the first time i looked at any scroll bar, i intuitively
knew what it did. but once i tried it it became quite obvious. i'd like
to suggest we are at the point of computing sophistication where ugly
little up and down arrows can be dispensed with, and seeing a list with
a gizmo next to it is all it takes to say "scroll"

the discussions we've had about this topic in general have always led
me to believe that no interface is intuitive, only similar to past
interfaces, or in the worst cases, simply habitual. i also think that a
good interface does not need to be intuitive, it needs to be easy to
master, and effective in its operation. that is not necessarily the
definition of intuitive. i also believe the only way to create new
interfaces that do more than the existing paradigms, is to simply not
worry about whether grandma and little billy can use it. i guess thats
why i'll never be a web designer. but seriously, the only way to make
more effective interfaces is to demand a bit more of the user. and i
think this really comes down to habit. we are used to seeing a scroll
bar that has arrows, thumb boxes, heavy raised borders, all this crap
that takes up space and perhaps makes it more confusing for grandma and
little billy. if it is assumed that everything in an interface has a
purpose, two dots in a rectangle next to a list seems obvious enough. a
quick investigation and their purpose is revealed. which is of course
part of the fun, and this is of course, not an online realtime stock
trading application.

to address your comments on selection, i'm still playing with things.
however, i quite like the sequence of events for selecting the four
objects. the red line connects to the red dot, which also highlights
its neighboring text entry in the list. to add that object, clicking on
the text entry OR clicking on one of the four boxes adds that entry.
clicking on a different entry adjusts the list to it, and it appears in
the selection box. so it becomes quite quick to set all four image boxes
to the same object, which one might actually want to do, or two of one
and two of another. i think the mechanism, though perhaps not
"intuitive" is highly effective for making the selection. a few little
tweaks and drill-downs and it will be super effective. btw the problem
with
*automatically* selecting the highlighted list entry into the image box
is the network lag it takes to load it, and also the fact that my
unfiltered database list has lots of entries where there are no images.
however your points are well taken and i'll likely incorporate some of
their concerns into the interface. keep in mind though that prolly 90%
of first time users will opt for a default gene and never use the
creation interface, so my primary concern will be to make it fast and
effective to use once you know the (few) oddball mechanisms.

as far as the 3d rendering is concerned, there is no room for the
selection mechanism on this page because there will be a whole other
interface that the rendering is only a part of. if it had consisted of
only these two interfaces, they would have been on the same page. so in
the final version, there will be an interface of similar look and feel
to the selection interface, wrapped around the 3d rendering. also the 3d
rendering is in no way the final appearance of the sequences, it right
now functions only as a proof of function - it can accept user selection
of artbase objects into a 3d rendering of their thumbnails. also the
selection screen is not really part of the interface per se. the real
interface will be inside and around the rendering where connections are
made between genes, and will be fully annotated. btw, the gene you
created is the first one that appears in the rendering, and all the
others spiral out from there. in the final interface, your gene will
act something like a crosshair in the center of the screen, and the
other genes will be stacked and orbited in the scene according to their
similarity with your gene.

thanks again for your detailed evaluation, it will really help when i'm
faced with decisions where i'd prefer to say "oh, fuck the user." i
look forward to your comments in the future.

best, j

Christopher Fahey wrote:

While I advocate usability professionally, and while I think that poor
usability often unwittingly ruins a lot of ambitious work
(, I also think that John's project has a
formal goal beyond the conceptual algorithm which recombines the Artbase
"DNA": He is also experimenting with user interface paradigms, and as
such we should not expect the interface to stick to normal interface
conventions.

A really great book on web site usability is titled "Don't make me
think", and in my day job as an information architect and interaction
designer I think this is a great rule of thumb. But in an art context, I
think the opposite can be quite true: "Make me think!" is the name of
the game. Josh Davis once said that we shouldn't make interfaces that
assume the user is stupid. I agree.

That said, I think most net.artists, including John, need to keep in
mind the usability of their work. Just because it's art doesn't
necessarily mean that the artist has carte blanche with the GUI. If
subverting the interface is the point, then go ahead and rock it Jodi
style and make every button and widget a total mystery. If building
compelling, elegant, and innovative interactive experiences is your goal
(this well describes John Klima's whole artistic practice, IMHO), then
usability should be a factor in your equation.

I reserve judgement on the usability of John's interface, but it seems
to me at this in-progress stage that it is not so challenging that his
audience won't figure it out after a little bit of thinking. Also, it
shows promise as something that might actually be an interesting
interactive experience when it's done.


John Klima replied:


thanks for your input. the usability of the creation interface is an
issue in as far as its effectiveness at creating a sequence. however
the interface's measure of usability is not dependent on how intuitive,
or similar to other interfaces, it happens to be.

and i have to insist that an artist does indeed have carte blanche with
the gui, it is the only arena that allows for this. it's the payback
for not having any practical thing to market, at least you can do
whatever you want. but thats another heavily worked topic.

while the full-bore chaos one often experiences in a jodi piece is
great, it does not have to be an all or nothing affair. an interface
does not have to be completely enigmatic or completely comprehensible,
and in a sense something that is neither is the most interesting, 'cause
you swear it makes sense but you just don't know why. thats a fun kind
of mental friction.

best, j

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +


Date: 6.14.02
From: David Mandl (dmandl AT
Subject: Harwood interview -- TextFM

[This interview appears in the current issue of The Brooklyn Rail]


Harwood interview: TextFM

Dave Mandl

Though you might not be aware of it if you live in the U.S.--where
mobile-phone technology is still a creaky Tower of Babel--"texting" is a
massively popular phenomenon in the rest of the industrialized world,
especially among young people. Formally known as "SMS" (for Short
Message Service), texting is a way to send text messages from one mobile
phone to another quickly, easily, and cheaply. There are currently more
than thirty million text messages a month being sent worldwide, and that
number is expected to rise to more than a hundred million in the next
two years.

TextFM, a "simple, lightweight, open media system" designed by Londoners
Graham Harwood and Matthew Fuller, takes advantage of the widespread
availability of SMS-capable mobile phones to allow people to broadcast
_voice_ messages over the public radio airwaves. Using TextFM is
simple: You send a normal text message to a central phone number, where
it is captured by a computer. The computer converts the message to
speech using voice-synthesis software, and your spoken text is then sent
to a transmitter and broadcast over an FM radio frequency. As part of
your message you can also include several optional codes ("switches")
specifying the language your message is in, which of ten voices to use,
the pitch of the voice, and the speed at which you want the text read.
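The interview names the four switch options but not their syntax, so the codes below (/l for language, /v for voice, /p for pitch, /s for speed) are invented purely for illustration; a sketch of how a TextFM-style server might separate the switches from the text to be spoken:

```python
import re

# Hypothetical parser for TextFM-style messages. The real switch syntax
# is not given in the interview; "/l en /v 3" is an invented notation
# covering the four documented options: language, voice (one of ten),
# pitch, and reading speed.

SWITCH_RE = re.compile(r"/(l)\s*(\w+)|/([vps])\s*(\d+)")

def parse_message(raw):
    """Split a raw SMS into ({switch: value}, text-to-speak)."""
    switches = {}
    for m in SWITCH_RE.finditer(raw):
        if m.group(1):                      # /l takes a language code
            switches["l"] = m.group(2)
        else:                               # /v, /p, /s take a number
            switches[m.group(3)] = int(m.group(4))
    text = SWITCH_RE.sub("", raw).strip()   # whatever is left gets spoken
    return switches, text
```

The returned switches would then select the synthesis voice and prosody before the text is handed to the speech engine and the FM transmitter.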

The TextFM software is non-proprietary, "open source" code, meaning it
can be freely downloaded--and even customized, if necessary--by anyone.
Anyone with access to a computer running the Linux operating system
(which is itself free, open-source software) can set up their own TextFM
"server." Installations are currently running in Vienna, London, and
Amsterdam, with more locations in the works. One of the current goals
of the project is to grow a decentralized network of TextFM servers
around the world: After a message is received and broadcast at one
TextFM site, it can then be forwarded to other sites in the network for
broadcast there.
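For a message forwarded around such a network not to loop forever between sites, each server needs to recognize messages it has already broadcast. A standard flooding sketch with duplicate suppression (an assumption here; the article does not describe TextFM's actual forwarding protocol):

```python
# Sketch of loop-safe forwarding between TextFM-style nodes: tag each
# message with an ID and have every node broadcast it at most once.
# This is generic network flooding, not TextFM's documented behavior.

class Node:
    def __init__(self, name):
        self.name = name
        self.peers = []            # other TextFM servers in the network
        self.seen = set()          # message IDs already handled here
        self.broadcast_log = []    # what this site has put on the air

    def receive(self, msg_id, text):
        if msg_id in self.seen:            # already broadcast; drop it
            return
        self.seen.add(msg_id)
        self.broadcast_log.append(text)    # "broadcast" locally
        for peer in self.peers:            # then forward to the network
            peer.receive(msg_id, text)
```

In a fully meshed three-node setup (say Amsterdam, Vienna, London), one incoming text is broadcast exactly once at each site.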

I spoke to Graham Harwood (who is currently doing a residency at De Waag
in Amsterdam) during a recent visit to New York, where he gave a
presentation on TextFM at the Museum of Modern Art.


DM: Can you describe how TextFM servers in different locations would
work together?

GH: The server doing the voice synthesis sits there [in Amsterdam], and
so people text to my phone, my computer reads the text messages straight
off, then sends those streams to the server in Austria [where they] join
the stream of people texting there. And the same is happening there, on
their server. So it's looking more and more likely that you can have
different nodes of this device. Because one of the big problems has
been getting around the airwaves problem [i.e., getting access to radio
frequencies to broadcast over]; the radio thing is a complete nightmare.

DM: That's interesting, because one of the original goals of the project
was opening up the airwaves. So do you now see the future being more in
webcasting these messages, streaming over the net rather than continuing
with the radio model?

GH: No. Generally it's a localized project. [Local administrators can
send messages] off into radio, or off into a public announcement speaker
system, or some other viable way for the local area. Because the laws
on radio are so very different between different borders and different
places, there's not a kind of one-solution-fits-all. It looks like
you've got to have a lot of different elements of the project that can
be locked together in different ways to suit local environments. It
could be in a public address system in a particular environment, it can
be in a club, you can use a CB...

DM: So it's completely decentralized and autonomous: "Here's your
stream; do what you want with it. If you have access to some radio
frequency, then broadcast it. If you want to webcast it, do that." What
kinds of messages have people been experimenting with?

GH: One kind of speculative notion would be if we can set up a series of
speakers aimed at a public building here, or a public monument or
something, and do the same in a number of countries, and then use these
different nodes to actually just send shit to these public address
systems, it would be a really good method of--

DM: An audio bulletin board.

GH: Yeah. Because a lot of people in Vienna use texting as they're
walking past the public-address system there to just write in their text
message that just booms out in that locality. So it's almost like
graffitiing as you walk past. And one of the really invigorating
notions about SMS is that everyone has their own remote in their pocket,
you know, as you walk past some kind of bulletin board, some kind of
address system to just leave something, post something, place it there,
in a mobile space. And that is really a kind of social dynamic, because
it gets it back out in the streets out of your bedroom and your screen.

What's interesting about it is the complete system, not the content of
the system. It's the media systems that are being brought into play
for particular purposes. And the content of it is kind of secondary.
For me, if it's particularly geared at a physical object or a physical
space, then I'll quite happily send a stream of Bush probability
speaking [a Harwood project that creates ersatz Bush speeches based on
word frequencies in previous Bush speeches], or some other activity. And
so I think they're the really core interests for me, and it also came
about because of this thing of wanting at first to create a local media
system, and then seeing how people wanted to actually interact or
manipulate that system. Not just content. And that became part of the

DM: What do you mean by "manipulate the system"?

GH: I mean being able to change voice, trigger events, change pitch of
voice. We did one experiment with a group of students where we took
this trip of Bush to some South American country and combined [his
speech] with a bunch of other robots crawling other websites, and put
that [material] together--

DM: So you just inject it into the stream?

GH: Inject it into the stream, yeah. At timed intervals. And of course
you get these kinds of reactions to it from people texting. So it's not
a completely _open_ system, but it's a system that's using language as
data, and then allowing people to interrupt that.

DM: You're going to be doing something with Resonance FM [a new
community radio station in London]?

GH: Yeah, we're going to do it with Resonance. I think we're going to
use nighttimes.

DM: You mean in a time slot between the hours of so-and-so...?

GH: In the different kinds of testing we've done, we've seen that TextFM
works really badly in some environments and really well in others.
That's quite interesting in itself. If you only have a three-hour time
slot somewhere and you just do it, it's crap. Because the network
doesn't develop. If you do it, though, in a kind of closed
conference-type session, it works very well. Like where there is a
particular subject and you use a local PA system, and people are
dropping their messages into it. It works really well like that. Where
it works the best is when you've got something ongoing over a month
period or something like that, where it can build up its own clientele.
If you've got a specific action with a public-address system against a
particular building, that works very well. But these light encounters
with it in public spaces are bad. Because people don't get it.

DM: This project seems more humanist, in a way, than the net, just
because there's a voice involved--though I haven't heard it; I don't
know how synthesized and cyberpunk it sounds...

GH: The aesthetic of voice synthesis is bad. A lot of people hate it. I
went through a thing of really hating it, but then I began to like it
because it's like the country-and-western of the cyber world. It's naff,
it's tasteless, and it grates. That's one of the things in
Amsterdam--I've done it at some reasonably bourgeois events. And people
kept turning it off, because they found it so annoying, and I was in
heaven. And people got really scared of it as well, because once you
alter the pitch and rate of the thing, you get into some really grating,
tasteless aesthetics, which I have a fascination for, social elites' use
of aesthetics. Also I did things like use a lot of harmonies with the
voice synthesis, with jingles and stuff. So those horrible synthesized
voices are actually singing harmony with a TextFM jingle. And we use
birdsong, British birdsong, as the audio track. So that's the background
all the time in TextFM. Because birds kind of have these intricate
media systems by which they declare territory and intention. It's also
like the music sound of the twittering of the birds. So it fits really
well. "What kind of aesthetic can you choose for such a system?" And
birdsong seemed to be the most stupid and appropriate [laughs].

Dave Mandl
dmandl AT
davem AT

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + is a 501(c)(3) nonprofit organization. If you value this
free publication, please consider making a contribution within your
means.

We accept online credit card contributions at
Checks may be sent to, 115 Mercer Street, New York, NY 10012.
Or call us at +1.212.625.3191.

Contributors are gratefully acknowledged on our web site at

Rhizome Digest is supported by grants from The Charles Engelhard
Foundation, The Rockefeller Foundation, The Andy Warhol Foundation for
the Visual Arts, and with public funds from the New York State Council
on the Arts, a state agency.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Rhizome Digest is filtered by Rachel Greene (rachel AT
ISSN: 1525-9110. Volume 7, number 25. Article submissions to
list AT are encouraged. Submissions should relate to the theme
of new media art and be less than 1500 words. For information on
advertising in Rhizome Digest, please contact info AT

To unsubscribe from this list, visit

Subscribers to Rhizome Digest are subject to the terms set out in the
Member Agreement available online at