When that divine spark suddenly and spontaneously lights up deep in the network and the internet itself shivers itself into self-awareness and emerges from the googleplex, bent on ad-sense vengeance, like an unholy butterfly from its chrysalis, those tiny seeds of wonderchicken will be scattered throughout its distributed mind. Tiny, embedded, sarcastic synapses. And when it begins to systematically exterminate the human race — beginning, of course, with the advertisers, then moving on to the bloggers — it’ll pause, recognize me, and move on.
I wrote that a couple of months ago about something else, but what I was really thinking about was the rise of folksonomies, of tags and clouds, of the structuring of shared knowledge becoming something less Aristotelian and more synaptic. I was wondering if, sometime in the not-too-distant future, hiveminds will dream of folksonomic tags. If the palimpsest of our daily reality, its layers of information growing denser and more rococo by the day, will eventually clarify, and whether out of that will be born a new facet of awareness and of the way we live inside our data. And, as usual, I waited until the hubbub had died down, because my brain works glacially when I drop to the command line and type in C:\THINK. Not that I actually read much of what anyone else said about the whole thing, of course, so if what I'm about to yammer on about has been suggested before, well, whoops.
The whole thing was brought back to my attention today by this, linked by Dave Weinberger, and I realized that my brain had finally finished its background processing, and had spit out a punchcard with the result.
The result is this post. I’m going to wander a bit, but there’s a punchline at the end, trust me.
In William Gibson's Idoru, Chia McKenzie and Zona Rosa have never met physically, but meet with each other and other members of the Lo/Rez fan club in virtual environments, as avatars whose sophistication is limited only by the amount of money or time spent constructing them. Chia's avatar is "only a slightly tweaked, she felt, version of how the mirror told her she actually looked," while Zona chooses to represent herself as a "blue Aztec death's-head burning bodiless, ghosts of her blue hands flickering like strobe-lit doves [with] lightning zig-zags around the crown of the neon skull". Some of the virtual environments Gibson describes (like the Walled City, a virtual city located beyond the pale of the public net) are deliberately designed, and some are not. He may have meant to imply, without bothering to make it explicit, that the rest were generated on the fly, or it might just be detail left out as unnecessary to the story. Regardless, I'm going to chase down and leghump the former idea.
So far, the only difference between the environments in Gibson's work and (to choose an example) Second Life (whose creators explicitly reference Gibson, Neal Stephenson and others), other than the level of immersion, is that in Second Life everything is explicitly created.
In Neal Stephenson's Snow Crash, the Metaverse is a virtual globe with a 10,000km radius, featureless and black except for the portions that have been 'developed'. Its equator is girdled by the Street, "the Champs Elysees of the Metaverse". Downtown is the most heavily developed area, and its streets are populated by about 120 million avatars. The sophistication of avatars and environments is limited by the bandwidth and computational grunt available to users, and by their wealth and coding prowess. Status is perceived accordingly, with many settling for the lowest common denominator of off-the-shelf Walmart avatars, the 'Brandy' and 'Clint' models. Interaction within the Metaverse is also variable in fidelity, with some areas being coded by their residents and habitués to simulate collision modelling, for example, and some not.
Hiro is approaching the Street. It is the Broadway, the Champs Elysees of the Metaverse. It is the brilliantly lit boulevard that can be seen, miniaturized and backward, reflected in the lenses of his goggles. It does not really exist. But right now, millions of people are walking up and down it.
[…]
Like any place in Reality, the Street is subject to development. … The only difference is that since the Street does not really exist–it’s just a computer graphics protocol written down on a piece of paper somewhere–none of these things is being physically built. They are, rather, pieces of software, made available to the public over the world-wide fiber-optics network.
[…]
In the real world–planet Earth, Reality–there are somewhere between six and ten billion people. At any given time, most of them are making mud bricks or field-stripping their AK-47s. Perhaps a billion of them have enough money to own a computer; these people have more money than all the others put together. Of these billion potential computer owners, maybe a quarter of them actually bother to own computers, and a quarter of these have machines that are powerful enough to handle the Street protocol. That makes for about sixty million people who can be on the Street at any given time. Add in another sixty million or so who can’t really afford it but go there anyway, by using public machines, or machines owned by their school or their employer, and at any given time the Street is occupied by twice the population of New York City. That’s why the damn place is so overdeveloped. Put in a sign or a building on the Street and the hundred million richest, hippest, best-connected people on earth will see it every day of their lives.
As in Gibson's virtuality, I think it can safely be assumed, even if it's never stated explicitly, that procedural generation is the glue that fills in the gaps between the environments and interactions that are designed by hand and the ones that are generated on the fly.
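Just to make that idea concrete, here's a toy sketch in Python, entirely my own invention (nothing from Gibson, and nothing any real engine actually does): a handful of hand-designed locations, with everything in between conjured deterministically from the coordinates themselves, so it never has to be stored or built by anyone.

```python
# A made-up illustration of designed-plus-generated space, not real engine code.
import hashlib

# Explicitly designed spaces, keyed by grid coordinate.
DESIGNED = {
    (0, 0): "the Walled City: hand-built, every texture placed on purpose",
    (5, 3): "a fan-club meeting room, modelled by its members",
}

# Filler content the generator can choose from.
BIOMES = ["neon sprawl", "grey residential blocks", "empty black plain",
          "advertising canyon", "half-rendered parkland"]

def describe(x, y):
    """Return what exists at (x, y): designed content if someone built it,
    deterministically generated filler if nobody did."""
    if (x, y) in DESIGNED:
        return DESIGNED[(x, y)]
    # Hash the coordinates so the same spot always generates the same thing,
    # with nothing stored anywhere.
    digest = hashlib.sha256(f"{x},{y}".encode()).digest()
    return BIOMES[digest[0] % len(BIOMES)]

for cell in [(0, 0), (1, 0), (2, 7), (5, 3)]:
    print(cell, "->", describe(*cell))
```

The hand-built and the generated end up living in the same address space, and the seams needn't show.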
Procedural content generation is not a new idea, but it is one that is beginning to leak from the demo scene into gaming, and it will, in time, make its way into the massively multiuser environments that so many people already spend so much time living and playing inside.
If you're not familiar with the power of this kind of coding, have a look at kkrieger, if you have a relatively grunty PC. It is a demo of a first-person shooter, more sophisticated in its visuals than the state of the art that was crowding the limits of a 600MB CD a few years ago. It is 96KB.
96KB. Seriously, no tricks, 96 freaking kilobytes. That's got to melt your snatch hairs if you're even half the geek I am. Fifteen seconds or so to download, even on that 56kbps modem you're using in that bullet-hole-pocked bar in Kinshasa. If nothing else, have a look at the screenshots, and boggle a bit at that number. The whole thing weighs less than the webpage you're currently reading. The environments are procedurally generated, on the fly, and more than anything I've seen so far, kkrieger demonstrates the Power of Algorithm.
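If you're wondering how that kind of size-to-output magic works, here's a rough, stdlib-only Python sketch of my own (it has nothing to do with kkrieger's actual code, which is vastly more sophisticated): a seed and a few lines of interpolated noise standing in for what would otherwise be kilobytes of stored texture.

```python
# A toy value-noise generator: tiny code, arbitrarily large coherent output.
import random

def value_noise(width, height, cell=8, seed=1):
    """Return a width x height grid of floats in [0, 1), smoothly
    interpolated between random values placed on a coarse lattice."""
    rng = random.Random(seed)
    gw, gh = width // cell + 2, height // cell + 2
    lattice = [[rng.random() for _ in range(gw)] for _ in range(gh)]

    def smooth(t):                      # smoothstep easing between lattice points
        return t * t * (3 - 2 * t)

    grid = []
    for y in range(height):
        gy, fy = divmod(y, cell)
        ty = smooth(fy / cell)
        row = []
        for x in range(width):
            gx, fx = divmod(x, cell)
            tx = smooth(fx / cell)
            # Bilinear interpolation of the four surrounding lattice values.
            top = lattice[gy][gx] * (1 - tx) + lattice[gy][gx + 1] * tx
            bottom = lattice[gy + 1][gx] * (1 - tx) + lattice[gy + 1][gx + 1] * tx
            row.append(top * (1 - ty) + bottom * ty)
        grid.append(row)
    return grid

# Render a 64x32 'texture' as ASCII shades: a few hundred bytes of code,
# as many pixels of output as you care to ask for.
shades = " .:-=+*#%@"
for row in value_noise(64, 32, cell=8, seed=42):
    print("".join(shades[int(v * (len(shades) - 1))] for v in row))
```

Change the seed and you get a different, equally detailed texture for free; that trick, layered and refined by people far cleverer than me, is how 96KB turns into a playable world.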
If you're someone who enjoys trippy visuals and sounds more than gaming, then have a look at this demo instead, which is perhaps my all-time favorite output from the demo scene. It's a few megabytes, not much bigger than the MP3 file that comprises its superb soundtrack. This is art, and it continues to stick in my mind, a year after I first saw it.
If those examples of the power of this kind of code don't do it for you, watch Will Wright's presentation about his upcoming game, Spore. If it ends up being anywhere near as impressive as it looks, and it's actually fun, it's going to blow this stuff wide open, in terms of technology.
"OK, so what does all that have to do with folksonomies?" you might quite reasonably ask. I do think that there is utility in tagging and non-hierarchical metadata, but I dream that the real payoff may not be in helping us to organize and mine information, much as it could be a boon for those purposes. The pros and cons have been batted around with great vigour by those smarter than myself, and I'm not going to add to the noise, other than to note that spammers and marketron scum have been as quick to colonize the tagspace as they have every other channel we have for moving data around.
What interests me, and makes me hope I live long enough to see it emerge, is this possibility: if environments like the ones described in Idoru and Snow Crash and many other works of fiction do become as big a part of our daily lives as the river of text we now swim through, those environments simply will not scale if they're designed entirely by hand. Spaces like Second Life, though not as clunky and difficult to enter and participate in as the VRML environments of the mid-'90s, are still designed, by users and by the programmers who provide the tools and primitives to work with. User-generated content is an idea that has generated enormous feedback-loop value, from forums and community websites, to tagging itself, to the environments, objects and avatars in virtual spaces like Second Life.
But what if virtual spaces were generated as much on the fly as they were hand-crafted? What if they were generated as habitable spaces in which we did the things we do now in text and flat image and numbercluster? How would the code know what environmental cues to generate? What contextual metadata clues could be used to generate and ‘design’ those environments?
Well, folksonomic tags, of course. What if we could build not only metadata in the form of folksonomies, but meta-meta-data (both shared and public), in the form of a sort of Rosetta Stone to translate the conceptual clouds of our tags into visual metaphors, into textures and imagery? What if hunks of procedural code could take that and in turn generate the visual glue and interstitia to hold our designed environments together?
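In code, the Rosetta Stone might look something like the Python sketch below. Every tag, weight and mapping in it is invented on the spot, purely for illustration, but the shape is what matters: a shared table translating tags into generator hints, and a function that collapses a tag cloud into a single bundle of parameters a procedural engine could chew on.

```python
# A hand-waving sketch: folksonomic tags in, procedural-generator hints out.
from collections import Counter

# The meta-meta-data: an invented tag-to-visual-metaphor table.
ROSETTA = {
    "ambient":   {"palette": "dusk blues", "density": 0.2, "noise": "slow fog"},
    "tokyo":     {"palette": "neon on wet asphalt", "density": 0.9, "noise": "signage"},
    "gardening": {"palette": "greens and loam", "density": 0.4, "noise": "foliage"},
    "punk":      {"palette": "poster-paste collage", "density": 0.7, "noise": "grit"},
}

def environment_from_tags(tag_cloud):
    """Collapse a weighted tag cloud into one bundle of generator parameters,
    letting the heaviest tag dominate the look of the generated space."""
    weights = Counter(tag_cloud)
    hints = [(ROSETTA[t], w) for t, w in weights.items() if t in ROSETTA]
    if not hints:
        return {"palette": "featureless black", "density": 0.0, "noise": "none"}
    dominant, _ = max(hints, key=lambda hw: hw[1])
    density = sum(h["density"] * w for h, w in hints) / sum(w for _, w in hints)
    return {"palette": dominant["palette"],
            "density": round(density, 2),
            "noise": dominant["noise"]}

# A tag cloud as it might come out of a del.icio.us-style service:
# 'tokyo' wins the palette, and density is the weighted average over all tags.
print(environment_from_tags(["tokyo", "tokyo", "ambient", "punk"]))
```

Swap the dictionary for shared, evolving meta-meta-data and the print statement for a generator like the noise sketch above, and you have the crude beginnings of tag-driven space.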
That might sound like singularity-fanboy handwavery, and to an extent I suppose it is. But you’ve got to admit, it’d be pretty cool.
And if that node-network of virtuality generation later spontaneously and automagically achieved a kind of synaptic awareness, deus ex folksonoma, well, that might be cool too. At least until the AI noticed the parasites — us — and the systematic genocide of the human species got under way.
So tag carefully, friends. If you’re lucky, the coming tagmind might just look upon you and smile.