UNIVERSAL CAPTURE / REALITY REASSEMBLED

The method which came to be called “Universal Capture” combines the best of two worlds: physical reality as captured by lens-based cameras, and synthetic 3D computer graphics. While it is possible to recreate the richness of the visible world through manual painting and animation, as well as through various computer graphics techniques (texture mapping, bump mapping, physical modeling, etc.), doing so is expensive in terms of the labor involved. Even with physically based modeling techniques, endless parameters have to be tweaked before the animation looks right. In contrast, capturing visible reality through a lens onto film, tape, DVD-R, a computer hard drive, or other media is cheap: just point the camera and press the “record” button.

The disadvantage of such recordings is that they lack the flexibility demanded by contemporary remix culture. This culture demands not self-contained aesthetic objects or self-contained records of reality but smaller units – parts that can be easily changed and combined with other parts in endless combinations. However, the lens-based recording process flattens the semantic structure of reality – i.e., the different objects which occupy distinct areas of a 3D physical space. It converts a space filled with discrete objects into a flat field of image grains or pixels that carry no information about where they came from (i.e., which objects they correspond to). Therefore, any editing operation – deleting objects, adding new ones, compositing, etc. – becomes quite difficult. Before anything can be done with an object in the image, it has to be manually separated by creating a mask. And unless the image shows an object that is properly lit and shot against a special blue or green background, it is impossible to mask the object precisely.
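To make the masking problem concrete, here is a minimal Python sketch (using NumPy) of the kind of green-screen mask the passage refers to. The thresholds and the assumption of an (H, W, 3) uint8 RGB image are illustrative choices for this example, not anything specified in the article above.

    import numpy as np

    def green_screen_mask(image, green_min=120, other_max=90):
        """Return a boolean mask that is True where a pixel looks like the
        green backdrop. `image` is assumed to be an (H, W, 3) uint8 RGB array;
        the thresholds are illustrative, not calibrated values."""
        r = image[..., 0].astype(int)
        g = image[..., 1].astype(int)
        b = image[..., 2].astype(int)
        # A pixel counts as backdrop when its green channel clearly
        # dominates the red and blue channels.
        return (g >= green_min) & (r <= other_max) & (b <= other_max)

    def cut_out_foreground(image, background_rgb=(0, 0, 0)):
        """Replace backdrop pixels with a flat colour, leaving the
        foreground object untouched."""
        mask = green_screen_mask(image)
        result = image.copy()
        result[mask] = background_rgb
        return result

Without the special backdrop, no such simple per-pixel rule exists, which is why masking an arbitrary object in an ordinary photograph has to be done by hand.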

In contrast, 3D computer-generated worlds have exactly the flexibility one would expect from media in the information age. (It is therefore not accidental that 3D computer graphics representation – along with hypertext and other new computer-based data representation methods – was conceptualized in the same decade in which the transformation of advanced industrialized societies into information societies became visible.) In a 3D computer-generated world everything is discrete. The world consists of a number of separate objects. Objects are defined by points described as XYZ coordinates; other properties of objects, such as color, transparency and reflectivity, are similarly described in terms of discrete numbers. This means that the semantic structure of a scene is completely preserved and is easily accessible at any time. Duplicating an object a hundred times requires only a few mouse clicks or a short command; similarly, all other properties of the world can always be easily changed. And since each object itself consists of discrete components (flat polygons or surface patches defined by splines), it is equally easy to change its 3D form by selecting and manipulating its components. In addition, just as a sequence of genes contains the code that is expanded into a complex organism, a compact description of a 3D world that contains only the coordinates of the objects can be quickly transmitted through the network, with the client computer reconstructing the full world (this is how online multiplayer computer games and simulators work).
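As a rough illustration of this discreteness, the following Python sketch models a scene as a list of objects whose geometry and appearance are just numbers. The class names, the duplication helper and the JSON serialization are assumptions made for the example, not a description of any particular 3D package.

    import copy
    import json
    from dataclasses import dataclass, field, asdict

    @dataclass
    class SceneObject:
        # Geometry is a list of XYZ points; appearance is a handful of
        # discrete, individually editable numbers.
        name: str
        vertices: list                      # [(x, y, z), ...]
        color: tuple = (1.0, 1.0, 1.0)
        transparency: float = 0.0
        reflectivity: float = 0.0

    @dataclass
    class Scene:
        objects: list = field(default_factory=list)

        def duplicate(self, name, copies, offset=(1.0, 0.0, 0.0)):
            """Duplicate one object many times -- the 'few mouse clicks'
            of the paragraph above, expressed as a loop."""
            source = next(o for o in self.objects if o.name == name)
            for i in range(1, copies + 1):
                clone = copy.deepcopy(source)
                clone.name = f"{name}_{i}"
                clone.vertices = [(x + offset[0] * i, y + offset[1] * i, z + offset[2] * i)
                                  for x, y, z in source.vertices]
                self.objects.append(clone)

        def serialize(self):
            """A compact textual description of the whole world, small enough
            to send over a network and rebuild on the client."""
            return json.dumps([asdict(o) for o in self.objects])

Because every property is an explicit, separate value, duplicating or editing an object is a trivial data operation, and the serialized description stays small enough for a client machine to reconstruct the full scene locally.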

Beginning in the late 1970s, when James Blinn introduced texture mapping, computer scientists, designers and animators gradually expanded the range of information that could be recorded in the real world and then incorporated into a computer model. Until the early 1990s this information mostly concerned the appearance of objects: color, texture, light effects. The next significant step was the development of motion capture, which was quickly adopted by the movie and game industries during the first half of the 1990s. Now computer-synthesized worlds relied not only on sampling the visual appearance of the real world but also on sampling the movements of animals and humans within it. Building on all these techniques, Gaeta’s method takes them to a new stage: capturing just about everything that at present can be captured and then reassembling the samples to create a digital (and thus completely malleable) recreation. Put in a larger context, the resulting 2D/3D hybrid representation fits perfectly with the most progressive trends in contemporary culture, which are all based on the idea of a hybrid.
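Texture mapping, the earliest of these sampling techniques, can be reduced to a single lookup: a point on a 3D surface carries (u, v) coordinates that index into a captured photograph, so the model borrows its appearance from the real world. The sketch below (nearest-neighbour sampling into a NumPy image array) is an illustrative simplification, not how any specific renderer works.

    import numpy as np

    def sample_texture(texture, u, v):
        """Nearest-neighbour texture lookup: map a surface point's (u, v)
        coordinates, each in [0, 1], to a pixel of a captured photograph."""
        h, w = texture.shape[:2]
        x = min(int(u * (w - 1)), w - 1)
        y = min(int(v * (h - 1)), h - 1)
        return texture[y, x]

    # Example: colour a surface point halfway across the photograph.
    photo = np.zeros((256, 256, 3), dtype=np.uint8)   # stand-in for a captured image
    rgb = sample_texture(photo, 0.5, 0.5)

Motion capture extends the same logic from appearance to movement: instead of sampling pixels, the system samples joint positions over time and replays them on a synthetic skeleton.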

– taken from the article Image Future (2006) by Lev Manovich

http://www.manovich.net/DOCS/imagefuture_2006.doc
