wade tillett on Fri, 22 Nov 2002 00:57:02 +0100 (CET)



[Nettime-bold] Re: <nettime> software as brainboxing


IF one were to link all of these individual life databases together,
via some sort of metanarrative based on position and time, there
would be the possibility of creating a sort of ultra-rational 4D
historical representation. Put a GPS stamp on all the input, uplink a
tag to a centralized database, make the stored data on people's
computers available, and then, upon searching for a certain time and
place, you could pull up all the photographs, sounds, or any other
form of recorded representation linked to that time/place.
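
(A rough sketch, in Python, of what that time/place lookup might
amount to. Everything here is invented for illustration - the record
fields, the search radius, the time window - not the schema of any
actual system:)

from dataclasses import dataclass
from datetime import datetime, timedelta
from math import radians, sin, cos, asin, sqrt

@dataclass
class Record:
    lat: float            # GPS latitude, degrees
    lon: float            # GPS longitude, degrees
    timestamp: datetime   # GPS time stamp
    path: str             # photograph, sound file, any recorded representation

def haversine_km(lat1, lon1, lat2, lon2):
    # great-circle distance between two GPS points, in kilometres
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + \
        cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def search(records, lat, lon, when, radius_km=0.5, window=timedelta(minutes=10)):
    # pull up every record stamped near a given place and time
    return [r for r in records
            if haversine_km(r.lat, r.lon, lat, lon) <= radius_km
            and abs(r.timestamp - when) <= window]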

Then, with intense processing and a minimum number of input data
sources (cameras, etc.), a singular 4D data source could be compiled.
That is, a centralized history based on everyone's personal 'surrogate
memories'. What could be produced is a sort of ultra-rationalist
centralized 'surrogate memory' which would be passed off as nothing
less than a Reality of History.

The multiple camera images could be assimilated into a singular time
space image based on a GPS stamp indicating time and position
(relative to the absolute GPS time-space). By using the GPS
information and perspective algorithms, a virtually inhabitable
time-space could be recreated, which one could then enter and
explore.
That is, NEW perspectives of an already passed time-space could be
generated, and presented as accurate representations of the past
reality. Areas lacking adequate data could be infilled with models -
of buildings, of people, of weather - that is, with macro-data.
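
(The geometric core of such an assimilation would be something like
the standard pinhole projection sketched below: each GPS-stamped
camera becomes a known viewpoint in a shared metric frame, and
triangulating a point seen from several of them is the inverse of
this forward projection. The intrinsics and poses in the example are
made up for illustration:)

import numpy as np

def project(point_world, K, R, cam_pos):
    # project a 3D world point into pixel coordinates for one camera;
    # K: 3x3 intrinsics, R: world-to-camera rotation,
    # cam_pos: camera position in the shared frame (e.g. from its GPS stamp)
    p_cam = R @ (point_world - cam_pos)   # world -> camera coordinates
    if p_cam[2] <= 0:
        return None                       # point lies behind the camera
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]               # perspective divide -> (u, v) pixels

# example: a camera at the origin looking down +Z,
# focal length 800 px, image centre (640, 360)
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])
print(project(np.array([1.0, 0.5, 10.0]), K, np.eye(3), np.zeros(3)))  # -> [720. 400.]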

Based on a sort of reality consensus, off-the-mark data would have to
be discarded during processing.
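
(A minimal sketch of that consensus step, assuming the 'data' in
question are repeated estimates of the same point from different
sources; the deviation threshold is an arbitrary illustration:)

import numpy as np

def consensus_filter(estimates, max_dev=2.0):
    # estimates: N x D array of independent estimates of the same point;
    # keep those within max_dev of the componentwise median,
    # discard the rest as off the mark
    estimates = np.asarray(estimates, dtype=float)
    centre = np.median(estimates, axis=0)
    dist = np.linalg.norm(estimates - centre, axis=1)
    keep = dist <= max_dev
    return estimates[keep], estimates[~keep]   # (kept, discarded)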


For example, imagine a complicated crime scene in a crowded tourist
district. Multiple criminal acts committed simultaneously under the
eyes of multiple cameras. A face caught by various cameras, in
various frames, could be re-processed into a three-dimensional model
based on the multi-perspectival frames, and then relocated and moved
to correspond with the four-dimensional data. A close-up frontal mug
shot could be produced where none was actually taken. Overlapping
sounds from various cameras could be layered onto each other and
mapped to the 4D model. One could zoom in close to a whisper that was
not decipherable in any of the 3D (stereo audio + time) camera sound
files, but which could be reconstructed by parsing the background
data to a location based on overlaps and the GPS marks. Each
participant, each element could be reconstructed into a 4D history in
which the contemporary viewer selects the new position, perspective,
time, rate, and data to display. The police, in the recreation, show
a zoomed-out axonometric aerial view, with the figures in question
conveniently highlighted, selectively clicking on each to hear the
audio emanating from that source. Beyond the camcorder and the
surveillance camera, history becomes the virtual model of reality,
recorded in real time, but accessed on demand.
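
(One way the whisper reconstruction might work is plain delay-and-sum:
use each camera's GPS position to compute how long the sound took to
reach it, undo those delays, and average the tracks so the target
location adds up coherently while everything else blurs out. The
sample rate, the speed of sound, and the assumption of a common
GPS-synchronised clock are all illustrative:)

import numpy as np

SPEED_OF_SOUND = 343.0   # metres per second
SAMPLE_RATE = 48000      # samples per second

def delay_and_sum(signals, mic_positions, target, fs=SAMPLE_RATE):
    # signals: equal-length 1-D arrays already on a common (GPS) clock;
    # mic_positions, target: coordinates in metres in the shared frame
    delays = [np.linalg.norm(np.asarray(p) - np.asarray(target)) / SPEED_OF_SOUND
              for p in mic_positions]
    ref = min(delays)
    out = np.zeros(len(signals[0]))
    for sig, d in zip(signals, delays):
        shift = int(round((d - ref) * fs))     # samples to advance this track
        if shift >= len(sig):
            continue                           # no usable overlap
        out[:len(sig) - shift] += sig[shift:]  # undo the propagation delay
    return out / len(signals)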




-----

http://www.newscientist.com/news/news.jsp?id=ns99993084
19:00 20 November 02

"Imagine being able to run a Google-like search
on your life," says Gordon Bell, one of the developers.
...<snip>
The system can also be used to build narratives involving other
people, events or places. Searching for the name of a friend would
bring together a chronological set of files describing when you both
did things together, for instance.
...<snip>
Bell believes that for some people, especially those with memory
problems, MyLifeBits will become a surrogate memory that is able to
recall past experiences in a way not possible with the familiar but
disparate records like photo albums and scrapbooks. "You'll begin to
rely on it more and more," he believes.
...<snip>


