technocinema · 1 month
Reality Frictions explores the intersection of fact and fiction on screen
I am extremely happy to report that my feature-length documentary/video essay, "Reality Frictions," is finally complete! Huge thanks go to sound designer/mixer Eric Marin, whose 5.1 mix completely transformed the audio experience of the film for theatrical exhibition.
Although I have been researching this topic and gathering materials on and off for several years, the project went into high gear about a year ago when I posted a call to the scholars and makers associated with the Visible Evidence documentary film community, requesting examples of "documentary intrusions" -- roughly defined as moments when elements from the real world (archival images, real people, inimitable performance, irreversible death, etc.) intrude on fictional or quasi-fictional story worlds.
The response was overwhelming -- in just a few days, I received some 85 suggestions and enthusiastic expressions of support. This community immediately recognized the phenomenon and reinforced many of the examples I had already gathered, while also directing me to dozens more, such as the bizarre and troubling inclusion of Bruce Lee's funeral as a plot device in his final film, Game of Death (1978). For practical reasons, I decided to limit the scope of the project to Hollywood films and their immediate siblings in streaming media & television, but the international community of Visible Evidence noted the erosion or complication of fact/fiction binaries in many non-US contexts as well -- definitely enough for a sequel or parallel project in the future.
As the editing progressed, I quickly realized that the real challenge lay in curating and clarifying the throughline of the project without becoming overwhelmed or distracted by the many possible variations on the fact/fiction theme. The conceptual core of the project was always inspired by Vivian Sobchack's concept of "documentary consciousness," described in her book Carnal Thoughts (2004). Sobchack's inspiration, in turn, derived from a scene in Jean Renoir's film The Rules of the Game (1939) depicting the undeniable, physical deaths of more than a dozen animals as part of the film's critique of the elitism and narcissism of France's pre-war bourgeoisie. Sobchack returned to this scene in two separate chapters of the book for meditations on the ethics and impact of these animal deaths for filmmakers and viewers alike, relating them to both semiotic and phenomenological theories of viewership.
On the advice of filmmakers and scholars who viewed early cuts of the film, nearly all academic jargon has been chiseled out of the narration, leaving what I hope is a more watchable and engaging visual essay that embraces the pleasures and paradoxes found at the intersection of reality and fiction. Additional feedback convinced me to stop trying to make my own VO sound like Encke King, my former classmate who supplies the gravelly, world-weary narration for Thom Andersen's Los Angeles Plays Itself (2003). I've done my best to talk more like myself here, but there's no denying Thom's influence on this project -- both as a former mentor at CalArts and for the strategies of counter-viewing modeled in LAPI. Going back even farther, I would note that it was my work as one of the researchers for Thom's earlier film (made with Noël Burch), Red Hollywood (1996), that got me started thinking about the role of copyright in historiography and the ethical imperative for scholars and media makers to assert fair use rights rather than allowing copyright owners to define what histories may be told with images. This singular insight has guided much of my work for the past two decades, realized principally in my ongoing administration of the public media archive Critical Commons (which celebrates its 15th anniversary this month!) as an online resource for the transformative sharing of copyrighted media.
This project also bridges the gap between my first two books, Technologies of History: Visual Media and the Eccentricity of the Past (2011) and Technologies of Vision: The War Between Data and Images (2017). The historiographical focus of this project emerged as an unplanned but retrospectively inescapable artifact of engaging questions of authenticity and artifice, and it afforded the pleasures of revisiting some of my favorite examples, such as Cheryl Dunye's The Watermelon Woman (1996) and Alex Cox's Walker (1987), both exemplary for their historiographical eccentricity. An additional, important element of context is the recent emergence and proliferation of generative AI for image synthesis. Technologies of Vision addressed some of the precursors to the current generation of synthetic imaging, which has only accelerated the arms race between data and images, but recent developments in the field have sharpened the need for improved literacy about the way these systems work -- as well as the kind of agency it is reasonable to attribute to them.
Reality Frictions also aims to intervene in the anxious discourse that has emerged in response to image synthesis, especially among documentarians who feel confidence in photographic and videographic representation slipping away, and journalists besieged by knee-jerk charges of fake news. While I totally understand and am sympathetic to these concerns, challenges to truth-telling in journalism and documentary film hardly began with digital imaging, let alone generative AI. It is axiomatic to this project that viewers have long negotiated the boundaries between images and reality. The skills we have developed at recognizing or confirming the truth or artifice found in all kinds of media remain useful when considering synthetic images. Admittedly, we are in a moment of transition and rapid emergence in generative AI, but I stand by this project's call to look to the past for patterns of disruption and resolution when it comes to technologies of vision and the always tenuous truth claim of non-fiction media.
Although the format of this project evolved more or less organically, starting with a personal narrative rooted in childhood revelations about the world improbably drawn from TV of the 1970s, the final structure approaches a comprehensive taxonomy of the ways reality manages to intrude on fictional worlds. Of course, the volume and diversity of these instances make it necessary to select and distill exemplary moments and patterns -- work that provides what I regard as this project's main source of pleasure. One unexpected tangent turned out to be the different ways that side-by-side comparisons trigger uncanny fascination at the boundary between the real and the nearly real. Hopefully without belaboring the point, I aim to parse these strategies, which range from the pleasures of uncanny resemblance to what I view as superficial and mendacious attempts to bolster a flimsy truth claim simply by casting (and costuming, etc.) actors to "look like" the people they are supposed to portray.
Other intersections of fact and fiction are less overt, requiring extra-textual knowledge or the decoding of clues that transform the apparent meaning of a scene. Ultimately, I prefer it when filmmakers respect viewers' ability to deploy existing critical faculties and infer their own meanings. Part of the goal of this project is to heighten viewers' attentiveness to the ways reality purports to be represented on screen, to dissolve overly simplistic binaries, and to suggest the need for skepticism, especially when dramatic flourishes or uplifting endings seem designed to trigger readymade responses. While stories of resilient individuals and obstacles that are overcome conform to Hollywood's obsession with emotional closure and narrative resolution, we should be mindful of the events and people who are excluded by the presumptions underlying these structures.
A realization that develops over the course of the video is that the films with the most consistently complex and deliberate structures for engaging the problematics of representing reality on film come from filmmakers who directly engage systems of power and privilege, especially related to race. From Ava DuVernay's rewriting of Martin Luther King's speeches in Selma (2014), to Ryan Coogler's and Spike Lee's inclusion of documentary footage in Fruitvale Station (2013), Malcolm X (1992), and BlacKkKlansman (2018), the stakes are raised for history films with direct implications for continuing injustice in the present. For these makers -- as for the cause of racial justice or the critique of structural power writ large -- the significance of recognizing continuities between the real world and the cinematic one is clear. This is not to argue for a straightforward correspondence between cinema and reality; on the contrary, in the examples noted here, we witness the most complex and controlled entanglements of both past and present, reality and fiction.
In the end, I view Reality Frictions as offering a critical lens on a cinematic and televisual phenomenon that is more common and more complex than one might initially expect. Do I wish the final film were less than an hour long? Yes, and I have no doubt this will dissuade some prospective viewers from investing the time. But once you start heading down this path, there's no turning back, and my sincere hope is that I will have made it worth your while.
technocinema · 1 year
Pedagogies of Resistance: Reflections on the UC Academic Workers Strike
Please don’t misunderstand: I was far from the most active member of the faculty in my department or university throughout the recent UC System strike and I do not presume to speak from a position of any particular authority about it. Others -- especially among our junior faculty -- were much more involved, showing up at picket lines on a daily basis, actively organizing and strategizing with faculty from other campuses, taking food to strikers, canceling or moving classes to the picket lines, holding teach-ins, withholding grades, informing colleagues about issues and encouraging greater involvement and understanding from all. Those who were fully present for the duration of this strike earned my greatest respect many times over. I share these observations as an extension of my high regard for the real work done by the students and faculty around me and the tangible risks that many of them -- nearly all of whom had much more to lose than I -- were willing to take. Nonetheless, here is what I learned:
• Collective action opens doors and builds bridges. The neoliberal university allows and encourages silos and barriers to persist between units and individuals who might otherwise find shared interests and common cause. Each time I attended strike support actions, I met people and had conversations that might never have happened otherwise. I now feel more connected to a network of faculty that spans multiple fields across my university and, to a certain extent, the UC system as a whole. I hope this awareness continues long past the resolution of this particular strike and contributes to the erosion of institutional divisions.
• The privilege of tenure is real and it needs to be exercised, but contradictions remain. I have long been ambivalent about the tenure system and the caste hierarchy it imposes on our profession -- and I have rarely seen tenure's historical justification, that it enables individuals to speak truth to power, borne out in practice. Ironically, in my immediate circles, those who were most willing to put themselves on the line most consistently and most visibly in support of strikers were precisely those without the protection of tenure or senior faculty standing. It made sense to me at the time that it was incumbent on people in my position to step in whenever possible, speaking for those in more vulnerable positions and offering a barrier against any possible negative responses. However, more than once in recent months I have had to question this impulse to speak in place of my junior and even mid-career colleagues. When "speaking for" colleagues who are, let's say, junior faculty women of color, how is that not a form of silencing? Does capitulating to the perceived need to offer "protection" to more vulnerable colleagues reinforce a dynamic in which the potential for retribution is expected or even normalized? Shouldn't the goal instead be to honestly confront institutional power dynamics and work toward a commitment to open exchange without fear of reprisal?
• For most of those involved in the strike, it was their first exposure to collective action of this kind. My evidence for this is anecdotal, but I believe it's true. At the first rally I attended, a speaker asked through a megaphone for a show of hands from those going on strike for the first time. Hands shot up from nearly every student worker. This fact alone made me hope for a positive result -- not only because I believe the union's advocacy for student workers is justified, but because of the potential represented by some 48,000 graduate students taking this experience with them into their professional futures. Unlike many trade-based unions in which members maintain a long-term commitment to a single field, the graduate workers leading this strike will inevitably land in many different types of positions across a multitude of professional fields within a matter of years. Taking with them a successful experience of collective self-advocacy, coupled with practical knowledge of strategy and implementation, could bring a much needed ripple effect and invigoration of union activity in multiple spheres.
• This was also almost certainly the first experience with a strike for most of the students who were directly impacted. While I care deeply about the well-being of our graduate teaching assistants and researchers, I also feel a responsibility to the 300,000+ UC undergraduates whose college education was disrupted to a greater or lesser extent by the strike. As much as I hope to seed the world with thousands of former graduate workers who have gained experience and wisdom about striking, I also want ten times their number of future college graduates to carry a positive disposition toward collective action into the future they are inheriting. As we have seen with absolute clarity in recent years, we cannot rely on top-down solutions to climate change. With planetary extinction hanging in the balance, collective action against the industries that profit from destroying the planet is the only hope future generations have for preserving a habitable world. For generations too readily susceptible to cynicism, I hope that witnessing the accomplishments of this strike (which were substantive, even if they fell short of the goals that were hoped for by many) will encourage them to organize against forces resistant to climate justice and other seemingly intractable issues including extremes of economic disparity and institutions entrenched in capitalism, patriarchy and white supremacy.
• Strikes are pedagogical; academic strikes even more so. Looking beyond the immediate circumstances of the strike -- which included the familiar elements of demands, accusations, misinformation, inflammatory rhetoric, conciliatory rhetoric, official statements, backchannel communications, rumors, confusion, clarifications, and so on -- I was struck by how frequently and effectively the activity of strikers, and of those responding to them, took a pedagogical form. Teach-ins organized by graduate student strikers are an obvious example, but reports of classes being moved to the picket lines and student research or creative projects pivoting to directly address the issues of the strike or the history of labor organizing (etc.) offer similar cases in point. In preparing for a Winter term without TAs, faculty creatively reimagined teaching strategies and course content not only to accommodate the lack of support, but to make sure students knew how and why their education was being compromised. Those who withheld grades informed students of the reasons for the disruption and encouraged dialogue and understanding. The strike and its responses thus became coextensive with the university's core activities of teaching and learning and demonstrated the mutability of these skills across multiple domains. All of this leads to a final observation:
• Maybe there is hope for the renewal of higher education. On my bad days, I see little difference between the most corrupt, violent and destructive forces in American culture and our increasingly neoliberal institutions of higher education. Worse, the elitism of these institutions seems clearly aligned with all of the factors currently hastening our destruction: concentrations of wealth, social divisions, institutional authoritarianism, neoliberal subjectivity. These institutions -- universities -- may be compromised by and complicit with much that I oppose ideologically, but they continue to afford me the ability to teach, write, speak and create work that is as expressive of my own values as I make it. Not all of the workers who led or participated in this strike will end up as educators, but some will -- and as I look ahead to the final days of my own career sometime in the current decade, I'm encouraged by the thought that my place may be taken by those who initiated, participated in, or learned from this action.
technocinema · 3 years
Failures in Photogrammetry
[Image: the homemade calibration posts set up in the Joshua Tree landscape]
I had the chance to try out my homemade photogrammetry calibration system in Joshua Tree last week. With temperatures in triple digits throughout the day, the only reasonable time for this type of activity is early in the morning when the Mojave is also at its most beautiful and wildlife are at their most active. I was surrounded by quail and lizards (no snakes) while setting up and plumbing/leveling the posts a little before dawn and a coyote loped by a few yards away, seemingly indifferent to my presence.
I have long suspected that photogrammetry would supply the key to my practical investigation of the conjunction of data and images following on the theoretical and historical research I did on this subject for Technologies of Vision. Although I have done small-scale experiments in the past with a hardware-based system (Occipital Structure Sensor) that captured infrared depth information to an iPad along with photographic textures, the Joshua Tree environment was orders of magnitude larger and more complex. In addition, my intended processing platform is the entirely software-based Metashape by Agisoft (previously PhotoScan), which generates all depth information from the photographs themselves. 
Note: if you have somehow stumbled on this post and are hoping to learn the correct way to use Metashape, please look elsewhere. The internet is full of people -- many of whom don’t really know what they’re doing -- offering authoritative advice on how to use this particular piece of software and I don’t want to add myself to the list. While it’s true that my eventual goal is to gain proficiency with various photogrammetry workflows, I’m currently deriving at least equal pleasure from my near total inability to execute any part of this process as intended. Even the measuring sticks in the above image are of my own conception and may not serve any useful purpose whatsoever.
[Image: a still frame extracted from the drone video]
I captured this segment of landscape three separate times, all using a DJI Mavic 2 Pro, but the drone was only in flight for one of the three capture sessions, partly because of unpredictable winds, but also because I didn't want to be reported by the neighbors. The Joshua Tree Highlands area has declared itself a no-fly zone out of respect for the wildlife, so when I did put the drone up, I flew a grid pattern that was low and slow, capturing video rather than still images to maximize efficiency. As a side note, Metashape is able to import video directly, from which it extracts individual frames like the one above, but something went wrong at the photo alignment stage, and these images -- which should have generated my most comprehensive spatial mapping -- instead produced a faux-landscape that was more exquisitely distorted than any of the other processes I tried.
[Image: distorted landscape model generated from the drone-video frames]
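For anyone who prefers to pull frames outside the application, the extraction step itself is trivial. Here is a minimal sketch using ffmpeg invoked from Python; the filename and sampling rate are invented, and the frames directory must already exist:

```python
import subprocess

# Sample still frames from the drone video for photogrammetry.
# "mavic_grid_pass.mp4" is a hypothetical filename.
subprocess.run([
    "ffmpeg", "-i", "mavic_grid_pass.mp4",
    "-vf", "fps=2",            # extract two frames per second of flight
    "-qscale:v", "2",          # high-quality JPEGs for feature matching
    "frames/frame_%05d.jpg",   # assumes the frames/ directory exists
], check=True)
```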
The thumbnail array below comes from one of my terrestrial capture sessions, the first of which, in my exuberance, consisted of well over 500 images. When this image set took 30+ hours to generate a dense point cloud, I re-photographed the entire area at what I hoped would be a more reasonable total of around 70 images. Even when not in flight, the Mavic generates metadata for each image, including GPS coordinates and information about elevation, camera angle, focal length, etc. Metashape uses this data to help spatialize the images in a 3D environment, creating a visual effect that bears a distinct resemblance to the Field-Works series created -- I have no doubt through a much more laborious process -- by the Japanese media artist Masaki Fujihata beginning in the early 2000s. 
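The Mavic's per-image metadata mentioned above is ordinary EXIF, so it can be inspected without opening Metashape at all. A small sketch using Pillow -- the filename is hypothetical, and DJI also writes some values (gimbal angles, for instance) as XMP that this won't see:

```python
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def drone_gps(path):
    """Decode the GPS block of one image's EXIF metadata."""
    exif = Image.open(path)._getexif() or {}
    named = {TAGS.get(k, k): v for k, v in exif.items()}
    return {GPSTAGS.get(k, k): v for k, v in named.get("GPSInfo", {}).items()}

def to_decimal(dms, ref):
    """Convert (degrees, minutes, seconds) rationals to signed decimal degrees."""
    d, m, s = (float(x) for x in dms)
    dd = d + m / 60 + s / 3600
    return -dd if ref in ("S", "W") else dd

tags = drone_gps("DJI_0042.JPG")  # hypothetical filename
print(to_decimal(tags["GPSLatitude"], tags["GPSLatitudeRef"]),
      to_decimal(tags["GPSLongitude"], tags["GPSLongitudeRef"]),
      tags.get("GPSAltitude"))
```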
[Images: the spatialized image array; Masaki Fujihata's Field-Works]
When I wrote about Masaki’s work for TechnoVision, I offered Field-Works (above) as a good object that celebrated the gaps and imperfections in spatialized image systems, in contrast with the totalizing impulse and literal aspiration to world-domination represented by Google Earth. With this practice-based research, then -- my own “Field-Works” -- I remain equally driven by the desire to succeed at capturing a photorealistic landscape, with dimensional data accurate enough to inform an architectural plan, and a secret hope -- even an expectation -- that the result will instead turn out to be an interesting failure -- more like Clement Valla’s dazzling collection Postcards from Google Earth (below), where algorithmically generated photographic textures ooze and stretch in a failed attempt to conceal irregularities in the landscape. 
[Image: Clement Valla, Postcards from Google Earth]
In fact, my first experiment with transforming drone images into dimensional data resulted in just such an outcome. This scene from the back yard (where proximity to LAX also dictates a no-fly zone) grew increasingly interesting with each stage of distortion from image to point cloud to mesh to textured model. The drone never went higher than about 12 feet on two circular passes of about 10 images each, and the subject was deliberately selected to confound the software. The mesh platform of the wagon, covered with leaves that changed position when blown by the rotors of the Mavic, confused the software enough to yield a kind of molten surface reminiscent of Valla, an effect that I have not been able to reproduce since.
[Images: the backyard scene at successive stages of distortion]
This first attempt was created using a free, open source, Mac-compatible program called Regard3D. Although I now have access to PCs with decent graphics processing capability through the VR lab at UCLA, I preferred to stay in the Mac environment to avoid multiple trips across town. In fact, the majority of this post was written while one photogrammetry program or another was rendering models, meshes or depth maps in the background. Although the results from Regard3D were more than gratifying, I went ahead and purchased an educational license for Metashape and then immediately upgraded to the Pro version when I realized how many features were withheld from the standard version. Metashape has the advantage of robust documentation -- going back to its days as PhotoScan -- and in-application help features as well as a very active community forum that seems relatively welcoming to people who don't know what they're doing.
[Image: second backyard test with measuring sticks and ground control points]
For my second backyard test, I chose slightly more conventional (solid) surfaces and included some reference markers -- 8′ measuring sticks and Agisoft Ground Control Points (GCPs -- seen in lower right and left corners of the image above) to see if these would help with calibration for the mapping in Joshua Tree. Metashape handled the process effortlessly, resulting in a near-photorealistic 3D model. The measuring sticks allowed me to confirm the scale and size of objects in the final model, but the GCPs could not have functioned as intended because I didn’t manually enter their GPS coordinates. Instead, the software seems to have relied on GPS data from the Mavic and I’m not sure my handheld GPS unit would have been any more accurate at locating the GCPs anyway. In fact, when I got to Joshua Tree, although I dutifully printed out and took a bunch of GCP markers like the one below with me, I forgot to use them and didn’t miss having them when reconstructing the landscape. 
[Image: printable ground control point marker]
Although images like this, which are designed to be read by non-human eyes, have been in public circulation for decades -- QR codes, bar codes, etc. -- they continue to fascinate me as fulcrum-objects located at the intersection of data and images. When Metashape “sees” these patterns in an imported image, it ties them to a system of spatial coordinates used to verify the lat/long data captured by the drone. When used correctly, the visual/human register takes precedence over the data systems of machines and satellites. Although unintentional, my failure to use the GCPs may have been a gesture of unconscious resistance to these systems’ fetishization of Cartesian precision. 
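For the record, the GCP workflow I skipped can be scripted. The sketch below reflects my reading of Agisoft's Python API documentation -- argument names have shifted between versions, and the surveyed coordinates here are invented -- so treat it as illustrative rather than authoritative:

```python
import Metashape

chunk = Metashape.app.document.chunk

# Find the printed coded targets in the aligned photos
# (enum spelling varies by Metashape version).
chunk.detectMarkers(target_type=Metashape.CircularTarget12bit)

# Surveyed (lon, lat, alt) for each target -- values invented for illustration.
surveyed = {
    "target 1": (-116.31, 34.12, 1220.0),
    "target 2": (-116.31, 34.13, 1221.5),
}
for marker in chunk.markers:
    if marker.label in surveyed:
        marker.reference.location = Metashape.Vector(surveyed[marker.label])

chunk.updateTransform()  # re-fit the model to the ground control references
```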
[Image: camera positions (left) and dense point cloud (right)]
After 36 hours of processing my initial set of 570 images (mapped by location above on the left), Metashape produced a “dense point cloud” (right) representing all vertices where the software identified the same visual feature in more than one image. Although the area I intended to map was only about 5000 square feet, the software found vertices extending in all directions -- literally for miles -- across the Joshua basin. A bounding box (below) is used to exclude outlying vertices from the point cloud and to define a limited volume for the next stages of modeling. 
[Image: bounding box limiting the point cloud volume]
The point cloud can also be viewed using the color information associated with each vertex point (below). This begins to resemble a low-resolution photographic representation of the landscape, which also reveals the areas (resembling patches of snow on the ground) where image-data is missing. Likewise, although the bounding box has removed most outlying information from the distant hillsides, many vertices have still been incorrectly mapped, appearing as sparse clouds against the white background. Metashape has the ability to scan and remove these artifacts automatically by calculating the confidence with which each vertex has been rendered, but they can also be deleted manually. Still not fully trusting the software, I removed the majority of these orphan points by hand, which also allowed me to exclude areas of the model that I knew would not be needed even though they might have a high confidence value. Under other circumstances, of course, I am totally opposed to “cleaning” data to minimize artifacts and imperfections, but the 30+ hours of processing required to render the initial cloud had left me scarred and impatient. 
[Image: the point cloud colorized by per-vertex color data]
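The same culling can be scripted instead of done by hand. This sketch follows a recipe that circulates on the Agisoft forum and assumes a 1.6-era API in which the dense cloud was built with confidence recording enabled; the thresholds are illustrative:

```python
import Metashape

chunk = Metashape.app.document.chunk

# Confidence values exist only if requested when building the cloud, e.g.:
#   chunk.buildDenseCloud(point_confidence=True)
dense = chunk.dense_cloud
dense.setConfidenceFilter(0, 2)       # select only the least-confident points
dense.removePoints(list(range(128)))  # remove the selection across all classes
dense.resetFilters()                  # restore visibility of surviving points
```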
The next step is to create a wireframe mesh that defines the surfaces of the model, transforming the point cloud into a dimensional web of triangles. With tens of millions of potential vertices to incorporate, the software may be set to limit the mesh to a maximum number of triangular faces, which in turn, determines the final precision of the model. 
[Image: wireframe mesh of the landscape]
At this point, the model can be rotated in space and a strong sense of the overall contours of the landscape is readily apparent. The aesthetics of the wireframe are also exquisite in their own right -- admittedly perhaps as a result of my fascination with the economical graphic environment of the early 80s Atari game Battlezone (below) -- and there is always a part of me that wants to export and use the wireframe as-is. In fact, I assume this is possible -- though I haven’t yet tried it -- and I look forward to figuring out how to create 3D flythroughs and/or navigable environments with these landscapes rendered as an untextured mesh. 
[Images: the untextured wireframe; Atari's Battlezone]
The final stage in creating the model is for Metashape to generate textures based on the original photographs that are mapped onto the mesh surfaces. At this stage, gaps in the point cloud are filled in algorithmically, eliminating the snow effect on the ground. By limiting the size of the model and deleting as many artifacts as possible, the final texturing was relatively quick, even at high resolution, and the resulting model is exportable in a variety of 3D formats. 
[Image: the textured 3D model]
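For orientation, here is the whole chain in script form as I understand Agisoft's Python API for Metashape Pro 1.x. Method names have changed across releases (buildDenseCloud, for instance, later became buildPointCloud), so this is a sketch rather than a recipe, and the paths are invented:

```python
import glob
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(sorted(glob.glob("stills/*.JPG")))  # hypothetical path

chunk.matchPhotos()      # find shared features across overlapping images
chunk.alignCameras()     # solve camera positions; yields the sparse cloud
chunk.buildDepthMaps()
chunk.buildDenseCloud()  # the 30+ hour step on a large image set

# Cap the mesh size; the face-count enum spelling varies by version.
chunk.buildModel(face_count=Metashape.FaceCount.MediumFaceCount)
chunk.buildUV()
chunk.buildTexture(texture_size=8192)  # bakes the photos into a texture atlas

chunk.exportModel("joshua_tree.obj")
doc.save("joshua_tree.psx")
```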
Metashape also offers the ability to generate digital elevation maps (DEMs), which register relative heights of the landscape signified by color or shape. I’m reasonably certain that the DEM image below is the result of some egregious error, but it remains among my favorite outputs of this exercise. 
[Image: digital elevation map (DEM)]
The final image included here is the single frame used by the photogrammetry software for texture mapping. Although it gives the appearance at first glance of being an aerial photograph of a strangely featureless landscape, this file is actually an indexed palette of all the color and texture combinations present on all surfaces of the 3D model. If photogrammetry constitutes the key technology operating in the interstices between data and images, these files are arguably the most liminal of all its components. Neither mimetic nor computational, they provide visual information (“pixel-data”) that is necessary to the perceptual phenomenon of photorealism, while denying the pleasures of recognition. In a literal sense, this single image contains every color and texture that may be seen in the 3D model, rendered here as an abstraction fully legible only to the eyes of the machine. 
[Image: the texture atlas used to map the model]
technocinema · 6 years
Live-VR Corridor wins award for Best Mixed Reality at New Media Film Festival!
My installation “Live-VR Corridor (after Bruce Nauman)” made its world debut at the 9th New Media Film Festival in Los Angeles where it received the festival’s award for Best Mixed Reality! 
“The overall result is an unsettling self-conscious experience of doubling and displacement.”        - Ted Mann on Bruce Nauman’s Live-Taped Video Corridor (1970)
Although its multiple histories are easily forgotten or ignored, the current generation of Virtual Reality art belongs to a tradition that includes experiments with perception and embodiment in film, video and installation art. Recalling parts of this history, Steve Anderson’s Live-VR Corridor consists of a ½ scale replica of Bruce Nauman’s Live-Taped Video Corridor (1970), one of the earliest works of video installation art. Live-VR Corridor is both a mixed reality homage to Nauman and a highly pleasurable, self-reflexive work of art in its own right. Just as the rise of amateur video in the 1970s spawned multiple experiments with liveness, surveillance and self-representation, today’s consumer VR offers a unique visual format that reactivates the pleasures and problematics found in looking, seeing and being seen. Nauman’s original Live-Taped Video Corridor consisted of two closed-circuit video monitors positioned at the end of a narrow 30’ corridor. The lower monitor featured a pre-recorded videotape of the empty corridor, while the upper monitor showed a live video feed from a camera positioned above the corridor entrance. As viewers walked toward the monitors, they saw themselves from behind on the top monitor and a persistently empty corridor on the bottom monitor. The closer a viewer got to the monitor, the smaller their image became, frustrating their desire to see themselves, while the empty corridor on the bottom suggested that they had become invisible or dislocated in time.
When wearing the headmounted display (HMD), Live-VR Corridor viewers perceive a digital model that precisely duplicates the physical corridor they are in. Like Nauman's original, the top monitor displays an image of viewers that shrinks procedurally as they approach it, frustrating their desire to see what they look like wearing the HMD. Unlike Nauman's project, the bottom image displays a live feed from the forward-facing camera built into the HMD. Thus, viewers who hold their hands in front of their face will see them on the monitor at the end of the corridor. As viewers move forward, their hands and other body parts grow more useful for orienting themselves spatially, but to do this, viewers must give up a traditional sense of embodied presence, shifting their viewing perspective to the monitor at the end of the corridor.
As the project unfolds, a voice queries the viewer’s sense of space, vision and embodiment. The project rewards slowness and contemplation, tracking the viewer’s gaze to reward exploration and curiosity by procedurally transforming the surfaces of the corridor in unexpected ways. An impatient viewer who rushes to see their image in the monitor will find that it shrinks to just a few pixels in size, while a calm viewer who takes time to experience the textures and sounds of the corridor can coax the scaled image to grow back to full-size, revealing what they look like wearing the HMD. Even at full size, the mirror image in the monitor continues to disorient the viewer by multiplying, reversing and displacing the viewpoint of the images in physical and virtual space.
Although perceptually complex, Live-VR Corridor is technologically simple, using two analogue surveillance cameras and 15” reference monitors for the images that appear in the physical corridor. The 15’ long walls are lightweight theatrical flats made with ¼” plywood that may be assembled or disassembled in a matter of hours. Two soft lights provide basic illumination. The physical corridor’s virtual double was created in Unity for display using an off-the-shelf HTC Vive system with two position trackers mounted at opposite ends of the corridor. The top (virtual) monitor shows a webcam video feed that is dynamically scaled in response to a viewer’s movements down the corridor. The bottom (virtual) monitor shows a direct video feed from the camera built into the Vive headset, which captures analogue video images displayed on the monitors in physical space. No controllers or special calibrations are needed. The total footprint for the installation is 16’ x 2’. The average user experience lasts about three minutes.
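The scaling rule itself is simple enough to sketch in a few lines. This toy version is written in Python rather than the Unity/C# the installation actually uses, and every constant in it is invented:

```python
CORRIDOR_LENGTH_FT = 15.0  # length of the physical corridor
MIN_SCALE = 0.01           # image collapses to a few pixels at the monitor

def mirror_scale(distance_to_monitor_ft: float, patience: float = 0.0) -> float:
    """Scale factor for the viewer's image on the top virtual monitor.

    The image shrinks as the viewer approaches; `patience` (0..1) stands in
    for the gaze-tracked reward that lets a slow, attentive viewer coax the
    image back to full size.
    """
    proximity = max(0.0, min(1.0, distance_to_monitor_ft / CORRIDOR_LENGTH_FT))
    base = MIN_SCALE + (1.0 - MIN_SCALE) * proximity
    return min(1.0, base + patience * (1.0 - base))

print(mirror_scale(7.5))       # halfway down, impatient: about half size
print(mirror_scale(0.0, 1.0))  # at the monitor, fully patient: full size
```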
Watch video at: https://vimeo.com/230192895  Password: Anderson
Download 2-page description of Live-VR Corridor.
Access Technical Specifications document.
technocinema · 7 years
Mapping “VR”
In the course of completing my chapter on the transformation of space through data (Data/Space), I've been frustrated by the way contemporary use of the term "VR" flattens distinctions among a diverse range of media practices. This suggests to me that the primary utility of the term "Virtual Reality" now lies, as it always has, in the realm of marketing and promotion.
Among the biggest changes I see between the first generation of VR that emerged in the 1980s and 90s and what we are seeing today is that “virtual reality” in the mid-2010s has lost some of its emphasis on the idea of “telepresence,” that is, the perception of occupying a space other than where a person’s physical body is located. This concept was important enough for artist-engineers Scott Fisher and Brenda Laurel to name their startup company “Telepresence Research” in 1989 and it is still occasionally acknowledged as a desirable aspect of virtual experience (see, for example, Survios’ “Six Elements of Active VR”). Below is the diagram created by Fisher’s lab to describe the aspirations of its “Virtual Interface Environment” for NASA in 1985.
[Image: diagram of the Virtual Interface Environment (VIEW) created by Fisher's lab for NASA, 1985]
Note that, unlike the vast majority of today’s “VR” systems, Fisher’s VIEW system did not compromise on the technical complexity required for the true experience of telepresence. For example, the operator is not required to occupy a fixed position in space, nor is s/he constrained to a single form of user input. Interface takes place via combinations of voice, gesture and tactile input with dimensional sound, stereoscopic vision and haptic feedback in response; the operator’s body is tracked and located in space and, in anticipation of multi-user applications, representable as a customizable 3D avatar.
Fisher notably resisted using the term "virtual reality" in favor of the location-specific phrase "virtual environment." Although this term offered the benefit of greater precision, it apparently held less marketing appeal than "VR," which is appropriately attributed to the consummate entrepreneur and relentless self-promoter Jaron Lanier. In a 1989 article titled "Virtual Environments," Fisher eschews "VR" entirely and only brushes against it in his final sentence in order to undermine its use in a singular, monolithic form: "The possibilities of virtual realities, it appears, are as limitless as the possibilities of reality."
In retrospect, we might speculate about how the first-generation hype cycle for VR would have unfolded differently if the transformation proposed by the technology had been the sensation of remote presence within a digitally generated environment (as Fisher framed it) rather than the virtualization of reality itself. Arguably, it was the idea that technology – like the psychedelic drugs of the 1960s counterculture described by Fred Turner – could be used to transform reality that contributed to the ultimate public disillusionment with the real world capabilities of first-generation VR. The surprising thing is how quickly and unself-consciously the hyperbolic discourse of VR promoters in the 2010s has been willing to embrace the repetition of this history.
For now, those who would like to think seriously about the cluster of media platforms and technologies currently gathered under the VR marketing umbrella would be well served by at least noting the very significant differences between, say, spherical, live action video viewed from a single position with a fixed time base and an entirely procedurally-generated 3D game environment with body tracking and an indefinite time base. If not for the fact that both systems require viewers to strap a display apparatus to their face, we would never otherwise consider equating such divergent media forms.
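To make that distinction concrete, the two poles can be described along a handful of axes. The toy schema below is my own shorthand for this post, not any standard taxonomy:

```python
from dataclasses import dataclass

@dataclass
class MediaForm:
    name: str
    image_source: str  # live-action capture vs. procedural generation
    time_base: str     # fixed duration vs. indefinite
    viewpoint: str     # single fixed position vs. tracked 6DoF body
    interaction: str   # gaze-only vs. full controller and body input

spherical_video = MediaForm(
    "spherical live-action video", "live-action capture",
    "fixed", "single fixed position", "gaze-only")

game_environment = MediaForm(
    "procedural 3D game environment", "procedural generation",
    "indefinite", "tracked 6DoF body", "full controller and body input")
```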
Below (and somewhat more legibly here) is my first attempt to more precisely parse the various forms that I see emerging, evolving and blending within this space. Like all such schematics, this image is incomplete and idiosyncratic. Given the transient nature of these technologies and their (mis)uses, it would be a Sisyphean task to try to get it right for more than an instant. For now, I would welcome any thoughts, corrections, additions or clarifications via the "Ask Questions/Make Suggestions" link in this research blog's navigation bar.
[Image: schematic mapping the forms of "VR"]
Finally, in the course of framing this book project as a kind of "history of the present," I have been struck repeatedly by the simultaneous, intense desire to claim a history for VR (albeit one that blithely skips over the 1980s-90s) that is rooted in previous moments of technological emergence. For example, the felt sense of presence associated with virtual reality is routinely compared with the image of viewers running screaming from the theater upon seeing Lumière's "Arrival of a Train at La Ciotat" long after cinema history abandoned this origin myth as naive, ahistorical fantasy. Another talk I attended recently about the state of the art in "VR" content production included alarming revisions of the history of both cinema and television, claiming that "film did not come of age until the 1970s" and "television did not mature until the 2000s." The point, which could have been perfectly well-taken without such wildly uninformed comparisons, was simply that content producers are still figuring out what kind of stories or experiences can most effectively motivate viewers to willingly strap a display apparatus onto their face. With no disrespect intended toward Martin Cooper, when thinking about these questions from a historical perspective, a juxtaposition like the one below may help to recalibrate some of the industry's current, ahistorical hyperbole.
[Image: historical juxtaposition of emerging technologies of vision]
technocinema · 7 years
Lidar and its discontents
I just discovered the work of UK design firm ScanLAB thanks to Geoff Manaugh's article on driverless cars in last week’s New York Times. Having tracked down ScanLAB’s website, I was additionally gratified to discover their 2014 project Noise: Error in the Void. Flushed with excitement, I swung the computer around to show Holly. "Oh, yeah,” she said, “didn't you know about ScanLAB?" It’s not that our research is in any way competitive. Her book is about the future of cinema and mine is about work that blurs the boundaries between images and data. But, really, if everyone but me knew about ScanLAB before today, someone really should have said something.
Having received permission from my editor to change the book's title to Technologies of Vision: The War Between Data and Images, I dashed off a request to ScanLAB, asking for permission to include one of their images in the book. While most of the writing about Lidar and other technologies for capturing volumetric data focuses on faithful reproduction of the physical world, Manaugh's article was refreshingly open to the kind of alternative, artistic uses pursued by ScanLAB - albeit in parallel with utilitarian pursuits such as the development of driverless cars. The NY Times article even included an example of the visual artifacts that occur when data capturing systems get "confused" by their surroundings. The final image in the London Scenes slideshow shows a bus that has been misrecognized as part of the city's architecture. The caption explains, "Trapped in traffic, the mobile scanner inadvertently recorded a London double-decker bus as a continuous mega-structure, while stretching the urban world around it like taffy."
This seemingly circumstantial aberration actually continues ScanLAB's longtime interest in work that acknowledges - even celebrates - uncapturable or unrenderable spaces; that finds value and beauty in the gaps, glitches and fissures where the representational capacities of data and image come into conflict. Their project Noise: Error in the Void was entirely devoted to highlighting the artifacts resulting from a scanning project in Berlin. Outlying data resulting from reflections, clouds or human figures is ordinarily "cleaned" before being incorporated into a 3D model. But in Noise, such "unclean" data is the whole point. An image of the Berlin Oberbaum Bridge, captured late in 2013, for example, radiates a psychedelic aura of reflections, echoes and overmodulations. ScanLAB describes the process of capturing these images:
[Image: ScanLAB, scan of the Berlin Oberbaum Bridge from Noise: Error in the Void]
The scan sees more than is possible for it to see. The noise is draped in the colours of the sky. The clouds are scanned, even though out of range. Everything is flat; broken only by runway markings. Traces of dog walkers spike up into the cloud. The ground falls away to the foreground in ripples. The horizon line is muddy with the tones of the ground and the texture of the sky. The center is thick with points, too dense to see through. Underground only the strongest noise remains.
Part of what interests me is the attempt to explain the ways that data systems are - and are not - able to "see" or reproduce the world. When talking about machine vision, metaphors of human perception and consciousness abound. I have previously written about the ubiquity of references to the subconscious when describing Google's Deep Dream software in terms of hallucination or psychosis, but a similar sentiment concludes Manaugh's discussion of driverless cars, elevating the stakes of the discussion from a commercial experiment by ultra-privileged tech companies and those who can afford next-generation consumer products, to the vastly more provocative realm of posthumanism and artificial intelligence:
ScanLAB’s project suggests that humans are not the only things now sensing and experiencing the modern landscape — that something else is here, with an altogether different, and fundamentally inhuman, perspective on the built environment … As we peer into the algorithmic dreams of these vehicles, we are perhaps also being given the first glimpse of what’s to come when they awake.
I’m still not convinced that metaphors of (sub)consciousness and optics hold the key to theorizing the current generation of technologies of vision that are most provocatively troubling the received boundaries between data and image. However, I find in this work by ScanLAB and others, which deliberately refuses to accept the priority of convergence and synthesis as the preferred (to say nothing of ‘natural’) relationship between data and image, the most productive vector for investigation. ScanLAB, if you’re out there, please let me include an image in the book!
technocinema · 7 years
Machine vision in recurse
Since 2012, I've been fascinated with Google's Unsupervised Learning project, which was originally (and more evocatively) titled "Google Brain." From Google's description of the project:
Most of the world's data is in the form of media (images, sounds/music, and videos) -- media that work directly with the human perceptual senses of vision and hearing. To organize and index these media, we give our machines corresponding perception capabilities: we let our computers look at the images and video, and listen to the soundtracks and music, and build descriptions of their perceptions.
An image related to the project circulated widely, purported to have been entirely generated by Google's unsupervised learning system. Most often distributed as a diptych, the image pair showed distinct outlines of a human face and a cat face oriented to face the camera/viewer (a third image of a human torso was also released but did not circulate as widely).
[Images: the human face and cat face generated by Google's system]
Google described these as examples of the “perceptions” generated by Google’s “brain” after it was exposed to a data set of 10 million random, untagged images. The system’s “learning” is dubbed “unsupervised” because the computers were not told what to look for within the image set. This is what set Google Brain apart from other machine vision algorithms in which computers “look for” images that match a particular combination of graphical features. The “finding” of this experiment was that computers, when provided with no information or guidance, will, all on their own, identify human faces and cats as the most prominent image phenotypes in the collection (and by extension, the internet at large).
It's worth noting that this research was unveiled just a year after Google engineer James Zern revealed that only about 30% of the billions of hours of videos on YouTube accounted for approximately 99% of views on the site. In other words, 70% of the videos that are uploaded to YouTube are seen by almost no one. This would mean that the company's ostensible revenue model of targeted advertising is of limited value for the majority of its content, while 70% of the vast and expensive architecture of YouTube is devoted to media that return no appreciable ad revenue to the company. The only way to monetize a media collection of this type on a scale that makes it worthwhile to a company like Google is by figuring out a way to translate images into data.
With this goal in mind, the raw materials represented by YouTube's billions of hours of video represent an invaluable resource for the emerging - and potentially vastly lucrative - field of machine vision. The human and cat images released by Google were received and circulated as objects of wonder and bemusement, without any signs of the criticism or skepticism for which internet communities are ordinarily known. One might speculate that these images were meant to serve a palliative function, reassuring the public that, whatever it is that Google might be doing with those billions of hours of valueless video provided to them freely by the public, it's all just as trivial and unthreatening as a cute kitten video or a casual self portrait.
This is where the plot thickens. The point of training computers to make sense of vast, undifferentiated image collections is surely to enhance Google's ability to use large-scale data analytics to understand, shape and predict human behavior, particularly with regard to broad patterns of consumption. While Google closely guards this aspect of its business, in June 2015, Google Research publicized a process that it called "inceptionism," releasing a collection of provocative images emerging from the company's "neural net" of image recognition software. Google also released an open source version of the software on the developer repository GitHub along with detailed descriptions of the process on the Google Research blog.
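Google's release was a Caffe-based notebook, but the core move -- gradient ascent on the input image to amplify whatever a chosen layer responds to -- can be sketched today against torchvision's pretrained GoogLeNet. The layer choice, step size, and omitted ImageNet normalization are all illustrative:

```python
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF
from PIL import Image

model = models.googlenet(weights="DEFAULT").eval()

# Capture the activations of one intermediate layer; which layer you
# amplify shapes the character of the "dream."
activations = {}
model.inception4c.register_forward_hook(
    lambda module, inp, out: activations.update(target=out))

def deep_dream(img, steps=20, lr=0.05):
    x = TF.to_tensor(img).unsqueeze(0).requires_grad_(True)
    for _ in range(steps):
        model(x)
        loss = activations["target"].norm()  # amplify the layer's response
        loss.backward()
        with torch.no_grad():
            x += lr * x.grad / (x.grad.abs().mean() + 1e-8)  # normalized ascent
            x.grad.zero_()
            x.clamp_(0, 1)
    return TF.to_pil_image(x.detach().squeeze(0))

dreamed = deep_dream(Image.open("input.jpg").convert("RGB"))  # hypothetical file
dreamed.save("dreamed.jpg")
```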
[Image: "inceptionism" output from Google's neural net]
The vast majority of public discourse surrounding these images fell into one of two camps: drug-induced psychedelia or visualization of the subconscious through dreams or hallucinations. Google itself encouraged the latter model by dubbing the system “Deep Dream” and invoking Christopher Nolan’s 2010 film Inception, about the narrative traversal of conscious and unconscious states through collective, lucid dreaming. A unique and easily recognizable visual aesthetic rapidly emerged, combining elements of the natural world and geometric or architectural shapes. Comparisons with psychoanalysis-inspired surrealist art and drug-induced literature abounded, overwhelming alternate readings that might challenge the uncritical conjunction of human and artificial intelligence.
[Image: Deep Dream image combining natural and architectural forms]
A proliferation of online Deep Dream conversion engines invited human viewers to experiment with a variety of enigmatic parameters (spirit, neuron, Valyrian) that shape the resulting images. While early services offered to perform deep dream conversions with no control over image recognition parameters for a few dollars per image, it took only a few months for greatly improved “free” (ad supported) services such as the Deep Dream Generator to emerge, allowing free, unlimited, user-customizable conversions.
The ready availability of a user-friendly version of the software raises the question: what would happen if the deep dream algorithm were used to "interpret" an image generated by the software used to develop it? One conceivable outcome of this reversal might be to reveal evidence of the "perception" process by which the human and cat images were originally derived - perhaps similar to the way language translation systems are tested by converting a sentence from one language to another and then reversing the process to see if the original sentence is reproduced.
[Image: the face and cat images reprocessed through Deep Dream]
Obviously, this didn’t happen. The custom recognition parameters of the Deep Dream software alone are sufficient to preclude a one-to-one conversion, but I found this experiment to be nonetheless revealing. Both human and animal visages became more animalistic; except for the eyes, the primate face in particular became more simian than human, while the fuzzy, undifferentiated halo surrounding both heads acquired a reptilian aura. In the end, this experiment in algorithmic recursion offers little more than amusement or distraction, which may well be the point of Google’s well-publicized gesture in making the software freely available. While these images invite comparisons with dark recesses of the unconscious, one might more productively wonder about the everyday systems and values that are thereby shielded from critique.
technocinema · 9 years
New Media, Old Media Out!
Very happy to announce publication of the latest edition of New Media, Old Media: A History and Theory Reader, edited by Wendy Chun and Anna Watkins Fisher. This new and expanded volume is about 3/4 new material, including my essay “Reflections on the Virtual Window Interactive,” about Anne Friedberg’s online companion to her book The Virtual Window. Available from Routledge or Amazon.
technocinema · 9 years
Bad Object 2.0 live in G|A|M|E !
The much-anticipated issue #4 of G|A|M|E, the Italian Journal of Game Studies, just launched, devoted to the topic "Re-framing video games in the light of cinema." It includes my Scalar project "Bad Object 2.0: Games and Gamers." Quoting from dozens of clips from North American film and TV, the basic contours of this project's argument are simple. From its origins in the 1970s through the end of the 1980s, Hollywood's vision of games was remarkably accepting; narratives were largely balanced in terms of gender, and the youth culture emerging around games was portrayed with relative dignity. During this time, the games industry was still establishing its foothold in the homes of North America and making its way into the leisure time of families. In spite of stunning profits in the earliest days of the 1980s, the industry suffered a massive collapse in 1983, followed by a rebound of home consoles in the 1990s that placed it in more direct competition with the film and television industries. By the 2000s, console games were thoroughly integrated into American homes, posing for the first time a viable threat to the hegemony of the film and television industries for domestic entertainment. Throughout this period of ascendance, cinematic tropes of gaming grew more critical, with gamers increasingly associated with a range of antisocial behaviors, especially violence, addiction and repressed sexuality. Ultimately, the project argues that depictions of games on film and television include both a dominant discourse of pejoration and notable exceptions that allow for complex, alternate readings.
technocinema · 9 years
Technologies of Cinema live on Review the Future!
My interview with Ted Kupper and Jon Perry (former USC students made good) just went live on their excellent podcasting site Review the Future. In this wide-ranging and productively digressive discussion, we talk about everything from Hollywood’s depiction of videogames and gamers to the incorporation of decommissioned IBM air defense systems into the visual vocabulary of technology narratives on film and TV.
technocinema · 9 years
Critical Digital Humanities at UC Riverside
I had the opportunity to demo all three platforms that currently subtend the digital components of Technologies of Cinema (Scalar, Critical Commons, Difference Analyzer) as part of Jim Tobias’ Critical Digital Humanities series at UC Riverside last week. The event was led by UCLA’s Miriam Posner, under the suggestive title “Analytical Technics for (Post-) Humanists: The Case of the Missing Instruments.” While Miriam presented a range of new and emerging tools for conducting digital humanities research, along with a call for research tools that don’t yet exist, my own presentation was shamelessly self-promotional, focusing on tools not for research but for ideation, assemblage and (electronic) publishing. As always, the Difference Analyzer (which is still in pre-beta) stole the show, sparking insightful questions and interest even beyond the immediate context of media studies. Chandler, if you’re reading this, stop reading and get back to coding!
technocinema · 9 years
Technologies of Cinema at Poetics & Politics documentary symposium
I’m very pleased to be presenting Technologies of Cinema at UC Santa Cruz this weekend on a panel titled “Repertoires of Archives” along with co-panelists Matt Soar and Martin Lucas. Poetics & Politics is just in its second year, following a debut at Aalto University in Helsinki in 2013, but this year’s event, organized by UCSC’s Irene Gustafson and UCLA’s Aparna Sharma, is genuinely international in scope and seems nicely balanced in its attention to theory and practice.
technocinema · 9 years
Immersive Hollywood premieres at Transforming Hollywood 6
Included in this year’s Transforming Hollywood event devoted to “Alternative Realities, World Building and Immersive Entertainment” is the world premiere of the latest interpretive chronology to emerge from the Technologies of Cinema archive. Titled Immersive Hollywood, this 49-minute video program tracks the evolution of holography and virtual reality as seen on film and television from the early 1980s to the present.
In the domain of holography, Immersive Hollywood explores the transition from holograms that depict objects or characters in space (”Help me, Obi-Wan Kenobi. You’re my only hope.”) to holographic projections that create fully immersive 3D spaces inhabited by live action characters. In the realm of virtual reality, we see remarkable continuities between the rhetoric of the 1990s and that of the present day. The structure of both parts of the program is deceptively simple, resisting the model of a unified argument in favor of chronology and thematic resonances that invite viewers to draw their own conclusions. 
View Immersive Hollywood on Vimeo
technocinema · 9 years
Technologies of Cyberpunk premieres at USC Visions & Voices
When Scott Fisher, Henry Jenkins and Howard Rodman ask you to produce a video for an event they are organizing on the subject of Cyberpunk: Past and Future, it’s the kind of offer you can’t refuse, even if the event is only a few weeks away. I have never had more than a passing interest in “cyberpunk,” as such, but my obsessive aggregation of cinematic and televisual depictions of technology for the Technologies of Cinema archive means that most of the raw materials for such an endeavor were already available on Critical Commons. In fact, I approached this assignment as a kind of test case for the “multiple paths through the archive” logic of my researching-in-public strategy. Because my interest is in the technologies that inspire a subset of cyberpunk fiction rather than its style or aesthetics, the resulting video leaves out some sources that might have otherwise figured prominently. Abel Ferrara’s film adaptation of William Gibson’s New Rose Hotel (1998), for example, features futuristic devices for mobile video, but none of the late-90s virtual reality or neural transfer that one might expect. So, rather than the straightforward celebration of cyberpunk in cinema that some viewers may be hoping for, the video offers an interpretive chronology of Hollywood’s contributions to the cyberpunk trope of  human-machine convergence - from the alien brain-reading machines in Invasion of the Star Creatures (1962), to Scarlett Johansson’s chemical-induced dissolution into the global cellular network in Lucy (2014). Long live the new flesh! View Technologies of Cyberpunk on Vimeo
0 notes
technocinema · 9 years
Technologies of Cinema at Occidental College
I was very pleased to be invited to take part in the Remix, Reuse, Recycle series at Occidental College tomorrow evening. Although the flyer highlights Screening Surveillance, I am equally excited about the opportunity to preview another compilation video devoted to Hollywood’s depictions of Cyberpunk, as well as a live demo of the Difference Analyzer, using materials from the Technologies of Cinema archive.
technocinema · 9 years
Screening Surveillance at Screening Scholarship Media Festival
Although the video had its online premiere in the journal [in]Transition earlier this week, the first public screening in physical space just took place at the University of Pennsylvania’s Screening Scholarship Media Festival as part of a double-feature with Adam Fish’s documentary Policy Beta, about the Pirate Party in the UK. Although I was not previously familiar with Policy Beta, the Q&A following the screenings drew out numerous resonances between the two films, especially pertaining to the role of media activism in scholarship, the conjunction of theory and practice and the contested status of intellectual property and copyright regimes. As an added bonus, Policy Beta’s editor Erhan Oze is doing his PhD research about spatial politics in relation to Edward Snowden.
technocinema · 9 years
[in]Transition 2.1 premieres Screening Surveillance!
The newly launched issue of the SCMS journal of videographic film scholarship, [in]Transition includes the world premiere of the first of my video essays for Technologies of Cinema! Titled “Screening Surveillance,” the video offers an interpretive chronology of surveillance in Hollywood from Charlie Chaplin to Edward Snowden. Composed entirely of clips from dozens of American film and television shows, “Screening Surveillance” maps the evolution of cultural discourses surrounding government surveillance, taking note of shifts from optical to computational surveillance and analyzing the ways this discourse has — and has not — changed in the post-Snowden era.
Film and television have historically associated surveillance with voyeurism in order to warn of the threat it poses to individual privacy and freedom. But the nature and significance of government surveillance has changed dramatically since the beginning of the computer age. In the realm of computational surveillance - specifically, the large-scale collection and mining of metadata - the power of looking is trumped by the power of knowing. Yet, when the cameras of Hollywood envision data surveillance, they often remain rooted in the visual realm, ignoring the very real threats to freedom and privacy that attend today’s large scale data mining. Hollywood’s preference for visual spectacle is certainly understandable, but the industry’s broader inability to represent technological complexity undermines its capacity to engage important social issues, simply because they are not readily visualized.
In light of government contractor Edward Snowden’s revelations in June 2013 about the NSA’s metadata collection program known as Prism, the stakes of representing surveillance on film and TV are higher than ever. Yet, the vision of government surveillance described by Snowden bears little resemblance to the images that continue to be created in Hollywood. Given the richness of Hollywood’s history in imagining and critiquing systems of surveillance, I believe there is unrealized potential for narrative film and television to promote more complex understandings of technology in general - and surveillance in particular - if we are to preserve our privacy and maximize our capacity to function as citizens in a 21st century democracy.