‘In postmodernist theories the “hyperrealism” of computer graphics has been interpreted not as presenting a more analogous image of the real world, but rather as heralding its disappearance.’ (Lister, 2003: 145)

My project has been not merely an investigation into hyper-realism within CGI; more specifically, it has sought to show how photography, within the process of lighting and rendering, has helped question the degree of perceived realism in digital imaging.

Since the start of the course I’ve become fascinated by the synthetic manipulation of reality through computer animation. This was initiated by some visualisation projects that examined how physical space could be measured, calculated and redefined within a digital environment. In particular I was drawn to the process of rendering a simulacrum of real-world objects, focusing on how the simulation of light was integral to generating this sense of realism.

This led me to my research paper, How Photography through Image Based Lighting Has Reassessed Notions of Perceptual Realism in CGI, which investigated how physically accurate lighting models can be used to challenge traditional assumptions about photographic realism in computer graphics. The paper opened up a wealth of contextual research about the process of lighting in 3D. First of all I became aware of the work of Paul Debevec, who pioneered the use of photography as a 360-degree environment within a virtual space to achieve physically correct omnidirectional lighting. Practically, this proved far more effective than earlier reflectance mapping, as synthetic objects use a direct analogue of the environment to process light interaction and global illumination. I have focused predominantly on these methods founded by Debevec within my own work, utilising HDRI photography as lighting probes.
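At its core, image-based lighting treats the photograph as a measurement of light: for a diffuse surface, irradiance is the environment’s radiance integrated over the hemisphere, weighted by the cosine of the angle of incidence. A minimal NumPy sketch of that calculation over a lat-long (equirectangular) map, using a synthetic two-tone ‘probe’ rather than a real HDR file:

```python
import numpy as np

def diffuse_irradiance(env_map, normal):
    """Cosine-weighted sum of a lat-long environment map.

    env_map: (H, W, 3) array of linear radiance (as decoded from an HDR file).
    normal:  unit surface normal in world space.
    """
    h, w, _ = env_map.shape
    # Spherical direction for each pixel centre of the lat-long map.
    theta = (np.arange(h) + 0.5) / h * np.pi          # polar angle, 0..pi
    phi = (np.arange(w) + 0.5) / w * 2.0 * np.pi      # azimuth, 0..2pi
    theta, phi = np.meshgrid(theta, phi, indexing="ij")
    dirs = np.stack([np.sin(theta) * np.cos(phi),
                     np.cos(theta),
                     np.sin(theta) * np.sin(phi)], axis=-1)
    # Solid angle of each pixel shrinks towards the poles.
    d_omega = (np.pi / h) * (2.0 * np.pi / w) * np.sin(theta)
    # Lambertian weighting: clamp the cosine at zero (back-facing light).
    cos_term = np.clip(dirs @ normal, 0.0, None)
    weight = (cos_term * d_omega)[..., None]
    return (env_map * weight).sum(axis=(0, 1))

# A synthetic probe: a uniform bright 'sky' above, dark 'ground' below.
env = np.zeros((64, 128, 3))
env[:32] = [1.0, 1.0, 1.0]
up = diffuse_irradiance(env, np.array([0.0, 1.0, 0.0]))     # facing the sky
down = diffuse_irradiance(env, np.array([0.0, -1.0, 0.0]))  # facing the ground
```

With a real HDRI probe the same sum is what gives synthetic objects the direct analogue of their surroundings: the upward-facing normal here receives the full sky (irradiance of about π per channel), while the downward-facing one receives nothing.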

Continuing this research, I interviewed the founding director of 1st Avenue Machine, whose short film Sixes Last effectively employed this form of lighting to seamlessly integrate CGI with live-action video. On reflection, this was the first time I began to question the nature of Image Based Lighting (IBL) in relation to photography’s ontological claim to truth. The HDRI used to light the scene enabled the indexical nature of the photographic image to create spatial context for the CGI incorporated into the live-action footage. The elements are successfully composited, and while we acknowledge that the CGI cyberflora cannot be real, our eye is deceived by the reflected HDRI environment. A paradox is created as unreal objects become perceptually real, existing as ‘referentially fictional’ while the live-action video of trees and plants is ‘referentially realistic’, creating a dialogue between the two images. As Stephen Prince pointed out, this degree of realism

‘designates a relationship between the image or film and the spectator, and it can encompass both unreal images and those which are referentially realistic. Because of this, unreal images may be referentially fictional but perceptually realistic.’ (Prince, 1996: 32)

Using alternative lighting techniques, Alex Roman’s short film The Third and The Seventh explores existing architectural structures through CGI with unprecedented levels of realism. From this piece I noted three major components for simulating such realism. First, the importance of high-quality photographic texture mapping on all forms of geometry to simulate real-world materials and surfaces. Second, how methods of global illumination are imperative for the accurate transport and interaction of light, generating diffuse and ambient reflections and refractions. Third, understanding how to replicate the workings of a real film camera, taking into consideration exposure values, shutter speeds, ISO settings and vignetting. It was precisely this third point that drove home how important photographic knowledge and technical skill are to a firm understanding of virtual cameras in 3D. The dialogue between photography and CGI is a theme central to my work, as both approach the fundamentals of lighting grounded in the physical laws of reality. If the photographic form is created by ‘light itself, rather than the human hand and mind’ (Buckland, 2002: 199), then to replicate photorealism in CGI the principles of its nature must be thoroughly examined. The continual desire of CGI to appear photographic is important in how we perceive and value it as a record of the real. The truth of the film camera exists outside this predicament:

‘The photographic is an analogue of the real. However the digital (or post-photographic) image is not determined or limited to the actual world in the same way. Whereas the photographic image is an analogue of the pre-existing real objects that is reproduced automatically, the digital image is produced by numerical digital codes, each of which is then realized on screen as a pixel or point of light.’ (Buckland, 2002: 209)

While discovering these progressive methods of lighting, I began experimenting within my own practice with site-specific High Dynamic Range Images and motion tracking. The contextual research for my thesis fed directly into my practical work, opening up a range of cutting-edge 3D lighting techniques, all with the intention of simulating perceptually real objects. At this point I began thinking about my degree show film. I knew it should focus on the process of lighting, but the question was how to visually simulate such a method. I began reading various online journals about photobiology and photoreception, which study the interaction of light with living organisms. Focusing on artificial light, I became interested in the biological response as a form of metabolic simulation, showing how growth and development are controlled by environmental lighting conditions. Having completed some leaf renders for the artist duo Cornford & Cross, I was keen to investigate this dialogue between the natural and the artificial.

Within Maya I started by developing a lighting rig that could be used to drive a photoreceptive response. I was eager to see how a biological change could be controlled by light intensity and world position, prompting a dynamic curve or particle simulation, a lattice deformation or a texture change. This proved to be an important lesson in writing and understanding MEL (Maya Embedded Language) expressions, and made me realise for the first time the possibilities of 3D animation as a way of simulating the physical laws of the real world. While developing this rig I also examined materials and shaders that would allow for the subsurface scattering of light, such as plant matter and skin. In the rendering software Mental Ray these shaders were often very complex and extremely time-consuming to make, leaving me with extensive node trees that usually faked the interplay of light and object to generate a more believable image.
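The logic such a rig expresses can be stated outside Maya: per frame, attenuate the light’s intensity by inverse-square falloff and move the growth target along the normalised direction towards the light. A hedged Python sketch of that per-frame expression (a standalone illustration under my own assumptions, including the clamped step, not the actual MEL):

```python
import math

def photoreceptive_step(light_pos, intensity, shoot_pos, rate=0.1):
    """One frame of the growth expression: received light falls off with the
    square of distance, and the shoot tip moves towards the light by an
    amount proportional to that received intensity."""
    dx = [l - s for l, s in zip(light_pos, shoot_pos)]
    dist = math.sqrt(sum(d * d for d in dx))
    received = intensity / dist ** 2              # inverse-square falloff
    direction = [d / dist for d in dx]            # unit vector towards the light
    step = min(rate * received, 0.5 * dist)       # clamp so we never overshoot
    return [s + step * d for s, d in zip(shoot_pos, direction)], received

tip = [0.0, 0.0, 0.0]
for _frame in range(10):                          # ten frames of 'growth'
    tip, received = photoreceptive_step([0.0, 5.0, 0.0], 100.0, tip)
```

Driving a lattice deformation or a texture change follows the same pattern: the `received` value simply becomes the weight of the deformer or the blend amount of the shader rather than a translation.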

When I made the decision to change rendering programs, it therefore felt only logical that an unbiased and more physically accurate engine like Maxwell Render would be more relevant to my area of practice. Mental Ray had elements of physically accurate algorithms, but these never yielded impressive results for me; its ability to manipulate and ‘fake’ realism, together with its extensive flexibility, is what makes it one of the leading software renderers in the post-production industry. What attracts me to Maxwell is the simple fact that it is an unbiased, advanced lighting renderer based on real-world units and settings. Nicknamed ‘The Light Simulator’, this engine is built on the mathematical equations governing light transport, computed using spectral information and high dynamic range data. Within the context of my project on perceptual realism, it is important for me to remain aware that such technologies will soon be replaced by faster, more ‘physically accurate’ software. We are on the brink of real-time photoreal rendering, which in five to ten years will be a given. As Manovich points out:

‘Each new technological development…points to the viewers just how “un-realistic” the previous image was and also reminds them that the present image, even though more realistic, will be superseded in the future.’ (Manovich, 1992: 12)

Working in 3D you are constantly faced with problem solving: finding solutions to any number of technical issues. While Maxwell promises certain aesthetic advantages, its integration with Maya is still rather limited, exposing a whole range of incompatible features and settings. Firstly, the lighting rig I had been developing could not be driven as a simulation and was therefore removed from my production workflow. I had only created two versions of the rig, with the intention of delving further into MEL scripting, and while this seemed like a wasted opportunity to develop the rig further, I already knew there would be greater benefits than losses.

On reflection, it was at this point during Unit 2 that I decided to pursue capturing HDRI from site-specific locations. Virtual space within Maya is, by default, generic and non-specific. When I looked back over my work, I therefore saw the importance of capturing a specific time and place in reality to be manipulated and redefined in 3D. With my subsequent films Lights Out and Photoperiod Piccadilly, I have focused on these particular real-world settings as a point of departure.

For an exhibition held within a WW2 bunker in Dalston, I was asked to create work responding to the conceptual and physical properties of the gallery. The work, entitled Lights Out, captures site-specific lighting data from above the bunker, which is then used as environmental light to illuminate a virtual bunker in 3D. Bringing natural light from the outside in allows transformative photoreception to occur within a space usually devoid of it. The green translucent shoots ‘grow’ towards the light, while leaves on the ground take on a square form, reiterating the physical boundaries and shape of the bunker. While nature can flourish within this environment if exposed to light, the possibilities are limited by the concrete space.

My degree show film Photoperiod Piccadilly uses real-world lighting data recorded over a single day to show the fluctuating intensity of environmental light in Piccadilly Circus. Twenty-four High Dynamic Range Images are used sequentially to illuminate a perceptually real synthetic environment, simulating the length of day and night: the photoperiod effect. Through synthetic manipulation, a dynamic wave responds to the changes in lux (illuminance) values throughout the day as both artificial and natural light fill the space. Photographic exposure values are calculated by a virtual light meter, connecting physical actions with simulated experiences. In Camera Lucida, Barthes defines the photograph as never being separate from its referent. Drawing on Peirce’s theory of semiotics, he sees that ‘Every photograph is a certificate of presence’ (Barthes, 1980: 5), describing the relationship between objects and their image as indexical. Through photographing the surroundings of Piccadilly Circus at a specific place and time, I became a recorder of the real. The photograph ‘mechanically repeats what could never be repeated existentially’ (Barthes, 1980: 4), creating an indexical trace of reality. By reacting to light, my camera captured the moment and documented the event.
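Mechanically, the wave’s response can be sketched as a mapping from each hourly illuminance reading to an amplitude, on a log scale since perceived brightness is roughly logarithmic in lux. The hourly figures below are illustrative placeholders, not the values I actually recorded at Piccadilly Circus:

```python
import math

def lux_to_amplitude(lux, lux_min=10.0, lux_max=100000.0):
    """Map an illuminance reading to a 0..1 wave amplitude on a log scale,
    because perceived brightness is roughly logarithmic in lux."""
    t = (math.log10(lux) - math.log10(lux_min)) / \
        (math.log10(lux_max) - math.log10(lux_min))
    return min(1.0, max(0.0, t))          # clamp readings outside the range

# Illustrative hourly readings, midnight to 11pm (placeholder data):
hourly_lux = [80, 60, 50, 40, 60, 200, 1500, 8000, 20000, 40000, 60000, 80000,
              90000, 85000, 70000, 50000, 25000, 8000, 2000, 600, 300, 150,
              120, 90]
amplitudes = [lux_to_amplitude(l) for l in hourly_lux]
```

Keyed per hour, this normalised value is what the simulation interpolates between, so the wave swells towards midday and subsides at night while never leaving its physical bounds.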

While recording the lux values hourly, I began drawing links between my own immediate experience of being there and the fluctuating levels of light all around me. Somehow the luminance range seemed to reflect my engagement with the surrounding environment. During the first couple of hours after midnight, a powerful glow radiating from the digital screens spilled bright artificial light around the junction, illuminating everything as if it were day. People stood facing the huge advertising displays, mesmerised by the power and scale of the installation. It reminded me of Olafur Eliasson’s The Weather Project at Tate Modern some years ago, where the audience could sit and bask in the glorious orange light, fixated on the simulacrum of the universal star. These observations evoked a primordial train of thought about photoperiodism, the length of day and night, and circadian rhythms.

Rather than animating a generic digital lux meter in the film, I was keen to remain true to the dialogue between photography and CGI, so the light meter I modelled was based on an old General Electric design, connecting analogue functions to the digital present. When illuminating the scene I had to manually adjust the f-stop, ISO setting, shutter speed and exposure value of the Maxwell camera to simulate the photographic action, and from these settings I animated the light meter’s needle for each sequential HDRI. It seemed quite strange to be using a virtual camera to calculate the response of a light meter, an inversion of the usual workflow.
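The arithmetic connecting those camera settings to the needle is compact: exposure value combines aperture and shutter as EV = log2(N²/t), offset by ISO, while an incident meter relates illuminance to EV through a calibration constant. A small sketch (C = 250 is a commonly used incident-light calibration constant, assumed here rather than taken from the actual GE meter):

```python
import math

def ev100(f_number, shutter_time, iso=100):
    """Exposure value referenced to ISO 100:
    EV100 = log2(N^2 / t) - log2(ISO / 100)."""
    return math.log2(f_number ** 2 / shutter_time) - math.log2(iso / 100)

def illuminance_to_ev(lux, iso=100, calibration=250.0):
    """Incident-meter relation E = C * 2^EV / ISO, inverted:
    EV = log2(E * ISO / C)."""
    return math.log2(lux * iso / calibration)

# 'Sunny 16' sanity check: f/16 at 1/125 s, ISO 100 comes out at roughly
# EV 15, and 81,920 lux maps back to EV 15 exactly under this calibration.
ev = ev100(16, 1 / 125)
meter_ev = illuminance_to_ev(81920)
```

In the film the flow ran in reverse, as described above: the Maxwell camera settings gave an EV per HDRI, and that EV drove the rotation of the needle.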

I decided to approach the virtual space in the same way as I had the physical space, investigating through the camera my own navigation around Piccadilly Circus. I spent much of my time staring at the concrete slabs surrounding the memorial, noting the textural complexity of every square metre. With this in mind I wanted to recreate that attention to detail within the film, texturing space and object with great precision. Photography’s indexical claim to truth allows the viewer to be situated within the virtual space. Object, illumination and sound all combine to create a synthetic manipulation of the physical space itself, redefining the 24-hour performance I had undertaken. For me this was imperative: how to connect and immerse the audience in an experience driven by the fluctuating intensities of light.

Post MA Ambitions

On finishing this MA I have a clear and considered plan for how to further develop my self-directed area of study while working within a field that merges creativity with technical ability. One of the fortunes of being a part-timer on this course is that I have paralleled my studies alongside working in a creative design and animation company. This has been an important symbiotic relationship, whereby my technical and creative skills have been applied in both commercial and personal contexts.

Further down the line I would be interested in pursuing a PhD, continuing my exploration of perceptually real CGI in relation to the processes of photography. For the time being I intend to work full-time as a 3D designer within the company, specialising predominantly in lighting and rendering; my self-directed area of study at Camberwell, focusing on this area, has been foundational in achieving this position. Not only is taking my project to the next level of major importance to me, but so is understanding the gravity of the reflection and self-analysis that must continue alongside my practice if I am to progress at a consistent pace. How I develop my practical and theoretical objectives will remain rooted in the same methodologies learnt and applied throughout this MA.


References

Barthes, R. (1980) Camera Lucida. London: Vintage.
Debevec, P. and Malik, J. (1997) ‘Recovering High Dynamic Range Radiance Maps from Photographs’, in SIGGRAPH ’97.
Elsaesser, T. and Buckland, W. (2002) Studying Contemporary American Film. New York: Oxford University Press.
Lister, M. et al. (2003) New Media: A Critical Introduction. London: Routledge.
Manovich, L. (1992) ‘Assembling Reality: Myths of Computer Graphics’, Afterimage, 20(2): 12-14.
Prince, S. (1996) ‘True Lies: Perceptual Realism, Digital Images and Film Theory’, Film Quarterly, 49(3): 27-37.

