Archive for November, 2009


November 25, 2009

Simulation of two balls colliding, made to look like a high-speed capture at 1000 fps.

Playblast from the camera’s perspective within Maya, showing the polygonal objects colliding.



November 9, 2009

PERIPETICS, OR THE INSTALLATION OF AN IRREVERSIBLE AXIS ON A DYNAMIC TIMELINE is a high-end computer-generated moving-image short directed by Zeitguised. I came across an interview with one of the directors, Henrik Mauler, on the website Ventilate, in which he sheds some light on the process and on the accessibility of CGI’s role in the arts. The work was exhibited last year at the Zirkel Gallery and entails “six imaginations of disoriented systems that take a catastrophic turn, including the evolution of educational plant-body-machine models and liquid building materials.”


Please tell us about the founding of Zeitguised.

We founded Zeitguised in early 2001, as an effort to channel our common and disparate interests (art, architecture, sculpture, fashion, design) into a field that could explore each of them and test the boundaries of what these areas meant to us.

How has Zeitguised progressed since completion of its first project?

We started out doing short independent films, installation projects and music clips to test the waters. We didn’t have clear intentions; we sort of meandered through the terrain that came up in front of us. We started on some commercial work in 2003, but we probably were too resilient and always had that independent itch. In 2006 we started to do this kind of work for a living and have done so since, without cutting out time for experimental work.


Would you say that Peripetics speaks, in part, to the creation of art for art’s sake, without the restraint of accelerated timelines?

Computer graphics, like photography before it, is on the verge of becoming an “accepted” technique in the contemporary art world, as every imaging technique is, naturally. So on the one hand the technical side explores the self-imposed boundaries of CG; on the other, it deals with the accessibility of art, its representation in galleries, and the circulating images thereof. Other than that, it is a very personal stance towards producing imagery: a more structural approach where relations and changes of shapes, patterns, colors, geometries and motions become the narrative and the protagonists.

Please describe the “… new, hybrid mix of scripted motion and 3D CG realism” created for this piece. Does this process relate to generative art?

Yes. There is a notion of generative geometries and motions that seduces you into feeling above and beyond mere artistic work with its traditional techniques. You get a sense of meta-creation, where the rules that you have to find and put into place are the start for new and unintended forms that are more interesting than what you could conceive and shape yourself. Some part of that is of course a complete delusion, yet it is still fun, at least for us, with the crossover between “handcrafted” CG and the algorithmic self-runners.

The Peripetics ‘making-of’ shows an intense amount of experimentation. How much time was given to experimentation and is this sort of exploration generally allowed within client projects?

Peripetics was conceived as a concept a year before completion, but back then it was much different from how it turned out in the end. We stopped thinking about it for a while and tended to a string of commercial projects. Once we got back into it, we had some months of experimentation during a quiet summer, and produced the scenes and sound within a few weeks. So production time was almost negligible compared to the experimentation time. To think that sort of time would apply within a “normal” client project is quite surely not a good idea.


What kind of render farm solution was used for Peripetics?

Just a very small in-house render farm; not much of a solution, rather a rig. The production was entirely on our budget, so our own machines had to crunch the job. We layered it nicely so we were rendering at night, and while one scene was being built, another was being rendered. We couldn’t re-render much, because render times were high. We also didn’t do much post; it basically is how it came out of the renderer, with some level correction.

Please describe the client brief and the creative process executed.

Universal Everything came to us with an open project, where we only had to make sure to follow the beat structure of a given musical score. We had a lot of fun because we were able to work on an idea we had been playing with for a while, and they gave us all the freedom to explore and develop it in many directions and iterations. I can recall weeks of unrestrained experimentation. Luckily, MTV liked it as well in the end, and they made it easy to settle for some of their creative input, like the heart shape. All in all, a wonderful project.

Do you allow for any additional design development once production starts?

We did with this one, yet unfortunately in the world of commercials that is still not the norm. Usually, deadlines are too tight; sometimes we work overtime and sneak improvements into the script and design.


Were there any outstanding technical challenges with the dynamics?

We used Cinema 4D for this, whose dynamics, as everyone knows, are near worthless. Yet we were working with a great plugin that handles large numbers of objects easily. But we ended up with a construction of more than 10,000 objects, which left us working outside of comfortable speeds once we had to bake the dynamics into the file near the end of the process.

What kind of render farm solution did you use?

It was rendered in full HD at 60 fps, so we had to use an external, independent professional render farm in Liverpool, UK, which was absolutely great.


What was the last great book you read, and the last great movie you saw?

Book: Peter Sloterdijk’s Spheres trilogy; contemporary philosophy that is pragmatic, a remarkably funny polemic, and inspiring for all aspects of aesthetics and design.

Other book: Pattern Recognition by William Gibson.

Film: not a film, but the entire “Twin Peaks” series by David Lynch is one of the richest “cinematic” experiences in terms of concept, narration and aesthetics. It is so precise and at the same time incredibly open, with countless “attachment points”. We never get tired of watching it.

Full Interview taken from



November 8, 2009

In a previous blog post I mentioned the problems I was encountering when matchmoving some video in Maya Live. It turned out that the CMOS sensor in my camera produced a rolling shutter effect that made tracking very difficult, as each frame slightly bent and warped with the movement of the camera. I gave up for a while, intending to come back to this when I got my hands on a different camera. However, I got tired of waiting and figured that if the camera movement was kept to a minimum, some degree of success might follow. I was right. Below is my step-by-step process for creating a smooth track from footage with slight handheld camera movement.
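As an aside, the rolling shutter distortion mentioned above can be approximated with a simple first-order model: each image row is read out slightly later than the one above it, so horizontal camera motion during the readout shears vertical features. A minimal sketch of that model (the pixel velocity and readout time below are illustrative numbers, not measurements from my camera):

```python
# First-order rolling shutter model: a feature's horizontal position is
# shifted by the camera's horizontal pixel velocity multiplied by the
# fraction of the readout time elapsed when its row is sampled.
def rolling_shutter_x(x, row, total_rows, vx_px_per_s, readout_s):
    """Apparent x position (px) of a feature at `row` during sensor readout."""
    return x + vx_px_per_s * (row / total_rows) * readout_s

# A vertical edge at x = 100 in a 1080-row frame, with the camera panning
# at 500 px/s and a 30 ms sensor readout (hypothetical values):
top = rolling_shutter_x(100, 0, 1080, 500, 0.030)        # 100.0 (no skew)
bottom = rolling_shutter_x(100, 1079, 1080, 500, 0.030)  # ~115, i.e. ~15 px of shear
```

This is also why slower camera moves help: halving the pan speed halves the shear, keeping each frame closer to the rigid-scene geometry the tracker assumes.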

1. Load Maya Live in the Plug-in Manager and create a New Matchmove Scene.

2. Browse for the Image Sequence of the video that needs tracking and click the green Ready To Track icon.


3. Change from Setup to Track in the drop-down box. Click the red arrow icon and Create to insert the first tracking point into the scene.


4. Move tracking point1 onto a spot or well-defined area where there is a sharp contrast in tonal range. This choice is important for the reliability of the track and must be made carefully. Click Start Track.

5. Once point1 has tracked the full length of the clip, the results will show in the graph window, where the data is represented by green if successful, yellow if moderate, or red for a failed attempt. In my case, tracking point2 only reached frame 150 before losing its mark. If this happens, return the tracking marker to the point it’s following and press Start Track to continue.

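The advice in step 4 about sharp tonal contrast follows from how 2D pattern trackers generally work. Maya Live’s internals aren’t documented here, but trackers of this kind typically slide a reference patch across each new frame and score candidate positions by normalized cross-correlation: a high-variance patch produces a sharp, unambiguous peak, while a flat patch scores near zero everywhere. A toy one-dimensional sketch with made-up pixel values:

```python
from math import sqrt

def ncc(a, b):
    """Normalized cross-correlation of two equal-length patches (1.0 = perfect match)."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def track_patch(patch, scanline):
    """Return the offset in `scanline` where `patch` correlates best."""
    scores = [ncc(patch, scanline[i:i + len(patch)])
              for i in range(len(scanline) - len(patch) + 1)]
    return max(range(len(scores)), key=scores.__getitem__)

# A high-contrast patch relocated inside a new "frame" of pixel values:
patch = [10, 200, 10, 200]
frame = [12, 11, 13, 10, 200, 10, 200, 12, 11]
track_patch(patch, frame)  # 3 (the patch reappears at offset 3)
```

A patch with no variance makes the denominator vanish, so every position scores 0.0; that kind of ambiguity is one way a tracker ends up losing its mark, as in step 5.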

6. When the coloured bar at the bottom of the graph window is mainly green, Maya is ready to Solve the scene. Around 10 tracking points are generally needed to reach this stage.


7. Change from Track to Solve in the drop-down box and choose the Survey option. It’s important now to create a plane that will line up with the ground in the video. After selecting all the tracking points on the floor, change the Constraint Type to Plane and click Create. Make sure that Registration Only is ticked.

8. Select two tracking points from the scene and change the Constraint Type to Distance. Enter a number representing the physical distance between the two points, and uncheck Registration Only for this constraint.
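The distance constraint in step 8 is what gives the solve its real-world scale: without it the reconstruction is only correct up to an arbitrary uniform scale factor. Conceptually, the solver scales the scene so that the gap between the two chosen points matches the number you entered. The arithmetic amounts to this (a sketch only; Maya’s solver internals aren’t exposed, and the coordinates are made up):

```python
from math import dist  # Python 3.8+

def scene_scale(p1, p2, measured_distance):
    """Uniform scale factor that makes the solved gap between two tracked
    points (3D coordinates) equal the measured real-world distance."""
    return measured_distance / dist(p1, p2)

# Two solved points 0.5 scene units apart that were measured to be
# 2 m apart on set (hypothetical numbers):
s = scene_scale((0.0, 0.0, 0.0), (0.5, 0.0, 0.0), 2.0)  # 4.0: scale the scene up 4x
```

Measuring that on-set distance accurately (e.g. between two markers taped to the floor) is worth the effort, since any error in it scales every CG element placed into the shot.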

9. The last constraint we need to create is a Point constraint that will snap the blue plane to one of the tracking markers in the center of our scene.


10. Once these three constraints have been made, click back onto the Solve icon; the tracking data will be solved and parented to the 3D camera.

11. If the solve is successful you will see the blue plane fixed to the ground when you play back the video. The 3D camera in Maya now accurately copies the movement of the video camera, allowing realistic integration of CGI over the live-action footage.
