Adventures in Fisheye Lenses

During the past couple of weeks, we have been doing some experiments for The Witness that involve pre-rendering a scene with a fisheye lens and then using that render during gameplay. If I were to say exactly what this is for, it would be a massive spoiler, so I'll just say it's for a kind of environment-mapped rendering.

The general requirement was that we needed to capture a pre-rendered scene with a really wide viewing angle. At first I was confused about the technical aspects of the problem (hopefully forgivable, since I spend most of my time thinking about gameplay these days, so my tech is a little rusty), and I thought we might need a linear projection to solve the specific problem under consideration. So Shannon made a mock-up scene in 3D Studio MAX and we started making pre-renders with increasingly wide camera angles, in order to test our special environment mapping. Here are three shots of the same scene with fields of view of 90 degrees, 120 degrees, and 150 degrees:






(Click on the images to see actual sizes.)

It's clear that as the field of view becomes wider, we can see more of the scene at once, laterally. This is good for our application, because we want to see as much as possible! But at the same time, when the angle is wide, objects at the center of the scene appear farther away, occupying fewer pixels in the rendered image. This is bad, because it means we only have low-resolution imagery for the most important things in the scene! (These screenshots are also badly artifacted because, for some reason, the AVI encoder we were using interlaces its output by default, and we couldn't find a way to turn that off without using a different codec! Hint to anyone making a video codec, anywhere in the past or future of this universe or any other: NOBODY LIKES INTERLACING. IT IS UGLY AND CAUSES A LOT OF PROBLEMS. DON'T DO IT. Please do your part to make the world a better place.)

It was around this time that I realized we didn't have to use a linear projection, which was good -- if we can warp the image however we need to, then we can have the important parts of the scene landing really big in the middle of the texture, using lots of pixels, and squeeze the rest of the scene nonlinearly around the edges of the bitmap, using fewer pixels.

A 180-degree fisheye lens seemed like the right tool to do this. For other reasons that I won't go into, another nice property of the fisheye lens (as opposed to some arbitrary distortion) is that it is physically plausible -- you can mount a fisheye lens onto a physical camera and generate a similar image.

We found some plugins for Mental Ray (a renderer that you can use from Max or Maya) that seemed to do the trick. Here's the output of one of the plugins we found (this is from Jan Sandström's JS_fisheye.c, which appears to be a modification of a simple fisheye lens shader in the Mental Ray reference manual):

Unfortunately, the field of view in the image we saved is less than 180 degrees, which makes it hard to compare directly with the later images, but just look at the basic character of it for now.

At first I assumed that this shader was implementing actual fisheye lens math, though there were no real comments to go on. Here's the math used in the stripped-down Mental Ray Reference Manual:

    mi_vector_to_camera(state, &camdir, &state->dir);
    t = state->camera->focal / -camdir.z /
           (state->camera->aperture/2);
    x = t * camdir.x;
    y = t * camdir.y * state->camera->aspect;
    r = x * x + y * y;
    if (r < 1) {
        dir.x = camdir.x * r;
        dir.y = camdir.y * r;
        dir.z = -sqrt(1 - dir.x*dir.x - dir.y*dir.y);
        mi_vector_from_camera(state, &dir, &dir);
        return(mi_trace_eye(result, state, &state->org, &dir));
    }

(JS_fisheye.c has more parameters but is basically the same thing.)

Essentially, this code takes an input texture coordinate and converts it into a view vector in 3D space. In order to use the rendered image from within a shader in the game, I need to be able to invert this function: turn it from a view vector into a 2D texture coordinate. But when I tried to do this, I had all kinds of problems: the math was messy and ugly. I got the inkling that this shader may not be acting as a physical lens really would, but rather, was a 2D image-warping effect that gives the same general impression as a fisheye lens.

(The reason for thinking this is that light behaves bidirectionally; there's no difference between light moving forward and light moving backward. So if something is simple when light is going in one direction, it ought to be simple in the other direction too. If the equation starts looking a lot more complicated, that is sort-of a violation of the way that the physical universe works, mathematically, and so one starts thinking that something is wrong. That was the intuition I had, anyway.)

Over on this page I found a nice reference for the way that a real fisheye lens bends light. Indeed the equation is very simple (though as it is written on that page, it's not quite in the proper form for us to use). Based on that, I wrote a new Mental Ray shader that looks like this:

    mi_vector_to_camera(state, &camdir, &state->dir);

    // Normalized raster coordinates in [-1, 1].
    miScalar x = state->raster_x / state->camera->x_resolution * 2 - 1;
    miScalar y = state->raster_y / state->camera->y_resolution * 2 - 1;

    miScalar r2 = x * x + y * y;

    // Only points inside the unit disk map to view directions.
    if (r2 < 1) {
        miScalar c = 1 - r2;
        miScalar s = sqrtf(2 - r2);

        // (x*s, y*s, -c) is always unit length:
        // r2*(2 - r2) + (1 - r2)^2 == 1.
        dir.x = x * s;
        dir.y = y * s;
        dir.z = -c;

        mi_vector_from_camera(state, &dir, &dir);
        return mi_trace_eye(result, state, &state->org, &dir);
    }

Superficially it doesn't look too different from the previous example, but in fact this version refracts light in a physical way and is invertible. Here's what a render looks like using this lens:


As I mentioned, it's hard to compare with the earlier shot because the field-of-view is different (sorry about that), but I think it's evident that the nature of the distortion is fairly different between the two shots. In our version, it feels milder.

Here's the runtime pixel shader code that inverts it:

    // (xprime, yprime, zprime) is the view vector in the same space where the environment map was rendered.

    float c = abs(zprime);

    video_uv.xy = float2(xprime, yprime);
    float scale = 1 / sqrt(1 + c);
    video_uv.xy *= scale;

    // Now we have video_uv in [-1, 1]; for [0, 1], scale appropriately.

It's very simple, and the only real math required is a multiplication by a reciprocal square root (very fast in shaders!). So that was pleasant.

Once we had these both hooked up and working, the scene rendered perfectly, and we knew that the effect we wanted to create was achievable. By way of improving it, Ignacio suggested using a cylinder map instead of a fisheye lens, because that is better for this particular shot: we need to render a wide room, and see a lot laterally, but the vertical span is roughly constant and much shorter than the horizontal span. (The fisheye lens is more general, and we can use it for any scene, but to optimize texture resolution for specific cases, we might go to other things like the cylinder map. You can think of the cylinder map as being fisheyed along only one axis.)

Here's the cylinder map version of the scene:

This is what we are going with for now. Problem solved, job seemingly well-done.

Ignacio put the cylinder mapping code into the same shader as the fisheye lens, and also added a latitude-longitude distortion.

Our work was sped up drastically by the fact that we were able to find JS_fisheye.c free on the internet, as well as information about the way a fisheye lens works. So here's our attempt to give back a little: our final Mental Ray shader, and the .mi file that defines the interface for it:


witness_fisheye.c
witness_fisheye.mi


If you are an experienced graphics programmer asking yourself why the hell didn't they just use a cube map for this, well, there is a very good answer to that, but it involves the spoiler. You'll see when the game is released!

20 Comments:

  1. i know what the spoiler is: the entire game is focused on a fish! yay!!!!

  2. very interesting! surprisingly i did understand a lot of that… i got most of it! but i don’t know how other first person views work. this sounds like a wider view, wider than any other game has done, kind of like how people look. b/c i don’t care what people say but *I* view things in a way wider than its taller, kind of like a 180 view. if the first person view for TW is just like this, is going to be very interesting to play a game looking at things like that… did you want this for specific kinds of puzzles or those this actually influence gameplay? who knows?! but it’s lookin awesome.(really cool looking mock-up scene btw )

    good luck with other experiments and innovative ideas, jonathan and team!

  3. Oh, the final rendered scene would be just like you expect to see.

    What we are talking about here is for, say, drawing reflections on reflective objects (that’s not actually what we are using it for but it is similar in implementation.)

  4. Interesting. This immediately brought to mind Wouter van Oortmerssen’s Fisheye Quake experiments: http://strlen.com/gfxengine/fisheyequake/compare.html

  5. all this technical stuff about shadows, lighting, mapping, photons, reflections and points of view is like those scientist that make experiments with all this stuff and then all the rooms and in-door spaces look like those render boxes? i’m guessing the place holder graphics objects and rooms intentionally look like bunkers to look like Cornell Boxes to make all the drawing reflections on reflective objects experiments easy to determine what is right and wrong, but it’s just lovely looking how the whole world looks like a render farm or experiment for computing graphics and all that testing stuff…

  6. Nothing special to comment. I just wanted to say I really like reading about these kind of smaller problems that a game designer deals with. I know how satisfying it is to solve one. Keep up the writing please! :)

  7. What I’m wondering is what the massive spoiler could be.

    By my limited understanding, the player does not normally view the world through this, it’s just a technique used for rendering things like shadows, reflections etc. What could be so spoiler-ey about that? Perhaps some secret message that’s only revealed when you view the scene through a fish-eye lense? Like anamorphic drawings? (http://tinyurl.com/6j2msq9)

    I can already tell this game is going to be something special. Keep up the awesome work!

    • If they are encoding a fish eye view of actual gameplay into an AVI, maybe this is later (or earlier in game time…?) projected as reflections and/or shadows into the live scene. So you *witness* yourself (or someone else) doing things at a different point in time, like some kind of spectral apparition.


  8. Just thought I’d drop a link to a paper that seemed useful when I was working on something in a similar vein a while back.

    http://artis.imag.fr/Publications/2008/GHFP08/

    At the time, I was working on computing the fisheye view from each polygon in the scene along its normal to figure out what it could see. This “cube” of data then got passed along to a coworker for more calculations, so my experience is somewhat limited.

    I ended up going with a straightforward cube map method in the end as it was fast enough for our needs, but it sounds like cubemaps are out for you guys. Not sure if the non-linear projection shader code/paper at the link will be of any use, but it was some interesting reading at least :P.

  9. Where can I find a version of your fisheye shader? Is it possible to send it to me, and is it possible to work it in Maya?

    Thanks, and nice work guys.

  10. We link to the shader in the post, so I am not sure what else you need?

  11. i just spent 3 hours trying to get this to work with 3ds max 2012 x64. i failed. this is also the first mental ray shader i’ve ever tried to use. i think it will be perfect for my project, but i have no idea what i’m doing wrong. I even tried using the original JS_fisheye.dll and .mi and got errors…

    i got all this code from autodesk’s mental ray 2012 help file…

    i compiled your .c as:
    cl /c /O2 /MD /W3 -DWIN_NT -DBIT64 witness_fisheye.c

    then tried to link the obj with:
    link /nodefaultlib:LIBC.LIB /OPT:NOREF /DLL /OUT:witness_fisheye.dll witness_fisheye.obj “C:\Program Files\Autodesk\3ds Max 2012\mentalimages\dev\lib64\shader.lib”

    there i got fatal errors because four mi_ functions were not recognized. it did not error using the 32 bit shader.lib, but then it wouldn’t load. i did also change the include file in the .mi to match my file name.

    the JS_fisheye loads, but on render, i get error 051011: shader “JS_fisheye” not found.

    i hate my life. >.< am i doing something wrong or is it this stupid 2012 version of max?? any help would be greatly appreciated!!

    • We are using it in Maya 2011. I don’t know how it works in Max 2012, maybe they changed the interface and some new things are necessary? This could be a good learning opportunity…

      • after several hours of research, i managed to get it to work in Max. my ignorance of using custom shaders kept me from realizing that i was trying to compile for a 64-bit environment with a 32-bit compiler. >.< after i acquired the proper tools, everything worked fine. thank you for posting this. i have been searching for something similiar for a few months for my project. now, on to the animating!

        • Is it possible to send the .dll to me, i’m not a programer and i don’t have any idea of compile, i’m just need the shader to my project in 3Ds max, thnx

  12. Crazy question… But what are the chances that you’ll include fisheye support within the final launch of the game? Many planetarium domes would definitely be interested in playing the game.

  13. Here’s a question about the invertability of your fisheye code.

    In your pixel-to-world-ray shader, you have this :
    miScalar s = sqrtf(2 - r2);

    In your inverted (world-point-to-pixel) shader, you have this :
    float scale = 1 / sqrt(1 + c);

    Plots of each show that, indeed, this is not a perfect inversion. If, however, I change the pixel-to-world-ray shader to sqrtf(1 - r2), it is a perfect inversion.

    I presume you’ve increased the “fisheye’ness” of the pixel-to-world-ray shader since maybe it looks nicer for your needs, but wanted to make sure you take this modification into account in the inverted code.

    Thanks for the nice post!

  14. Finished the game, and I’m still not sure what things had a fisheye effect.

