{"id":729,"date":"2011-01-22T18:32:12","date_gmt":"2011-01-23T01:32:12","guid":{"rendered":"http:\/\/the-witness.net\/news\/?p=729"},"modified":"2011-01-22T18:37:48","modified_gmt":"2011-01-23T01:37:48","slug":"adventures-in-fisheye-lenses","status":"publish","type":"post","link":"http:\/\/the-witness.net\/news\/2011\/01\/adventures-in-fisheye-lenses\/","title":{"rendered":"Adventures in Fisheye Lenses"},"content":{"rendered":"<p>During the past couple of weeks, we have been doing some experiments for <em>The Witness<\/em> that involve pre-rendering a scene with a fisheye lens and then using that render during gameplay.  If I were to say exactly what this is for, it would be a <strong>massive spoiler<\/strong>, so I'll just say it's for a kind of environment-mapped rendering.<\/p>\n<p>The general requirement was that we need to be able to capture a pre-rendered scene with a really wide viewing angle.  At first I was confused about the technical aspects of the problem (hopefully forgivable, since I spend most of my time thinking about gameplay these days, so my tech is a little rusty), so I thought that we might need to have a linear projection in order to solve the specific problem under consideration.  So Shannon made a mock-up scene in 3D Studio MAX and we started making prerenders with increasingly wide camera angles, in order to test our special environment mapping.  
Here are three shots of the same scene with fields-of-view of 90 degrees, 120 degrees, and 150 degrees:<br \/>\n<P><br \/>\n<a href=\"http:\/\/the-witness.net\/news\/wp-content\/uploads\/2011\/01\/vlcsnap-2011-01-22-16h16m58s53.png\"><img src=\"http:\/\/the-witness.net\/news\/wp-content\/uploads\/2011\/01\/vlcsnap-2011-01-22-16h16m58s53-512x384.png\" alt=\"\" title=\"linear_90degrees\" width=\"307\" height=\"269\" class=\"aligncenter size-large wp-image-730\" \/><\/a><br \/>\n<a href=\"http:\/\/the-witness.net\/news\/wp-content\/uploads\/2011\/01\/vlcsnap-2011-01-22-16h17m22s200.png\"><img src=\"http:\/\/the-witness.net\/news\/wp-content\/uploads\/2011\/01\/vlcsnap-2011-01-22-16h17m22s200-512x204.png\" alt=\"\" title=\"linear_120degrees\" width=\"512\" height=\"204\" class=\"aligncenter size-large wp-image-731\" \/><\/a><br \/>\n<a href=\"http:\/\/the-witness.net\/news\/wp-content\/uploads\/2011\/01\/vlcsnap-2011-01-22-16h19m08s237.png\"><img src=\"http:\/\/the-witness.net\/news\/wp-content\/uploads\/2011\/01\/vlcsnap-2011-01-22-16h19m08s237-512x204.png\" alt=\"\" title=\"linear_150degrees\" width=\"512\" height=\"204\" class=\"aligncenter size-large wp-image-732\" \/><\/a><br \/>\n<P><br \/>\n<em>(Click on the images to see actual sizes.)<\/em><\/p>\n<p><!--more--><\/p>\n<p>It's clear that as the field of view becomes wider, we can see more of the scene at once, laterally.  This is good for our application, because we want to see as much as possible!  But at the same time, when the angle is wide, objects at the center of the scene appear further away, occupying fewer pixels in the rendered image.  This is bad because it means we only have low-resolution imagery for the most important things in the scene!  <em>(These screenshots are also badly artifacted because for some reason the avi encoder we were using interlaces the output by default, and we couldn't find a way to not interlace, without using a different codec!  
Hint to anyone making a video codec, anywhere in the past or future of this universe or any other: <strong>NOBODY LIKES INTERLACING.  IT IS UGLY AND CAUSES A LOT OF PROBLEMS.  DON'T DO IT.  Please do your part to make the world a better place.<\/strong>)<\/em><\/p>\n<p>It was around this time that I realized we didn't have to use a linear projection, which was good -- if we can warp the image however we need to, then we can have the important parts of the scene landing really big in the middle of the texture, using lots of pixels, and squeeze the rest of the scene nonlinearly around the edges of the bitmap, using fewer pixels.<\/p>\n<p>A 180-degree fisheye lens seemed like the right tool to do this.  Another nice property of the fisheye lens (as opposed to some arbitrary distortion), one that matters for other reasons I won't go into, is that it is physically plausible -- you can mount a fisheye lens onto a physical camera and generate a similar image.<\/p>\n<p>We found some plugins for Mental Ray (a renderer that you can use from Max or Maya) that seemed to do the trick.  
Here's the output of one of the plugins we found (this is from <a href=\"http:\/\/pixero.com\/downloads_mr.html\" onclick=\"_gaq.push(['_trackEvent', 'outbound-article', 'http:\/\/pixero.com\/downloads_mr.html', 'Jan Sandstr&ouml;m\\'s JS_fisheye.c']);\" >Jan Sandstr&ouml;m's JS_fisheye.c<\/a>, which appears to be a modification of a simple fisheye lens shader in the Mental Ray reference manual):<\/p>\n<p><a href=\"http:\/\/the-witness.net\/news\/wp-content\/uploads\/2011\/01\/vlcsnap-2011-01-22-16h40m09s91.png\"><img src=\"http:\/\/the-witness.net\/news\/wp-content\/uploads\/2011\/01\/vlcsnap-2011-01-22-16h40m09s91-384x384.png\" alt=\"\" title=\"JS_fisheye\" width=\"384\" height=\"384\" class=\"aligncenter size-large wp-image-742\" \/><\/a><\/p>\n<p>Unfortunately the field of view in this image we saved is less than 180 degrees, which makes it hard to directly compare with the later images, but just look at the basic character of it for now.<\/p>\n<p>At first I assumed that this shader was implementing actual fisheye lens math, though there were no real comments to go on.  Here's the math used in the stripped-down Mental Ray Reference Manual:<\/p>\n<p><code><\/p>\n<pre>\r\n    mi_vector_to_camera(state, &camdir, &state->dir);\r\n    t = state->camera->focal \/ -camdir.z \/\r\n           (state->camera->aperture\/2);\r\n    x = t * camdir.x;\r\n    y = t * camdir.y * state->camera->aspect;\r\n    r = x * x + y * y;\r\n    if (r < 1) {\r\n        dir.x = camdir.x * r;\r\n        dir.y = camdir.y * r;\r\n        dir.z = -sqrt(1 - dir.x*dir.x - dir.y*dir.y);\r\n        mi_vector_from_camera(state, &dir, &dir);\r\n        return(mi_trace_eye(result, state, &state->org, &dir));\r\n    }\r\n<\/pre>\n<p><\/code><\/p>\n<p>(JS_fisheye.c has more parameters but is basically the same thing.)<\/p>\n<p>Essentially, this code takes an input texture coordinate and converts it into a view vector in 3D space.  
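<\/p>\n<p>To spell out what that forward mapping is doing (this is my reading of the camera conventions, so take it as a sketch rather than gospel): x and y come out proportional to the tangent of the incoming ray's off-axis angle, the variable named r actually holds the <em>squared<\/em> radius x*x + y*y, and the outgoing direction's lateral part is camdir scaled by that squared radius.  Work it through and the outgoing angle satisfies sin(theta_out) = sin(theta_in) * (k * tan(theta_in))^2, where k is a constant determined by the focal length and aperture.  Here is a small standalone C program (hypothetical, with made-up variable names that only loosely follow the manual code) that checks that closed form against the step-by-step arithmetic; inverting it means solving a cubic in sin(theta_in), which already hints that the reverse direction won't be pretty:<\/p>\n<p><code><\/p>\n<pre>\r\n#include &lt;assert.h&gt;\r\n#include &lt;math.h&gt;\r\n\r\n\/\/ For the manual's lens code (taking aspect = 1 and k = focal \/ (aperture\/2) = 1),\r\n\/\/ an incoming ray at angle theta off the axis leaves with\r\n\/\/     sin(theta_out) = sin(theta) * (k * tan(theta))^2.\r\nint main(void) {\r\n    const double pi = 3.14159265358979323846;\r\n    const double k = 1.0;\r\n    for (int deg = 1; deg <= 30; deg++) {\r\n        double theta = deg * pi \/ 180.0;\r\n        double phi = 0.7;                    \/\/ arbitrary azimuth\r\n\r\n        \/\/ The manual's computation, step by step:\r\n        double camx = sin(theta) * cos(phi);\r\n        double camy = sin(theta) * sin(phi);\r\n        double camz = -cos(theta);\r\n        double t = k \/ -camz;                \/\/ focal \/ -camdir.z \/ (aperture\/2)\r\n        double x = t * camx;\r\n        double y = t * camy;\r\n        double r = x*x + y*y;                \/\/ named r, but it is the radius squared\r\n        double dirx = camx * r;\r\n        double diry = camy * r;\r\n\r\n        \/\/ The closed form agrees with the step-by-step version:\r\n        double lateral = sqrt(dirx*dirx + diry*diry);\r\n        assert(fabs(lateral - sin(theta) * pow(k * tan(theta), 2.0)) < 1e-12);\r\n    }\r\n    return 0;\r\n}\r\n<\/pre>\n<p><\/code><\/p>\n<p>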
In order to use the rendered image from within a shader in the game, I needed to be able to invert this function: to go from a view vector back to a 2D texture coordinate.  But when I tried to do this, I had all kinds of problems: the math was messy and ugly.  I got the inkling that this shader might not be acting as a physical lens really would, but rather was a 2D image-warping effect that gives the same general impression as a fisheye lens.<\/p>\n<p>(The reason for thinking this is that light behaves bidirectionally; there's no difference between light moving forward and light moving backward.  So if something is simple when light is going in one direction, it ought to be simple in the other direction too.  If the equation starts looking a lot more complicated, that is sort of a violation of the way that the physical universe works, mathematically, and so one starts thinking that something is wrong.  That was the intuition I had, anyway.)<\/p>\n<p>Over on <a href=\"http:\/\/wiki.panotools.org\/Fisheye_Projection\" onclick=\"_gaq.push(['_trackEvent', 'outbound-article', 'http:\/\/wiki.panotools.org\/Fisheye_Projection', 'this page']);\" >this page<\/a> I found a nice reference for the way that a real fisheye lens bends light.  Indeed the equation is very simple (though as it is written on that page, it's not quite in the proper form for us to use).  
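<\/p>\n<p>A quick note on that math, because it took me a minute to connect it to the code: the mapping in question appears to be the \"equisolid angle\" fisheye, R = 2 f sin(theta\/2).  Normalize it so that R = 1 at theta = 90 degrees -- that is, R = sqrt(2) * sin(theta\/2) -- and the half-angle identities give cos(theta) = 1 - R*R and sin(theta) = R * sqrt(2 - R*R), which is exactly where the c and s in the shader below come from.  Here's a tiny standalone C check of those identities (just a sanity test, not game code):<\/p>\n<p><code><\/p>\n<pre>\r\n#include &lt;assert.h&gt;\r\n#include &lt;math.h&gt;\r\n\r\n\/\/ Equisolid fisheye, normalized so the unit image circle spans 180 degrees:\r\n\/\/     R = sqrt(2) * sin(theta\/2)\r\n\/\/ The half-angle identities give the closed forms used in the lens shader:\r\n\/\/     cos(theta) = 1 - R*R\r\n\/\/     sin(theta) = R * sqrt(2 - R*R)\r\nint main(void) {\r\n    const double pi = 3.14159265358979323846;\r\n    for (int deg = 0; deg <= 90; deg++) {\r\n        double theta = deg * pi \/ 180.0;\r\n        double R = sqrt(2.0) * sin(theta \/ 2.0);\r\n        assert(fabs(cos(theta) - (1.0 - R*R)) < 1e-12);\r\n        assert(fabs(sin(theta) - R * sqrt(2.0 - R*R)) < 1e-12);\r\n    }\r\n    return 0;\r\n}\r\n<\/pre>\n<p><\/code><\/p>\n<p>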
Based on that, I wrote a new Mental Ray shader that looks like this:<\/p>\n<p><code><\/p>\n<pre>\r\n    mi_vector_to_camera(state, &camdir, &state->dir);\r\n    miScalar x = state->raster_x \/ state->camera->x_resolution * 2 - 1;\r\n    miScalar y = state->raster_y \/ state->camera->y_resolution * 2 - 1;\r\n    \r\n    miScalar r2 = x * x + y * y;\r\n\r\n    if (r2 < 1) {\r\n        miScalar c = 1 - r2;\r\n        miScalar s = sqrtf(2 - r2);\r\n\r\n        dir.x = x * s;\r\n        dir.y = y * s;\r\n        dir.z = -c;  \r\n\r\n        mi_vector_from_camera(state, &dir, &dir);\r\n        return mi_trace_eye(result, state, &state->org, &dir);\r\n    }\r\n<\/pre>\n<p><\/code><\/p>\n<p>Superficially it doesn't look too different from the previous example, but in fact this version refracts light in a physical way and is invertible.  Here's what a render looks like using this lens:<br \/>\n<P><br \/>\n<a href=\"http:\/\/the-witness.net\/news\/wp-content\/uploads\/2011\/01\/vlcsnap-2011-01-22-16h40m39s8.png\"><img src=\"http:\/\/the-witness.net\/news\/wp-content\/uploads\/2011\/01\/vlcsnap-2011-01-22-16h40m39s8-384x384.png\" alt=\"\" title=\"vlcsnap-2011-01-22-16h40m39s8\" width=\"384\" height=\"384\" class=\"aligncenter size-large wp-image-743\" \/><\/a><\/p>\n<p>As I mentioned, it's hard to compare with the earlier shot because the field-of-view is different (sorry about that), but I think it's evident that the nature of the distortion is fairly different between the two shots.  
In our version, it feels milder.<\/p>\n<p>Here's the runtime pixel shader code that inverts it:<\/p>\n<p><code><\/p>\n<pre>\r\n    \/\/ (xprime, yprime, zprime) is the view vector in the same space where the environment map was rendered.\r\n\r\n    float c = abs(zprime);\r\n\r\n    video_uv.xy = float2(xprime, yprime);\r\n    float scale = 1 \/ sqrt(1 + c);\r\n    video_uv.xy *= scale;\r\n\r\n    \/\/ Now we have video_uv in [-1, 1]; for [0, 1], scale appropriately.\r\n<\/pre>\n<p><\/code><\/p>\n<p>It's very simple, and the only real math required is a multiplication by a reciprocal square root (very fast for shaders!)  So that was pleasant.<\/p>\n<p>Once we had these both hooked up and working, the scene rendered perfectly, and we knew that the effect we wanted to create was achievable.  By way of improving it, Ignacio suggested using a cylinder map instead of a fisheye lens, because that is better for this particular shot: we need to render a wide room, and see a lot laterally, but the vertical span is roughly constant and much shorter than the horizontal span.  (The fisheye lens is more general, and we can use it for any scene, but to optimize texture resolution for specific cases, we might go to other things like the cylinder map.  You can think of the cylinder map as being fisheyed along only one axis.)<\/p>\n<p>Here's the cylinder map version of the scene:<\/p>\n<p><a href=\"http:\/\/the-witness.net\/news\/wp-content\/uploads\/2011\/01\/cylinder_map.png\"><img src=\"http:\/\/the-witness.net\/news\/wp-content\/uploads\/2011\/01\/cylinder_map-512x325.png\" alt=\"\" title=\"cylinder_map\" width=\"512\" height=\"325\" class=\"aligncenter size-large wp-image-741\" \/><\/a><\/p>\n<p>This is what we are going with for now.  Problem solved, job seemingly well-done.<\/p>\n<p>Ignacio put the cylinder mapping code into the same shader as the fisheye lens, and also added a latitude-longitude distortion.  
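<\/p>\n<p>As one last sanity check that the fisheye forward and inverse mappings really are exact inverses of each other (and not just approximately so), here's a standalone C program -- again just a test sketch, not game code, with dx\/dy\/dz standing in for the lens shader's dir and cp for the pixel shader's c -- that pushes image coordinates through the render-time mapping and back through the pixel-shader math:<\/p>\n<p><code><\/p>\n<pre>\r\n#include &lt;assert.h&gt;\r\n#include &lt;math.h&gt;\r\n\r\n\/\/ Round trip: (x, y) in the unit image circle -> view vector (the lens\r\n\/\/ shader's forward mapping) -> back to (x, y) (the pixel shader's inverse).\r\n\/\/ The composition should be the identity.\r\nint main(void) {\r\n    for (int i = -9; i <= 9; i++) {\r\n        for (int j = -9; j <= 9; j++) {\r\n            double x = i * 0.1;\r\n            double y = j * 0.1;\r\n            double r2 = x*x + y*y;\r\n            if (r2 >= 1.0) continue;\r\n\r\n            \/\/ Forward, as in the lens shader:\r\n            double c = 1.0 - r2;\r\n            double s = sqrt(2.0 - r2);\r\n            double dx = x * s, dy = y * s, dz = -c;\r\n\r\n            \/\/ The resulting view vector is unit length:\r\n            assert(fabs(dx*dx + dy*dy + dz*dz - 1.0) < 1e-12);\r\n\r\n            \/\/ Inverse, as in the runtime pixel shader:\r\n            double cp = fabs(dz);\r\n            double scale = 1.0 \/ sqrt(1.0 + cp);\r\n            assert(fabs(dx * scale - x) < 1e-9);\r\n            assert(fabs(dy * scale - y) < 1e-9);\r\n        }\r\n    }\r\n    return 0;\r\n}\r\n<\/pre>\n<p><\/code><\/p>\n<p>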
<\/p>\n<p>Our work was sped up drastically by the fact that we were able to find JS_fisheye.c free on the internet, as well as information about how a fisheye lens works.  So here's our attempt to give back a little: our final Mental Ray shader and the .mi file that defines the interface for it:<br \/>\n<P><br \/>\n<a href='http:\/\/the-witness.net\/news\/wp-content\/uploads\/2011\/01\/witness_fisheye.c'>witness_fisheye.c<\/a><br \/>\n<a href='http:\/\/the-witness.net\/news\/wp-content\/uploads\/2011\/01\/witness_fisheye.mi'>witness_fisheye.mi<\/a><br \/>\n<P><br \/>\nIf you are an experienced graphics programmer asking yourself <em>why the hell didn't they just use a cube map for this<\/em>, well, there is a very good answer to that, but it involves the spoiler.  You'll see when the game is released!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>During the past couple of weeks, we have been doing some experiments for The Witness that involve pre-rendering a scene with a fisheye lens and then using that render during gameplay. 
If I were to say exactly what this is for, it would be a massive spoiler, so I&#8217;ll just \u2026<\/p>\n<p class=\"continue-reading-button\"> <a class=\"continue-reading-link\" href=\"http:\/\/the-witness.net\/news\/2011\/01\/adventures-in-fisheye-lenses\/\">Continue reading<i class=\"crycon-right-dir\"><\/i><\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[4,7],"tags":[],"_links":{"self":[{"href":"http:\/\/the-witness.net\/news\/wp-json\/wp\/v2\/posts\/729"}],"collection":[{"href":"http:\/\/the-witness.net\/news\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/the-witness.net\/news\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/the-witness.net\/news\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"http:\/\/the-witness.net\/news\/wp-json\/wp\/v2\/comments?post=729"}],"version-history":[{"count":25,"href":"http:\/\/the-witness.net\/news\/wp-json\/wp\/v2\/posts\/729\/revisions"}],"predecessor-version":[{"id":1242,"href":"http:\/\/the-witness.net\/news\/wp-json\/wp\/v2\/posts\/729\/revisions\/1242"}],"wp:attachment":[{"href":"http:\/\/the-witness.net\/news\/wp-json\/wp\/v2\/media?parent=729"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/the-witness.net\/news\/wp-json\/wp\/v2\/categories?post=729"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/the-witness.net\/news\/wp-json\/wp\/v2\/tags?post=729"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}