Taking umbrage
Roberto Casati's book "The Shadow Club"
was named a bit strangely, but was still very interesting. The book
discusses shadows from historical, artistic, cultural, geometric and
philosophical standpoints. It also presents several "brainteasers" that
illustrate the strange nature of shadows and what kind of objects (or
pseudo-objects) they really are: for example, the curious fact that a
shadow can move faster than light. (And as Lucky Luke knows, they can
also be a lot slower.) Small children's ideas of what shadows are and
how they work, as reported by Jean Piaget, are also interesting. One of
the brainteasers asks if you get a green shadow or a green light if you
hold a transparent green plate between a normal lamp and the floor. What if
the lightbulb is completely covered with green glass?
After the philosophical discussion of shadows, the book ends by explaining how ancient astronomers used shadows to reason about the universe. Contrary to a common misconception, ancient astronomers were perfectly aware that the Earth and the Moon are spherical (as can be seen from the Earth's shadow on the Moon), and that the Sun is much, much farther away from the Earth than the Moon is.
The book also indirectly remound me of the time when I used to teach the computer graphics course back in the old country, and of the problem of computing shadows when rendering a three-dimensional image. When such rendering is done with ray tracing, complex shadows come pretty much for free as a side effect of the technique, but with traditional polygonal rendering by projection (which is how all the real-time 3D graphics you see in games are done), computing shadows in anything but the simplest settings (for example, non-occluding objects standing on a flat plane, illuminated from above) is a surprisingly complex task. On the other hand, mathematically shadows are projections, so a projection graphics engine can often help in generating them.
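Since a shadow on a flat floor is literally a projection, the projection machinery can generate it with a single extra matrix. Below is a minimal sketch of that classic trick in Python with NumPy; the plane and light coordinates are made-up example values, and the function name is my own.

    import numpy as np

    def planar_shadow_matrix(plane, light):
        # plane = (a, b, c, d) for the plane ax + by + cz + d = 0;
        # light = homogeneous position (lx, ly, lz, 1) of a point light.
        plane = np.asarray(plane, dtype=float)
        light = np.asarray(light, dtype=float)
        # For a point p, M @ p = (plane.light) p - (plane.p) light, which
        # lies on the plane and on the line from the light through p:
        # exactly p's shadow.
        return plane.dot(light) * np.eye(4) - np.outer(light, plane)

    # Example: ground plane y = 0, point light at (2, 10, 1).
    M = planar_shadow_matrix([0.0, 1.0, 0.0, 0.0], [2.0, 10.0, 1.0, 1.0])
    p = np.array([1.0, 3.0, 0.0, 1.0])  # a point above the plane
    s = M @ p
    print(s / s[3])                     # its shadow: [4/7, 0, -3/7, 1]

Rendering the flattened geometry in black (or a darkened floor colour) on top of the plane gives the shadow; the approach breaks down exactly where noted above, as soon as the receiving surface is not a single plane.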
Ray tracing, once you have implemented the basic algorithm, gives you quite a lot of free stuff that would be very complex to do with projection graphics. In addition to shadows between arbitrary objects, we get mirror reflections between arbitrary objects (not just planar reflections), and these reflections and shadows can be fuzzy and have penumbrae. Transparency and refraction are also highly nontrivial problems for projection graphics, but for ray tracing they are equally trivial, and again for arbitrary shapes. Unlike in projection rendering, objects do not first need to be converted to polygon meshes, because the ray tracing algorithm can directly handle any shape for which we can calculate the intersection points between the shape and a given ray, and the surface normal, that is, the direction pointing directly "away" from a given point on the object's surface. (This calculation is especially trivial for spheres, which are hard for projection graphics, which is why all the old test images illustrating ray tracing contain a lot of reflective spheres.) Constructive solid geometry is similarly trivial.
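As a concrete illustration of how little per-shape code a ray tracer needs, here is a minimal sketch, in plain Python, of the two sphere routines just mentioned: the ray intersection and the "away" direction (the surface normal). The function names are my own, not from any particular renderer.

    import math

    def ray_sphere_intersect(origin, direction, center, radius):
        # Solve |origin + t*direction - center|^2 = radius^2, a quadratic
        # in t; `direction` is assumed to be normalized, so a == 1.
        oc = [o - c for o, c in zip(origin, center)]
        b = 2.0 * sum(d * s for d, s in zip(direction, oc))
        c = sum(s * s for s in oc) - radius * radius
        disc = b * b - 4.0 * c
        if disc < 0.0:
            return None                       # the ray misses the sphere
        t = (-b - math.sqrt(disc)) / 2.0      # nearer of the two roots
        if t < 0.0:
            t = (-b + math.sqrt(disc)) / 2.0  # origin may be inside
        return t if t >= 0.0 else None        # distance along the ray

    def sphere_normal(point, center):
        # The direction pointing directly "away" from the surface is just
        # the unit vector from the center to the point.
        n = [p - c for p, c in zip(point, center)]
        length = math.sqrt(sum(x * x for x in n))
        return [x / length for x in n]

Compare this with what a projection pipeline must do with the same sphere: tessellate it into hundreds of polygons before it can even start.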
For this reason, I kind of look forward to the day when ray tracing becomes the standard mechanism for generating real-time three-dimensional computer graphics. This will inevitably happen at some point in the future as scenes and their rendering get more complex. Projection graphics is still well ahead in this contest, since it can easily be sped up with hardware. Even better, this hardware acceleration keeps improving faster than Moore's law predicts, because projection rendering is a relatively straightforward, restricted computation rather than a Turing-complete problem.
But when we draw the performance curves and extrapolate from them, we see that ray tracing will eventually catch up with projection rendering and overtake it, perhaps within the next ten years or so. The main problem will most likely be devising a standard API for ray tracing (the way projection graphics currently has OpenGL and Direct3D) and getting all the chip makers and software writers to use it, in the classic chicken-and-egg fashion. Creating specialized consumer hardware was historically much easier for projection graphics than it will be for ray tracing, since it was easy to get the ball rolling by speeding up only some essential parts of the calculation in hardware, with all the cool stuff added later. With ray tracing, the whole thing is pretty much all or nothing: you need to implement the whole algorithm to get any results. (The third well-known technique, radiosity, suffers from this problem even more severely.)
One of the brainteasers in the book notes that a shadow is what you can see but the light source cannot (strictly speaking, this is true only under certain rather unrealistic assumptions about how objects reflect light). This brought to mind an innovative shadow generation algorithm that I read about years ago in Foley et al. but have never seen mentioned anywhere else, so I wonder whether it is actually used anywhere. The algorithm first renders the scene from the light source's point of view; since only the Z-buffer needs to be filled and no colour, shading, texturing, bump mapping or other such calculations are required from that viewpoint, this pass can be done quickly. The scene is then rendered from the normal camera view, filling a second Z-buffer in the usual fashion. For each pixel, this Z-buffer information is used to recover the location of the original surface point that was projected there. That point is then projected towards the light source, whose Z-buffer makes it possible to instantly check whether the point is visible to the light source, that is, whether the light source should illuminate it.
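As a minimal self-contained sketch of that two-pass idea, here is a toy version in Python: an orthographic light and a point-sampled scene stand in for a real rasterizer (both are my own simplifications, not part of the algorithm as Foley et al. describe it).

    import math

    def light_coords(point, size):
        # Toy projection: an orthographic light looking straight down
        # over the square [0, 1) x [0, 1) in the xz plane; depth grows
        # as points get farther below the light.
        x, y, z = point
        return int(x * size), int(z * size), -y

    def build_shadow_map(surface_points, size):
        # Pass 1: "render" the scene from the light, keeping only depth.
        shadow_map = [[math.inf] * size for _ in range(size)]
        for p in surface_points:
            px, py, depth = light_coords(p, size)
            shadow_map[py][px] = min(shadow_map[py][px], depth)
        return shadow_map

    def lit(point, shadow_map, bias=1e-4):
        # Pass 2: reproject a camera-visible point into the light's view
        # and compare; the small bias is the usual guard against a surface
        # shadowing itself due to finite depth precision.
        px, py, depth = light_coords(point, len(shadow_map))
        return depth <= shadow_map[py][px] + bias

    # Toy scene: a "roof" point hovering directly above a "floor" point.
    roof, floor = (0.25, 1.0, 0.25), (0.25, 0.0, 0.25)
    smap = build_shadow_map([roof, floor], size=16)
    print(lit(roof, smap), lit(floor, smap))  # True False: floor is shadowed

Note that the shadow test costs one lookup per pixel regardless of scene complexity, which is exactly what makes the idea attractive.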
Today's 3D cards already perform ray tracing locally. All the latest graphics are calculated on a per-pixel basis, and more and more the polygonal model is there just to optimize the ray tracing, cheating wherever possible.
Posted by Anonymous | 9:48 AM
remound is a great word.
Posted by Otto Kerner | 2:55 AM