"Light Makes Right"
September 20, 1989
Volume 2, Number 6
Compiled by Eric Haines
All contents are copyright (c) 1989, all rights reserved by the individual authors
Archive locations: anonymous FTP at
ftp://ftp-graphics.stanford.edu/pub/Graphics/RTNews/,
wuarchive.wustl.edu:/graphics/graphics/RTNews, and many others.
You may also want to check out the Ray Tracing News issue guide and the ray tracing FAQ.
However, one major reason that I'm flushing the queue right now is that the node "hpfcrs" is disappearing off the face of the earth. So, please note my only valid address is the "wrath" path at the top of the issue. Thanks!
________
To update my personal information in your files:
Surface mail: Panu Rekola, Mannerheimintie 69 A 7, SF-00250 Helsinki, Finland
Phone: +358-0-4513243 (work), +358-0-413082 (home)
Email: pre@cs.hut.fi
Interests: illumination models, texture mapping, parametric surfaces.
You may also remove one of the names from your contact list: Dr. Markku Tamminen died in the U.S. while returning home from SIGGRAPH. How his project will go on is still somewhat unclear.
________
Andrew Pearce, pearce@alias
I wrote my MS thesis on Multiprocessor Ray Tracing, then moved to Alias, where I sped up Mike Sweeney's ray caster. I've just completed writing the Alias ray tracer, which uses a recursive uniform subdivision method (see Dave Jevans' paper in Graphics Interface '89, "Adaptive Voxel Subdivision for Ray Tracing") with additional bounding box and triangle intersection speedups.
Right now, I'm fooling around with using the guts of the ray tracer to do particle/object collision detection with complex environments, and particle/particle interaction with the search space reduced by the spatial subdivision. (No, I don't use the ray tracer to render the particles.)
In response to Susan Spach's question about mip mapping: we use mip maps for our textures, and we get the sample size from a "cone" size parameter which is based on the field of view, aspect ratio, distance to the surface, and angle of incidence. For secondary rays this size parameter is modified based on the tangents to the surface and the type of secondary ray (reflection or refraction). This may be difficult to do if you are not ray tracing surfaces for which tangent information is readily available (smooth shaded polygonal meshes?).
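As a rough illustration of the general idea - a sketch, not Alias's actual code - the primary-ray cone angle and resulting filter width might be computed like this (square pixels assumed; a non-square aspect ratio would scale the angle accordingly, and the incidence clamp value is arbitrary):

#include <math.h>

/* angle subtended by one pixel, given a vertical field of view in radians */
double pixel_cone_angle(double fov_y, int yres)
{
    return fov_y / (double) yres;
}

/* filter width at the hit point: the cone footprint grows linearly with
   distance and is stretched by 1/cos(angle of incidence) at oblique hits */
double filter_width(double cone_angle, double hit_distance, double cos_inc)
{
    double width = hit_distance * tan(cone_angle);
    if (cos_inc < 0.01)
        cos_inc = 0.01;          /* clamp to avoid blowup at grazing angles */
    return width / cos_inc;
}

The filter width can then be used to select a mip map level in the usual way (level = log2 of width over texel size).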
- Andrew Pearce
- Alias Research Inc., Toronto, Ontario, Canada.
- pearce%alias@csri.utoronto.ca | pearce@alias.UUCP
- ...{allegra,ihnp4,watmath!utai}!utcsri!alias!pearce
________
Brian Corrie, bcorrie@uvicctr.uvic.ca
I am a graduate student at the University of Victoria, nearing the completion of my Masters degree. The topic of my thesis is producing realistic computer generated images in a distributed network environment. This consists of two major research areas: providing a distributed (in the parallel computing sense) system for ray tracing, as well as a workbench for scene description and image manipulation. The problems that need to be addressed by a system for parallel processing in a distributed, loosely coupled system are quite different from those addressed by a tightly coupled parallel processor system. Because of the (likely) very high cost of communication in a distributed processing environment, most parallel algorithms currently used are not feasible due to the high overhead. The gains of parallel ray tracing in a distributed environment are the obvious speedup from bringing more processing power to bear on the problem, the flexibility of distributed systems, and the availability of the resources that will become accessible as distributed systems become more prominent in the computing community.
Whew, what a mouthful. In a nutshell, I am interested in: ray tracing in general, parallel algorithms, distributed systems for image synthesis (anyone know of any good references?), and this newfangled radiosity stuff.
________
Joe Cychosz
Purdue University CADLAB
Potter Engineering Center
W. Lafayette, IN 47906
Phone: 317-494-5944
Email: cychosz@ecn.purdue.edu
My interests are in supercomputing and computer graphics. My research work is vectorized ray tracing. Other interests are: ray tracing on MIMD tightly coupled shared memory machines, algorithm vectorization, mechanical design processes, music synthesis, and rendering in general.
________
Jerry Quinn
Department of Math and Computer Science
Bradley Hall, Dartmouth College
Hanover, NH 03755
sunapee.dartmouth.edu!quinn
My interests are currently ray tracing efficiency, parallelism, animation, radiosity, and whatever else happens to catch my eye at the given moment.
________
Marty Barrett - octrees, parametric surfaces, parallelism. mlb6@psuvm.bitnet
Here is some info about my interests in ray tracing:
I'm interested in efficient storage structures for ray tracing, including octree representations and hybrid regular subdivision/octree grids. I've looked at ray tracing of parametric surfaces, in particular Bezier patches and box spline surfaces, via triangular tessellations. Parallel implementations of ray tracing are also of interest to me.
________
Charles A. Clinton
Sierra Geophysics, Inc.
11255 Kirkland Way
Kirkland, WA 98033 USA
Email: ...!uw-beaver!sumax!ole!steven!cac
Voice: (206) 822-5200
Telex: 5106016171
FAX: (206) 827-3893
I am doing scientific visualization of 3D seismic data. To see the kind of work that I am doing, check out:
"A Rendering Algorithm for Visualizing 3D Scalar Fields," Paolo Sabella, Schlumberger-Doll Research, Computer Graphics, Vol. 22, No. 4 (SIGGRAPH '88 Conference Proceedings), pp. 51-58.
In addition, I try to keep up with ray-tracing and computer graphics in general. I occasionally try my hand at doing some artistic ray-tracing. (I would like to extend my thanks to Mark VandeWettering for distributing MTV. It has provided a wonderful platform for experimentation.)
________
Jochen Schwarze
I've been developing several smaller graphics packages, e.g. a 3D visualization of turning parts. Now I'm implementing the second version of a ray tracing program that supports modular programming using a description language, C++ vector analysis and a body class hierarchy, CSG trees, texture functions and mapping, a set of body primitives (including typeface rendering for logos), and a network IPC component that allows several CPUs to calculate a single image.
My interests lie - of course :-) - in speedup techniques and in the simulation of natural phenomena: clouds, water, etc. I'm just starting on this.
Jochen Schwarze
ISA GmbH, Stuttgart, West Germany
Domain: schwarze@isaak.isa.de
UUCP: schwarze@isaak.uucp
Bang: ...!uunet!unido!isaak!schwarze
S-Mail: ISA GmbH
c/o Jochen Schwarze
Azenberstr. 35
7000 Stuttgart 1
West Germany
________
I am currently working on rewriting my ray tracer to employ radiosity-like effects. Your paper (with Wallace and Elmquist) is very nice, and suggests a really straightforward implementation. I just have a couple of questions that you might be able to answer.
When you shoot energy from a source patch, it is collected at a specific patch vertex. How does this energy get transferred to a given patch for secondary shooting? In particular, is the vertex shared between multiple patches, or is each vertex only in a single patch? I can imagine the solution if each vertex is distinct, but have trouble with the case where vertices are shared. Any quick insights?
The only other question I have is: HOW DO YOU GET SUCH NICE MODELS TO RENDER? [We use ME30, HP's Romulus based solids modeler - EAH]
Is there a public domain modeling package that is available for suns or sgi's that I can use to make more sophisticated models? Something cheap even?
[The BRL modeler and ray tracer run on a large number of machines, and they like having universities as users - see Vol. 2, No. 2 (archive 6). According to Mike Muuss' write-up, some department in Princeton already has a copy.
The Simple Surface Modeler (SSM) works on SGI equipment. It was developed at the Johnson Space Center and, since they are not supposed to make any money off it, is being sold cheap (?) by a commercial distributor. COSMIC, at 404-542-3265, can send you some information on it. It also runs on a Pixel Machine (which is what I saw it running on at SIGGRAPH 88), though I don't believe support for this machine will be shipped. It's evidently not shipping yet (red tape - the product is done), but should be "realsoonnow". More information when I get the abstract. Does anyone else know of any resources?]
________
Reply from John Wallace:
Computing the patch energy in progressive radiosity using ray tracing:
Following a step of progressive radiosity, every mesh vertex in the scene will have a radiosity. Energy is not actually collected at the mesh vertices. What is computed at each vertex is the energy per unit area (radiosity) leaving the surface at that location. The patch radiosity is the average energy per unit area over the patch. Finally, the patch energy is the patch radiosity times the patch area (energy per unit area times area).
The vertex radiosities can be considered a sampling of the energy per unit area at selected points across the patch. To obtain the average energy per unit area over the patch, take the average of the vertex radiosities. This assumes that the vertices represent uniform sub-areas of the patch. This is not necessarily true, and when it is not, a more accurate answer is obtained by taking an area-weighted average of the vertex radiosities. The weight given to a vertex is equal to the area of the patch that it represents. In our work we used a uniform mesh and weighted all vertices equally.
It doesn't matter whether vertices are shared by neighboring patches, since we're talking about energy per unit area. Picture four patches that happen to all share a particular vertex. The energy per unit area leaving any of the patches at the vertex is not affected by the fact that other patches share that vertex. If we were somehow collecting energy at the vertex, then it would have to be portioned out between the patches.
Once the patch radiosity is known, the patch energy is obtained by multiplying the patch radiosity by the patch area.
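In code, the bookkeeping is small. A minimal sketch (an illustration, not code from the paper), assuming vertex i carries radiosity B[i] (energy per unit area) and represents sub-area a[i] of the patch:

double patch_energy(const double *B, const double *a, int nverts)
{
    double weighted = 0.0, area = 0.0;
    int i;

    for (i = 0; i < nverts; i++) {
        weighted += B[i] * a[i];    /* area-weighted radiosity sum */
        area     += a[i];           /* total patch area */
    }
    /* patch radiosity = weighted / area (the area-weighted average);
       patch energy = radiosity * area, which is just the weighted sum */
    return weighted;
}

For a uniform mesh all the a[i] are equal, and this reduces to the plain average of the vertex radiosities times the patch area, as described above.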
________
I happened to mention the idea to Roy Hall, and he told me that this was an undocumented feature of the Wavefront package! Last year Wavefront came out with an image of two pieces of pottery behind a candle, with wonderful texturing on the objects. It turns out that the artist had wanted to tone down the brightness in some parts of the image, and so tried negative intensity light sources. This turned out to work just fine, and the artist mentioned this to Roy, who, as an implementer of this part of the package, had never considered that anyone would try this and so never restricted the intensity values to be non-negative.
________
As was recently pointed out to me by Mike Schoenborn, the cylinder code in the current version of the MTV raytracer is somewhat severely broken. Or at least it appeared to be: what actually happens is that I forgot to normalize two vectors, which leads to interesting distortions and weird-looking cylinders. Anyway, the bug is in cone.c, in the function MakeCone(). After the vectors cd -> cone_u and cd -> cone_v are created, they should be normalized. A context diff follows at the end of this note. This makes the SPD "tree" look MUCH better. (And all this time I thought it was Eric's fault :-)
This bugfix will be worked into the next release, and I should also update the version on cs.uoregon.edu SOMETIME REAL SOON NOW (read, don't hold your breath TOO anxiously). Hope that this program continues to be of use... :-)
Somebody has some texture mapping code that they are sending me; I will probably try to integrate it before I make my next release. I am also trying to get spline surfaces in, but am having difficulty to the point of frustration. Any recommendations on implementing them?
*** ../tmp/cone.c	Fri Aug 25 20:25:52 1989
--- cone.c	Fri Aug 25 21:31:04 1989
***************
*** 240,247 ****
--- 240,251 ----
  
  	/* find two axes which are at right angles to cone_w */
  
+ 	VecCross(cd -> cone_w, tmp, cd -> cone_u) ;
  	VecCross(cd -> cone_u, cd -> cone_w, cd -> cone_v) ;
+ 
+ 	VecNormalize(cd -> cone_u) ;
+ 	VecNormalize(cd -> cone_v) ;
  
  	cd -> cone_min_d = VecDot(cd -> cone_w, cd -> cone_base) ;
  	cd -> cone_max_d = VecDot(cd -> cone_w, cd -> cone_apex) ;
  
________
The original program was written by David B. Wecker, who ported it from a Vax 11/750 to the Amiga; it was converted to Sun workstations by Ofer Licht (ofer@gandalf.berkeley.edu). - EAH]
Below is an excerpt from the documentation RAY.DOC:
The RAY program knows how to create images composed of four primitive geometric objects: spheres, parallelograms, triangles, and flat circular rings (disks with holes in them). Some of the features of the program are:
Diffuse and specular reflections (with arbitrary levels of gloss or polish). Rudimentary modeling of object-to-object diffusely reflected light is also implemented, which among other things accurately simulates color-bleed effects from adjacent contrasting colored objects.
Mirror reflections, including varying levels of mirror smoothness or perfection.
Refraction and translucency (which is akin to variable microscopic smoothness, like the surface of etched glass).
Two types of light sources: purely directional (parallel rays from infinity) of constant intensity, and spherical sources (like light bulbs, which cast penumbral shadows as a function of radius and distance) where intensity is determined by the inverse square law.
Photographic depth-of-field. That is, the virtual camera may be focused on a particular object in the scene, and the virtual camera's aperture can be manipulated to affect the sharpness of foreground and background objects.
Solid texturing. Normally, a particular object (say a sphere) is considered to have constant properties (like color) over its entire surface, often resulting in fake-looking objects. Solid texturing is a way to algorithmically change the surface properties of an object (so the surface is no longer of constant nature) to try to model some real-world material; a minimal sketch of the idea follows this list. Currently the program has built-in rules for modelling wood, marble, bricks, snow-covered scenes, water (with arbitrary wave sources), plus more abstract things like color blend functions.
Fractals. The program implements what's known as recursive triangle subdivision, which creates all manner of natural-looking surface shapes (like broken rock, mountains, etc.). The character of the fractal surface (degree of detail, roughness, etc.) is controlled by parameters fed to the program.
AI heuristics to complete computation of a scene within a user-specified length of time. [???]
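To make the solid texturing item above concrete, here is a minimal sketch of a turbulence-based marble in the same spirit (not RAY's actual rules; the sin-hash noise3() is only a stand-in for a proper lattice noise function, and the constants are arbitrary):

#include <math.h>

typedef struct { double x, y, z; } Point;
typedef struct { double r, g, b; } Color;

/* stand-in for a real lattice noise: a cheap sin-based hash in [0,1) */
double noise3(Point p)
{
    double n = sin(p.x * 12.9898 + p.y * 78.233 + p.z * 37.719) * 43758.5453;
    return n - floor(n);
}

/* sum of noise at doubling frequencies and halving amplitudes */
double turbulence(Point p, int octaves)
{
    double t = 0.0, scale = 1.0;
    while (octaves-- > 0) {
        Point q = { p.x / scale, p.y / scale, p.z / scale };
        t += scale * noise3(q);
        scale *= 0.5;
    }
    return t;
}

/* marble: sinusoidal veins along x, perturbed by turbulence, blended
   between a vein color and the object's base color */
Color marble(Point p, Color vein, Color base)
{
    double m = 0.5 * (1.0 + sin(p.x + 4.0 * turbulence(p, 5)));
    Color c;
    c.r = m * vein.r + (1.0 - m) * base.r;
    c.g = m * vein.g + (1.0 - m) * base.g;
    c.b = m * vein.b + (1.0 - m) * base.b;
    return c;
}

Because the color is a function of the 3-D surface point, the object looks carved out of the material rather than painted with it.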
________
======== USENET cullings follow ==============
What I want to do is to turn a path consisting of line and arc segments around an axis and then ray-trace the generated turning part. The rotated line segments produce cylinders or cones that are easy to intersect with a ray, whereas the arcs produce tori. To evaluate the intersection of a ray with a torus I'd have to numerically solve a fourth-degree polynomial equation.
Does anybody know a way that avoids solving a general fourth-degree equation? Perhaps something that respects the torus geometry and allows one to split the equation into two quadratic ones? Any other fast way to do it?
Thanks very much.
Jochen Schwarze
ISA GmbH, Stuttgart, West Germany
Domain: schwarze@isaak.isa.de
UUCP: schwarze@isaak.uucp
Bang: ...!uunet!unido!isaak!schwarze
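For reference, the brute-force route being asked about can be set up as follows. This is only a sketch (it does not answer the "avoid the quartic" question): substitute the ray p = o + t*d into the implicit equation of a torus around the z axis with major radius R and minor radius r, (x^2+y^2+z^2+R^2-r^2)^2 = 4R^2(x^2+y^2), and collect the quartic's coefficients for a numerical root finder; d is assumed normalized.

/* fill c[0..4] with the quartic's coefficients: c[4]*t^4 + ... + c[0] = 0 */
void torus_quartic(const double o[3], const double d[3],
                   double R, double r, double c[5])
{
    double od = o[0]*d[0] + o[1]*d[1] + o[2]*d[2];
    double oo = o[0]*o[0] + o[1]*o[1] + o[2]*o[2];
    double k  = oo + R*R - r*r;
    double R4 = 4.0 * R * R;

    c[4] = 1.0;                                        /* |d| = 1 assumed */
    c[3] = 4.0 * od;
    c[2] = 2.0*k + 4.0*od*od - R4*(d[0]*d[0] + d[1]*d[1]);
    c[1] = 4.0*k*od - 2.0*R4*(o[0]*d[0] + o[1]*d[1]);
    c[0] = k*k - R4*(o[0]*o[0] + o[1]*o[1]);
}

Note that while any real quartic can in principle be factored into two real quadratics, finding that factorization requires solving the resolvent cubic, so it does not obviously save work over solving the quartic directly.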
________
________________________________________________________________
Didier BADOUEL                     badouel@irisa.fr
INRIA / IRISA                      Phone: +33 99 36 20 00
Campus Universitaire de Beaulieu   Fax: 99 38 38 32
35042 RENNES CEDEX - FRANCE        Telex: UNIRISA 950 473F
________________________________________________________________
[Code removed. Find it at cs.uoregon.edu or write him - EAH]
________
Robert Minsk had a question about how to do inverse mapping on a quadrilateral. This was my response:
For the inverse bilinear mapping of XYZ to UV, see pp. 59-64 of "An Introduction to Ray Tracing", edited by Andrew Glassner, Academic Press (hot off the press). Tell me if you find any bugs, since I need to send typos to AP. This same info is in the "Intro to RT" SIGGRAPH course notes from 1987 & 1988, with one important typo fixed (see old issues of the Ray Tracing News to find out the typo).
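For those without the book at hand, here is a sketch of one standard approach (a formulation for illustration, which may differ from the book's): write the quad point as P = P00 + u*B + v*C + u*v*D, with B, C, D built from the corners, and reduce to a quadratic in u using 2-D cross products.

#include <math.h>

typedef struct { double x, y; } Vec2;

static double cross2(Vec2 a, Vec2 b) { return a.x*b.y - a.y*b.x; }

/* returns 1 on success, writing (u,v); 0 if the point can't be inverted */
int inverse_bilinear(Vec2 P00, Vec2 P10, Vec2 P01, Vec2 P11,
                     Vec2 P, double *u, double *v)
{
    Vec2 B = { P10.x - P00.x, P10.y - P00.y };
    Vec2 C = { P01.x - P00.x, P01.y - P00.y };
    Vec2 D = { P11.x - P10.x - P01.x + P00.x,
               P11.y - P10.y - P01.y + P00.y };
    Vec2 Q = { P.x - P00.x, P.y - P00.y };

    /* (B x D) u^2 + (B x C - Q x D) u - (Q x C) = 0 */
    double a = cross2(B, D);
    double b = cross2(B, C) - cross2(Q, D);
    double c = -cross2(Q, C);

    if (fabs(a) < 1e-12) {                 /* parallelogram: linear case */
        if (fabs(b) < 1e-12) return 0;
        *u = -c / b;
    } else {
        double disc = b*b - 4.0*a*c;
        double root;
        if (disc < 0.0) return 0;
        root = (-b + sqrt(disc)) / (2.0 * a);
        if (root < 0.0 || root > 1.0)      /* take the root inside the quad */
            root = (-b - sqrt(disc)) / (2.0 * a);
        *u = root;
    }
    /* back-substitute for v, using the better-conditioned component */
    {
        double dx = C.x + *u * D.x, dy = C.y + *u * D.y;
        if (fabs(dx) > fabs(dy)) *v = (Q.x - *u * B.x) / dx;
        else if (fabs(dy) > 1e-12) *v = (Q.y - *u * B.y) / dy;
        else return 0;
    }
    return 1;
}

For a 3-D quad, project the corners and the hit point onto the two coordinate axes in which the quad has the largest extent, then invert in 2-D as above.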
For an excellent discussion of the most popular mappings (affine, bilinear, and projective), and a discussion of why to avoid simple Gouraud interpolation, get a copy of Paul Heckbert's Master's thesis (again, hot off the press), "Fundamentals of Texture Mapping and Image Warping". It's got what you need and is also a good start on sampling/filtering problems. Order it as Report No. UCB/CSD 89/516 (June 1989) from
Computer Science Division
Dept. of Electrical Engineering and Computer Sciences
University of California
Berkeley, California 94720
It was $5.50 when I ordered mine. Oh, I should also note: it has source code in C for most of the algorithms described in the text.
________
From: prem@geomag.fsu.edu (Prem Subrahmanyam)
Newsgroups: comp.graphics
Subject: Re: Texture mapping
Organization: Florida State University Computing Center
I would strongly recommend obtaining copies of both DBW_Render and QRT, as both have very good texture mapping routines. DBW uses absolute spatial coordinates to determine texture, while QRT uses a relative position mapping per object type. DBW has some really interesting features, like sinusoidal reflection to simulate waves and a turbulence-based marble/wood texture driven by the wave sources defined for the scene. It also has a brick texture, checkerboard, and mottling (turbulent variance of the color intensity). Writing a texture routine in DBW is quite simple, since you're provided with a host of tools (like a turbulence function, noise function, color blending, etc.).

I have recently created a random-color texture that uses the turbulence to redefine the base color based on the spatial point given, which it then blends into the object's base color using the color blend routines. Next will be a turbulent-color marble texture that will modify the marble vein coloring according to the turbulent color. Also in the works are random-color checkerboarding (this will require a little more thought) and variant brick height and mortar color (presently these are hard-wired); the list is almost endless.

I would think the ideal ray tracer would be one that used QRT's user-definable texture patches, which are then mapped onto the object, as well as DBW's turbulence/wave based routines. The latter would have to be absolute coordinate based, while the former can use QRT's relative position functions. In any case, getting copies of both of these would be the most convenient, as there's no reason to reinvent the wheel.
________
From: ranjit@grad1.cis.upenn.edu (Ranjit Bhatnagar)
4211 Pine St., Phila PA 19104
Newsgroups: comp.graphics
Subject: Re: Texture mapping by spatial position
Organization: University of Pennsylvania
The combination of 3-d spatial texture-mapping (where the map for a particular point is determined by its position in space rather than its position on the patch or polygon) with a nice 3-d turbulence function can give really neat results for marble, wood, and such. Because the texture is 3-d, objects look like they are carved out of the texture function rather than veneered with it. It works well with non-turbulent texture functions too, like bricks, 3-d checkerboards, waves, and so on. However, there's a disadvantage to this kind of texture function that I haven't seen discussed before: as generally proposed, it's highly unsuited to _animation._ The problem is that you generally define one texture function throughout all of space. If an object happens to move, its texture changes accordingly. It's a neat effect - try it - but it's not what one usually wants to see.
The obvious solution to this is to define a separate 3-d texture for each object, and, further, _cause the texture to be rotated, translated, and scaled with the object._ DBW does not allow this, so if you want to do animations of any real complexity with DBW, you can't use the nice wood or marble textures.
This almost solves the problem. However, it doesn't handle the case of an object whose shape changes. Consider a sphere that metamorphoses into a cube, or a human figure which walks, bends, and so on. There's no way to keep the 3-d texture function consistent in such a case.
Actually, the real world has a similar defect, so to speak. If you carve a statue out of wood and then bend its limbs around, the grain of the wood will be distorted. If you want to simulate the real world in this way and get animated objects whose textures stay consistent as they change shape, you have to use ordinary surface-mapped (2-d) textures. But 3-d textures are so much nicer for wood, stone, and such! There are a couple of ways to get the best of both worlds: [I assume that an object's surface is defined as a constant set of patches, whether polygonal or smooth, and though the control points may be moved around, the topology of the patches that make up the object never changes, and patches are neither added to nor deleted from the object during the animation.]
1) define the base-shape of your object, and _sample its surface_ in the 3-d texture. You can then use these sample tables as ordinary 2-d texture maps for the animation.
2) define the base-shape of your object, and for each metamorphosized shape, keep pointers to the original shape. Then, whenever a ray strikes a point on the surface of the metamorphed shape, find the corresponding point on the original shape and look up its properties (i.e. color, etc.) in the 3-d texture map. [Note: I use ray-tracing terminology but the same trick should be applicable to other techniques.]
The first technique is perhaps simpler, and does not require you to modify your favorite renderer which supports 2-d surface texture maps. You just write a preprocessor which generates 2-d maps from the 3-d texture and the base-shape of the object. However, it is susceptible to really nasty aliasing and loss of information. The second technique has to be built into the renderer, but is amenable to all the antialiasing techniques possible in an ordinary renderer with 3-d textures, such as DBW. Since the notion of 'the same point' on a particular patch when the control points have moved is well-defined except in degenerate cases, the mapping shouldn't be a problem -- though it does add an extra level of antialiasing to worry about. [Why? Imagine that a patch which is very large in the original base-shape has become very small - sub-pixel size - in the current animated shape. Then a single pixel-sized sample in the current shape could map to a large part of the original, which would then have to be filtered down using, for instance, stochastic sampling or analytic techniques.]
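In outline, the second technique is a one-level indirection at shading time. A sketch follows, where eval_patch() and solid_texture() are hypothetical helpers rather than any particular renderer's API:

typedef struct { double x, y, z; } Point;
typedef struct { double r, g, b; } Color;

/* hypothetical helpers, not a real renderer's API:
   eval_patch(): evaluate patch `id` of a shape at parameters (s,t)
   solid_texture(): any 3-d texture function of position */
extern Point eval_patch(const void *shape, int id, double s, double t);
extern Color solid_texture(Point p);

/* a hit on patch `id` of the *animated* shape at (s,t) is textured at the
   corresponding point of the *base* shape, so the texture stays attached
   to the surface no matter how the control points move */
Color shade_texture(const void *base_shape, int id, double s, double t)
{
    return solid_texture(eval_patch(base_shape, id, s, t));
}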
If anyone actually implements these ideas, I'd like to hear from you (and get credit, heh heh, if I thought of it first). I doubt that I will have the opportunity to try it.
________
From: ritter@versatc.UUCP (Jack Ritter)
Organization: Versatec, Santa Clara, Ca. 95051
[Commenting on Ranjit's posting]
It seems to me that you could solve this problem by transforming the center/orientation of the texture function along with the object that is being instantiated. No need to store values, no tables, etc. The texture function must of course be simple enough to be so transformable.
Example, wood grain simulated by concentric cylindrical shells around an axis (the core of the log):
Imagine the log's center line as a half-line vector, (plus a position, if necessary), making it transformable. Imagine each object type in its object space, BOLTED to the log by an invisible bracket. As you translate and rotate the object, you also sling the log around. But be careful, some of these logs are heavy, and might break your teapots. I use only natural logs myself.
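In code, the idea is just a change of coordinates before the texture lookup. A sketch (the affine world-to-object transform and the sawtooth wood_shells() ring profile are illustrative stand-ins, not anyone's actual implementation):

#include <math.h>

typedef struct { double x, y, z; } Point;
typedef struct { double m[3][4]; } Matrix;   /* affine world-to-object */

Point xform_point(const Matrix *m, Point p)
{
    Point q;
    q.x = m->m[0][0]*p.x + m->m[0][1]*p.y + m->m[0][2]*p.z + m->m[0][3];
    q.y = m->m[1][0]*p.x + m->m[1][1]*p.y + m->m[1][2]*p.z + m->m[1][3];
    q.z = m->m[2][0]*p.x + m->m[2][1]*p.y + m->m[2][2]*p.z + m->m[2][3];
    return q;
}

/* concentric shells about the z axis: sawtooth ring profile in [0,1) */
double wood_shells(Point p)
{
    double radius = sqrt(p.x * p.x + p.y * p.y);
    return radius - floor(radius);
}

/* the texture "moves with" the object because the lookup happens in
   object space, where the log's core stays fixed on the z axis */
double wood_texture(const Matrix *world_to_object, Point world_hit)
{
    return wood_shells(xform_point(world_to_object, world_hit));
}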
Jack Ritter, S/W Eng.
Versatec, 2710 Walsh Av, Santa Clara, CA 95051
Mail Stop 1-7
(408)982-4332, or (408)988-2800 X 5743
UUCP: {ames,apple,sun,pyramid}!versatc!ritter
________