Computer-generated pen-and-ink illustration of trees
Oliver Deussen,
Thomas Strothotte
Pages: 13-18
DOI: 10.1145/344779.344792
We present a method for automatically rendering pen-and-ink illustrations of trees. A given 3-d tree model is illustrated by the tree skeleton and a visual representation of the foliage using abstract drawing primitives. Depth discontinuities are used to determine what parts of the primitives are to be drawn; a hybrid pixel-based and analytical algorithm allows us to deal efficiently with the complex geometric data. Using the proposed method we are able to generate illustrations with different drawing styles and levels of abstraction. The illustrations generated are spatially coherent, enabling us to create animations of sketched environments. Applications of our results are found in architecture, animation and landscaping.

A simple, efficient method for realistic animation of clouds
Yoshinori Dobashi,
Kazufumi Kaneda,
Hideo Yamashita,
Tsuyoshi Okita,
Tomoyuki Nishita
Pages: 19-28
DOI: 10.1145/344779.344795
This paper proposes a simple and computationally inexpensive method for animation of clouds. The cloud evolution is simulated using a cellular automaton that simplifies the dynamics of cloud formation. The dynamics are expressed by several simple transition rules, and the clouds' complex motion can be simulated with a small amount of computation. Realistic images are then created using one of the standard graphics APIs, OpenGL. This makes it possible to utilize graphics hardware, resulting in fast image generation. The proposed method can realize the realistic motion of clouds, shadows cast on the ground, and shafts of light through clouds.
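As a rough illustration of that idea, here is a minimal sketch of the boolean growth rules this family of cellular-automaton cloud simulations builds on (after Nagel and Raschke, the model the paper extends); the grid size, neighbourhood, and initial densities are assumptions, and the paper's extinction, advection, and smoothing steps are omitted:

```python
import numpy as np

N = 64
rng = np.random.default_rng(0)
hum = rng.random((N, N)) < 0.10     # humidity present in a cell
act = rng.random((N, N)) < 0.01     # phase transition about to occur
cld = np.zeros((N, N), dtype=bool)  # cloud present

def any_neighbor(a):
    # True where any 4-neighbour is True (toroidal wrap for simplicity).
    return (np.roll(a, 1, 0) | np.roll(a, -1, 0) |
            np.roll(a, 1, 1) | np.roll(a, -1, 1))

for step in range(20):
    new_act = ~act & hum & any_neighbor(act)  # transitions spread outward
    hum = hum & ~act                          # humidity is consumed
    cld = cld | act                           # cloud appears where it occurred
    act = new_act
```

Every rule is a handful of bit operations per cell, which is where the small computational cost claimed above comes from.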

Animating explosions
Gary D. Yngve,
James F. O'Brien,
Jessica K. Hodgins
Pages: 29-36
DOI: 10.1145/344779.344801
In this paper, we introduce techniques for animating explosions and their effects. The primary effect of an explosion is a disturbance that causes a shock wave to propagate through the surrounding medium. The disturbance determines the behavior of nearly all other secondary effects seen in an explosion. We simulate the propagation of an explosion through the surrounding air using a computational fluid dynamics model based on the equations for compressible, viscous flow. To model the numerically stable formation of shocks along blast wave fronts, we employ an integration method that can handle steep pressure gradients without introducing inappropriate damping. The system includes two-way coupling between solid objects and surrounding fluid. Using this technique, we can generate a variety of effects including shaped explosive charges, a projectile propelled from a chamber by an explosion, and objects damaged by a blast. With appropriate rendering techniques, our explosion model can be used to create such visual effects as fireballs, dust clouds, and the refraction of light caused by a blast wave.
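For reference, the inviscid core of such a compressible-flow model is the Euler system in conservation form, shown here in 1-D; the paper's actual model adds viscous terms and a shock-capturing integrator on top of this:

```latex
\frac{\partial}{\partial t}
\begin{pmatrix} \rho \\ \rho u \\ E \end{pmatrix}
+
\frac{\partial}{\partial x}
\begin{pmatrix} \rho u \\ \rho u^2 + p \\ (E + p)\,u \end{pmatrix}
= 0,
\qquad
p = (\gamma - 1)\left(E - \tfrac{1}{2}\rho u^2\right)
```

The steep gradients in p are what steepen into the blast front, which is why the choice of integrator matters.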

Computer modelling of fallen snow
Paul Fearing
Pages: 37-46
DOI: 10.1145/344779.344809
In this paper, we present a new model of snow accumulation and stability for computer graphics. Our contribution is divided into two major components, each essential for modelling the appearance of a thick layer of snowfall on the ground.
Our accumulation model determines how much snow a particular surface receives, allowing for such phenomena as flake flutter, flake dusting and wind-blown snow. We compute snow accumulation by shooting particles upwards towards the sky, giving each source surface independent control over its own sampling density, accuracy and computation time. Importance ordering minimises sampling effort while maximising visual information, generating smoothly improving global results that can be interrupted at any point.
Once snow lands on the ground, our stability model moves material away from physically unstable areas in a series of small, simultaneous avalanches. We use a simple local stability test that handles very steep surfaces, obstacles, edges, and wind transit. Our stability algorithm also handles other materials, such as flour, sand, and flowing water.
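A minimal sketch of the local-relaxation idea, assuming a regular heightfield, an angle-of-repose slope threshold, and an illustrative transfer fraction (the paper's stability test is considerably richer, covering obstacles, edges, and wind transit):

```python
import numpy as np

def avalanche_step(height, cell=1.0, tan_repose=1.2, transfer=0.25):
    """One pass of simultaneous small avalanches on a snow heightfield."""
    out = height.copy()
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        # Drop from each cell toward one neighbour direction.
        drop = height - np.roll(height, (dx, dy), (0, 1))
        excess = drop - tan_repose * cell
        moved = np.where(excess > 0.0, transfer * excess, 0.0)
        out -= moved                               # snow leaves steep cells...
        out += np.roll(moved, (-dx, -dy), (0, 1))  # ...and lands downhill
    return out
```

Iterating this step until no cell exceeds the threshold settles the surface, the discrete analogue of the series of small avalanches described above.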

Time-dependent visual adaptation for fast realistic image display
Sumanta N. Pattanaik,
Jack Tumblin,
Hector Yee,
Donald P. Greenberg
Pages: 47-54
DOI: 10.1145/344779.344810
Human vision takes time to adapt to large changes in scene intensity, and these transient adjustments have a profound effect on visual appearance. This paper offers a new operator to include these appearance changes in animations or interactive real-time simulations, and to match a user's visual responses to those the user would experience in a real-world scene.
Large, abrupt changes in scene intensities can cause dramatic compression of visual responses, followed by a gradual recovery of normal vision. Asymmetric mechanisms govern these time-dependent adjustments, and offer adaptation to increased light that is much more rapid than adjustment to darkness. We derive a new tone reproduction operator that simulates these mechanisms. The operator accepts a stream of scene intensity frames and creates a stream of color display images.
All operator components are derived from published quantitative measurements from physiology, psychophysics, color science, and photography. Kept intentionally simple to allow fast computation, the operator is meant for use with real-time walk-through renderings, high dynamic range video cameras, and other interactive applications. We demonstrate its performance on both synthetically generated and acquired “real-world” scenes with large dynamic variations of illumination and contrast.
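To make the asymmetry concrete, here is a toy sketch, not the paper's operator (which is assembled from the published measurements above): the adaptation level chases scene luminance with a short time constant toward light and a long one toward dark, and display values are compressed around it. The time constants and response curve here are illustrative assumptions:

```python
import math

def update_adaptation(adapt, lum, dt, tau_light=0.1, tau_dark=10.0):
    """Advance the adaptation level by one frame of duration dt seconds."""
    tau = tau_light if lum > adapt else tau_dark   # light-adapts much faster
    return adapt + (lum - adapt) * (1.0 - math.exp(-dt / tau))

def display_response(lum, adapt):
    # Naka-Rushton-style compression around the current adaptation level.
    return lum / (lum + adapt)
```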

Toward a psychophysically-based light reflection model for image synthesis
Fabio Pellacini,
James A. Ferwerda,
Donald P. Greenberg
Pages: 55-64
DOI: 10.1145/344779.344812
In this paper we introduce a new light reflection model for image synthesis based on experimental studies of surface gloss perception. To develop the model, we've conducted two experiments that explore the relationships between the physical parameters used to describe the reflectance properties of glossy surfaces and the perceptual dimensions of glossy appearance. In the first experiment we use multidimensional scaling techniques to reveal the dimensionality of gloss perception for simulated painted surfaces. In the second experiment we use magnitude estimation methods to place metrics on these dimensions that relate changes in apparent gloss to variations in surface reflectance properties. We use the results of these experiments to rewrite the parameters of a physically-based light reflection model in perceptual terms. The result is a new psychophysically-based light reflection model where the dimensions of the model are perceptually meaningful, and variations along the dimensions are perceptually uniform. We demonstrate that the model can facilitate describing surface gloss in graphics rendering applications. This work represents a new methodology for developing light reflection models for image synthesis.

A microfacet-based BRDF generator
Michael Ashikmin,
Simon Premože,
Peter Shirley
Pages: 65-74
DOI: 10.1145/344779.344814
A method is presented that takes as input a 2D microfacet orientation distribution and produces a 4D bidirectional reflectance distribution function (BRDF). This method differs from previous microfacet-based BRDF models in that it uses a simple shadowing term which allows it to handle very general microfacet distributions while maintaining reciprocity and energy conservation. The generator is demonstrated on a variety of material types.
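Such a generator fills in the pieces of the familiar microfacet template, written here in its standard form (the paper's contribution is a simple shadowing term G that keeps the result reciprocal and energy-conserving for very general p):

```latex
\rho(\mathbf{k}_1, \mathbf{k}_2) =
  \frac{p(\mathbf{h})\; G(\mathbf{k}_1, \mathbf{k}_2)\; F(\mathbf{k}_1 \cdot \mathbf{h})}
       {4\, (\mathbf{n} \cdot \mathbf{k}_1)\,(\mathbf{n} \cdot \mathbf{k}_2)}
```

Here h is the half vector between the directions k1 and k2, p is the 2D microfacet distribution, G the shadowing term, and F the Fresnel factor.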

Monte Carlo evaluation of non-linear scattering equations for subsurface reflection
Matt Pharr,
Pat Hanrahan
Pages: 75-84
DOI: 10.1145/344779.344824
We describe a new mathematical framework for solving a wide variety of rendering problems based on a non-linear integral scattering equation. This framework treats the scattering functions of complex aggregate objects as first-class rendering primitives; these scattering functions accurately account for all scattering events inside them. We also describe new techniques for computing scattering functions from the composition of scattering objects. We demonstrate that solution techniques based on this new approach can be more efficient than previous techniques based on radiance transport and the equation of transfer, and we apply these techniques to a number of problems in rendering scattering from complex surfaces.

Displaced subdivision surfaces
Aaron Lee,
Henry Moreton,
Hugues Hoppe
Pages: 85-94
DOI: 10.1145/344779.344829
In this paper we introduce a new surface representation, the displaced subdivision surface. It represents a detailed surface model as a scalar-valued displacement over a smooth domain surface. Our representation defines both the domain surface and the displacement function using a unified subdivision framework, allowing for simple and efficient evaluation of analytic surface properties. We present a simple, automatic scheme for converting detailed geometric models into such a representation. The challenge in this conversion process is to find a simple subdivision surface that still faithfully expresses the detailed model as its offset. We demonstrate that displaced subdivision surfaces offer a number of benefits, including geometry compression, editing, animation, scalability, and adaptive rendering. In particular, the encoding of fine detail as a scalar function makes the representation extremely compact.
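The representation itself is compact to state: the detailed surface is the smooth domain surface pushed along its own normal by a scalar field,

```latex
S(u, v) = \sigma(u, v) + d(u, v)\,\hat{n}(u, v)
```

where σ is the subdivision domain surface, n̂ its unit normal, and d the scalar displacement; storing only d per sample is what makes the encoding compact.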

Normal meshes
Igor Guskov,
Kiril Vidimče,
Wim Sweldens,
Peter Schröder
Pages: 95-102
DOI: 10.1145/344779.344831
Normal meshes are new fundamental surface descriptions inspired by differential geometry. A normal mesh is a multiresolution mesh where each level can be written as a normal offset from a coarser version. Hence the mesh can be stored with a single float per vertex. We present an algorithm to approximate any surface arbitrarily closely with a normal semi-regular mesh. Normal meshes can be useful in numerous applications such as compression, filtering, rendering, texturing, and modeling.
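A minimal 2-D analogue makes the one-float-per-vertex claim concrete: refine a polyline by inserting each new vertex at its parents' midpoint, offset along the local normal by a single scalar. The base curve and offsets below are assumptions; the actual construction works on semi-regular triangle hierarchies:

```python
import numpy as np

def refine(points, offsets):
    """One refinement level; len(offsets) == len(points) - 1."""
    out = [points[0]]
    for (a, b), t in zip(zip(points[:-1], points[1:]), offsets):
        mid = 0.5 * (a + b)
        d = b - a
        n = np.array([-d[1], d[0]]) / np.linalg.norm(d)  # unit edge normal
        out.extend([mid + t * n, b])                     # one float per vertex
    return np.array(out)

base = np.array([[0.0, 0.0], [1.0, 0.0]])
curve = refine(base, offsets=[0.3])
curve = refine(curve, offsets=[0.1, -0.2])
```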

√3-subdivision
Leif Kobbelt
Pages: 103-112
DOI: 10.1145/344779.344835
A new stationary subdivision scheme is presented which performs slower topological refinement than the usual dyadic split operation. The number of triangles increases in every step by a factor of 3 instead of 4. Applying the subdivision operator twice causes a uniform refinement with tri-section of every original edge (hence the name √3-subdivision) while two dyadic splits would quad-sect every original edge. Besides the finer gradation of the hierarchy levels, the new scheme has several important properties: The stencils for the subdivision rules have minimum size and maximum symmetry. The smoothness of the limit surface is C2 everywhere except for the extraordinary points where it is C1. The convergence analysis of the scheme is presented based on a new general technique which also applies to the analysis of other subdivision schemes. The new splitting operation enables locally adaptive refinement under built-in preservation of the mesh consistency without temporary crack-fixing between neighboring faces from different refinement levels. The size of the surrounding mesh area which is affected by selective refinement is smaller than for the dyadic split operation. We further present a simple extension of the new subdivision scheme which makes it applicable to meshes with boundary and allows us to generate sharp feature lines.
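The topological half of one step is easy to sketch: insert a vertex at each triangle's centroid and connect it to the corners, as below; the scheme's smoothing rules and the subsequent flipping of the original edges (which, after two steps, produces the tri-section of edges named above) are omitted here:

```python
def split_1_to_3(vertices, triangles):
    """Centroid insertion: each triangle (a, b, c) becomes three."""
    vertices = [list(v) for v in vertices]
    refined = []
    for a, b, c in triangles:
        m = len(vertices)
        vertices.append([(vertices[a][k] + vertices[b][k] + vertices[c][k]) / 3.0
                         for k in range(3)])
        refined += [(a, b, m), (b, c, m), (c, a, m)]
    return vertices, refined  # triangle count grows by exactly a factor of 3
```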

Piecewise smooth subdivision surfaces with normal control
Henning Biermann,
Adi Levin,
Denis Zorin
Pages: 113-120
DOI: 10.1145/344779.344841
In this paper we introduce improved rules for Catmull-Clark and Loop subdivision that overcome several problems with the original schemes, namely, lack of smoothness at extraordinary boundary vertices and folds near concave corners. In addition, our approach to rule modification allows the generation of surfaces with prescribed normals, both on the boundary and in the interior, which considerably improves control of the shape of surfaces.

Environment matting extensions: towards higher accuracy and real-time capture
Yung-Yu Chuang,
Douglas E. Zongker,
Joel Hindorff,
Brian Curless,
David H. Salesin,
Richard Szeliski
Pages: 121-130
DOI: 10.1145/344779.344844
Environment matting is a generalization of traditional bluescreen matting. By photographing an object in front of a sequence of structured light backdrops, a set of approximate light-transport paths through the object can be computed. The original environment matting research chose a middle ground—using a moderate number of photographs to produce results that were reasonably accurate for many objects. In this work, we extend the technique in two opposite directions: recovering a more accurate model at the expense of using additional structured light backdrops, and obtaining a simplified matte using just a single backdrop. The first extension allows for the capture of complex and subtle interactions of light with objects, while the second allows for video capture of colorless objects in motion.

The digital Michelangelo project: 3D scanning of large statues
Marc Levoy,
Kari Pulli,
Brian Curless,
Szymon Rusinkiewicz,
David Koller,
Lucas Pereira,
Matt Ginzton,
Sean Anderson,
James Davis,
Jeremy Ginsberg,
Jonathan Shade,
Duane Fulk
Pages: 131-144
DOI: 10.1145/344779.344849
We describe a hardware and software system for digitizing the shape and color of large fragile objects under non-laboratory conditions. Our system employs laser triangulation rangefinders, laser time-of-flight rangefinders, digital still cameras, and a suite of software for acquiring, aligning, merging, and viewing scanned data. As a demonstration of this system, we digitized 10 statues by Michelangelo, including the well-known figure of David, two building interiors, and all 1,163 extant fragments of the Forma Urbis Romae, a giant marble map of ancient Rome. Our largest single dataset is of the David - 2 billion polygons and 7,000 color images. In this paper, we discuss the challenges we faced in building this system, the solutions we employed, and the lessons we learned. We focus in particular on the unusual design of our laser triangulation scanner and on the algorithms and software we developed for handling very large scanned models.

Acquiring the reflectance field of a human face
Paul Debevec,
Tim Hawkins,
Chris Tchou,
Haarm-Pieter Duiker,
Westley Sarokin,
Mark Sagar
Pages: 145-156
DOI: 10.1145/344779.344855
We present a method to acquire the reflectance field of a human face and use these measurements to render the face under arbitrary changes in lighting and viewpoint. We first acquire images of the face from a small set of viewpoints under a dense sampling of incident illumination directions using a light stage. We then construct a reflectance function image for each observed image pixel from its values over the space of illumination directions. From the reflectance functions, we can directly generate images of the face from the original viewpoints in any form of sampled or computed illumination. To change the viewpoint, we use a model of skin reflectance to estimate the appearance of the reflectance functions for novel viewpoints. We demonstrate the technique with synthetic renderings of a person's face under novel illumination and viewpoints.
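Because light transport is linear in the illumination, rendering under novel lighting from such data reduces to a weighted sum of the captured basis images, one per light-stage direction. A minimal sketch, with assumed array shapes:

```python
import numpy as np

def relight(basis_images, light_colors):
    """basis_images: (L, H, W, 3), one image per illumination direction.
    light_colors: (L, 3), the novel illumination sampled the same way.
    Returns the (H, W, 3) relit image as the linear combination."""
    return np.einsum('lhwc,lc->hwc', basis_images, light_colors)
```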

As-rigid-as-possible shape interpolation
Marc Alexa,
Daniel Cohen-Or,
David Levin
Pages: 157-164
DOI: 10.1145/344779.344859
We present an object-space morphing technique that blends the interiors of given two- or three-dimensional shapes rather than their boundaries. The morph is rigid in the sense that local volumes are least-distorting as they vary from their source to target configurations. Given a boundary vertex correspondence, the source and target shapes are decomposed into isomorphic simplicial complexes. For the simplicial complexes, we find a closed-form expression allocating the paths of both boundary and interior vertices from source to target locations as a function of time. Key points are the identification of the optimal simplex morphing and the appropriate definition of an error functional whose minimization defines the paths of the vertices. Each pair of corresponding simplices defines an affine transformation, which is factored into a rotation and a stretching transformation. These local transformations are naturally interpolated over time and serve as the basis for composing a global coherent least-distorting transformation.
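A minimal 2-D sketch of the factor-and-blend step for one simplex's linear map, assuming det(A) > 0 (an orientation-preserving, non-degenerate pair); the paper couples all simplices through the error functional rather than interpolating each independently:

```python
import numpy as np

def interpolate_affine(A, t):
    """Blend identity toward A = R S by rotating through t*theta
    and stretching by (1 - t) I + t S."""
    U, s, Vt = np.linalg.svd(A)
    R = U @ Vt                      # rotation factor (det(A) > 0 assumed)
    S = Vt.T @ np.diag(s) @ Vt      # symmetric positive stretch factor
    theta = np.arctan2(R[1, 0], R[0, 0])
    c, sn = np.cos(t * theta), np.sin(t * theta)
    Rt = np.array([[c, -sn], [sn, c]])
    return Rt @ ((1.0 - t) * np.eye(2) + t * S)
```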

Pose space deformation: a unified approach to shape interpolation and skeleton-driven deformation
J. P. Lewis,
Matt Cordner,
Nickson Fong
Pages: 165-172
DOI: 10.1145/344779.344862
Pose space deformation generalizes and improves upon both shape interpolation and common skeleton-driven deformation techniques. This deformation approach proceeds from the observation that several types of deformation can be uniformly represented as mappings from a pose space, defined by either an underlying skeleton or a more abstract system of parameters, to displacements in the object local coordinate frames. Once this uniform representation is identified, previously disparate deformation types can be accomplished within a single unified approach. The advantages of this algorithm include improved expressive power and direct manipulation of the desired shapes, while the performance associated with traditional shape interpolation remains achievable. Appropriate applications include animation of facial and body deformation for entertainment, telepresence, computer gaming, and other applications where direct sculpting of deformations is desired or where real-time synthesis of a deforming model is required.
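The core mechanism is scattered-data interpolation: sculpted per-vertex corrections at example poses are interpolated over pose space, for which radial basis functions are a natural choice. A minimal sketch, with the Gaussian kernel and its width as assumptions:

```python
import numpy as np

def psd_fit(poses, deltas, sigma=1.0):
    """poses: (n, d) example poses; deltas: (n, m) sculpted displacements.
    Returns per-example RBF weights."""
    d2 = ((poses[:, None, :] - poses[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-d2 / (2.0 * sigma ** 2))
    return np.linalg.solve(Phi, deltas)

def psd_eval(weights, poses, pose, sigma=1.0):
    """Displacement at a new pose, applied on top of the underlying
    skeleton-driven deformation in the object-local frame."""
    d2 = ((poses - pose) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2)) @ weights
```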

The EMOTE model for effort and shape
Diane Chi,
Monica Costa,
Liwei Zhao,
Norman Badler
Pages: 173-182
DOI: 10.1145/344779.352172
Human movements include limb gestures and postural attitude. Although many computer animation researchers have studied these classes of movements, procedurally generated movements still lack naturalness. We argue that looking only at the psychological notion of gesture is insufficient to capture movement qualities needed by animated characters. We advocate that the domain of movement observation science, specifically Laban Movement Analysis (LMA) and its Effort and Shape components, provides us with valuable parameters for the form and execution of qualitative aspects of movements. Inspired by some tenets shared among LMA proponents, we also point out that Effort and Shape phrasing across movements and the engagement of the whole body are essential aspects to be considered in the search for naturalness in procedurally generated gestures. Finally, we present EMOTE (Expressive MOTion Engine), a 3D character animation system that applies Effort and Shape qualities to independently defined underlying movements and thereby generates more natural synthetic gestures.

Style machines
Matthew Brand,
Aaron Hertzmann
Pages: 183-192
DOI: 10.1145/344779.344865
We approach the problem of stylistic motion synthesis by learning motion patterns from a highly varied set of motion capture sequences. Each sequence may have a distinct choreography, performed in a distinct style. Learning identifies common choreographic elements across sequences, the different styles in which each element is performed, and a small number of stylistic degrees of freedom which span the many variations in the dataset. The learned model can synthesize novel motion data in any interpolation or extrapolation of styles. For example, it can convert novice ballet motions into the more graceful modern dance of an expert. The model can also be driven by video, by scripts or even by noise to generate new choreography and synthesize virtual motion-capture in many styles.

Timewarp rigid body simulation
Brian Mirtich
Pages: 193-200
DOI: 10.1145/344779.344866
The traditional high-level algorithms for rigid body simulation work well for moderate numbers of bodies but scale poorly to systems of hundreds or more moving, interacting bodies. The problem is unnecessary synchronization implicit in these methods. Jefferson's timewarp algorithm [22] is a technique for alleviating this problem in parallel discrete event simulation. Rigid body dynamics, though a continuous process, exhibits many aspects of a discrete one. With modification, the timewarp algorithm can be used in a uniprocessor rigid body simulator to give substantial performance improvements for simulations with large numbers of bodies. This paper describes the limitations of the traditional high-level simulation algorithms, introduces Jefferson's algorithm, and extends and optimizes it for the rigid body case. It addresses issues particular to rigid body simulation, such as collision detection and contact group management, and describes how to incorporate these into the timewarp framework. Quantitative experimental results indicate that the timewarp algorithm offers significant performance improvements over traditional high-level rigid body simulation algorithms, when applied to systems with hundreds of bodies. It also helps pave the way to parallel implementations, as the paper discusses.

Interactive control for physically-based animation
Joseph Laszlo,
Michiel van de Panne,
Eugene Fiume
Pages: 201-208
DOI: 10.1145/344779.344876
We propose the use of interactive, user-in-the-loop techniques for controlling physically-based animated characters. With a suitably designed interface, the continuous and discrete input actions afforded by a standard mouse and keyboard allow for the creation of a broad range of motions. We apply our techniques to interactively control planar dynamic simulations of a bounding cat, a gymnastic desk lamp, and a human character capable of walking, running, climbing, and various gymnastic behaviors. The interactive control techniques allow a performer's intuition and knowledge about motion planning to be readily exploited. Video games are the current target application of this work.

Interactive manipulation of rigid body simulations
Jovan Popović,
Steven M. Seitz,
Michael Erdmann,
Zoran Popović,
Andrew Witkin
Pages: 209-217
DOI: 10.1145/344779.344880
Physical simulation of dynamic objects has become commonplace in computer graphics because it produces highly realistic animations. In this paradigm the animator provides a few physical parameters such as the objects' initial positions and velocities, and the simulator automatically generates realistic motions. The resulting motion, however, is difficult to control because even a small adjustment of the input parameters can drastically affect the subsequent motion. Furthermore, the animator often wishes to change the end-result of the motion instead of the initial physical parameters.
We describe a novel interactive technique for intuitive manipulation of rigid multi-body simulations. Using our system, the animator can select bodies at any time and simply drag them to desired locations. In response, the system computes the required physical parameters and simulates the resulting motion. Surface characteristics such as normals and elasticity coefficients can also be automatically adjusted to provide a greater range of feasible motions, if the animator so desires. Because the entire simulation editing process runs at interactive speeds, the animator can rapidly design complex physical animations that would be difficult to achieve with existing rigid body simulators.

Sampling plausible solutions to multi-body constraint problems
Stephen Chenney,
D. A. Forsyth
Pages: 219-228
DOI: 10.1145/344779.344882
Traditional collision-intensive multi-body simulations are difficult to control due to extreme sensitivity to initial conditions or model parameters. Furthermore, there may be multiple ways to achieve any one goal, and it may be difficult to codify a user's preferences before they have seen the available solutions. In this paper we extend simulation models to include plausible sources of uncertainty, and then use a Markov chain Monte Carlo algorithm to sample multiple animations that satisfy constraints. A user can choose the animation they prefer, or applications can take direct advantage of the multiple solutions. Our technique is applicable when a probability can be attached to each animation, with “good” animations having high probability, and for such cases we provide a definition of physical plausibility for animations. We demonstrate our approach with examples of multi-body rigid-body simulations that satisfy constraints of various kinds, for each case presenting animations that are true to a physical model, are significantly different from each other, and yet still satisfy the constraints.
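The sampling loop itself is ordinary Metropolis-Hastings over the uncertain simulation inputs; simulate() and plausibility() below are stand-ins for a real rigid-body simulator and the paper's probability model:

```python
import random

def sample_animations(theta0, simulate, plausibility, steps=1000, scale=0.05):
    """Random-walk Metropolis over simulation parameters theta."""
    theta = list(theta0)
    p = plausibility(simulate(theta))
    samples = []
    for _ in range(steps):
        cand = [x + random.gauss(0.0, scale) for x in theta]
        p_cand = plausibility(simulate(cand))
        # Accept with probability min(1, p_cand / p); proposal is symmetric.
        if random.random() * p < p_cand:
            theta, p = cand, p_cand
        samples.append(list(theta))
    return samples
```

Accepted runs are animations that are both physically plausible and constraint-satisfying, from which a user can pick.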

Conservative volumetric visibility with occluder fusion
Gernot Schaufler,
Julie Dorsey,
Xavier Decoret,
François X. Sillion
Pages: 229-238
DOI: 10.1145/344779.344886
Visibility determination is a key requirement in a wide range of graphics algorithms. This paper introduces a new approach to the computation of volume visibility, the detection of occluded portions of space as seen from a given region. The method is conservative and classifies regions as occluded only when they are guaranteed to be invisible. It operates on a discrete representation of space and uses the opaque interior of objects as occluders. This choice of occluders facilitates their extension into adjacent opaque regions of space, in essence maximizing their size and impact. Our method efficiently detects and represents the regions of space hidden by such occluders. It is the first one to use the property that occluders can also be extended into empty space provided this space is itself occluded from the viewing volume. This proves extremely effective for computing the occlusion by a set of occluders, effectively realizing occluder fusion. An auxiliary data structure represents occlusion in the scene and can then be queried to answer volume visibility questions. We demonstrate the applicability to visibility preprocessing for real-time walkthroughs and to shadow-ray acceleration for extended light sources in ray tracing, with significant acceleration in both cases.

Conservative visibility preprocessing using extended projections
Frédo Durand,
George Drettakis,
Joëlle Thollot,
Claude Puech
Pages: 239-248
DOI: 10.1145/344779.344891
Visualization of very complex scenes can be significantly accelerated using occlusion culling. In this paper we present a visibility preprocessing method which efficiently computes potentially visible geometry for volumetric viewing cells. We introduce novel extended projection operators, which permit efficient and conservative occlusion culling with respect to all viewpoints within a cell, and take into account the combined occlusion effect of multiple occluders. We use extended projection of occluders onto a set of projection planes to create extended occlusion maps; we show how to efficiently test occludees against these occlusion maps to determine occlusion with respect to the entire cell. We also present an improved projection operator for certain specific but important configurations. An important advantage of our approach is that we can re-project extended projections onto a series of projection planes (via an occlusion sweep), and accumulate occlusion information from multiple blockers. This new approach allows the creation of effective occlusion maps for previously hard-to-treat scenes such as leaves of trees in a forest. Graphics hardware is used to accelerate both the extended projection and reprojection operations. We present a complete implementation demonstrating significant speedup with respect to view-frustum culling only, without the computational overhead of on-line occlusion culling.

Adaptively sampled distance fields: a general representation of shape for computer graphics
Sarah F. Frisken,
Ronald N. Perry,
Alyn P. Rockwood,
Thouis R. Jones
Pages: 249-254
DOI: 10.1145/344779.344899
Adaptively Sampled Distance Fields (ADFs) are a unifying representation of shape that integrate numerous concepts in computer graphics including the representation of geometry and volume data and a broad range of processing operations such as rendering, sculpting, level-of-detail management, surface offsetting, collision detection, and color gamut correction. Their structure is uncomplicated and direct, yet especially effective for quality reconstruction of complex shapes, e.g., artistic and organic forms, precision parts, volumes, high order functions, and fractals. We characterize one implementation of ADFs, illustrating its utility on two diverse applications: 1) artistic carving of fine detail, and 2) representing and rendering volume data and volumetric effects. Other applications are briefly presented.
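A minimal 2-D (quadtree) sketch of the adaptive sampling criterion: keep a cell when interpolation of its corner distances already reproduces the field, subdivide otherwise. Real ADFs are octrees and test more than the centre point, and dist() here is a stand-in distance function:

```python
def dist(x, y):
    """Stand-in signed distance: a unit circle at the origin."""
    return (x * x + y * y) ** 0.5 - 1.0

def build(x0, y0, size, eps=1e-3, depth=8):
    corners = [dist(x0, y0), dist(x0 + size, y0),
               dist(x0, y0 + size), dist(x0 + size, y0 + size)]
    cx, cy = x0 + size / 2.0, y0 + size / 2.0
    approx = sum(corners) / 4.0  # bilinear reconstruction at the centre
    if depth == 0 or abs(approx - dist(cx, cy)) <= eps:
        return {'cell': (x0, y0, size), 'corners': corners}  # leaf cell
    half = size / 2.0
    return {'children': [build(x0, y0, half, eps, depth - 1),
                         build(cx, y0, half, eps, depth - 1),
                         build(x0, cy, half, eps, depth - 1),
                         build(cx, cy, half, eps, depth - 1)]}
```

Cells stay coarse where the field is nearly linear and refine only near detail, which is the source of the representation's economy.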

Patching Catmull-Clark meshes
Jörg Peters
Pages: 255-258
DOI: 10.1145/344779.344908
The PCCM (patching Catmull-Clark meshes) transformation, named after the paper's title, is a simple, explicit algorithm that creates large, smoothly joining bicubic NURBS patches from a refined Catmull-Clark subdivision mesh. The resulting patches are maximally large in the sense that one patch corresponds to one quadrilateral facet of the initial, coarsest quadrilateral mesh before subdivision. The patches join parametrically C2 and agree with the Catmull-Clark limit surface except in the immediate neighborhood of extraordinary mesh nodes; in such a neighborhood they join at least with tangent continuity and interpolate the limit of the extraordinary mesh node. The PCCM transformation integrates naturally with array-based implementations of subdivision surfaces.

Out-of-core simplification of large polygonal models
Peter Lindstrom
Pages: 259-262
DOI: 10.1145/344779.344912
We present an algorithm for out-of-core simplification of large polygonal datasets that are too complex to fit in main memory. The algorithm extends the vertex clustering scheme of Rossignac and Borrel [13] by using error quadric information for the placement of each cluster's representative vertex, which better preserves fine details and results in a low mean geometric error. The use of quadrics instead of the vertex grading approach in [13] has the additional benefits of requiring less disk space and only a single pass over the model rather than two. The resulting linear time algorithm allows simplification of datasets of arbitrary complexity.
In order to handle degenerate quadrics associated with (near) flat regions and regions with zero Gaussian curvature, we present a robust method for solving the corresponding underconstrained least-squares problem. The algorithm is able to detect these degeneracies and handle them gracefully. Key features of the simplification method include a bounded Hausdorff error, low mean geometric error, high simplification speed (up to 100,000 triangles/second reduction), output (but not input) sensitive memory requirements, no disk space overhead, and a running time that is independent of the order in which vertices and triangles occur in the mesh.
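A minimal sketch of the quadric bookkeeping: each triangle contributes an area-weighted plane quadric to the clusters its corners land in, and the representative vertex minimizes the accumulated error. np.linalg.lstsq, which falls back to an SVD-based minimum-norm solution when the system is rank deficient, stands in here for the paper's more careful treatment of degenerate quadrics:

```python
import numpy as np

def plane_quadric(p0, p1, p2):
    """Area-weighted quadric (A, b) of the triangle's supporting plane."""
    n = np.cross(p1 - p0, p2 - p0)
    area = np.linalg.norm(n)
    n = n / area
    d = -n.dot(p0)
    return area * np.outer(n, n), area * d * n

def representative(quadrics):
    """Vertex minimizing the summed error v^T A v + 2 b^T v over a cluster."""
    A = sum(q[0] for q in quadrics)
    b = sum(q[1] for q in quadrics)
    v, *_ = np.linalg.lstsq(A, -b, rcond=None)  # solves A v = -b robustly
    return v
```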

Face fixer: compressing polygon meshes with properties
Martin Isenburg,
Jack Snoeyink
Pages: 263-270
DOI: 10.1145/344779.344919
Most schemes to compress the topology of a surface mesh have been developed for the lowest common denominator: triangulated meshes. We propose a scheme that handles the topology of arbitrary polygon meshes. It encodes meshes directly in their polygonal representation and extends to capture face groupings in a natural way. By avoiding the triangulation step, we reduce the storage costs for typical polygon models that have group structures and property data.

Progressive geometry compression
Andrei Khodakovsky,
Peter Schröder,
Wim Sweldens
Pages: 271-278
DOI: 10.1145/344779.344922
We propose a new progressive compression scheme for arbitrary topology, highly detailed and densely sampled meshes arising from geometry scanning. We observe that meshes consist of three distinct components: geometry, parameter, and connectivity information. The latter two do not contribute to the reduction of error in a compression setting. Using semi-regular meshes, parameter and connectivity information can be virtually eliminated. Coupled with semi-regular wavelet transforms, zerotree coding, and subdivision based reconstruction we see improvements in error by a factor of four (12 dB) compared to other progressive coding schemes.

Spectral compression of mesh geometry
Zachi Karni,
Craig Gotsman
Pages: 279-286
DOI: 10.1145/344779.344924
We show how spectral methods may be applied to 3D mesh data to obtain compact representations. This is achieved by projecting the mesh geometry onto an orthonormal basis derived from the mesh topology. To reduce complexity, the mesh is partitioned into a number of balanced submeshes with minimal interaction, each of which is compressed independently. Our methods may be used for compression and progressive transmission of 3D content, and are shown to be vastly superior to existing methods using spatial techniques, if slight loss can be tolerated.
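A minimal sketch of the projection step on one (sub)mesh, using the eigenvectors of the combinatorial graph Laplacian as the basis; quantization of the coefficients and the partitioning into submeshes are omitted:

```python
import numpy as np

def spectral_compress(positions, edges, k):
    """positions: (n, 3); edges: iterable of (i, j) index pairs.
    Keeps the k lowest-frequency coefficients."""
    n = len(positions)
    L = np.zeros((n, n))
    for i, j in edges:               # combinatorial Laplacian: D - A
        L[i, j] -= 1.0
        L[j, i] -= 1.0
        L[i, i] += 1.0
        L[j, j] += 1.0
    _, basis = np.linalg.eigh(L)     # columns ordered low to high frequency
    coeffs = basis[:, :k].T @ positions
    return coeffs, basis[:, :k]

def spectral_decompress(coeffs, basis_k):
    return basis_k @ coeffs          # approximate (n, 3) positions
```

Truncating to the low-frequency coefficients discards exactly the high-frequency geometric detail the eye tolerates losing, which is where the "slight loss" above comes from.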

Surface light fields for 3D photography
Daniel N. Wood,
Daniel I. Azuma,
Ken Aldinger,
Brian Curless,
Tom Duchamp,
David H. Salesin,
Werner Stuetzle
Pages: 287-296
DOI: 10.1145/344779.344925
A surface light field is a function that assigns a color to each ray originating on a surface. Surface light fields are well suited to constructing virtual images of shiny objects under complex lighting conditions. This paper presents a framework for construction, compression, interactive rendering, and rudimentary editing of surface light fields of real objects. Generalizations of vector quantization and principal component analysis are used to construct a compressed representation of an object's surface light field from photographs and range scans. A new rendering algorithm achieves interactive rendering of images from the compressed representation, incorporating view-dependent geometric level-of-detail control. The surface light field representation can also be directly edited to yield plausible surface light fields for small changes in surface geometry and reflectance properties.

Dynamically reparameterized light fields
Aaron Isaksen,
Leonard McMillan,
Steven J. Gortler
Pages: 297-306
DOI: 10.1145/344779.344929
This research further develops the light field and lumigraph image-based rendering methods and extends their utility. We present alternate parameterizations that permit 1) interactive rendering of moderately sampled light fields of scenes with significant, unknown depth variation and 2) low-cost, passive autostereoscopic viewing. Using a dynamic reparameterization, these techniques can be used to interactively render photographic effects such as variable focus and depth-of-field within a light field. The dynamic parameterization is independent of scene geometry and does not require actual or approximate geometry of the scene. We explore the frequency domain and ray-space aspects of dynamic reparameterization, and present an interactive rendering technique that takes advantage of today's commodity rendering hardware.

Plenoptic sampling
Jin-Xiang Chai,
Xin Tong,
Shing-Chow Chan,
Heung-Yeung Shum
Pages: 307-318
DOI: 10.1145/344779.344932
This paper studies the problem of plenoptic sampling in image-based rendering (IBR). From a spectral analysis of light field signals and using the sampling theorem, we mathematically derive the analytical functions to determine the minimum sampling rate for light field rendering. The spectral support of a light field signal is bounded by the minimum and maximum depths only, no matter how complicated the spectral support might be because of depth variations in the scene. The minimum sampling rate for light field rendering is obtained by compacting the replicas of the spectral support of the sampled light field within the smallest interval. Given the minimum and maximum depths, a reconstruction filter with an optimal and constant depth can be designed to achieve anti-aliased light field rendering.
Plenoptic sampling goes beyond the minimum number of images needed for anti-aliased light field rendering. More significantly, it utilizes the scene depth information to determine the minimum sampling curve in the joint image and geometry space. The minimum sampling curve quantitatively describes the relationship among three key elements in IBR systems: scene complexity (geometrical and textural information), the number of image samples, and the output resolution. Therefore, plenoptic sampling bridges the gap between image-based rendering and traditional geometry-based rendering. Experimental results demonstrate the effectiveness of our approach.
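One concrete output of the analysis: with the scene known only to lie between z_min and z_max, the optimal constant depth for the reconstruction filter sits at the harmonic mean of the two bounds, centring the spectral replicas bounded by the 1/z_min and 1/z_max lines:

```latex
\frac{1}{z_c} \;=\; \frac{1}{2}\left(\frac{1}{z_{\min}} + \frac{1}{z_{\max}}\right)
```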

An autostereoscopic display
Ken Perlin,
Salvatore Paxia,
Joel S. Kollin
Pages: 319-326
DOI: 10.1145/344779.344933
We present a display device which solves a long-standing problem: to give a true stereoscopic view of simulated objects, without artifacts, to a single unencumbered observer, while allowing the observer to freely change position and head rotation.
Based on a novel combination of temporal and spatial multiplexing, this technique will enable artifact-free stereo to become a standard feature of display screens, without requiring the use of special eyewear. The availability of this technology may significantly impact CAD and CHI applications, as well as entertainment graphics. The underlying algorithms and system architecture are described, as well as hardware and software aspects of the implementation.

Silhouette clipping
Pedro V. Sander,
Xianfeng Gu,
Steven J. Gortler,
Hugues Hoppe,
John Snyder
Pages: 327-334
DOI: 10.1145/344779.344935
Approximating detailed models with coarse, texture-mapped meshes results in polygonal silhouettes. To eliminate this artifact, we introduce silhouette clipping, a framework for efficiently clipping the rendering of coarse geometry to the exact silhouette of the original model. The coarse mesh is obtained using progressive hulls, a novel representation with the nesting property required for proper clipping. We describe an improved technique for constructing texture and normal maps over this coarse mesh. Given a perspective view, silhouettes are efficiently extracted from the original mesh using a precomputed search tree. Within the tree, hierarchical culling is achieved using pairs of anchored cones. The extracted silhouette edges are used to set the hardware stencil buffer and alpha buffer, which in turn clip and antialias the rendered coarse geometry. Results demonstrate that silhouette clipping can produce renderings of similar quality to high-resolution meshes in less rendering time.
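The predicate being accelerated is the standard per-edge silhouette test, sketched below; the search tree and anchored cones exist to avoid evaluating it for every edge of the original mesh:

```python
import numpy as np

def is_silhouette_edge(eye, p1, n1, p2, n2):
    """For a closed mesh: the edge between the face through p1 with
    normal n1 and the face through p2 with normal n2 is a silhouette
    edge iff one face is front-facing and the other back-facing."""
    return float(n1 @ (eye - p1)) * float(n2 @ (eye - p2)) < 0.0
```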

Surfels: surface elements as rendering primitives
Hanspeter Pfister,
Matthias Zwicker,
Jeroen van Baar,
Markus Gross
Pages: 335-342
DOI: 10.1145/344779.344936
Surface elements (surfels) are a powerful paradigm to efficiently render complex geometric objects at interactive frame rates. Unlike classical surface discretizations, i.e., triangles or quadrilateral meshes, surfels are point primitives without explicit connectivity. Surfel attributes comprise depth, texture color, normal, and others. As a pre-process, an octree-based surfel representation of a geometric object is computed. During sampling, surfel positions and normals are optionally perturbed, and different levels of texture colors are prefiltered and stored per surfel. During rendering, a hierarchical forward warping algorithm projects surfels to a z-buffer. A novel method called visibility splatting determines visible surfels and holes in the z-buffer. Visible surfels are shaded using texture filtering, Phong illumination, and environment mapping using per-surfel normals. Several methods of image reconstruction, including supersampling, offer flexible speed-quality tradeoffs. Due to the simplicity of the operations, the surfel rendering pipeline is amenable to hardware implementation. Surfel objects offer complex shape, low rendering cost and high image quality, which makes them specifically suited for low-cost, real-time graphics, such as games.

QSplat: a multiresolution point rendering system for large meshes
Szymon Rusinkiewicz,
Marc Levoy
Pages: 343-352
DOI: 10.1145/344779.344940
Advances in 3D scanning technologies have enabled the practical creation of meshes with hundreds of millions of polygons. Traditional algorithms for display, simplification, and progressive transmission of meshes are impractical for data sets of this size. We describe a system for representing and progressively displaying these meshes that combines a multiresolution hierarchy based on bounding spheres with a rendering system based on points. A single data structure is used for view frustum culling, backface culling, level-of-detail selection, and rendering. The representation is compact and can be computed quickly, making it suitable for large data sets. Our implementation, written for use in a large-scale 3D digitization project, launches quickly, maintains a user-settable interactive frame rate regardless of object complexity or camera position, yields reasonable image quality during motion, and refines progressively when idle to a high final image quality. We have demonstrated the system on scanned models containing hundreds of millions of samples.
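The heart of such a system is a short recursive traversal, sketched here; outside_frustum, projected_size, and draw_splat are assumed helpers, and the backface test against per-node normal cones is omitted:

```python
def traverse(node, camera, size_threshold):
    """Render a bounding-sphere hierarchy as splats."""
    if outside_frustum(node.sphere, camera):       # view-frustum culling
        return
    size = projected_size(node.sphere, camera)     # screen-space extent
    if node.is_leaf or size < size_threshold:
        draw_splat(node.sphere, node.normal, node.color, size)
    else:
        for child in node.children:
            traverse(child, camera, size_threshold)
```

Raising or lowering size_threshold between frames is what lets a system like this hold a user-settable frame rate regardless of object complexity.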

A fast relighting engine for interactive cinematic lighting design
Reid Gershbein,
Pat Hanrahan
Pages: 353-358
DOI: 10.1145/344779.344938
We present new techniques for interactive cinematic lighting design of complex scenes that use procedural shaders. Deep-framebuffers are used to store the geometric and optical information of the visible surfaces of an image. The geometric information is represented as collections of oriented points, and the optical information is represented as bi-directional reflection distribution functions, or BRDFs. The BRDFs are generated by procedurally defined surface texturing functions that spatially vary the surfaces' appearances.
The deep-framebuffer information is rendered using a multi-pass algorithm built on the OpenGL graphics pipeline. In order to handle both physically-correct as well as non-realistic reflection models used in the film industry, we factor the BRDF into independent components that map onto both the lighting and texturing units of the graphics hardware. A similar factorization is used to control the lighting distribution. Using these techniques, lighting calculations can be evaluated 2500 times faster than previous methods. This allows lighting changes to be rendered at rates of 20 Hz in static environments that contain millions of objects with dozens of unique procedurally defined surface properties and scores of lights.

Relief texture mapping
Manuel M. Oliveira,
Gary Bishop,
David McAllister
Pages: 359-368
DOI: 10.1145/344779.344947
We present an extension to texture mapping that supports the representation of 3-D surface details and view motion parallax. The results are correct for viewpoints that are static or moving, far away or nearby. Our approach is very simple: a relief texture (texture extended with an orthogonal displacement per texel) is mapped onto a polygon using a two-step process: First, it is converted into an ordinary texture using a surprisingly simple 1-D forward transform. The resulting texture is then mapped onto the polygon using standard texture mapping. The 1-D warping functions work in texture coordinates to handle the parallax and visibility changes that result from the 3-D shape of the displacement surface. The subsequent texture-mapping operation handles the transformation from texture to screen coordinates.

Image-based visual hulls
Wojciech Matusik,
Chris Buehler,
Ramesh Raskar,
Steven J. Gortler,
Leonard McMillan
Pages: 369-374
DOI: 10.1145/344779.344951
In this paper, we describe an efficient image-based approach to computing and shading visual hulls from silhouette image data. Our algorithm takes advantage of epipolar geometry and incremental computation to achieve a constant rendering cost per rendered pixel. It does not suffer from the computational complexity, limited resolution, or quantization artifacts of previous volumetric approaches. We demonstrate the use of this algorithm in a real-time virtualized reality application running off a small number of video streams.

Efficient image-based methods for rendering soft shadows
Maneesh Agrawala,
Ravi Ramamoorthi,
Alan Heirich,
Laurent Moll
Pages: 375-384
DOI: 10.1145/344779.344954
We present two efficient image-based approaches for computation and display of high-quality soft shadows from area light sources. Our methods are related to shadow maps and provide the associated benefits. The computation time and memory requirements for adding soft shadows to an image depend on image size and the number of lights, not geometric scene complexity. We also show that because area light sources are localized in space, soft shadow computations are particularly well suited to image-based rendering techniques. Our first approach—layered attenuation maps—achieves interactive rendering rates, but limits sampling flexibility, while our second method—coherence-based raytracing of depth images—is not interactive, but removes the limitations on sampling and yields high quality images at a fraction of the cost of conventional raytracers. Combining the two algorithms allows for rapid previewing followed by efficient high-quality rendering.
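Both methods ultimately estimate, per receiver point, the fraction of the area light that is unoccluded. A brute-force sketch of that quantity, with the occlusion test left as an assumed callback (a shadow-map or ray query):

```python
import random

def soft_shadow_fraction(point, light_corner, edge_u, edge_v, occluded, n=64):
    """Estimate the visible fraction of a rectangular area light from `point`.

    The light is parameterized as corner + s*edge_u + t*edge_v, s, t in [0, 1).
    `occluded(p, q)` is an assumed user-supplied predicate reporting whether
    the segment from p to q is blocked. Both of the paper's methods amortize
    exactly this per-light-sample visibility across pixels.
    """
    visible = 0
    for _ in range(n):
        s, t = random.random(), random.random()
        sample = tuple(c + s * u + t * v
                       for c, u, v in zip(light_corner, edge_u, edge_v))
        if not occluded(point, sample):
            visible += 1
    return visible / n
```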

Deep shadow maps
Tom Lokovic, Eric Veach
Pages: 385-392
doi: 10.1145/344779.344958
Full text: PDF
We introduce deep shadow maps, a technique that produces fast, high-quality shadows for primitives such as hair, fur, and smoke. Unlike traditional shadow maps, which store a single depth at each pixel, deep shadow maps store a representation of the fractional visibility through a pixel at all possible depths. Deep shadow maps have several advantages. First, they are prefiltered, which allows faster shadow lookups and much smaller memory footprints than regular shadow maps of similar quality. Second, they support shadows from partially transparent surfaces and volumetric objects such as fog. Third, they handle important cases of motion blur at no extra cost. The algorithm is simple to implement and can be added easily to existing renderers as an alternative to ordinary shadow maps.
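A minimal sketch of the data structure as described: one pixel stores a compressed, piecewise-linear visibility function over depth, and a shadow lookup interpolates it. The class and example values below are illustrative:

```python
import bisect

class DeepShadowPixel:
    """One pixel of a deep shadow map: fractional visibility as a function of
    depth, stored as (depth, visibility) control points sorted by depth and
    typically compressed to far fewer points than the raw depth samples.
    """
    def __init__(self, points):
        self.depths = [d for d, _ in points]
        self.vis = [v for _, v in points]

    def visibility(self, z):
        """Fractional visibility at depth z, by linear interpolation."""
        i = bisect.bisect_right(self.depths, z)
        if i == 0:
            return self.vis[0]               # in front of everything
        if i == len(self.depths):
            return self.vis[-1]              # behind everything
        d0, d1 = self.depths[i - 1], self.depths[i]
        v0, v1 = self.vis[i - 1], self.vis[i]
        t = (z - d0) / (d1 - d0)
        return v0 + t * (v1 - v0)

# A hypothetical pixel: fully lit until hair between depths 2 and 4
# attenuates the light to 25%.
px = DeepShadowPixel([(0.0, 1.0), (2.0, 1.0), (4.0, 0.25), (10.0, 0.25)])
assert abs(px.visibility(3.0) - 0.625) < 1e-9
```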

Tangible interaction + graphical interpretation: a new approach to 3D modeling
David Anderson, James L. Frankel, Joe Marks, Aseem Agarwala, Paul Beardsley, Jessica Hodgins, Darren Leigh, Kathy Ryall, Eddie Sullivan, Jonathan S. Yedidia
Pages: 393-402
doi: 10.1145/344779.344960
Full text: PDF
Construction toys are a superb medium for geometric models. We argue that such toys, suitably instrumented or sensed, could be the inspiration for a new generation of easy-to-use, tangible modeling systems—especially if the tangible modeling is combined with graphical-interpretation techniques for enhancing nascent models automatically. The three key technologies needed to realize this idea are embedded computation, vision-based acquisition, and graphical interpretation. We sample these technologies in the context of two novel modeling systems: physical building blocks that self-describe, interpret, and decorate the structures into which they are assembled; and a system for scanning, interpreting, and animating clay figures.

Accessible animation and customizable graphics via simplicial configuration modeling
Tom Ngo, Doug Cutrell, Jenny Dana, Bruce Donald, Lorie Loeb, Shunhui Zhu
Pages: 403-410
doi: 10.1145/344779.344964
Full text: PDF
Our goal is to embed free-form constraints into a graphical model. With such constraints a graphic can maintain its visual integrity—and break rules tastefully—while being manipulated by a casual user. A typical parameterized graphic does not meet these needs because its configuration space contains nonsense images in much higher proportion than desirable images, and the casual user is apt to ruin the graphic on any attempt to modify or animate it.
We therefore model the small subset of a given graphic's configuration space that maps to desirable images. In our solution, the basic building block is a simplicial complex—the most practical data structure able to accommodate the variety of topologies that can arise. The configuration-space model can be built from a cross product of such complexes. We describe how to define the mapping from this space to the image space. We show how to invert that mapping, allowing the user to manipulate the image without understanding the structure of the configuration-space model. We also show how to extend the mapping when the original parameterization contains hierarchy, coordinate transformations, and other nonlinearities.
Our software implementation applies simplicial configuration modeling to 2D vector graphics.
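One small slice of this machinery can be sketched directly: evaluating the model inside a single simplex is a barycentric blend of the example configurations at its vertices (the complexes, cross products, and mapping inversion are beyond a short sketch):

```python
import numpy as np

def blend_configurations(bary, vertex_configs):
    """Evaluate one simplex of a configuration-space model.

    bary:           barycentric coordinates (k+1,), nonnegative, summing to 1,
                    locating a point inside a k-simplex.
    vertex_configs: (k+1, n) array; each row is the parameter vector of one
                    example configuration at a simplex vertex.
    Staying inside the simplex is what keeps the result a desirable image.
    """
    bary = np.asarray(bary, dtype=float)
    assert np.all(bary >= 0) and abs(bary.sum() - 1.0) < 1e-9
    return bary @ np.asarray(vertex_configs, dtype=float)

# Hypothetical: blend three example poses of a 4-parameter vector graphic.
poses = np.array([[0, 0, 1, 1], [2, 0, 1, 0], [0, 2, 0, 1]], dtype=float)
print(blend_configurations([0.5, 0.25, 0.25], poses))
```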

Example-based hinting of TrueType fonts
Douglas E. Zongker, Geraldine Wade, David H. Salesin
Pages: 411-416
doi: 10.1145/344779.344969
Full text: PDF
Hinting in TrueType is a time-consuming manual process in which a typographer creates a sequence of instructions for better fitting the characters of a font to a grid of pixels. In this paper, we propose a new method for automatically hinting TrueType fonts by transferring hints of one font to another. Given a hinted source font and a target font without hints, our method matches the outlines of corresponding glyphs in each font, and then translates all of the individual hints for each glyph from the source to the target font. It also translates the control value table (CVT) entries, which are used to unify feature sizes across a font. The resulting hinted font already provides a great improvement over the unhinted version. More importantly, the translated hints, which preserve the sound, hand-designed hinting structure of the original font, provide a very good starting point for a professional typographer to complete and fine-tune, saving time and increasing productivity. We demonstrate our approach with examples of automatically hinted fonts at typical display sizes and screen resolutions. We also provide estimates of the time saved by a professional typographer in hinting new fonts using this semi-automatic approach.

Image inpainting
Marcelo Bertalmio, Guillermo Sapiro, Vincent Caselles, Coloma Ballester
Pages: 417-424
doi: 10.1145/344779.344972
Full text: PDF
Inpainting, the technique of modifying an image in an undetectable form, is as ancient as art itself. The goals and applications of inpainting are numerous, from the restoration of damaged paintings and photographs to the removal/replacement of selected objects. In this paper, we introduce a novel algorithm for digital inpainting of still images that attempts to replicate the basic techniques used by professional restorators. After the user selects the regions to be restored, the algorithm automatically fills in these regions with information surrounding them. The fill-in is done in such a way that isophote lines arriving at the regions' boundaries are completed inside. In contrast with previous approaches, the technique introduced here does not require the user to specify where the novel information comes from. This is done automatically (and quickly), allowing us to simultaneously fill in numerous regions containing completely different structures and surrounding backgrounds. In addition, no limitations are imposed on the topology of the region to be inpainted. Applications of this technique include the restoration of old photographs and damaged film; removal of superimposed text like dates, subtitles, or publicity; and the removal of entire objects from the image like microphones or wires in special effects.
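The isophote-transport idea can be sketched in a few lines of numpy: propagate image smoothness (the Laplacian) along isophote directions into the masked region. This is a simplified iteration in the spirit of the method, not the authors' exact discretization, and it omits the interleaved anisotropic-diffusion steps:

```python
import numpy as np

def inpaint_step(img, mask, dt=0.1):
    """One iteration of isophote-driven inpainting on a grayscale image.

    img:  2-D float array. mask: boolean array, True inside the hole.
    Transports the Laplacian (the "information") along isophote directions
    (perpendicular to the gradient), updating only masked pixels.
    """
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    ly, lx = np.gradient(lap)           # gradient of the Laplacian
    gy, gx = np.gradient(img)           # image gradient
    norm = np.sqrt(gx**2 + gy**2) + 1e-8
    # The isophote direction is the gradient rotated 90 degrees: (-gy, gx).
    change = (lx * (-gy) + ly * gx) / norm
    out = img.copy()
    out[mask] += dt * change[mask]
    return out
```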

Interactive multi-pass programmable shading
Mark S. Peercy, Marc Olano, John Airey, P. Jeffrey Ungar
Pages: 425-432
doi: 10.1145/344779.344976
Full text: PDF
Programmable shading is a common technique for production animation, but interactive programmable shading is not yet widely available. We support interactive programmable shading on virtually any 3D graphics hardware using a scene graph library on top of OpenGL. We treat the OpenGL architecture as a general SIMD computer, and translate the high-level shading description into OpenGL rendering passes. While our system uses OpenGL, the techniques described are applicable to any retained mode interface with appropriate extension mechanisms and hardware API with provisions for recirculating data through the graphics pipeline.
We present two demonstrations of the method. The first is a constrained shading language that runs on graphics hardware supporting OpenGL 1.2 with a subset of the ARB imaging extensions. We remove the shading language constraints by minimally extending OpenGL. The key extensions are color range (supporting extended range and precision data types) and pixel texture (using framebuffer values as indices into texture maps). Our second demonstration is a renderer supporting the RenderMan Interface and RenderMan Shading Language on a software implementation of this extended OpenGL. For both languages, our compiler technology can take advantage of extensions and performance characteristics unique to any particular graphics hardware.
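The compilation idea, treating each rendering pass as one SIMD instruction whose accumulator is the framebuffer, can be sketched with a toy shade-tree flattener. The grammar and blend names below are illustrative, not the system's actual intermediate representation:

```python
def compile_passes(node):
    """Flatten a shade tree into an ordered list of rendering passes.

    Grammar (illustrative):
        ('texture', name) | ('const', rgb) | ('add' | 'mul', left, right)
    with the right child of every operator restricted to a leaf. Each pass
    (leaf, blend) draws the geometry once, sourcing color from `leaf`, and
    combines it into the framebuffer with `blend`. A real compiler spills
    non-leaf right children to intermediate textures instead.
    """
    op = node[0]
    if op in ('texture', 'const'):
        return [(node, 'replace')]
    _, left, right = node
    assert right[0] in ('texture', 'const'), 'sketch handles leaf RHS only'
    blend = {'add': 'add', 'mul': 'modulate'}[op]
    return compile_passes(left) + [(right, blend)]

# diffuse texture * light color + ambient  ->  three framebuffer passes
tree = ('add',
        ('mul', ('texture', 'diffuse'), ('const', (0.8, 0.8, 0.8))),
        ('const', (0.1, 0.1, 0.1)))
for p in compile_passes(tree):
    print(p)
```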

The WarpEngine: an architecture for the post-polygonal age
Voicu Popescu, John Eyles, Anselmo Lastra, Joshua Steinhurst, Nick England, Lars Nyland
Pages: 433-442
doi: 10.1145/344779.344979
Full text: PDF
We present the WarpEngine, an architecture designed for real-time image-based rendering of natural scenes from arbitrary viewpoints. The modeling primitives are real-world images with per-pixel depth. Currently they are acquired and stored off-line; in the near future real-time depth-image acquisition will be possible, and the WarpEngine is designed to render in immediate mode from such data sources.
The depth-image resolution is locally adapted by interpolation to match the resolution of the output image. 3D warping can occur either before or after the interpolation; the resulting warped/interpolated samples are forward-mapped into a warp buffer, with the precise locations recorded using an offset. Warping processors are integrated on-chip with the warp buffer, allowing efficient, scalable implementation of very high performance systems. Each chip will be able to process 100 million samples per second and provide 4.8 gigabytes per second of bandwidth to the warp buffer.
The WarpEngine is significantly less complex than our previous efforts, incorporating only a single ASIC design. Small configurations can be packaged as a PC add-in card, while larger deskside configurations will provide HDTV resolutions at 50 Hz, enabling radical new applications such as 3D television.
WarpEngine will be highly programmable, facilitating use as a test-bed for experimental IBR algorithms.
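The per-sample arithmetic such a warping processor evaluates is the standard 3-D warp for depth images. A sketch under one common set of camera conventions (ours, not necessarily the hardware's):

```python
import numpy as np

def warp_3d(x1, y1, disparity, P1, C1, P2, C2):
    """McMillan-style 3-D warp of one reference-image sample.

    P1, P2: 3x3 camera matrices mapping pixel coordinates to rays;
    C1, C2: camera centers, shape (3,). `disparity` is the generalized
    disparity stored per pixel with the depth image. Returns the sample's
    pixel position in the destination view.
    """
    x_ref = np.array([x1, y1, 1.0])
    dest = np.linalg.inv(P2) @ ((C1 - C2) * disparity + P1 @ x_ref)
    return dest[0] / dest[2], dest[1] / dest[2]   # homogeneous divide
```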

Pomegranate: a fully scalable graphics architecture
Matthew Eldridge, Homan Igehy, Pat Hanrahan
Pages: 443-454
doi: 10.1145/344779.344981
Full text: PDF
Pomegranate is a parallel hardware architecture for polygon rendering that provides scalable input bandwidth, triangle rate, pixel rate, texture memory and display bandwidth while maintaining an immediate-mode interface. The basic unit of scalability is a single graphics pipeline, and up to 64 such units may be combined. Pomegranate's scalability is achieved with a novel “sort-everywhere” architecture that distributes work in a balanced fashion at every stage of the pipeline, keeping the amount of work performed by each pipeline uniform as the system scales. Because of the balanced distribution, a scalable network based on high-speed point-to-point links can be used for communicating between the pipelines.
Pomegranate uses the network to load balance triangle and fragment work independently, to provide a shared texture memory and to provide a scalable display system. The architecture provides one interface per pipeline for issuing ordered, immediate-mode rendering commands and supports a parallel API that allows multiprocessor applications to exactly order drawing commands from each interface. A detailed hardware simulation demonstrates performance on next-generation workloads. Pomegranate operates at 87-99% parallel efficiency with 64 pipelines, for a simulated performance of up to 1.10 billion triangles per second and 21.8 billion pixels per second.

Illuminating micro geometry based on precomputed visibility
Wolfgang Heidrich, Katja Daubert, Jan Kautz, Hans-Peter Seidel
Pages: 455-464
doi: 10.1145/344779.344984
Full text: PDF
Many researchers have argued that geometry, bump maps, and BRDFs present a hierarchy of detail that should be exploited for efficient rendering. In practice, however, this is often not possible due to inconsistencies in the illumination at these different levels of detail. For example, while bump-map rendering often considers only direct illumination without shadows, geometry-based rendering and BRDFs mostly also respect shadowing effects, and in many cases even indirect illumination caused by scattered light.
In this paper, we present an approach for overcoming these inconsistencies. We introduce an inexpensive method for consistently illuminating height fields and bump maps, as well as simulating BRDFs based on precomputed visibility information. With this information we can achieve a consistent illumination across the levels of detail.
The method we propose offers significant performance benefits over existing algorithms for computing the light scattering in height fields and for computing a sampled BRDF representation using a virtual gonioreflectometer. The performance can be further improved by utilizing graphics hardware, which then also allows for interactive display.
Finally, our method also approximates the changes in illumination when the height field, bump map, or BRDF is applied to a surface with a different curvature.

Lapped textures
Emil Praun, Adam Finkelstein, Hugues Hoppe
Pages: 465-470
doi: 10.1145/344779.344987
Full text: PDF
We present a method for creating texture over an arbitrary surface mesh using an example 2D texture. The approach is to identify interesting regions (texture patches) in the 2D example, and to repeatedly paste them onto the surface until it is completely covered. We call such a collection of overlapping patches a lapped texture. It is rendered using compositing operations, either into a traditional global texture map during a preprocess, or directly with the surface at runtime. The runtime compositing approach avoids resampling artifacts and drastically reduces texture memory requirements.
Through a simple interface, the user specifies a tangential vector field over the surface, providing local control over the texture scale, and for anisotropic textures, the orientation. To paste a texture patch onto the surface, a surface patch is grown and parametrized over texture space. Specifically, we optimize the parametrization of each surface patch such that the tangential vector field aligns everywhere with the standard frame of the texture patch. We show that this optimization is solved efficiently as a sparse linear system.

Seamless texture mapping of subdivision surfaces by model pelting and texture blending
Dan Piponi, George Borshukov
Pages: 471-478
doi: 10.1145/344779.344990
Full text: PDF
Subdivision surfaces solve numerous problems related to the geometry of character and animation models. However, unlike on parametrised surfaces there is no natural choice of texture coordinates on subdivision surfaces. Existing algorithms for generating texture coordinates on non-parametrised surfaces often find solutions that are locally acceptable but globally are unsuitable for use by artists wishing to paint textures. In addition, for topological reasons there is not necessarily any choice of assignment of texture coordinates to control points that can satisfactorily be interpolated over the entire surface. We introduce a technique, pelting, for finding both optimal and intuitive texture mapping over almost all of an entire subdivision surface and then show how to combine multiple texture mappings together to produce a seamless result.

Fast texture synthesis using tree-structured vector quantization
Li-Yi Wei, Marc Levoy
Pages: 479-488
doi: 10.1145/344779.345009
Full text: PDF
Texture synthesis is important for many applications in computer graphics, vision, and image processing. However, it remains difficult to design an algorithm that is both efficient and capable of generating high quality results. In this paper, we present an efficient algorithm for realistic texture synthesis. The algorithm is easy to use and requires only a sample texture as input. It generates textures with perceived quality equal to or better than those produced by previous techniques, but runs two orders of magnitude faster. This permits us to apply texture synthesis to problems where it has traditionally been considered impractical. In particular, we have applied it to constrained synthesis for image editing and temporal texture generation. Our algorithm is derived from Markov Random Field texture models and generates textures through a deterministic searching process. We accelerate this synthesis process using tree-structured vector quantization.
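The underlying synthesis loop is easy to state: in scanline order, each output pixel receives the center of the input neighborhood that best matches the output's already-synthesized neighborhood. The sketch below uses exhaustive search; tree-structured vector quantization replaces exactly this inner search to obtain the reported speedup:

```python
import numpy as np

def synthesize(sample, out_h, out_w, half=2, rng=None):
    """Scanline texture synthesis with exhaustive neighborhood search.

    sample: 2-D grayscale example texture. For each output pixel, compare the
    causal (L-shaped) neighborhood of already-synthesized pixels against every
    location in the sample and copy the best match's center. Output indices
    wrap toroidally for simplicity.
    """
    rng = rng or np.random.default_rng(0)
    h, w = sample.shape
    out = rng.choice(sample.ravel(), size=(out_h, out_w))   # noise seed
    # Offsets of the causal neighborhood: rows above, plus pixels to the left.
    offs = [(dy, dx) for dy in range(-half, 1)
            for dx in range(-half, half + 1) if (dy, dx) < (0, 0)]
    for y in range(out_h):
        for x in range(out_w):
            best, best_cost = None, np.inf
            for sy in range(half, h - half):
                for sx in range(half, w - half):
                    cost = sum((float(out[(y + dy) % out_h, (x + dx) % out_w])
                                - float(sample[sy + dy, sx + dx]))**2
                               for dy, dx in offs)
                    if cost < best_cost:
                        best, best_cost = sample[sy, sx], cost
            out[y, x] = best
    return out
```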

Video textures
Arno Schödl, Richard Szeliski, David H. Salesin, Irfan Essa
Pages: 489-498
doi: 10.1145/344779.345012
Full text: PDF
This paper introduces a new type of medium, called a video texture, which has qualities somewhere between those of a photograph and a video. A video texture provides a continuous, infinitely varying stream of images. While the individual frames of a video texture may be repeated from time to time, the video sequence as a whole is never repeated exactly. Video textures can be used in place of digital photos to infuse a static image with dynamic qualities and explicit actions. We present techniques for analyzing a video clip to extract its structure, and for synthesizing a new, similar looking video of arbitrary length. We combine video textures with view morphing techniques to obtain 3D video textures. We also introduce video-based animation, in which the synthesis of video textures can be guided by a user through high-level interactive controls. Applications of video textures and their extensions include the display of dynamic scenes on web pages, the creation of dynamic backdrops for special effects and games, and the interactive control of video-based animation.
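The analysis step reduces to a frame-to-frame distance matrix: a jump from frame i to frame j looks seamless when frame j resembles the frame that would have come next, giving transition probabilities proportional to exp(-D[i+1, j] / sigma). A sketch, omitting the paper's dynamics-preserving filtering and dead-end avoidance:

```python
import numpy as np

def transition_matrix(frames, sigma):
    """Video-texture transition probabilities from raw frames.

    frames: (N, ...) array. D[i, j] is the L2 distance between frames i and
    j; row i of the result gives the probabilities of jumping from frame i
    to each frame j, based on the similarity of frame i+1 and frame j.
    """
    n = len(frames)
    flat = frames.reshape(n, -1).astype(float)
    D = np.linalg.norm(flat[:, None] - flat[None, :], axis=2)   # (N, N)
    P = np.exp(-D[1:, :] / sigma)            # row i: transitions out of frame i
    return P / P.sum(axis=1, keepdims=True)  # shape (N-1, N)

def play(P, start, steps, rng=np.random.default_rng(0)):
    """Random-walk playback: a frame-index sequence of arbitrary length."""
    seq, i = [start], start
    for _ in range(steps):
        # Clamp at the final frame, which has no recorded successor.
        i = int(rng.choice(P.shape[1], p=P[min(i, P.shape[0] - 1)]))
        seq.append(i)
    return seq
```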

Escherization
Craig S. Kaplan, David H. Salesin
Pages: 499-510
doi: 10.1145/344779.345022
Full text: PDF
This paper introduces and presents a solution to the “Escherization” problem: given a closed figure in the plane, find a new closed figure that is similar to the original and tiles the plane. Our solution works by using a simulated annealer to optimize over a parameterization of the “isohedral” tilings, a class of tilings that is flexible enough to encompass nearly all of Escher's own tilings, and yet simple enough to be encoded and explored by a computer. We also describe a representation for isohedral tilings that allows for highly interactive viewing and rendering. We demonstrate the use of these tools—along with several additional techniques for adding decorations to tilings—with a variety of original ornamental designs.
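The optimization layer is ordinary simulated annealing; a generic skeleton follows, with the energy (tile-versus-goal shape dissimilarity) and the perturbation of tiling parameters left as assumed callbacks rather than the paper's actual formulation:

```python
import math
import random

def anneal(params, energy, perturb, t0=1.0, cooling=0.999, iters=20000,
           rng=random.Random(0)):
    """Generic simulated annealing over a parameter vector.

    `energy(params)` scores a candidate (here it would measure how far the
    tile outline is from the goal figure); `perturb(params, rng)` proposes a
    nearby candidate. Downhill moves are always accepted; uphill moves are
    accepted with Boltzmann probability so the search can escape local
    minima while the temperature decays.
    """
    best = cur = params
    best_e = cur_e = energy(cur)
    t = t0
    for _ in range(iters):
        cand = perturb(cur, rng)
        e = energy(cand)
        if e < cur_e or rng.random() < math.exp((cur_e - e) / t):
            cur, cur_e = cand, e
            if e < best_e:
                best, best_e = cand, e
        t *= cooling
    return best, best_e
```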

Shadows for cel animation
Lena Petrović, Brian Fujito, Lance Williams, Adam Finkelstein
Pages: 511-516
doi: 10.1145/344779.345073
Full text: PDF
We present a semi-automatic method for creating shadow mattes in cel animation. In conventional cel animation, shadows are drawn by hand, in order to provide visual cues about the spatial relationships and forms of characters in the scene. Our system creates shadow mattes based on hand-drawn characters, given high-level guidance from the user about depths of various objects. The method employs a scheme for “inflating” a 3D figure based on hand-drawn art. It provides simple tools for adjusting object depths, coupled with an intuitive interface by which the user specifies object shapes and relative positions in a scene. Our system obviates the tedium of drawing shadow mattes by hand, and provides control over complex shadows falling over interesting shapes.

Illustrating smooth surfaces
Aaron Hertzmann, Denis Zorin
Pages: 517-526
doi: 10.1145/344779.345074
Full text: PDF
We present a new set of algorithms for line-art rendering of smooth surfaces. We introduce an efficient, deterministic algorithm for finding silhouettes based on geometric duality, and an algorithm for segmenting the silhouette curves into smooth parts with constant visibility. These methods can be used to find all silhouettes in real time in software. We present an automatic method for generating hatch marks in order to convey surface shape. We demonstrate these algorithms with a drawing style inspired by A Topological Picturebook by G. Francis.
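For a polygonal approximation, the quantity being computed is easy to state: an edge lies on the silhouette when its two adjacent faces point to opposite sides of the viewer. The paper's contribution is finding smooth-surface silhouettes via geometric duality rather than this brute-force sign test:

```python
import numpy as np

def silhouette_edges(vertices, faces, eye):
    """Brute-force silhouette extraction for a triangle mesh.

    vertices: (V, 3) positions; faces: list of (a, b, c) index triples;
    eye: viewer position. An edge is on the silhouette when one adjacent
    face points toward the eye and the other points away.
    """
    v = np.asarray(vertices, dtype=float)
    eye = np.asarray(eye, dtype=float)
    facing, edge_faces = {}, {}
    for fi, (a, b, c) in enumerate(faces):
        n = np.cross(v[b] - v[a], v[c] - v[a])        # face normal
        facing[fi] = np.dot(n, eye - v[a]) > 0.0      # faces the eye?
        for e in ((a, b), (b, c), (c, a)):
            edge_faces.setdefault(tuple(sorted(e)), []).append(fi)
    return [e for e, fs in edge_faces.items()
            if len(fs) == 2 and facing[fs[0]] != facing[fs[1]]]
```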

Non-photorealistic virtual environments
Allison W. Klein, Wilmot Li, Michael M. Kazhdan, Wagner T. Corrêa, Adam Finkelstein, Thomas A. Funkhouser
Pages: 527-534
doi: 10.1145/344779.345075
Full text: PDF
We describe a system for non-photorealistic rendering (NPR) of virtual environments. In real time, it synthesizes imagery of architectural interiors using stroke-based textures. We address the four main challenges of such a system — interactivity, visual detail, controlled stroke size, and frame-to-frame coherence — through image-based rendering (IBR) methods. In a preprocessing stage, we capture photos of a real or synthetic environment, map the photos to a coarse model of the environment, and run a series of NPR filters to generate textures. At runtime, the system re-renders the NPR textures over the geometry of the coarse model, and it adds dark lines that emphasize creases and silhouettes. We provide a method for constructing non-photorealistic textures from photographs that largely avoids seams in the resulting imagery. We also offer a new construction, art-maps, to control stroke size across the images. Finally, we show a working system that provides an immersive experience rendered in a variety of NPR styles.