A Fast Geometric Multigrid Method for Curved Surfaces


Figure 1: Illustration of our hierarchy construction for one level. Starting from a mesh or point cloud, A) we construct a neighbor graph on the surface. B) We then sample a spatially uniform set of nodes, C) compute the graph Voronoi diagram of the samples, and D) project unsampled points onto triangles formed by edges between Voronoi neighbors. This is repeated for every level. Right: Comparison of run time for solving a Laplace system on a triangle mesh. Our hierarchy construction is fast, while achieving similar solver performance to the state of the art.

ABSTRACT
We introduce a geometric multigrid method, Gravo MG, for solving linear systems arising from variational problems on surfaces in geometry processing. Our scheme uses point clouds as a reduced representation of the levels of the multigrid hierarchy to achieve a fast hierarchy construction and to extend the applicability of the method from triangle meshes to other surface representations like point clouds, nonmanifold meshes, and polygonal meshes. To build the prolongation operators, we associate each point of the hierarchy with a triangle constructed from points in the next coarser level. We obtain well-shaped candidate triangles by computing graph Voronoi diagrams centered around the coarse points and determining neighboring Voronoi cells. Our selection of triangles ensures that the connections of each point to points at adjacent coarser and finer levels are balanced in the tangential directions. As a result, we obtain sparse prolongation matrices with three entries per row and fast convergence of the solver.

INTRODUCTION
Many geometry processing methods are based on variational problems and partial differential equations on curved surfaces. The discretization of these problems leads to sparse linear systems to be solved. One class of efficient solvers are Geometric Multigrid (GMG) methods, which use iterative solvers on a hierarchy of grids. They are more efficient than alternatives, such as sparse direct solvers, in many application scenarios [Liu et al. 2021]. While geometric multigrid solvers are well-studied for regular grids in Euclidean domains, the construction of effective geometric multigrid hierarchies remains challenging for irregular meshes on curved domains.
We distinguish two approaches to the design of GMG methods on curved surfaces. The first approach is to construct a hierarchy of meshes by mesh coarsening and then mapping between the meshes. This approach obtains efficient prolongation operators that lead to fast convergence. A recent example is the intrinsic multigrid scheme by Liu et al. [2021]. The downside of this approach is a costly hierarchy construction. The second approach is to represent levels by graphs constructed by coarsening the edge graph of the input mesh [Shi et al. 2006]. This approach results in a fast construction but slower convergence.
We propose a new GMG method combining the strengths of both approaches. On the one hand, we use point clouds and neighbor graphs to represent levels, enabling a fast hierarchy construction. On the other hand, we use geometric operations to create local triangulations when constructing the prolongation operators, leading to fast convergence. Our method solves linear systems as fast as the scheme of Liu et al. [2021], while reducing hierarchy-construction time by more than an order of magnitude. Moreover, our method is more generally applicable, as it can be used not only for manifold triangle meshes but also for other discrete surface representations such as point clouds, non-manifold meshes, and polygonal meshes. Thus, we can solve systems set up with the discrete differential operators developed for these representations in recent years [Alexa and Wardetzky 2011; Liang and Zhao 2013; Sharp and Crane 2020]. Our hierarchy construction is more expensive than that of Shi et al. [2006]. Yet, the reduction in solving time most often outweighs the increase in construction time, and this benefit grows in applications where multiple systems need to be solved.
The technical novelty of our method lies in a geometric multigrid method that is point-based, while still incorporating the geometry of the underlying surface. Our guiding idea is to construct intrinsic Delaunay triangulations on points sampled from the surface. Every other point can then be mapped from and to the sampled points using barycentric coordinates in the intrinsic triangles. To get a fast and practical approach, we transfer this idea to a point-cloud setting. For every level in the hierarchy, we start by sampling points from the previous level using a fast uniform sampling strategy. Next, we compute graph Voronoi diagrams on the finer level using the sampled points as seeds and construct a neighborhood graph based on Voronoi cell adjacencies. Mimicking Delaunay triangulations, we construct triangles from the edges of the Voronoi adjacency graph. Each point of the finer level is projected to its closest triangle, and barycentric coordinates are used for prolongation. This construction leads to sparse prolongation matrices with at most three entries per row and hence to fast prolongations and restrictions. The use of graph Voronoi cells ensures that the prolongation matrix and its transpose (the restriction matrix) contain entries corresponding to neighbors that are well-distributed over the tangential directions. We name our hierarchy construction Gravo MG, for graph Voronoi multigrid.
We evaluate Gravo MG in ablations and comparisons to [Liu et al. 2021], [Shi et al. 2006], and algebraic multigrid methods. Furthermore, we demonstrate the benefits of our scheme over sparse direct solvers in application scenarios.

RELATED WORK
Geometric multigrid. Multigrid methods [Bramble 1993] are among the most efficient iterative methods for solving linear systems. We call them geometric multigrid methods if the hierarchy construction relies exclusively on the domain and no information is used about the system to be solved. GMG methods on regular grids are well studied [Hackbusch 1985] and used in graphics, e.g., for fluid simulation [Dick et al. 2016; McAdams et al. 2010], image processing [Krishnan and Szeliski 2011; Pérez et al. 2003], and surface reconstruction [Kazhdan and Hoppe 2019; Kazhdan et al. 2006]. GMG methods for irregular grids on Euclidean domains are used for the simulation of cloth (2D) [Jeon et al. 2013; Wang et al. 2018] and elastic objects (3D) [Georgii and Westermann 2006; Otaduy et al. 2007].
In this work, we consider GMG methods for curved surfaces. Since the domain is no longer a Euclidean space but a curved manifold, methods from the Euclidean setting do not transfer directly and new methods are needed. Existing GMG methods on surfaces focus on triangle mesh representations of discrete surfaces. If the mesh is already equipped with a hierarchy, for example from a subdivision method, it can be used directly for a multigrid method [Green et al. 2002]. However, usually, only a fine-scale mesh is given and a hierarchy must be built. Based on earlier work on multiresolution representations of triangle meshes [Hoppe 1996; Kobbelt et al. 1998], edge collapses are used to create multigrid hierarchies in [Aksoylu et al. 2005; Ni et al. 2004; Ray and Lévy 2003]. The prolongation operators are defined by weighted averaging with the one-ring neighbors. There are two approaches to guaranteeing that each vertex in a finer level has at least one neighbor in the coarser level: either the coarsening process is restricted to only collapsing edges, so that a maximal independent set of vertices (MIS) is removed [Aksoylu et al. 2005; Ni et al. 2004], or the edge collapses are restricted so that a MIS is preserved [Aksoylu et al. 2005]. Liu et al. [2021] introduce an intrinsic multigrid scheme that uses edge coarsening to create the meshes for the different levels and maintains bijective mappings between the meshes on consecutive levels. The map between two meshes is used to define the prolongations. The map assigns to each vertex of the finer mesh a point in a triangle of the coarser mesh, and linear interpolation in the triangle is used for prolongation. The resulting prolongation matrix has at most three entries per row. An alternative to mesh coarsening is to use graph coarsening for hierarchy construction [Shi et al. 2006, 2009]. A multigrid scheme for the computation of Laplace-Beltrami eigenpairs on surfaces is introduced in [Nasikun and Hildebrandt 2022]. The hierarchy used for the eigenproblem, however, is much coarser than the hierarchies used for solving linear systems: only two or three levels are used. A multigrid solver for the computation of harmonic foliations on surfaces is introduced in [Wang et al. 2022].
Algebraic multigrid. Algebraic multigrid (AMG) methods [Brandt 1986; Stüben 2001] are an alternative to GMG. They use the matrix of the linear system to be solved to build the hierarchy instead of using the domain. This has the advantage that AMG can be used for problems coming from arbitrary domains. Nevertheless, AMG methods need to rebuild the hierarchy when the system matrix changes, whereas GMG methods only need to rebuild the hierarchy when the domain changes. An efficient multigrid preconditioner specifically for Laplace systems on images and meshes was introduced in [Krishnan et al. 2013]. Although fast, it has the disadvantage of requiring the Laplace matrices to have only non-positive off-diagonal entries, which is often not satisfied by mesh Laplacians, such as the cotangent Laplacian [Pinkall and Polthier 1993].

ALGORITHM 1: Multigrid solver. Input: matrix A ∈ R^{n×n}, initial vector x ∈ R^n, right-hand side b ∈ R^n, error tolerance ε, number of levels L, numbers of pre-/post-relaxation steps ν₁, ν₂. Output: solution x ∈ R^n to the linear system Ax = b. Function Multigrid(A, x, b, ε, L, ν₁, ν₂).
Direct solvers. Sparse direct solvers [Davis et al. 2016] are reliable, accurate, and commonly used for Laplace systems in geometry processing. Once a factorization of a matrix is computed, these solvers can solve multiple systems with the same matrix but different right-hand sides. In special cases, such as low-rank changes of the matrix, the factorization can be updated efficiently [Chen et al. 2008; Herholz and Alexa 2018; Herholz and Sorkine-Hornung 2020]. However, substantial changes require a new factorization. A disadvantage of these solvers is that they scale well in terms of neither memory requirements nor computation time. In Section 5, we compare the performance of our method to direct solvers in different scenarios.

BACKGROUND: MULTIGRID SOLVER
Multigrid solvers use a hierarchy of grids to solve systems of equations. Iterative solvers converge at different speeds for different scales, depending on the resolution of the grid on which they operate. Thus, by performing iterations on different grids, a multigrid scheme extends the range in which the solver converges particularly fast. Here, we describe a multigrid solver, which will later be used to evaluate our proposed hierarchy and prolongation operators.
We consider the multigrid solver in Algorithm 1. To solve an n-dimensional linear system Ax = b for x, it operates on a multigrid hierarchy with L levels, where level 1 is the finest and level L is the coarsest level. A function on the l-th grid is represented by a vector in R^{n_l}, where the grid has n_l degrees of freedom. The mappings between the grids are realized by prolongation and restriction matrices. The prolongation matrices P_l ∈ R^{n_l × n_{l+1}} map from level l+1 to level l. We use the transposed matrices P_l^⊤ of the prolongation matrices as restriction matrices. The advantage is that a symmetric matrix A implies that the linear systems in the coarse grid correction, which involve the restricted matrices P_l^⊤ A_l P_l, are also symmetric.
The multigrid solver first builds the prolongation matrices. We keep this step abstract at this point but discuss it in detail in the following section. In the next step, lines 3-6, the restricted matrices for all levels are constructed. After the precomputation, multigrid iterations are executed until convergence of the solution. The multigrid iterations traverse the hierarchy from fine to coarse and back. This process is called a V-cycle and is simple but effective. Alternatively, instead of directly going up to the coarsest grid, one could first go back to finer grids. Such strategies can help to counteract error accumulation when several levels are traversed and thereby reduce the required number of multigrid iterations. On the other hand, the V-cycle is fast. The multigrid iterations, Algorithm 2, apply relaxation steps before and after the coarse grid correction. We use Gauss-Seidel iterations for this. Alternatives are schemes such as Jacobi iterations or conjugate gradient iterations. The number of Gauss-Seidel iterations applied in the pre- and post-relaxations is specified by the parameters ν₁ and ν₂. For V-cycles, one sets ν₁ = ν₂.
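To make the recursion concrete, the V-cycle described above can be sketched as follows. This is a minimal Python sketch, not the authors' released C++ implementation: the hierarchy `mats` (with A_{l+1} = P_l^⊤ A_l P_l precomputed) and the prolongation list `P` are assumed inputs, and `gauss_seidel` is a deliberately naive relaxation.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def gauss_seidel(A, x, b, sweeps):
    # Forward Gauss-Seidel: x <- x + (lower triangle of A)^{-1} (b - A x).
    # The triangular factor is recomputed each call for brevity only.
    DL = sp.tril(A, format="csr")
    for _ in range(sweeps):
        x = x + spla.spsolve_triangular(DL, b - A @ x, lower=True)
    return x

def v_cycle(mats, P, b, x, nu1=2, nu2=2, level=0):
    # mats[l] holds A_l, with A_{l+1} = P_l^T A_l P_l precomputed once.
    A = mats[level]
    if level == len(mats) - 1:            # coarsest level: solve directly
        return spla.spsolve(A.tocsc(), b)
    x = gauss_seidel(A, x, b, nu1)        # pre-relaxation
    r = P[level].T @ (b - A @ x)          # restrict the residual
    e = v_cycle(mats, P, r, np.zeros(len(r)), nu1, nu2, level + 1)
    x = x + P[level] @ e                  # prolongate the coarse correction
    return gauss_seidel(A, x, b, nu2)     # post-relaxation
```

In a practical solver, the triangular factors and restricted matrices would be cached during the precomputation phase rather than rebuilt inside each cycle.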
The norm used for the convergence test in line 9 of Algorithm 2 depends on the context. A common choice is the standard norm of R^n. For the Poisson and smoothing problems, we use a mass-weighted 2-norm [Wardetzky et al. 2007].

HIERARCHY CONSTRUCTION
Our goal is to design a hierarchy construction that is faster than the intrinsic multigrid method by Liu et al. [2021]. It should be compatible with point clouds and general surface representations, while maintaining fast convergence during solving. Before giving an overview of our method, we revisit the idea that guided our design.
In the scheme of Liu et al. [2021], each level is represented by a mesh and mappings to adjacent levels. An intrinsic multigrid approach can alternatively represent levels by multiple intrinsic triangulations of the same surface. For example, each level can be a point sampling with a corresponding intrinsic Delaunay triangulation. This idea by itself, however, does not yield a fast and more general construction. To achieve this, we transfer the idea into a point-cloud setting.

Overview
Our approach takes as input a set of point locations V sampled from a surface and a set of edges E between these points denoting local neighborhoods. For a mesh {V, E, F}, we use the vertices V and edges E. For a point cloud, edges could be taken from a radius graph or the 1-ring in a local Delaunay triangulation [Sharp and Crane 2020].
The algorithm outputs a sequence of sparse prolongation matrices P_l, mapping signals from n_{l+1} points to n_l points, where n_{l+1} < n_l. Relating levels l+1 and l, we refer to points in V_{l+1} as coarse points and points in V_l as fine points. Level 1 consists of the input points.
Algorithm overview. The hierarchy is constructed one level at a time. For each level, the algorithm takes the input graph from level l, {V_l, E_l}, and outputs the graph for level l+1, {V_{l+1}, E_{l+1}}. The algorithm also produces the prolongation matrix P_l. This is repeated until level L is reached. Here, we describe one such step. Figure 1 provides a corresponding visual overview.
First, the point cloud is subsampled using a fast greedy algorithm that aims to enforce a minimum edge length in the next graph (subsection 4.2). Next, we create a graph Voronoi diagram, where the coarse points (sampled points) act as Voronoi centers and the fine points are the loci of the Voronoi cells. We then seek a mapping from the coarse points to the fine points. Mimicking the construction of a Delaunay triangulation as the dual of a Voronoi diagram, we construct a neighbor relation of the graph Voronoi cells (subsection 4.3) and compute all the triangles formed by the edges between Voronoi cell centers (subsection 4.4). Finally, the fine points are projected onto these triangles to find the triangle closest to each fine point. The neighbor relations of the graph Voronoi diagram are then used as the edges E_{l+1} for the next level.

Sampling
Each new level contains fewer points than the previous, and the samplings should be spatially uniform [Liu et al. 2021; Shi et al. 2006]. In other words, we seek a dense sample set S in which no pair of points is closer than a prescribed distance r. To find such a set, we use an algorithm based on the maximum independent set: we sweep once over V_l, keeping track of whether points are eligible for addition to S. Initially, all points are eligible. If a point p is eligible, we add it to S and mark the points within geodesic distance r of p as ineligible. The radius r is based on the fraction σ < 1 of points we wish to keep and the average edge length ê. In our experiments, we set σ = 1/8 and stop coarsening at 1000 points, yielding roughly log₈(n/1000) levels. We also limit the search for nearby points to the 2-ring, as this strikes a good balance between construction speed and sampling quality. In Section 5.3, we experimentally validate that we indeed approximately reach the desired fraction of samples, and Figure 10 demonstrates the uniformity of the resulting sampling and corresponding triangles.
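Our reading of this greedy sweep can be sketched as follows. This is illustrative Python under stated assumptions, not the released code: `neighbors` is a hypothetical 1-ring adjacency list, and the geodesic-radius test is replaced by the 2-ring restriction mentioned above.

```python
def greedy_sample(num_points, neighbors):
    """Greedy sweep: keep a point only if no previously kept point lies
    within its 2-ring, approximating a minimum sample spacing.

    neighbors[i] = iterable of 1-ring neighbor indices of point i."""
    eligible = [True] * num_points
    samples = []
    for p in range(num_points):
        if not eligible[p]:
            continue
        samples.append(p)
        eligible[p] = False
        for q in neighbors[p]:            # mark the 1-ring ineligible
            eligible[q] = False
            for r in neighbors[q]:        # ...and the 2-ring
                eligible[r] = False
    return samples
```

A single pass over the points suffices, which is what makes the sampling step cheap compared to farthest-point or Poisson-disk strategies.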

Neighbor graph
We use graph Voronoi diagrams [Erwig 2000] to define neighborhoods for the sampled points. Since we build the levels successively from fine to coarse, the neighbor graph {V_l, E_l} is already built on level l when level l+1 is visited. The points V_{l+1} are the seeds of the graph Voronoi diagram in {V_l, E_l}. For each seed p ∈ V_{l+1}, the Voronoi cell consists of the points in V_l that are closer in graph distance to p than to all other points of V_{l+1}. The graph Voronoi diagram can be efficiently computed by a multi-source Dijkstra algorithm. For points p, q ∈ V_{l+1}, we add an edge {p, q} to E_{l+1} if there is an edge in E_l that connects a point of the Voronoi cell of p with a point of the Voronoi cell of q.
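A minimal sketch of this construction (illustrative Python, not the released code; `edges[p]` is an assumed weighted adjacency list): the multi-source Dijkstra labels every fine point with its closest seed, and the cell labels then yield the coarse edge set.

```python
import heapq

def graph_voronoi(num_points, edges, seeds):
    """edges: maps i -> [(j, w_ij), ...]; seeds: list of point indices.
    Returns cell[i] = index of the seed closest to point i in graph distance."""
    dist = [float("inf")] * num_points
    cell = [-1] * num_points
    heap = []
    for s in seeds:                        # all seeds start at distance 0
        dist[s], cell[s] = 0.0, s
        heapq.heappush(heap, (0.0, s))
    while heap:
        d, p = heapq.heappop(heap)
        if d > dist[p]:
            continue                       # stale heap entry, skip
        for q, w in edges[p]:
            if d + w < dist[q]:
                dist[q] = d + w
                cell[q] = cell[p]          # inherit the winning seed
                heapq.heappush(heap, (d + w, q))
    return cell

def voronoi_adjacency(edges, cell):
    # Coarse edge {s, t} whenever some fine edge connects cell s to cell t.
    return {frozenset((cell[p], cell[q]))
            for p in range(len(cell)) for q, _ in edges[p]
            if cell[p] != cell[q]}
```

The standard stale-entry check makes the multi-source variant as simple as single-source Dijkstra, with total cost O(|E_l| log |V_l|).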

Prolongation
Prolongation operators map functions on level l+1 to functions on level l by matrices P_l ∈ R^{n_l × n_{l+1}}. The restrictions, mapping from level l to level l+1, are given as the transposed matrices P_l^⊤. Important for the design of the prolongation matrices is their sparsity: the sparser the prolongation matrices, the sparser the restricted matrices P_l^⊤ A_l P_l, and the faster the mappings between the levels. To construct the prolongation, we use linear interpolation in triangles. Hereby, we get very sparse prolongation matrices. Other interpolation methods, such as radial basis functions or spline interpolation, would result in much denser prolongation matrices.
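Given, per fine point, a coarse triangle and its barycentric weights, assembling such a three-entries-per-row matrix is a one-liner in sparse triplet form. The sketch below is illustrative (the input format `tris`/`weights` is an assumption, not the paper's data structure):

```python
import numpy as np
import scipy.sparse as sp

def build_prolongation(n_fine, n_coarse, tris, weights):
    """tris[i] = (a, b, c): coarse indices for fine point i;
    weights[i] = matching barycentric weights, summing to 1."""
    rows = np.repeat(np.arange(n_fine), 3)     # each fine point fills one row
    cols = np.asarray(tris).ravel()
    vals = np.asarray(weights).ravel()
    return sp.coo_matrix((vals, (rows, cols)),
                         shape=(n_fine, n_coarse)).tocsr()
```

Rows summing to 1 mean constant functions are reproduced exactly, a basic consistency requirement for prolongation.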
First, a set of candidate triangles on the coarse points is constructed. Every coarse point has a Voronoi cell on the finer level. Two coarse points p, q are connected by an edge {p, q} ∈ E_{l+1} in the coarser level if their corresponding Voronoi cells are neighbors. We consider all triangles that can be constructed from these edges: all triplets {p, q, r} such that {p, q}, {q, r}, {p, r} ∈ E_{l+1}.
The motivation to use these edges is the duality between (intrinsic) Voronoi diagrams and Delaunay triangulations (two points in a Delaunay triangulation are connected by an edge iff their Voronoi cells are adjacent). Since graph Voronoi cells are not continuous, but approximations computed from a sampling, the triangles we obtain are not necessarily Delaunay triangles, and they do not necessarily form a manifold. However, as illustrated in Figures 10 and 11, we mostly get well-shaped triangles and a good coverage of the surface, even for point clouds.
To get the prolongation weights for a fine point v, we search for the closest candidate triangle. For efficiency, we restrict this search to the triangles that include the coarse point closest to v. The weights are the barycentric coordinates of the point closest to v in the selected triangle (this can be on an edge or a vertex). The barycentric coordinates of the projected point are then entered into the prolongation matrix.
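The closest-point projection and its barycentric weights can be computed as follows (a hedged sketch, not the released code): if the orthogonal projection of the point onto the triangle's plane falls inside the triangle, its barycentric coordinates are the weights; otherwise the closest point lies on the boundary and is found by clamping onto the three edges, which also covers the vertex case.

```python
import numpy as np

def closest_point_weights(v, tri):
    """tri: (3, 3) array of triangle vertices. Returns the barycentric
    weights (w0, w1, w2) of the point of `tri` closest to v."""
    a, b, c = tri
    # Barycentric coordinates of the orthogonal projection of v onto the
    # triangle's plane, via least squares over the edge basis.
    T = np.column_stack((b - a, c - a))
    u, w = np.linalg.lstsq(T, v - a, rcond=None)[0]
    if u >= 0 and w >= 0 and u + w <= 1:
        return np.array([1 - u - w, u, w])
    # Projection is outside: the closest point lies on one of the edges
    # (clamping the segment parameter covers the vertices as well).
    best, best_d = None, np.inf
    for i, j in ((0, 1), (1, 2), (0, 2)):
        p, q = tri[i], tri[j]
        t = np.clip(np.dot(v - p, q - p) / np.dot(q - p, q - p), 0.0, 1.0)
        d = np.linalg.norm(v - (p + t * (q - p)))
        if d < best_d:
            bary = np.zeros(3)
            bary[i], bary[j] = 1 - t, t
            best, best_d = bary, d
    return best
```

The returned weights always sum to 1 and have at most three non-zeros, matching the sparsity of the prolongation rows.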
Edge cases. In some cases, a suitable triangle cannot be found within the Voronoi neighborhood of the closest point. This might happen, for example, if all points in the neighborhood are (nearly) collinear, or if the fine point falls outside of the triangles formed in the neighborhood. In these cases, we resort to finding the closest three points within the neighborhood and use inverse-distance weights. This is preferable over projecting to a single vertex, as it helps the spread of information during prolongation. In practice, this only happens in a small fraction of cases (roughly 0.25%).
Reducing single-entry rows. The resulting prolongation matrices are very sparse, with at most three non-zero entries per row. Since the coarse points are created by subsampling the fine points, the fine points that are sampled transfer their function value directly to the corresponding coarse point during prolongation. Therefore, there is only one entry in the corresponding rows. We obtain prolongation matrices with fewer single-entry rows by moving each sampled point to the mean of the points that form its graph Voronoi cell before we compute the closest-point projections. In our experiments, we obtained a slight improvement in solving times with this strategy over not moving the coarse points.

EXPERIMENTS
We evaluate Gravo MG and compare it to state-of-the-art GMG methods and AMG approaches. For reference, we provide results for direct solvers. We also provide insights into design choices via ablation studies.

Implementation
Our multigrid-solver implementation builds on the code of Liu et al. [2021], where the prolongation matrix definition is exchanged. In every experiment, we set the number of pre-/post-relaxation steps ν₁ = ν₂ = 2. The hierarchy construction uses custom routines built around Eigen [Guennebaud et al. 2010] and only requires a matrix of points and an array of edges. The code for our solver is available as a C++ library and Python package, along with scripts to replicate the main tables and figures in this paper: https://graphics.tudelft.nl/gravo_mg.
We use an Intel® Core™ i9-9900 CPU (3.10 GHz, 32 GB memory). The code does not employ multithreading but could be parallelized. None of the methods in our comparisons are parallelized, except Pardiso. A discussion of the potential for parallelization of the solver we used can be found in Section 7 of [Liu et al. 2021].

Problems
In our ablations and comparisons, we test our approach on two standard problems that can be written as linear systems: data smoothing and Poisson problems. For meshes, both problems involve the cotan Laplace matrix L and the lumped mass matrix M, see [Wardetzky et al. 2007]. In the case of point clouds and non-manifold meshes, we use the robust Laplacian by Sharp and Crane [2020]. Data can be smoothed by solving (M + λL) x = M y, where y is the noisy input function and λ a parameter that determines how much the data is smoothed. The Poisson problem is (L + αM) x = M b, where b is a random vector. The term αM is added to obtain a positive-definite system matrix. The parameter α is chosen to be very small, for example α = 1 × 10⁻⁶. Our solver terminates when tolerance ε is reached (line 9 of Algorithm 1) or after a maximum number of iterations.
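For illustration, a smoothing system of the form (M + λL)x = My can be set up and solved directly as follows. This sketch substitutes a uniform graph Laplacian and an identity mass matrix as hedged stand-ins for the cotan Laplace and lumped mass matrices used in the paper:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def smooth(y, edges, lam=1e-3):
    """Solve (M + lam * L) x = M y on a graph with the given edge list.
    Uses the uniform graph Laplacian and M = I as simple stand-ins."""
    n = len(y)
    rows, cols = zip(*edges)
    W = sp.coo_matrix((np.ones(len(edges)), (rows, cols)), shape=(n, n))
    W = W + W.T                                          # symmetric adjacency
    L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W  # graph Laplacian
    M = sp.identity(n)                                   # mass-matrix stand-in
    return spla.spsolve((M + lam * L).tocsc(), M @ y)
```

Since the Laplacian annihilates constants, the (mass-weighted) mean of the signal is preserved while oscillations are damped; in the paper's setting this direct solve is exactly what the multigrid scheme replaces at scale.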

Ablation Studies
We would like to understand the effects of our design choices on the hierarchy construction and subsequent solving steps. We structure these experiments along three themes: sampling, prolongation selection, and weighting. In each ablation, we compare variants of our approach on a fixed set of meshes and point clouds and run a data smoothing problem as detailed above. We smooth a random function with λ = 1 × 10⁻³ and tolerance 1 × 10⁻⁴. Each variant is then evaluated in terms of the time to construct the hierarchy, the number of iterations required to reach the target tolerance, and the total time.
Sampling. We seek a sampling method that balances run time during hierarchy construction and sampling quality. To validate that our approach effectively balances these demands, we compared it to random sampling, Poisson-disk sampling (PDS), geodesic farthest point sampling (FPS), and maximal independent set (MIS) selection. For every method, we set the target ratio between levels to 1/8. In Table 1, we observe that our approach is faster than the others when solving. When considering the full hierarchy construction, our approach is faster than every other sampling approach, because we perform part of the graph Voronoi diagram construction during sampling and have fewer points per level to consider than MIS.
We also compare the decay rate between the maximal independent set, used in [Shi et al. 2006], and our sampling. In Figure 2, we see that our approach can decay much faster than MIS, because the MIS-based sampling must always be a superset of a maximal independent set. This is an advantage of our approach; we can perform fewer and faster iterations, while having a fast sampling time.

Prolongation selection. Our approach uses triangles of coarse points for the prolongation operator. This results in sparse prolongation matrices that spread information in each tangential direction. In this ablation, we seek to support this choice. In Table 2, we compare our approach to the following variants that do not explicitly work with triangles: simply prolonging from the closest two, three, or four points among the graph Voronoi neighbors, and picking three random points. We also test a variant that considers triangles without restricting to Voronoi edges, 'closest tri'. This results in less consistent triangulations, as shown in Figure 3. For each of the non-triangle selection approaches, we use inverse-distance weights. We observe that our approach solves faster than all other variants, while only slightly increasing the hierarchy construction time.
Weighting. We project the fine points onto the triangles formed by coarse points and use barycentric weights as predictors for the value of the fine point. Previous works suggest that the choice of weighting scheme has little effect on convergence times [Aksoylu et al. 2005; Shi et al. 2006], while Liu et al. [2021] argue that the weighting scheme is crucial for some shapes. In Table 3, we compare our approach with uniform weights and inverse-distance weights, alongside a variant where we do not shift the coarse points to the barycenters. We observe that our approach works best with barycentric coordinates (Ours). Inverse-distance weights are not far behind. Shifting coarse points has benefits for some, but not all, shapes. This is not the core contribution of our work and could be left out in some cases. A benefit of not shifting coarse points is that each iteration is faster, because the prolongation matrix contains more single-entry rows.
Table 3: Timings for solving the data smoothing problem with variations of the weighting scheme. We compare barycentric coordinates (Ours) to uniform weights, inverse-distance weights, and barycentric coordinates without changing the positions of the coarse points before projection (No shift). All timings are in seconds.

Comparisons
We compare our approach on a wide range of meshes and point clouds for a Poisson problem with α = 1 × 10⁻⁶ and a target tolerance of 1 × 10⁻⁴. The input function b is a random vector sampled from N(0, 1). The shapes were selected to have at least 100k vertices and exhibit a wide variety: uniform meshes (e.g., Nefertiti), non-uniform meshes (e.g., Alfred Jacquemart, Indonesian statue), and broken and non-manifold meshes. The meshes also exhibit detailed features (e.g., XYZ dragon) and complex curvature (e.g., Murex Romosus). All the shapes are shown in Figure 12. We make no use of additional pre-processing steps, such as remeshing or fixing non-manifold edges: every mesh is used as-is in the highest resolution available from the respective sources. For the point clouds, we opted for high-resolution scanned data. The point clouds come from the Tanks and Temples benchmark dataset [Knapitsch et al. 2017] and from range scans in the AIM@Shape repository [Falcidieno 2007].
Gravo MG is compared to the GMG solvers by Liu et al. [2021] and Shi et al. [2006], and the AMG methods Ruge-Stuben and Smoothed Aggregation. For reference, we list the timings of direct solvers. For Liu et al., we use their provided implementation. We reimplemented Shi et al. based on their paper. The latter mentions multiple weighting schemes, including uniform weights and inverse-distance weights. We tested both and report the best-performing approach: inverse-distance weights. For the AMG approaches, we use the implementation provided in PyAMG [Bell et al. 2022] with the default settings provided by the package. We set the maximum number of iterations for all iterative solvers to 100, since more iterations would not change the overall picture regarding which method performs best. Our approach yields faster solving times for the majority of input meshes (Table 4). More results for manifold meshes are listed in the supplement in Table 1. On average, our construction is 36x faster than that of Liu et al. and only 1.8x slower than that of Shi et al. This is balanced out in most cases by a higher iteration count, and the overall solving times are similar when we use a decay rate of 1/4. GMG methods are most beneficial in settings where a user would iterate on the system matrix, but the benefit of using Gravo MG is already noticeable starting with the first solve. For all meshes larger than 100k vertices, our approach is faster than Liu et al. for both the Poisson problem and data smoothing. The same holds for Shi et al. for the Poisson problem. For data smoothing, we are faster for one solve in 83% of cases and for three solves in 93% of cases. Compared to the Pardiso solver, we are faster for one solve of the Poisson problem in 92% of cases and for three solves in 95% of cases (data smoothing, 1x: 95%, 3x: 98%). Note, however, that our solver stops at a higher residual error than direct solvers. The strength of multigrid approaches is in settings where one needs a quick and relatively accurate solution. A direct solver is often preferable in settings where high accuracy is required.
To provide insight into the convergence of our approach compared to the other GMG schemes, we plot convergence for a data smoothing problem with λ = 1 × 10⁻³ for the Murex Romosus shape in Figure 1, and the same plot against the number of iterations in Figure 9. Again, we see that our approach is on par with Liu et al. [2021] and beats Shi et al. [2006] by a wide margin. More convergence plots for data smoothing, including plots over the number of iterations, can be found in the supplement. These confirm our results. There are some outliers: for Red Circular Box, Shi et al. [2006] converges faster than the other GMG approaches, and for Moses, Gravo MG slows down around a residual of 1 × 10⁻⁶.

Applications
We evaluate our solver in three scenarios: data smoothing, a geometric flow, and physical simulation. We compare solving times to a sparse Cholesky solver, commonly used for these problems.
Data smoothing. For data smoothing, we consider an input function y on a surface and compute a smoother function x by minimizing a quadratic objective of the form E(x) = ‖x − y‖²_M + α x^⊤ L x + β x^⊤ L M⁻¹ L x. The first term is a data term that penalizes deviation from the input function, the second and third terms are Laplace and bi-Laplace smoothing energies, and α, β ∈ R≥0 are parameters. Results are shown in Figures 4 and 5. The figures list timings for solving the linear systems with our method and Eigen's sparse Cholesky solver. When changing the parameters to adjust the amount of smoothing, the direct solver needs to compute a new matrix factorization, resulting in significant solving-time differences compared to our solver, in particular when the bi-Laplacian energy is included.
Conformal flow. As an example of a nonlinear geometric flow, we consider the conformal flow [Kazhdan et al. 2012]. For robustness, we use implicit time integration, which requires solving a linear Laplace system for every time step. We show results in Figures 6 and 7 and compare our solving times to those of Eigen's sparse Cholesky solver. Since the system matrix changes every time step, the direct solver constantly needs to compute new factorizations, resulting in substantial differences when performing multiple steps.
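One step of this scheme can be sketched as follows; `lumped_mass` is a hypothetical helper assembling the diagonal mass matrix of the current mesh (V, F), and L0 is the fixed initial stiffness matrix, taken negative semi-definite here so that M − τL0 is positive definite.

```python
# Hedged sketch of one implicit step of the conformal flow: the stiffness
# matrix L0 stays fixed while the mass matrix M is rebuilt from the current
# vertices, so the system matrix (M - tau * L0) changes every time step.
import scipy.sparse.linalg as spla

def conformal_flow_step(V, F, L0, tau, lumped_mass):
    M = lumped_mass(V, F)              # reassembled every step
    A = (M - tau * L0).tocsc()         # implicit time integration
    return spla.splu(A).solve(M @ V)   # solves all three coordinates at once
```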
Balloon inflation. As an example of a physical simulation, we consider the balloon inflation from Skouras et al. [2012]. A surface mesh represents a thin-layered rubber balloon that undergoes membrane deformation subject to air pressure. For time integration, an implicit Euler scheme is used, and the resulting nonlinear equations are solved with a Newton scheme. To find the descent direction, a sparse linear system is solved. As for the geometric flow, the simulation's nonlinearity means the system matrix changes with every time step, forcing the direct solver to recompute its factorization each time. Results and timings are shown in Figure 8.
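The inner linear solve can be sketched generically; `grad` and `hess` stand for hypothetical routines assembling the gradient and the sparse Hessian of the discrete energy at the current state.

```python
# Hedged sketch of one Newton step inside the implicit-Euler simulation:
# the Hessian H and gradient g are re-assembled at every iteration, so the
# sparse system below (the solve our multigrid targets) has a new matrix
# each time.
import scipy.sparse.linalg as spla

def newton_step(x, grad, hess, step=1.0):
    g = grad(x)
    H = hess(x)                       # sparse Hessian, changes every step
    d = spla.spsolve(H.tocsr(), -g)   # descent direction from H d = -g
    return x + step * d
```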

CONCLUSION
We introduce Gravo MG, a surface multigrid method that features fast hierarchy construction, applicability to general surface representations, and fast convergence. Our experiments demonstrate excellent performance compared to other GMG and AMG methods and direct solvers.
Conceptually, our method deviates from the common GMG paradigm of representing levels via watertight meshes obtained by edge collapse. We use the geometry of the surface, whereas AMG ignores it during hierarchy construction. This opens a new direction for GMG methods that are generally applicable and fast to build, thereby improving the scalability of geometry processing methods.
In future work, graph Voronoi diagrams could be used for point cloud processing. We are excited about the quality of the triangles generated from the graph Voronoi diagrams and see potential uses when fast triangulations or uniform samples on point clouds are needed. Regarding theory, it would be interesting to explore under which conditions this approach can provide guarantees for the triangulation. Further acceleration is also possible; an important aspect is parallelization of both the hierarchy construction and the solver. Our solver could also serve as a preconditioner for a Krylov method like GMRES or CG to accelerate convergence.


A EXTENDED COMPARISONS
In Table 5 (second page), we show more results for the Poisson problem on manifold meshes. Note that some entries are listed as NaN. This arises from the Gauss-Seidel smoothing step, in which a division by the diagonal of the system matrix is performed. In the cases where a NaN arises, the restriction of the system matrix results in zero entries on the diagonal. This is not a fundamental issue for the Gauss-Seidel solver; it could, for example, be addressed by including a pivoting strategy. We did not include these entries in the conclusions of the main paper.
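To illustrate, a plain Gauss-Seidel sweep divides by the diagonal entry of each row; the sketch below adds a minimal guard that skips zero-diagonal rows (the pivoting strategy mentioned above would be the more principled fix). A dense matrix is used purely for brevity.

```python
import numpy as np

def gauss_seidel_sweep(A, x, b):
    """One in-place Gauss-Seidel sweep; skips rows with a zero diagonal."""
    for i in range(len(b)):
        if A[i, i] == 0.0:                    # guard against division by zero
            continue
        sigma = A[i, :] @ x - A[i, i] * x[i]  # off-diagonal contribution
        x[i] = (b[i] - sigma) / A[i, i]
    return x
```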
We also report a comparison on a data smoothing problem with smoothing coefficient 1 × 10⁻³ for manifold meshes in Table 6, and for non-manifold meshes and point clouds in Table 7. Convergence plots for data smoothing on the manifold meshes from the main paper are shown in Figures 13 and 14.

Figure 2: Comparison of the decay rate between the MIS sampling used by Shi et al. [2006] and our approach on the XYZ Dragon. The y-axis is in log₂ scale.

Figure 3: Using dual Voronoi triangles results in a more consistent set of candidate triangles than using all triangles in the coarse point's 1-ring.

Figure 4: Smoothing of scalar data on a surface mesh with various parameter settings (Model: Nefertiti, 1m vertices) using the Dirichlet energy as smoothness energy.

Figure 5: Smoothing of scalar data on a surface mesh (Model: Oilpump, 570k vertices) using a weighted sum of the Dirichlet energy and a bi-Laplacian energy as smoothness energy.

Figure 11: Input point clouds and considered triangles for the last two levels of the hierarchy. From left to right: Caesar, Ignatius, Truck, and Dancing Children.

Figure 12: All (non)manifold triangular meshes used in our experiments. Mosaic generated with code from Qingnan Zhou.

Table 1: Data smoothing timings with variations of the sampling step (Smp). We compare our approach to random sampling, Poisson disk sampling (PDS), geodesic farthest point sampling (FPS), and maximum independent set (MIS) sampling. All timings are in seconds, unless otherwise specified.
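The geodesic FPS variant in the table can be sketched with repeated Dijkstra searches on the neighbor graph (the function name and interface are ours): each new sample is the node with the largest graph distance to all samples chosen so far.

```python
# Hedged sketch of graph-based farthest point sampling on a neighbor graph.
# W is a sparse matrix of nonnegative edge weights (graph distances).
import numpy as np
from scipy.sparse.csgraph import dijkstra

def graph_fps(W, k, seed=0):
    samples = [seed]
    dist = dijkstra(W, indices=seed)          # distances to the first sample
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))            # farthest node from all samples
        samples.append(nxt)
        dist = np.minimum(dist, dijkstra(W, indices=nxt))
    return samples
```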

Table 2: Timings for hierarchy construction and solving on data smoothing with variations of the entries in the prolongation operator. Ours only considers triangles formed by Voronoi edges. The other variants either pick the closest points, pick random points, or pick the three Voronoi neighbors that form the closest triangle. All timings are in seconds.

Table 4: Comparison of our hierarchy construction and solver for a Poisson problem with mass matrix coefficient 1 × 10⁻⁶ and tolerance 1 × 10⁻⁴. Missing entries are not available for the given method. The maximum number of iterations for iterative solvers is set to 100.
With regards to solving time, Liu et al. takes 3% more time on average for the Poisson problem and 7% for data smoothing, and Shi et al. takes 274% more time for the Poisson problem and 81% for data smoothing. Note that we require less time for one iteration than Liu et al., because we use a higher decay rate (1/8 vs. 1/4).

Table 5: Comparison of our hierarchy construction and solver for a Poisson problem with mass matrix coefficient 1 × 10⁻⁶ and tolerance 1 × 10⁻⁴. The maximum number of iterations for iterative solvers is set to 100.

Table 6: Comparison of our hierarchy construction and solver for data smoothing of a random function with smoothing coefficient 1 × 10⁻³ and tolerance 1 × 10⁻⁴. The maximum number of iterations for iterative solvers is set to 100.

Table 7: Comparison of our hierarchy construction and solver for data smoothing of a random function with smoothing coefficient 1 × 10⁻³ and tolerance 1 × 10⁻⁴ on non-manifold meshes and point clouds. The maximum number of iterations for iterative solvers is set to 100.