Analysis and Synthesis of Digital Dyadic Sequences

We explore the space of matrix-generated (0, m, 2)-nets and (0, 2)-sequences in base 2, also known as digital dyadic nets and sequences. In computer graphics, they are arguably leading the competition for use in rendering. We provide a complete characterization of the design space and count the possible number of constructions with and without considering possible reorderings of the point set. Based on this analysis, we then show that every digital dyadic net can be reordered into a sequence, together with a corresponding algorithm. Finally, we present a novel family of self-similar digital dyadic sequences, to be named $\xi$-sequences, that spans a subspace with fewer degrees of freedom. Those $\xi$-sequences are extremely efficient to sample and compute, and we demonstrate their advantages over the classic Sobol (0, 2)-sequence.

the best so far. Low-discrepancy patterns, however, introduced to graphics by Shirley [1991] and popularized mostly by Keller and collaborators, are arguably leading the competition for use in rendering. Of special interest are so-called dyadic nets and sequences, mainly the Hammersley net and the Sobol sequence. These combine exceptionally high production rates with excellent convergence behavior when used in Monte Carlo integration, for which they are specifically designed. They are very versatile, as described in the discussion by Keller [2013]. Furthermore, they are easy to understand thanks to their modular geometric "multi-stratified" structure [Pharr et al. 2016]. In a nutshell, a dyadic net is concerned with a "fair" 2D distribution of $N = 2^m$ points, so that specifically constructed rectangles of equal area contain exactly one point. These rectangles are obtained by partitioning the unit square $[0, 1)^2$ into rectangular cells of sizes $\frac{1}{2^m} \times 1$, $\frac{1}{2^{m-1}} \times \frac{1}{2}$, $\ldots$, $1 \times \frac{1}{2^m}$ (see Fig. 2). A dyadic sequence is a sequence of sample points such that all leading sub-sequences of size $2^m$, for all $m$, are dyadic nets, as are all subsequent sub-sequences of size $2^m$ (see Fig. 3). We would like to remark that dyadic nets and sequences fulfill the weaker $N$-rooks and Latin-hypercube conditions, so that they form proper subsets of those families. In computer graphics, the research effort in the area of dyadic nets and sequences was spearheaded by Keller and collaborators, who early on used and analyzed dyadic distributions in rendering [Grünschloß et al. 2008; Grünschloß and Keller 2009; Keller 2006; Kollig and Keller 2002].
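The stratification condition above can be checked mechanically by bucketing the points into each of the $m+1$ cell shapes. A minimal sketch (the helper names are ours, not from the paper):

```python
def is_dyadic_net(pts):
    """Check the (0, m, 2)-net property in base 2: for every split m = k + (m-k),
    each cell of size 2^-k x 2^-(m-k) must contain exactly one of the 2^m points."""
    m = len(pts).bit_length() - 1
    assert len(pts) == 1 << m, "point count must be a power of two"
    for k in range(m + 1):
        cells = {(int(x * (1 << k)), int(y * (1 << (m - k)))) for x, y in pts}
        if len(cells) != len(pts):  # a repeated cell implies an empty cell elsewhere
            return False
    return True

def hammersley(m):
    """The classic 2^m-point Hammersley net: x = bit-reversed index, y = i / 2^m."""
    rev = lambda i: int(format(i, f"0{m}b")[::-1], 2)
    return [(rev(i) / (1 << m), i / (1 << m)) for i in range(1 << m)]
```

For instance, the 16-point Hammersley net passes this check, while 16 points on the main diagonal fail it as soon as the square cells of the middle split are tested.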
In this paper, we set out to analyze the design space of matrix-generated, or digital, dyadic nets and sequences. One possible reason why this design space is still not fully explored is that the focus of the Monte Carlo community might have been on proving discrepancy bounds and not the actual construction of dyadic sequences and dyadic nets. Therefore, important contributions in the area of sample construction were made even recently. Examples include a compact parametrization of the whole dyadic space by Ahmed and Wonka [2021], a compact parametrization of a sub-space comprising dyadic sequences, along with a very efficient generation algorithm, by Helmer et al. [2021], and an all-new construction of a high-dimensional distribution that has pair-wise two-dimensional projections by Paulin et al. [2021].
Our paper makes the following contributions:
• a comprehensive analysis of digital dyadic nets and sequences. We introduce the missing ingredient (Thm. 3.2) that enables us to map the analysis of matrices generating digital dyadic sequences to the language of linear algebra.
• the computation of the exact dimension of the design space for both digital dyadic nets and sequences.
• an algorithm that reorders any digital dyadic net into a digital dyadic sequence. For example, we can reorder the Hammersley net or the Larcher-Pillichshammer net into digital dyadic sequences.
• the discovery of self-similar dyadic sequences, also called $\xi$-sequences. They are more efficient to construct than the Sobol sequence, and they are easily invertible. The subspace of $\xi$-sequences is large enough to contain examples with low discrepancy and a large minimal distance between two sample points.
The rest of the paper is organized as follows. In Section 2, we highlight the most relevant literature. In Section 3, we define and analyze digital dyadic nets and sequences, recall some known and state our new theoretical results: we compute the exact dimension of the design space for both and give an algorithm that reorders any digital dyadic net into a digital dyadic sequence. In Section 4, we apply our algorithm to several examples: we reorder the Hammersley, the Larcher-Pillichshammer, and the Gray nets into digital dyadic sequences. In Section 5, we introduce and study new self-similar dyadic sequences and give an explicit parametrization of their design space.

RELATED WORK
Our work has interesting connections to different fields of study in computer graphics and mathematics. In this section, we try to highlight the most relevant literature, organized into three categories.

Low-Discrepancy Sampling
The study of uniform sample distributions is much older than computer graphics. An early 1D sequence was presented by van der Corput [1935], who established the digit-reversal principle that underlies almost all Low-Discrepancy (LD) constructions. Roth [1954] and Hammersley [1960], respectively, presented techniques for extending the van der Corput principle to two and higher dimensions.
In a parallel line of study, Sobol [1967] pioneered the digital construction of LD sets and sequences, generalized by Niederreiter [1987; 1992], who coined the terms $(t, m, s)$-nets and $(t, s)$-sequences, and initiated a new field of study of these constructions. We defer definitions to Section 3, but point out that nets are finite LD sets, while sequences are extensible sets that admit more points while maintaining a low discrepancy. Most of these constructions are digital, i.e. sample coordinates are derived from the sequence number via binary matrix-vector multiplication and vector addition modulo 2. A thorough study of common construction methods is given by Dick and Pillichshammer [2010]. Not all nets and sequences, however, are necessarily digital, and there are known algorithms for constructing non-digital nets [Ahmed and Wonka 2021; Grünschloß and Keller 2009].
Aside from direct construction techniques for nets and sequences, Owen [1995; 1998] presented a powerful technique to randomize $(t, m, s)$-nets and $(t, s)$-sequences while preserving their favorable properties; in fact improving their quality. The original concept by Owen, while quite simple, is very memory intensive, consuming as many random bits, per dimension, as the number of samples. Much work followed Owen's, mainly aimed at simplifications for more efficient implementation [Burley 2020; Friedel and Keller 2002; Kollig and Keller 2002], at the cost of different compromises. However, there are also variants that are conceptually different; most notably the techniques of Faure and Tezuka [2002] aimed at shuffling the order of the samples rather than scrambling their locations.
Owen's scrambling works only on existing nets and sequences, so a method for generating the underlying sets is still required. Ahmed and Wonka [2021] presented a technique for the direct construction of arbitrary 2D nets, but their method can only produce nets and consumes even more random bits than Owen's scrambling.

Recursive Tiling Techniques
Ostromoukhov [2007] is credited for importing the digit-reversal principle of LD constructions (Section 2.1) and using it to distribute blue-noise samples over a self-similar set of tiles. The idea was followed up by Wachtel et al. [2014], and matured in Ahmed et al. [2017], whose tile set, ART, has a low granularity and tunable properties, presenting a comparable blue-noise competitor to LD sequences [Ahmed and Wonka 2021; Christensen et al. 2018; Kopf et al. 2006].

Low-Discrepancy Blue Noise
A recent trend of research tries to inject blue-noise properties into LD sets and/or sequences. Early ideas along this line of research may be traced back to Keller [2006], who speculated on the superiority of LD sets that bear a large minimum distance between points, showcasing the very-low-discrepancy family of nets by Larcher and Pillichshammer [2003] that also attain a large minimum distance between points. Grünschloß and colleagues subsequently presented examples of small-sized digital nets with maximized minimum distance between points obtained via a search algorithm [2008], as well as non-digital constructions developed heuristically [2009]. The recent work of Christensen et al. [2018], along with [Helmer et al. 2021; Pharr et al. 2016], may be seen as an extension of this search approach that targets sequences instead of nets.
A different line of research in this category was that of Ahmed et al. [2016], with an algorithm for proactively imposing spectral control over LD nets, at the cost of compromising the quality of the net. Perrier et al. [2018] extended this idea to sequences. See the survey by Singh et al. [2019] for details. Ahmed and Wonka [2021] demonstrated the possibility of incorporating spectral control into nets without compromising their quality but highlighted the difficulty of extending the idea to sequences.

ANALYSIS OF DIGITAL DYADIC NETS AND SEQUENCES
In this section, we define and analyze digital dyadic nets and sequences, and state our new theoretical results: the computation of the exact dimension of the design space for both, and an algorithm that reorders any digital dyadic net into a digital dyadic sequence.

Definitions
There is a general notion of $(t, m, s)$-nets in base $b$. In this paper, we are focused on the particular case of 2D dyadic nets. They are known as $(0, m, 2)$-nets in base 2 (in short, $m$-nets, or $2^m$-point nets). Further, we specifically study dyadic sequences, also known as $(0, 2)$-sequences in base 2.
A digital collection of points that is also a dyadic net (sequence) is called a digital dyadic net (sequence).
Equivalently, based on the analysis from [Ahmed and Wonka 2021], a digital collection of points is a digital dyadic net if it contains a unique point $(x, y)$ with any given last $k$ bits of $x$ and last $m - k$ bits of $y$, for each $k = 0, 1, \ldots, m$. Likewise, a digital collection of points is a digital dyadic sequence if the first and all the subsequent blocks of $2^k$ points form a digital dyadic net after removing the last $m - k$ bits from both $x$ and $y$, for each $k < m$.
This notation enables us to express well-known sequences and nets in succinct form; the map converting bit vectors to coordinates is usually omitted. Table 1 summarizes explicit parametrizations of digital dyadic nets and sequences, and the analysis of how many different constructions are possible by each parametrization. The GFaure construction introduced by Tezuka [1994] describes a family of sequences obtained by setting $C_x = L_x$ and $C_y = L_yP$, with $L_x$ and $L_y$ arbitrary lower unitriangular matrices and $P$ the binary Pascal matrix. By a unitriangular matrix we mean an upper or lower triangular matrix with all ones on the diagonal. The entries of the Pascal matrix $P$ are $P_{ij} = \binom{j-1}{i-1} \bmod 2$, with $i$ and $j$ the row and column index, respectively; hereafter a zero entry is shown in gray for readability. Table 2 summarizes our newly introduced (see Section 4) as well as known constructions for reference; e.g. the Hammersley net is obtained by $C_x = I$, $C_y = J$, with $J$ the anti-diagonal matrix with all ones on the anti-diagonal, and the Larcher-Pillichshammer (LP) net by another explicit pair listed there.

Table 1. Overview of constructions of different digital dyadic nets and sequences. $L_x$, $L_y$ are lower unitriangular matrices, $U_x$, $U_y$ are upper unitriangular matrices, $J$ is the anti-diagonal matrix with all ones on the anti-diagonal, $P$ is the binary Pascal matrix, $M$ is any invertible matrix, $x_0$, $y_0$ are binary bit vectors, and $s$ is the index of the generated point $(x, y)$ in binary bit vector form. See Section 4 for details.
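For concreteness, the binary Pascal matrix can be built directly from binomial coefficients mod 2; a small sketch (our helper, with 0-based indices):

```python
import numpy as np
from math import comb

def pascal(n):
    """Binary Pascal matrix: P[i][j] = C(j, i) mod 2 (0-based), upper unitriangular."""
    return np.array([[comb(j, i) % 2 for j in range(n)] for i in range(n)],
                    dtype=np.uint8)

P = pascal(8)
# Upper unitriangular: zeros below the diagonal, ones on it.
assert (np.tril(P, -1) == 0).all() and (np.diag(P) == 1).all()
```

Squaring this matrix mod 2 returns the identity, i.e. $P$ is an involution over $GF(2)$.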
In the following, we first discuss digital dyadic nets and then digital dyadic sequences. In order to determine whether two pairs of matrices $(C_x, C_y)$ and $(C'_x, C'_y)$ generate the same set of points in a different order, we use the concept of the characteristic matrix.

Definition 3.3. For a pair of invertible matrices $(C_x, C_y)$, the matrix $C = C_yC_x^{-1}$ is called the characteristic matrix of the pair.
If we want to check whether two pairs of matrices generate the same set of points, we simply compare their characteristic matrices. The characteristic matrix directly associates the $x$-coordinate of a point with its $y$-coordinate, i.e. $y = Cx$ when $x$ and $y$ are written in appropriate binary form. We also note that changing a pair of matrices $(C_x, C_y)$ to another pair $(C_xQ, C_yQ)$ with an invertible matrix $Q$ preserves the characteristic matrix $C$, and this enumerates all possible transformations that change the order of the points but keep the final net the same. This transformation is equivalent to replacing $s$ by $Qs$ in (1), i.e., to a reordering of the bit vectors $s$.
In particular, if $Q$ is upper unitriangular, then for each $k = 1, \ldots, n-1$ this reordering preserves the decomposition of the sequence of $2^n$ possible bit vectors $s$ into blocks of $2^k$ consecutive vectors: only the vectors within each block and the whole blocks are reordered. Such a reordering preserves the dyadic sequence property: a digital dyadic sequence is reordered to another digital dyadic sequence. Conversely, one can show that a matrix $Q$ having this property must be upper unitriangular.
We will also need the following definition for a type of matrix that is integral to understanding the digital construction.

Definition 3.4. A progressive matrix is a square matrix whose leading principal sub-matrices are all invertible.
In this context, we should also comment on an essential fact of linear algebra over $GF(2)$. A matrix is progressive if and only if it can be factored into the product of a lower unitriangular matrix $L$ and an upper unitriangular matrix $U$ [Horn and Johnson 1985, Corollary 3.5.5]. The factorization is also unique. So we only have $n(n-1)$ bits to determine a progressive matrix. This is $n$ bits fewer than required to determine an arbitrary $n \times n$ matrix. Finally, there are some interesting facts about the Pascal matrix that will be helpful: the Pascal matrix is upper unitriangular; $P$ is its own inverse; mirroring both rows and columns by multiplying with $J$ on each side gives $JPJ$, which is a lower unitriangular matrix; and $PJP = JPJ$ [Kajiura et al. 2018, Lemma 2.5]. See Appendix B for short proofs.
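These facts are easy to verify numerically. Below is a hedged sketch of an LU factorization over $GF(2)$ (a simplified stand-in of our own for the paper's Algorithm 5, which is not reproduced here), together with a check of the quoted Pascal identities:

```python
import numpy as np
from math import comb

def lu_gf2(A):
    """Factor a progressive binary matrix as A = L @ U mod 2, with L lower and U
    upper unitriangular; raises if some leading principal minor vanishes."""
    n = len(A)
    U = (A % 2).astype(np.uint8)
    L = np.eye(n, dtype=np.uint8)
    for k in range(n):
        if U[k, k] != 1:
            raise ValueError("matrix is not progressive")
        for i in range(k + 1, n):
            if U[i, k]:                 # eliminate below the pivot by XORing rows
                L[i, k] = 1
                U[i] = (U[i] + U[k]) % 2
    return L, U

n = 6
P = np.array([[comb(j, i) % 2 for j in range(n)] for i in range(n)], dtype=np.uint8)
J = np.eye(n, dtype=np.uint8)[::-1]
assert ((P @ P) % 2 == np.eye(n)).all()            # P is its own inverse
assert ((P @ J @ P) % 2 == (J @ P @ J) % 2).all()  # PJP = JPJ
```

The uniqueness of the factorization can be observed by feeding a known $LU$ product back into the routine and recovering the same factors.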
Also some comments on (numerical) linear algebra over $GF(2)$: Matrix inversion works by adapting Gaussian elimination. The LU-factorization algorithm can also be adapted; see Algorithm 5 in the appendix. The QR-factorization using Gram-Schmidt does not work, because nonzero vectors can be self-orthogonal. We make heavy use of determinant calculations; in particular, the Laplace expansion is employed in many proofs. As mathematics books treat linear algebra over general fields, we believe that a specific introduction to linear algebra over $GF(2)$ is more likely to be found in engineering textbooks, e.g. [Bard 2009]. We also found packages for the numerical implementations in Python and C++, but ultimately reimplemented all algorithms from scratch.
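Matrix inversion over $GF(2)$ is indeed a small adaptation of standard Gauss-Jordan elimination, since the only nonzero pivot value is 1 and row updates become XORs; a sketch (our helper, not the paper's implementation):

```python
import numpy as np

def inv_gf2(A):
    """Invert a binary matrix over GF(2) by Gauss-Jordan elimination on [A | I]."""
    n = len(A)
    M = np.concatenate([(A % 2).astype(np.uint8), np.eye(n, dtype=np.uint8)], axis=1)
    for k in range(n):
        piv = next(i for i in range(k, n) if M[i, k])  # StopIteration <=> singular
        M[[k, piv]] = M[[piv, k]]                      # row swap
        for i in range(n):
            if i != k and M[i, k]:
                M[i] = (M[i] + M[k]) % 2               # XOR rows
    return M[:, n:]
```

For example, the anti-diagonal matrix $J$ is its own inverse, and inverting an upper unitriangular matrix returns an upper unitriangular matrix.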

Digital Dyadic Nets
We first would like to establish the conditions that need to be satisfied for a pair of matrices $(C_x, C_y)$ to form a digital dyadic net. We introduce the term dyadic pair of matrices for the following discussion.

Definition 3.5. A dyadic pair of matrices $(C_x, C_y)$ is a pair of matrices such that the hybrid matrices comprising the first $n - k$ rows of $C_x$ and the first $k$ rows of $C_y$ are invertible for all $k \le n$.
It is well established [Grünschloß et al. 2008; Larcher and Pillichshammer 2001] that two binary matrices $(C_x, C_y)$ form a dyadic pair if and only if they produce a digital dyadic net with Eq. (1). However, even though the conditions are known, they are not sufficient to derive an explicit construction or a complete description of the design space, which we will now recall.
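The definition translates directly into code. The sketch below checks the hybrid-matrix condition by Gaussian elimination (helper names ours) and confirms that the Hammersley pair $(I, J)$ is dyadic while the pair $(I, I)$ is not:

```python
import numpy as np

def invertible_gf2(A):
    """Invertibility test over GF(2) by forward elimination."""
    M = (A % 2).astype(np.uint8)
    n = len(M)
    for k in range(n):
        piv = [i for i in range(k, n) if M[i, k]]
        if not piv:
            return False
        M[[k, piv[0]]] = M[[piv[0], k]]
        for i in range(k + 1, n):
            if M[i, k]:
                M[i] = (M[i] + M[k]) % 2
    return True

def is_dyadic_pair(Cx, Cy):
    """The hybrid of the first n-k rows of Cx and the first k rows of Cy
    must be invertible for every k = 0..n (Definition 3.5)."""
    n = len(Cx)
    return all(invertible_gf2(np.vstack([Cx[:n - k], Cy[:k]]))
               for k in range(n + 1))

n = 4
I, J = np.eye(n, dtype=np.uint8), np.eye(n, dtype=np.uint8)[::-1]
```

Note that $(I, I)$ fails because a hybrid matrix with $0 < k < n$ repeats a row of the identity.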
Theorem 3.1. (See [Hofer and Suzuki 2019, Lemma 2.6]) A pair of matrices $(C_x, C_y)$ is dyadic if and only if $C_x$ is invertible and $C_y = LUJC_x$ for some lower unitriangular matrix $L$ and some upper unitriangular matrix $U$.
This explicit parametrization enables us to enumerate all possible constructions directly, and to sample from possible constructions without having to check that the condition for a dyadic pair is satisfied. While it is also possible to construct and sample from general invertible matrices $C_x$ over $GF(2)$, this introduces unwanted complexity that will be eliminated shortly. Luckily, for nets we do not need to distinguish between constructions that create the same points in a different order. Therefore, nets can be conveniently constructed in this canonical order, i.e. by ignoring $C_x$ or by using the parametrization of Hammersley-like nets.
We remark that the parametrization in Theorem 3.1 is actually symmetric in $C_x$ and $C_y$: if $C_y = LUJC_x$, then $C_x = L'U'JC_y$ with $L' = JU^{-1}J$ and $U' = JL^{-1}J$. Here $L'$ and $U'$ are lower- and upper-unitriangular, respectively, because left- and right-multiplication by $J$ reverses the order of rows and columns, respectively. Now we ask the question of how many constructions are possible by the original construction. We can determine that there are $2^{n(n-1)} \cdot (2^n - 1)(2^n - 2)(2^n - 2^2) \cdots (2^n - 2^{n-1})$ possible constructions. The first factor of the product refers to the choices due to enumerating the matrices $L$ and $U$, and the rest to enumerating all invertible matrices $C_x$. Such an enumeration is given by the following known construction algorithm for invertible matrices over $GF(2)$: as the first column, pick any of the $2^n - 1$ nonzero $n$-bit vectors; as the second column, pick any of the $2^n - 2$ binary vectors not proportional to the first column; as the third column, pick any of the $2^n - 2^2$ vectors not contained in the linear span of the first two columns, etc.
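This count can be sanity-checked exhaustively for $n = 2$, where the formula predicts $2^{n(n-1)} \cdot (2^n-1)(2^n-2) = 4 \cdot 6 = 24$ dyadic pairs among all $2^4 \cdot 2^4 = 256$ pairs of binary $2 \times 2$ matrices (a sketch of ours, with the invertibility test inlined via the determinant mod 2):

```python
import numpy as np
from itertools import product

def inv2(M):
    """2x2 invertibility over GF(2) via the determinant ad + bc mod 2."""
    return (int(M[0, 0]) * int(M[1, 1]) + int(M[0, 1]) * int(M[1, 0])) % 2 == 1

def is_dyadic_pair(Cx, Cy):
    # Hybrids of the first 2-k rows of Cx and the first k rows of Cy, k = 0, 1, 2.
    return all(inv2(np.vstack([Cx[:2 - k], Cy[:k]])) for k in range(3))

mats = [np.array(b, dtype=np.uint8).reshape(2, 2) for b in product([0, 1], repeat=4)]
count = sum(is_dyadic_pair(Cx, Cy) for Cx in mats for Cy in mats)
```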
Next, we wish to answer the question of how many unique sets of points can be generated. This means we are interested in the number of constructions without considering (counting) permutations. We can determine that there are $2^{n(n-1)}$ (5) possible constructions by observing that the characteristic matrix has the form $C = LUJ$. The number of constructions directly follows from the number of bits we can choose for designing the arbitrary unitriangular matrices $L$ and $U$. These results are included in Table 1.

Fig. 4. A visualization to support the proof of Theorem 3.2. If an $n \times n$ generating matrix for a 1D digital dyadic sequence is multiplied with the matrix that encodes the sequence of integers $[0, 1, 2, 3, \ldots]$ in binary, the output matrix must ensure that specific blocks in the output form $2^1$-, $2^2$-, $\ldots$, $2^n$-point nets. However, to establish the net property for a $2^k$-point net, we only need to look at the first $k$ bits ($k$ rows) of the output. In the output we visualize the blocks of bits that need to be checked to verify the $2^1$-point (blue color), $2^2$-point (cyan color), and $2^3$-point (green color) net property.
We would like to relegate the detailed proofs to Appendix A. While the proofs are not necessary to understand the main results of the paper, the ones for dyadic nets provide important insights for the case of dyadic sequences (the more difficult case and the main focus of the paper).

Digital Dyadic Sequences
To discuss digital dyadic sequences we first need the following definition.
Definition 3.6. A progressive pair of matrices is a pair of matrices $(C_x, C_y)$ such that their $k \times k$ leading principal sub-matrices form dyadic pairs for all $k$. In other words, it is a pair such that the hybrid matrices comprising the first $k - j$ rows of the $k \times k$ leading principal sub-matrix of $C_x$ and the first $j$ rows of the $k \times k$ leading principal sub-matrix of $C_y$ (6) are invertible for all $j \le k \le n$.
Note that this definition implies that each of the two matrices is progressive. The concept of a progressive pair of matrices is the missing link that is required to fully map the analysis of digital dyadic sequences to the language of linear algebra. We establish the link as follows.
Theorem 3.2. A pair of matrices $(C_x, C_y)$ creates a digital dyadic sequence with (1) if and only if $(C_x, C_y)$ is a progressive pair.
This new theorem follows from our definitions: the last constraint in the paragraph after Definition 3.2 is a system of linear equations whose matrix is exactly the hybrid matrix (6) from Definition 3.6. However, the resulting assertion is nontrivial; let us illustrate the point. We first start by analyzing the generation of a 1D digital dyadic sequence by a single matrix $C_x$. This matrix is multiplied by a matrix that consists of all possible inputs, i.e. all $n$-bit integers in sequence as binary numbers $[0, 1, 2, 3, 4, 5, \ldots]$. Now we analyze the output matrix shown in Fig. 4. In order for the matrix $C_x$ to generate a valid 1D dyadic sequence, the blue boxes need to contain permutations of 0, 1, the cyan boxes permutations of 00, 10, 01, 11, the green boxes permutations of 000, 100, 010, $\ldots$, and so on for all $k$-bit integers up to $k = n$. This analysis can be extended to a pair of generating matrices $(C_x, C_y)$ by observing that the first $2^k$ points need to form a $2^k$-point net. All subsequent blocks of $2^k$ points are also dyadic nets; this is a property of XOR-scrambling, which is a bit harder to see. We can now analyze the design space by analyzing the properties of progressive pairs of matrices.
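Theorem 3.2 can be checked empirically on small cases. The sketch below generates the $2^n$ points of a pair via (1) (with the index bits of $s$ least significant first and the coordinate bits most significant first, which is our assumption about the paper's conventions) and tests the dyadic-sequence property block by block:

```python
import numpy as np
from math import comb

def points(Cx, Cy):
    """All 2^n points of Eq. (1); s = index bits LSB-first, output bits MSB-first."""
    n = len(Cx)
    pts = []
    for i in range(1 << n):
        s = np.array([(i >> b) & 1 for b in range(n)], dtype=np.uint8)
        xb, yb = (Cx @ s) % 2, (Cy @ s) % 2
        pts.append((sum(int(v) / 2 ** (j + 1) for j, v in enumerate(xb)),
                    sum(int(v) / 2 ** (j + 1) for j, v in enumerate(yb))))
    return pts

def is_net(block, k):
    """2^k points at k-bit resolution: one point per 2^-j x 2^-(k-j) cell."""
    return all(len({(int(x * (1 << j)), int(y * (1 << (k - j)))) for x, y in block})
               == len(block) for j in range(k + 1))

def is_dyadic_sequence(pts):
    m = len(pts).bit_length() - 1
    return all(is_net(pts[b << k:(b + 1) << k], k)
               for k in range(m + 1) for b in range(1 << (m - k)))

n = 4
I = np.eye(n, dtype=np.uint8)
P = np.array([[comb(j, i) % 2 for j in range(n)] for i in range(n)], dtype=np.uint8)
J = I[::-1]
```

The Sobol pair $(I, P)$ passes, while the Hammersley pair $(I, J)$ generates a net whose canonical order is not a sequence order.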
The known explicit parametrization of the design space of digital dyadic sequences can be elegantly stated in terms of progressive pairs as follows.

Theorem 3.3. A pair of matrices $(C_x, C_y)$ is progressive if and only if $C_x = L_xU$ and $C_y = L_yPU$ for some lower unitriangular matrices $L_x$, $L_y$ and some upper unitriangular matrix $U$.

A new short proof of the theorem is given in Appendix B. Just like in the case of dyadic nets, this explicit parametrization enables us to enumerate all possible constructions directly and to sample from possible constructions. Again, we ask the question of how many constructions are possible. We can determine that there are $2^{3n(n-1)/2}$ possible constructions. The number of constructions directly follows from the number of bits we can choose for designing the arbitrary unitriangular matrices $L_x$, $L_y$, and $U$. Next, we wish to answer the question of how many unique sets of points can be generated. This means we are interested in the number of constructions without considering permutations. We can determine that there are $2^{n(n-1)}$ (7) possible constructions by observing that the characteristic matrix has the form $C = L_yPL_x^{-1}$. The number of constructions follows from the number of bits we can choose for designing the arbitrary unitriangular matrices $L_x$ and $L_y$. We therefore propose to use the GFaure construction to explore the complete design space of digital dyadic sequences in case the order in which the points are generated is not important (the GFaure construction ensures that the canonical order is a sequence order, though). While the construction was proposed before, it was not clear that this particular construction enumerates all possible sequences up to the order of the points. In addition, for the reduced parametrization to be valid, we need to ensure that distinct choices of the pair $(L_x, L_y)$ lead to distinct characteristic matrices $C = L_yPL_x^{-1}$. This is a somewhat nontrivial property proved in Appendix B, along with the other results of this subsection.
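For $n = 2$ these counts can again be verified by brute force: $2^{3n(n-1)/2} = 8$ progressive pairs, collapsing to $2^{n(n-1)} = 4$ distinct characteristic matrices (a sketch of ours; the $2 \times 2$ inverse over $GF(2)$ is just the adjugate):

```python
import numpy as np
from itertools import product

def invertible(M):
    if len(M) == 1:
        return M[0, 0] % 2 == 1
    return (int(M[0, 0]) * int(M[1, 1]) + int(M[0, 1]) * int(M[1, 0])) % 2 == 1

def is_dyadic_pair(Cx, Cy):
    n = len(Cx)
    return all(invertible(np.vstack([Cx[:n - k], Cy[:k]])) for k in range(n + 1))

def is_progressive_pair(Cx, Cy):
    # Definition 3.6: all leading principal sub-pairs must be dyadic.
    return all(is_dyadic_pair(Cx[:k, :k], Cy[:k, :k]) for k in (1, 2))

def char(Cx, Cy):
    # characteristic matrix C = Cy Cx^-1; over GF(2) the 2x2 inverse is the adjugate
    adj = np.array([[Cx[1, 1], Cx[0, 1]], [Cx[1, 0], Cx[0, 0]]], dtype=np.uint8)
    return tuple(map(tuple, (Cy @ adj) % 2))

mats = [np.array(b, dtype=np.uint8).reshape(2, 2) for b in product([0, 1], repeat=4)]
pairs = [(Cx, Cy) for Cx in mats for Cy in mats if is_progressive_pair(Cx, Cy)]
chars = {char(Cx, Cy) for Cx, Cy in pairs}
```

Each characteristic matrix is shared by exactly two progressive pairs, matching the single free bit of $U$ for $n = 2$.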

Converting Digital Dyadic Nets to Digital Dyadic Sequences
An important discovery of our work is that all digital nets can be reordered to become digital sequences. We think this is quite striking, since digital nets like the Hammersley net have generally been considered to be non-extensible [Keller 2013; Pharr et al. 2016], whereas they actually are. Also, important digital nets like the LP net can actually be reordered to become a sequence, a much more useful order for sampling applications.
When comparing the number of unique point sets that can be generated by the digital dyadic net construction (5) and the digital dyadic sequence construction (7), we notice that they can generate the same number of unique point sets. Therefore, we can conclude that every digital net can be reordered into a digital sequence. We would like to note that this conclusion may be a bit more complex than it seems. The conclusion is only possible because our enumeration is proven to be exhaustive and proven not to contain any duplicates. This justifies the proofs given in the appendix and also requires the joint treatment of digital nets and sequences, even though the original goal was just to understand sequences.
We first describe the process in the form of linear algebra, and later comment more explicitly on an algorithmic implementation.
Theorem 3.4. For each dyadic pair $(C_x, C_y)$ there is an invertible matrix $Q$ such that $(C_xQ, C_yQ)$ is a progressive pair.
In other words, the dyadic and progressive pairs have the same sets of characteristic matrices, and each digital dyadic net can be ordered to provide a digital dyadic sequence.
The matrix $Q$ is explicitly constructed from the decomposition $C_y = LUJC_x$ provided by Theorem 3.1. A possible choice is $Q = C_x^{-1}JU^{-1}PJ$. The resulting pair then has the form $(C_xQ, C_yQ) = (JU^{-1}PJ, \; L\,JPJ\,P)$ by the identity $PJ = JPJ\,P$, which follows from $PJP = JPJ$. Here $L_x = JU^{-1}PJ$ and $L_y = L\,JPJ$ are lower unitriangular matrices because for any upper unitriangular matrix $U'$ the matrix $JU'J$ is lower unitriangular. So we arrive at the decomposition $(C_xQ, C_yQ) = (L_x, L_yP)$ from Theorem 3.3 and therefore the pair is progressive.
Based on this result, we propose the following construction for dyadic pairs, called GS-net: $(C_x, C_y) = (L_xM, L_yPM)$, where $L_x$, $L_y$ are arbitrary lower unitriangular matrices and $M$ is an arbitrary invertible matrix. This makes it clear that every digital dyadic net is simply a digital dyadic sequence reordered by an arbitrary invertible matrix. We make the reordering explicit in Algorithm 1. It requires the computation of an inverse matrix and of the LU-factorization in $GF(2)$. We implemented this ourselves, but libraries are available. Finally, we would like to recall that each dyadic sequence can be reordered by an upper unitriangular matrix while remaining a sequence. Therefore, the reordering of a net into a sequence is not unique, and the matrices returned by our proposed algorithm can still be right-multiplied by an arbitrary upper unitriangular matrix.
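The whole conversion fits in a few lines. The sketch below follows our reading of Algorithm 1 (decompose the characteristic matrix, then assemble $Q$); the $GF(2)$ inverse and LU helpers are inlined to keep it self-contained, and all names are ours. The result is re-checked against the progressive-pair definition:

```python
import numpy as np
from math import comb

def inv_gf2(A):
    n = len(A)
    M = np.concatenate([(A % 2).astype(np.uint8), np.eye(n, dtype=np.uint8)], axis=1)
    for k in range(n):
        p = next(i for i in range(k, n) if M[i, k])
        M[[k, p]] = M[[p, k]]
        for i in range(n):
            if i != k and M[i, k]:
                M[i] = (M[i] + M[k]) % 2
    return M[:, n:]

def lu_gf2(A):
    n = len(A)
    U, L = (A % 2).astype(np.uint8), np.eye(n, dtype=np.uint8)
    for k in range(n):
        assert U[k, k] == 1, "not progressive"
        for i in range(k + 1, n):
            if U[i, k]:
                L[i, k] = 1
                U[i] = (U[i] + U[k]) % 2
    return L, U

def net_to_sequence(Cx, Cy):
    """Reorder a digital dyadic net into a digital dyadic sequence: factor
    C_y C_x^-1 J = L U, then right-multiply both generating matrices by
    Q = C_x^-1 J U^-1 P J (our reconstruction of Algorithm 1)."""
    n = len(Cx)
    J = np.eye(n, dtype=np.uint8)[::-1]
    P = np.array([[comb(j, i) % 2 for j in range(n)] for i in range(n)],
                 dtype=np.uint8)
    L, U = lu_gf2((Cy @ inv_gf2(Cx) @ J) % 2)
    Q = (inv_gf2(Cx) @ J @ inv_gf2(U) @ P @ J) % 2
    return (Cx @ Q) % 2, (Cy @ Q) % 2

def is_progressive_pair(Cx, Cy):
    """Definition 3.6 via determinants mod 2 of all hybrid sub-matrices."""
    n = len(Cx)
    return all(round(np.linalg.det(np.vstack([Cx[:k - j, :k], Cy[:j, :k]])
                                   .astype(float))) % 2 == 1
               for k in range(1, n + 1) for j in range(k + 1))
```

Applied to the Hammersley pair $(I, J)$ this yields the pair $(JPJ, PJ)$, which is progressive, whereas the input pair is not.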

Affine Digital Dyadic Nets and Sequences
Seeking to enlarge the space of digital dyadic nets and sequences, we apply the XOR scrambling introduced by Kollig and Keller [2002].
Namely, instead of (1), we use the equations $x = C_xs \oplus x_0$ and $y = C_ys \oplus y_0$ (9) with some fixed bit vectors $x_0$, $y_0$. This gives $2n$ additional degrees of freedom if we take the order of points into account.
Then we rewrite (9) as $x = C_xs$, $y = C_ys \oplus y'_0$ for some constant bit vector $y'_0$, after the reordering $s \mapsto s \oplus C_x^{-1}x_0$ of the inputs. This means that for any pair $(x_0, y_0)$ of constant bit vectors used to XOR-scramble a digital dyadic sequence, there exists a single bit vector $y'_0$ that produces the same set of points by scrambling only along the $y$ axis. This means that the actual dimension of XOR-scrambling is only $n$. Note that this $y'_0$ vector may not be absorbed within any matrix multiplication, so it represents a new degree of freedom. Incidentally, it exactly compensates the $n$ diagonal bits we lost earlier. This makes the dimension of affine digital dyadic sequences exactly $n^2$, allocated as $n(n-1)/2$ bits below the diagonal of each of the $L_x$ and $L_y$ matrices, and $n$ bits for XOR scrambling of $y$.
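This reduction is easy to confirm numerically: picking $s_0$ with $C_xs_0 = x_0$ and setting $y'_0 = y_0 \oplus C_ys_0$ reproduces the same point set with scrambling along $y$ only (a sketch with an arbitrary GFaure-style pair; all names are ours):

```python
import numpy as np
from math import comb
from itertools import product

n = 5
rng = np.random.default_rng(7)
P = np.array([[comb(j, i) % 2 for j in range(n)] for i in range(n)], dtype=np.uint8)
unitri = lambda: (np.tril(rng.integers(0, 2, (n, n)), -1) + np.eye(n)).astype(np.uint8)
Cx, Cy = unitri(), (unitri() @ P) % 2           # GFaure: (L_x, L_y P)
x0 = rng.integers(0, 2, n).astype(np.uint8)
y0 = rng.integers(0, 2, n).astype(np.uint8)

all_s = [np.array(b, dtype=np.uint8) for b in product([0, 1], repeat=n)]
s0 = next(s for s in all_s if ((Cx @ s) % 2 == x0).all())   # Cx is invertible
y0p = (y0 + Cy @ s0) % 2                                    # y0' = y0 XOR Cy s0

both = {(tuple((Cx @ s + x0) % 2), tuple((Cy @ s + y0) % 2)) for s in all_s}
yonly = {(tuple((Cx @ s) % 2), tuple((Cy @ s + y0p) % 2)) for s in all_s}
```

The two sets of points coincide; only the order of generation differs.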

SYNTHETIC SEQUENCES
In this section, we demonstrate examples of applying the knowledge gained through Section 3 to synthesizing dyadic sequences with favorable properties, and in the following section we give a third demonstration of a whole class of sequences.

Designing Dyadic Nets and Sequences by Search and Optimization
For applications of Monte Carlo integration, such as rendering, one often searches for digital dyadic nets and sequences that are optimal in some sense. Common optimization targets include the minimum distance between points and the star discrepancy [Keller 2006]. The star discrepancy $D^*$ measures how the fraction $N(a, b)/2^m$ of points of an $m$-net inside a rectangle $[0, a) \times [0, b)$ deviates from the area of the rectangle; formally, $D^* = \sup |ab - N(a, b)/2^m|$, where the supremum is over all $a, b$ in $[0, 1]$. There are multiple simple ways to search for dyadic nets and sequences with specific properties. For example, for a small bit depth ($m = 4$, i.e. 16 points), it is feasible to perform an exhaustive search over all digital dyadic nets or sequences. For larger bit depths, one can use heuristic search, e.g. hill climbing: at each iteration, all possible one-bit mutations of an existing net or sequence are evaluated, and the best mutation is chosen until no mutation brings an improvement in the objective. One interesting possibility is a hierarchical search. Due to the incremental nature of nets and sequences we may optimize progressively, e.g. first searching for an $m$-bit solution and then optimizing for additional bits while retaining the leading samples (bits) in the net or sequence.
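As an illustration of such a search, the sketch below exhaustively scans one slice of the design space for $m = 4$: it fixes $L_x = I$, enumerates all $2^{m(m-1)/2} = 64$ choices of $L_y$ in the GFaure parametrization $(L_x, L_yP)$, and keeps the sequence with the largest minimum point distance (the helper names and the choice of objective are ours):

```python
import numpy as np
from math import comb
from itertools import product, combinations

m = 4
P = np.array([[comb(j, i) % 2 for j in range(m)] for i in range(m)], dtype=np.uint8)

def unitri(bits):
    """Fill the m(m-1)/2 sub-diagonal entries of a lower unitriangular matrix."""
    L = np.eye(m, dtype=np.uint8)
    L[np.tril_indices(m, -1)] = bits
    return L

def pts(Cx, Cy):
    out = []
    for i in range(1 << m):
        s = np.array([(i >> b) & 1 for b in range(m)], dtype=np.uint8)
        out.append(tuple(sum(int(v) / 2 ** (j + 1) for j, v in enumerate((C @ s) % 2))
                         for C in (Cx, Cy)))
    return out

def min_dist(pset):
    return min(np.hypot(ax - bx, ay - by)
               for (ax, ay), (bx, by) in combinations(pset, 2))

I = np.eye(m, dtype=np.uint8)
scores = {bits: min_dist(pts(I, (unitri(bits) @ P) % 2))
          for bits in product([0, 1], repeat=m * (m - 1) // 2)}
best = max(scores.values())
```

The best candidate is by construction at least as well separated as the plain Sobol pair, which corresponds to the all-zero bit choice.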
The important aspect of our work is that any of these explorations can be done efficiently. The complete characterization of the design space for nets and sequences enables us to only visit candidates, e.g. generator matrices for a digital construction, that are viable. This improves over previous work that explored a much larger set of constructions and then filtered out invalid candidates. E.g., Grünschloß et al. [2008] use heuristic search for alternative $C_y$ matrices to produce Hammersley-like nets. Since the percentage of viable candidates can be small, a search using our explicit construction can bring orders of magnitude speed-up.
Instead of searching for, e.g., new pairs $(C_x, C_y)$, different dyadic nets are typically derived from existing ones, especially Sobol, by applying the dyadic-preserving Owen's scrambling [1998]. Faithful application of Owen's scrambling is computationally challenging, and common implementations offer different trade-offs between speed, quality, and flexibility [Burley 2020; Helmer et al. 2021; Kollig and Keller 2002]. Our explicit characterization and understanding of sequences provides multiple advantages: it is hard to predict the properties of an Owen-scrambled sequence, and the above-mentioned hierarchical search would not be possible with it. Also, there are advantages when creating nets from sequences and distributing them over pixels; see e.g. PBRT [Pharr et al. 2016] or Ahmed and Wonka [2020]. In Fig. 1, we show example digital dyadic sequences.

Hammersley Sequences
A Hammersley net is generated by the pair of matrices $(I, J)$. Executing Algorithm 1 by manual computation, we arrive at the progressive pair of matrices $(JPJ, PJ)$. For example, the 256-point Hammersley

LP Sequences
While the Hammersley nets have the worst known star discrepancy among digital dyadic nets, the LP nets of Larcher and Pillichshammer [2003] are believed to have the best one, according to numerical experiments measuring the star discrepancy. We use $2^{2m}$-point LP nets as a second example of converting a net into a sequence.

Converting Point Sets to Digital Dyadic Sequences
Algorithm 2 can be used to test whether an arbitrary set of points (including sets of points that are known dyadic nets) allows for a digital construction. If it is possible, the algorithm produces the pair of matrices that generates the given set of points as a digital dyadic sequence.

Gray Sequences
We applied the conversion algorithm above to the Gray Code family of nets introduced by Ahmed and Wonka [2021], obtained by using a Gray-Code-ordered van der Corput sequence to supplement the $x$-offsets of the points along the columns and the $y$-offsets along the rows. These nets turned out to be digital.

Algorithm 2: Converting a point set into a digital dyadic sequence
Input: an arbitrary $2^m$-point set in $[0, 1)^2$;
Output: a pair of matrices that creates a digital dyadic sequence with the same points using (1), if it exists.
1 Order the points sequentially along the $x$-axis and set $C_x = J$;
2 Use the $y$-coordinates of the points at power-of-two indices as columns of the prospective $C_y$;
3 Conduct one pass through the other points to validate that they coincide with the ones given by (1), and check whether $C_y$ is progressive;
4 If this step works, we have a digital dyadic net that can subsequently be converted to a sequence using Algorithm 1.
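Our reading of Algorithm 2 in code (a sketch; the progressiveness check of step 3 is left to Algorithm 1 downstream, and all helper names are ours):

```python
import numpy as np

def infer_generators(pts):
    """Try to realize a 2^m-point set as a digital net in x-sorted order: with
    the index bits of s least significant first, sorting by x corresponds to
    Cx = J, and the y-coordinates at power-of-two indices are the columns of
    the prospective Cy. Returns (Cx, Cy), or None if the set is not digital."""
    m = len(pts).bit_length() - 1
    pts = sorted(pts)                                  # step 1: order along x
    J = np.eye(m, dtype=np.uint8)[::-1]
    bits = lambda v: [int(v * (1 << (j + 1))) % 2 for j in range(m)]
    Cy = np.array([bits(pts[1 << k][1]) for k in range(m)],
                  dtype=np.uint8).T                    # step 2: read off columns
    for i, (x, y) in enumerate(pts):                   # step 3: validation pass
        s = np.array([(i >> b) & 1 for b in range(m)], dtype=np.uint8)
        if bits(x) != list((J @ s) % 2) or bits(y) != list((Cy @ s) % 2):
            return None
    return J, Cy
```

A Hammersley net is recognized as digital, while an affine scrambled lattice that is not bit-linear is rejected.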

Validation
Let us report the following experiment, which clearly demonstrates the utility of sequences over nets for progressive sampling. We considered the 256-point LP net. First, we took a random ordering of the points and computed the star discrepancies of the first 16 points, the first 32 points, the first 48 points, etc. Then we took the sequence ordering given by our algorithm (see Table 2) and computed the same star discrepancies. Finally, we computed the ratios of the 16 numbers for the random ordering to the 16 numbers for the sequence ordering, as depicted in Table 3. We obtained that the first number became 2.22 times smaller, the second one 2.25 times smaller, etc. Note that the original order would have produced worse results than the randomly reordered one.

SELF-SIMILAR DYADIC SEQUENCES
In this section, we propose a family of digital dyadic sequences that is efficient to compute and invert. At the same time, the family is large enough to allow for a meaningful exploration of the design space. We start with the geometric design requirements. After extensive experiments in trying to design a self-similar sequence, we settled on the following idea. We are looking for an infinite dyadic sequence such that every fourth point belongs to the bottom-left quadrant [0, 1/2)^2 and, for each m = 0, 1, 2, ..., the first 2^m points appearing in this quadrant form a scaled version of the first 2^m points of the whole sequence in [0, 1)^2, in the same order. See Fig. 6, top left. One can formulate this design requirement as

p_{4k} = p_k / 2.    (15)

This means that the point with four times the sequence number is scaled by the factor 1/2. In practice, we restrict ourselves to finite digital dyadic sequences. A digital dyadic sequence p_0, p_1, ..., p_{2^m − 1} is called self-similar if for each k < 2^{m−2} we have (15) with the right side truncated to m digits (i.e., if the (m+1)-th digit after the point in the dyadic decomposition of one of the coordinates of p_k/2 is 1, then it is replaced by 0). This simple equation is all we need to realize the sequence digitally. We break it down into three elements. First, we note that multiplying the sequence number by 4 means shifting the bit vector k down two steps, inserting two leading zeros that effectively wipe out two columns of the progressive matrix C_x. Second, (15) means that the subsequent columns of the matrix should bear the same information as the wiped ones. Finally, dividing the coordinates by 2 is equivalent to pushing all these subsequent columns one row down, inserting zeros. This leads to the self-similar matrix structure (16) for some column bit vectors a = (a_0, a_1, ...) and b = (b_0, b_1, ...) that, by definition, represent the x-coordinates of the second and third points in the sequence (equal to 0.a_0a_1... and 0.b_0b_1..., respectively). We take a as a design parameter that can be freely chosen by the user, except for a_0, which has to be 1. It then remains to set or compute the bits of b so as to ensure a progressive matrix. We can iteratively expand the matrix. The key insight is that each subsequent expansion includes only one new unknown bit from b, except for the first step, where we have two bits when expanding from a 1 × 1 to a 2 × 2 matrix. Formally, we have the following statement:

Lemma 5.1. For each bit vector a = (1, a_1, a_2, ..., a_{m−1}) starting with bit 1 there are exactly two bit vectors b = (b_0, b_1, ..., b_{m−1}) such that matrix (16) is progressive.
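The progressivity condition used throughout (every leading principal minor is nonzero over GF(2)) is easy to check numerically. The following is a minimal helper of our own, not part of the paper's implementation:

```python
# A minimal GF(2) check that a matrix is "progressive", i.e. that every
# leading principal minor is nonzero. Rows are stored as integer bitmasks,
# with bit j of rows[i] holding entry (i, j).

def det_gf2(rows):
    """Determinant over GF(2) by Gaussian elimination with pivot search."""
    rows = rows[:]
    n = len(rows)
    for j in range(n):
        pivot = next((i for i in range(j, n) if rows[i] >> j & 1), None)
        if pivot is None:
            return 0
        rows[j], rows[pivot] = rows[pivot], rows[j]
        for i in range(j + 1, n):
            if rows[i] >> j & 1:
                rows[i] ^= rows[j]
    return 1

def is_progressive(rows):
    n = len(rows)
    mask = lambda k: (1 << k) - 1
    return all(det_gf2([r & mask(k) for r in rows[:k]]) for k in range(1, n + 1))

# The identity matrix is progressive; the anti-diagonal matrix is not.
I4 = [1, 2, 4, 8]
J4 = [8, 4, 2, 1]
print(is_progressive(I4), is_progressive(J4))  # → True False
```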
In other words, the x-coordinate of the third point in the sequence is obtained by carryless multiplication of the x-coordinate of the second point by one of the numbers (19)–(20), followed by ordinary multiplication by 2.
Since the first row of (6) is (1 0 ... 0) in our case, it follows that removing the first row and the first column preserves the determinant. In the resulting matrix, the highlighted row becomes (1 0 ... 0). Now removing this row and the first column, we get a matrix with the same structure as the initial matrix (6), only with its two size parameters reduced by 1 and 2, respectively. We now repeat the procedure, each time removing the row (1 0 ... 0) and the first column of the current matrix, until one of the two size parameters is exhausted. The resulting determinant is a leading principal minor of C_ξ(a) or C_ξ+(a). It equals 1 because C_ξ(a) and C_ξ+(a) are themselves progressive.
The converse assertion follows from Lemma 5.2 because the pairs (C_ξ(u), C_ξ(v)) and (C_ξ+(u), C_ξ+(v)) are never progressive. □ We find these results striking, since this constant ξ handles all the effort of matrix derivation and pairing we encountered in Section 3. Thus, the resulting self-similar sequences generated by (C_ξ(u), C_ξ+(v)) for various u and v are deservedly called ξ-sequences. See Figs. 6 and 7. (We do not need to consider the pairs (C_ξ+(u), C_ξ(v)) because this leads just to a swap of the x- and y-coordinates.) The sequence generated by (C_ξ, C_ξ+) is referred to as the ξ_0-sequence. On the practical side, we truncate the coordinates of the 2^m points not to m bits, as required by the digital construction, but to the machine precision of t = 32 (or t = 64) bits. In other words, we always consider the first 2^m points of a 2^t-point ξ-sequence. We note that the special structure of ξ reduces the carryless multiplication to O(log t) from O(t). In 32-bit precision, it takes precisely 5 shift-and-xor operations. This makes setting up a ξ-sequence, given (x, y), incredibly low-cost, even cheaper than generating a random sample for (x, y). Algorithm 3 summarizes the preceding derivations into a set of steps for creating a ξ-sequence.
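The O(log t) shift-and-xor trick can be illustrated with a constant of this self-similar flavor. We use the all-ones 32-bit constant as a stand-in (the actual ξ from the paper may differ); multiplying by it carrylessly takes exactly 5 shift-and-xor steps:

```python
# Illustration of the O(log t) shift-and-xor trick for carryless (GF(2)[x])
# multiplication by a structured constant. The all-ones constant below is a
# stand-in for the paper's xi, chosen because its product telescopes into
# (1+z)(1+z^2)(1+z^4)(1+z^8)(1+z^16) = 1 + z + ... + z^31.

MASK = 0xFFFFFFFF

def clmul(a, b):
    """Naive carryless multiplication, truncated to 32 bits."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a = (a << 1) & MASK
        b >>= 1
    return r

def mul_all_ones(x):
    """Carryless multiply by 0xFFFFFFFF in exactly 5 shift-and-xor steps."""
    for s in (1, 2, 4, 8, 16):
        x = (x ^ (x << s)) & MASK
    return x

import random
x = random.getrandbits(32)
assert mul_all_ones(x) == clmul(x, MASK)
```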
To our knowledge, ξ-sequences are the fastest way to construct a dyadic sequence and draw different dyadic nets. They enable constructing a pseudo-random sequence for each pixel. The fast construction, however, turns out not to be the only advantage: the self-similar structure actually embodies a wealth of versatility that we discuss in the following subsections.

Retrieving the Samples
Points from ξ-sequences may be generated from the matrices as in any standard digital sequence. The self-similar structure of the matrices, however, offers a shortcut, as outlined in Algorithm 4.
Algorithm 3: Constructing the first 4 points of a ξ-sequence using 32-bit precision from two bit-vectors x = (x_0, ...) and y = (y_0, ...) with x_0 = y_0 = 1. All additions and multiplications are carryless. Input: two bit-vectors x and y (the coordinates of p_1); Output: the first 4 points p_0, ..., p_3 of the sequence. Thus, besides fast construction, drawing the samples is an aspect where ξ-sequences significantly stand out compared to all alternatives we are aware of. The simple loop structure of ξ-sequences, and the few parameters, make them a good candidate for low-level optimization, using registers, for example, but our plain C code already delivers high performance, as depicted in Table 4. While the straightforward implementation of ξ-sequences shown in Algorithm 4 already leads the competition, the self-similar structure of the sequence permits even further speedup through a small lookup table. In the basic implementation we derive the basis vectors from the first four points, but in fact any power-of-four number of points may be used in the same way as a basis for retrieving subsequent points. The ξ256 entry in Table 4 refers to an implementation that uses a 256-vector lookup table, and it leads to a 2–6× speedup.
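The standard digital construction that these algorithms accelerate can be sketched directly from the generating matrices. Below, as a self-contained illustration (our own helper, not Algorithm 4), we generate the first points of the classic Sobol (0, 2)-sequence with C_x = I and C_y = the binary Pascal matrix:

```python
# Generating points of a digital dyadic sequence from a pair of generating
# matrices, here Sobol's (0,2)-sequence: C_x = I and C_y = binary Pascal.
# Matrices are stored as lists of column bitmasks (bit i of column j is the
# entry in row i), and the digit vector d = C k is computed carrylessly.

from math import comb

M = 4  # bits of resolution

def column(c_of, j):
    """Bitmask of column j: bit i set iff entry (i, j) is 1."""
    return sum(1 << i for i in range(M) if c_of(i, j))

I_cols = [column(lambda i, j: i == j, j) for j in range(M)]
P_cols = [column(lambda i, j: comb(j, i) % 2, j) for j in range(M)]

def coord(cols, k):
    """Coordinate 0.d0 d1 d2 ... where d = C k over GF(2)."""
    d, j = 0, 0
    while k:
        if k & 1:
            d ^= cols[j]
        k >>= 1
        j += 1
    return sum((d >> i & 1) / 2 ** (i + 1) for i in range(M))

pts = [(coord(I_cols, k), coord(P_cols, k)) for k in range(4)]
print(pts)  # → [(0.0, 0.0), (0.5, 0.5), (0.25, 0.75), (0.75, 0.25)]
```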

Memory Footprint
In addition to their speed performance, ξ-sequences consume very little memory and offer a flexible trade-off between memory and speed. The sequence may be compressed to a single 2D bit vector p_1, a pair of vectors p_1, p_2, or the standard four vectors p_0, ..., p_3, and may be expanded to a 16-, 256-, or even 64K-entry lookup table for two-step retrieval. This flexibility with memory makes ξ-sequences quite GPU-friendly. Indeed, providing random numbers in a GPU context is a standing challenge, and ξ-sequences help in this direction by making it possible to run a unique random sequence in each thread, passing a table of parameters that identify the sequences. We implemented a test of this and generated over 17G points per second, roughly 1T per minute, on a TITAN Xp GPU.

Morton Ordering
Instead of generating the x and y coordinates separately, they can be obtained jointly as a Morton-ordering [Morton 1966] index, and we can split (unshuffle [Warren 2012]) the bits later. This can be done easily by (26), where i = 0, ..., 15, z^(0) = 0, s^(0) is the point index in the sequence, Z[j] is obtained by interleaving (shuffling [Warren 2012]) the bits of the x- and y-coordinates of the j-th point p[j] for j = 0, ..., 3, and addition is carryless. This may not be very useful for generating the samples because, in addition to the cost of the unshuffle, it requires 64-bit processing to obtain the full 32-bit resolution of each axis. Mapping to Morton indices, however, is useful in the reverse direction, where we are given the coordinates of a stratum and asked to retrieve the exact location of the (first) sample in that stratum, and/or its sequence number, as encountered in stippling and importance sampling. Note that the coordinates of the stratum are a prefix of the coordinates of the points inside it. Thus, obtaining a point's sequence number given the stratum is a partial inversion of the sequence, which we discuss in more detail next.
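The shuffle/unshuffle steps are the classic bit-interleaving tricks from Warren [2012]. A sketch for 16-bit coordinates, using the standard "magic" masks:

```python
# Morton (Z-order) shuffle/unshuffle for 16-bit coordinates, in the spirit of
# the bit tricks in Warren [2012]; the masks are the standard magic constants.

def part1by1(x):
    """Spread the low 16 bits of x so they occupy the even bit positions."""
    x &= 0x0000FFFF
    x = (x | (x << 8)) & 0x00FF00FF
    x = (x | (x << 4)) & 0x0F0F0F0F
    x = (x | (x << 2)) & 0x33333333
    x = (x | (x << 1)) & 0x55555555
    return x

def compact1by1(x):
    """Inverse of part1by1: gather the even bits back into the low 16 bits."""
    x &= 0x55555555
    x = (x | (x >> 1)) & 0x33333333
    x = (x | (x >> 2)) & 0x0F0F0F0F
    x = (x | (x >> 4)) & 0x00FF00FF
    x = (x | (x >> 8)) & 0x0000FFFF
    return x

def shuffle(x, y):   # (x, y) -> Morton index z
    return part1by1(x) | (part1by1(y) << 1)

def unshuffle(z):    # Morton index z -> (x, y)
    return compact1by1(z), compact1by1(z >> 1)

z = shuffle(0b1010, 0b0110)
assert unshuffle(z) == (0b1010, 0b0110)
```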

Inverting the Sequence
A straightforward approach to inverting a ξ-sequence, mapping a stratum index z into a sequence number s, is to undo the iterations in Eq. (26). Let s, z, and Z[0], ..., Z[3] encode the sequence index of a point, its Morton index, and the first 4 points of the sequence, written in base 4. Note the reversed indexing of the digits of the sequence number. Given z^(0) = z, we may find s iteratively, starting from s^(0) = 0, as in (31) in the i-th iteration. The idea is that, if we examine Eq. (26), we see that only one vector contributes to the two most significant bits of z. We can therefore deduce which of the four vectors was added in the first iteration of computing z and insert its index as a digit of the sequence number, as in Eq. (31). We then subtract, or equivalently add, the originally added vector to z. Now only one vector is responsible for the following pair of bits. By shifting the bits up, we can recursively apply the undo process. Note that this process iteratively zeros z while building s.
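The radix-4 undo loop can be sketched as follows. The four vectors below are toy stand-ins for the Morton codes Z[0], ..., Z[3] of the first four points (the construction only needs their top two bits to be pairwise distinct); the forward map and its inversion follow the peeling scheme described above:

```python
# Sketch of the forward map and the radix-4 undo loop: the forward map xors
# one of four basis vectors per base-4 digit of s, shifted down two bits per
# digit; inversion reads the top two bits to identify the digit added last,
# cancels that vector, and shifts the bits up. The Z vectors are toy values.

W = 32
Z = [0x00000000, 0xC0000000, 0xB0000000, 0x70000000]  # top 2 bits: 00,11,10,01

def forward(s):
    """Map a sequence number s (16 base-4 digits) to a Morton-style index z."""
    z = 0
    for i in range(W // 2):
        digit = (s >> (W - 2 - 2 * i)) & 3  # digits taken from the top down
        z ^= Z[digit] >> (2 * i)
    return z

def invert(z):
    """Recover s from z, two bits at a time."""
    by_top = {Zj >> (W - 2): j for j, Zj in enumerate(Z)}
    s = 0
    for i in range(W // 2):
        digit = by_top[(z >> (W - 2)) & 3]  # only one vector reaches the top
        s = (s << 2) | digit
        z = ((z ^ Z[digit]) << 2) & ((1 << W) - 1)  # cancel, then shift up
    return s

import random
s = random.getrandbits(32)
assert invert(forward(s)) == s
```

Truncation residue from the shifted vectors stays in the low bits and never reaches the top two bits within 16 iterations, so the round trip is exact.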
We implemented this and obtained an inversion rate of ~38M points per second (cf. Table 4), which, while faster than any sampling technique we are aware of, is significantly slower than the forward production rate. A much faster approach takes advantage of the Morton ordering discussed in Section 5.3, noting that the inversion matrix bears a similar staggered pair-of-columns structure. This gives a speedup of up to 70% over the forward rate, since only one value is evaluated rather than two.

Coding Complexity
Again, this is an aspect where ξ-sequences have an advantage. Not only does their implementation follow straightforward steps, as listed in Algorithm 3, but the intuition behind these steps has a clear geometric interpretation, as discussed earlier.

Atlas
The relatively small two-dimensional space of ξ-sequences, along with their easy implementation on GPUs, makes it possible to scan them exhaustively and build an atlas for exploring these patterns and finding ones with favorable properties. Fig. 7 shows example maps for the star discrepancy, the minimum distance between points, and the average distance to the nearest neighbor.

Rendering Examples
We show the impact of our work in two rendering examples.
The first example (Fig. 8) demonstrates that self-similar sequences provide better anti-aliasing than the standard Sobol sequence. To make a meaningful comparison, we integrated the sequences into the Global Sobol Sampler that comes with PBRT. The ξ-sequences offer: (1) a 3× speedup in sample generation and 5× in inversion, leading to up to a 37% observed reduction in rendering times; (2) no inversion lookup tables, cf. [Grünschloß et al. 2012]; (3) notable anti-aliasing with randomized ξ-sequences; and (4) they are free, emphasizing the value of our in-depth study.
Let us make a few comments on the implementation. The sampler uses a higher-dimensional Sobol sequence, generated by certain matrices (I, P, C_2, C_3, ...); the first two are used to sample the image plane. We transform the matrices so that the first two become the generating matrices (C_ξ(u), C_ξ+(v)) of the desired ξ-sequence. For that purpose, we decompose C_ξ(u) = L_x U and C_ξ+(v) = L_y P U as in Theorem 3.3. Then we right-multiply all the original matrices (I, P, C_2, C_3, ...) by U (which means just a reordering of the sequence) and left-multiply the first two matrices by L_x and L_y, respectively (which means scrambling). The resulting sequence is used for sampling.
The second example (Fig. 9) also shows slightly better anti-aliasing. Here we integrated the sequences into the Z-Sampler [Ahmed and Wonka 2020], which uses the samples as is, without any scrambling.

Discrepancy and Spectral Evaluation
The new class of self-similar sequences introduced in the paper has several advantages that we have evaluated in the preceding subsections, such as speed, memory usage, anti-aliasing, and versatility. Complementing these, we show discrepancy and spectral comparisons in Figs. 10 and 11, both demonstrating that ξ-sequences are a good representative sample of the space of digital dyadic sequences. Note that all digital dyadic sequences exhibit some structure when looking at a single instance, and the perceived structure changes with m (see Fig. 1).

DISCUSSION AND CONCLUSION
In this paper, we presented new insights on digital dyadic sequences.
Our work provides a comprehensive understanding of the design space and how to navigate it. We believe our work will lay the foundation for future work in systematically exploring the design space of higher-dimensional sequences and their impact on rendering. The importance of 2D sequences is that they often serve as building blocks for synthesizing high-dimensional sequences. Throughout the paper, we already discussed multiple implications of our work for rendering applications, and we are hopeful that the proposed ξ-sequences and Gray sequences will find their way into existing rendering frameworks.

Proof. The right multiplication by an invertible matrix M is equivalent to a sequence of elementary transformations, each being a swap of two columns or the addition of one column to another. The transformation (C_x, C_y) ↦ (C_x M, C_y M) is equivalent to applying those simultaneously to the columns of C_x and C_y. This leads to elementary transformations of any hybrid matrix (4) formed by the first m − k rows of C_x and the first k rows of C_y. Since elementary transformations preserve the determinant, the hybrid matrix remains invertible; hence the pair remains dyadic. □ Applying this lemma for M = C_x^{-1}, we can make the first matrix in the pair the identity matrix. In the latter case, we have the following characterization of dyadic pairs [Kajiura et al. 2018].

Proof. We use the same idea as in the proof of Lemma A.1. The right multiplication by an upper unitriangular matrix is equivalent to a sequence of elementary transformations, each adding a column to another one located to the right of it. The transformation (C_x, C_y) ↦ (C_x U, C_y U) is equivalent to applying those simultaneously to the columns of C_x and C_y. Since a column is always added to one to its right, this leads to elementary transformations of any hybrid matrix formed by the top-left j × k submatrix of C_x and the top-left (k − j) × k submatrix of C_y. Since elementary transformations preserve the determinant, the hybrid matrix remains invertible; hence the pair remains progressive.
Similarly, the left multiplication by a lower unitriangular matrix is equivalent to a sequence of elementary transformations, each adding a row to another one located below it. The transformation (C_x, C_y) ↦ (L_x C_x, L_y C_y) now means different elementary transformations of C_x and C_y. But since a row is always added to one below it, this still leads to elementary transformations of any hybrid matrix. Hence the pair remains progressive. □ Applying this lemma, we can make the first matrix in the pair the identity matrix and the second one an upper unitriangular matrix. In the latter case, we have the following characterization of progressive pairs in terms of minors. A shifted leading minor is a minor formed by the first k rows and some k consecutive columns, for some k. Lemma B.2. A pair of matrices (I, C_y) is progressive if and only if all shifted leading minors of C_y are nonzero.
Proof. This is similar to Lemma A.2, but now the numbers of rows taken from the matrices I and C_y need not sum up to m. Let C_y = (c_ij). The determinants of all the possible hybrid matrices are exactly the shifted leading minors of C_y. We now show that the condition that all shifted leading minors are nonzero is very restrictive [Hofer and Larcher 2010, Proposition 4]. This allows us to simplify the original proof of Theorem 3.3. Lemma B.3. There is a unique upper unitriangular matrix U such that the pair (I, U) is progressive.
Proof of Lemma B.3. The proof is by induction on the matrix size m. The base case m = 1 is obvious.
To prove the inductive step, use the characterization of progressive pairs (I, U) from Lemma B.2. Assume there is a unique upper unitriangular (m − 1) × (m − 1) matrix U = (u_ij) with all shifted leading minors nonzero (this is the inductive hypothesis). It suffices to prove that there is a unique way to add the entries u_{i,m} for i = 1, ..., m so that the resulting new shifted leading minors are also nonzero (the other added entries are u_{m,1} = ··· = u_{m,m−1} = 0).
We have expressed u_{k+1,m} in terms of the entries u_{1,m}, ..., u_{k,m} and the entries of the top-left (m − 1) × (m − 1) submatrix. By the inductive hypothesis, all those entries are uniquely determined. Then u_{k+1,m} is also determined, and the lemma follows by induction. □ This argument does not construct the unique matrix U explicitly. But we actually already know the answer. Proof. The identity P^2 = I by [Faure 1982] is a compact form of a well-known identity [Graham et al. 1989, (5.21) and (5.12)]. The identity PJP = JPJ by [Kajiura et al. 2018] is a compact form of the well-known identity [Graham et al. 1989, (5.14)]. First, J_LP is rewritten in terms of C_LP by the identity [Graham et al. 1989, (5.21) and (5.12)]. Second, P commutes past C_LP by the identity [Graham et al. 1989, (5.26)], where the latter holds since N is a power of 2. So P C_LP^{-1} = J_LP P. □
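The Pascal-matrix identities above are easy to verify numerically over GF(2). The following check is our own, not part of the paper:

```python
# Numerical check of the GF(2) Pascal-matrix identities P^2 = (PJ)^3 = I and
# PJP = JPJ used above, for a power-of-two size n.

from math import comb

n = 8
P = [[comb(j, i) % 2 for j in range(n)] for i in range(n)]  # upper unitriangular
J = [[1 if i + j == n - 1 else 0 for j in range(n)] for i in range(n)]
I = [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def mul(A, B):
    """Matrix product over GF(2)."""
    return [[sum(A[i][k] & B[k][j] for k in range(n)) % 2 for j in range(n)]
            for i in range(n)]

PJ = mul(P, J)
assert mul(P, P) == I                      # P^2 = I
assert mul(PJ, mul(PJ, PJ)) == I           # (PJ)^3 = I
assert mul(PJ, P) == mul(mul(J, P), J)     # PJP = JPJ
print("all identities hold for n =", n)
```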

Fig. 1. Generating matrices, points, and periodograms of various digital dyadic sequences. The first 2^m points of the sequences are shown for m = 0, ..., 9 in separate squares of increasing sizes. The digital dyadic sequences in (b) and (c) are obtained by reordering the known digital dyadic nets with the same names; such reordering is one of the main discoveries of the paper. The sequence in (d) is obtained from a 256-point Gray net by first reordering it into a sequence and then extending the sequence in one of the possible ways. The sequence in (e) represents a new class of self-similar sequences introduced in the paper.

See Fig. 5(a) for the layout of the first 16-point net in the 256-point sequence, with possible indexing.

Fig. 6. Structure, generating matrices, points, and periodograms of various self-similar ξ-sequences. The plots to the left depict the first few points; larger circles show the points closer to the beginning of the sequence. The top plots show the ξ_0-sequence generated by x = y = (1, 0, ..., 0).
Fig. 7. An exhaustive exploration of ξ-sequences containing 256 points. Each pixel in the top-right square [1/2, 1)^2 represents a possible position of the second point of the sequence, which uniquely determines the whole ξ-sequence. There are 128 × 128 pixels in total and 16K possible ξ-sequences. The color depicts one of the (suitably normalized) quality measures (a)–(d) of the resulting ξ-sequence. In (c), the average distance to the closest neighbor is shown only for the sequences whose minimum distance is greater than 0.3; the remaining pixels are brown. A few best-quality representatives are shown in the insets; each bold red point depicts the second point of the sequence. For comparison, the same quality measures for the Sobol sequence are shown in the legends.

Fig. 8. Physically-based renderings using various digital dyadic sequences: (a) the Global Sobol sampler based on the Sobol sequence; (b) our adapted global sampler using a random ξ-sequence; and (c) the low-discrepancy (0, 2)-sequence sampler, which uses independently shuffled Sobol 2D samples for each pixel and pair of dimensions. We placed our sampler in the middle for easier comparison. The figure shows that the ξ-based global sampler is less aliased than Sobol and less noisy than the pixel-based sampler. The insets show selected parts of the scene at different sampling rates.

Fig. 9. Renderings obtained with the Z-Sampler from Ahmed and Wonka [2020] using Sobol and ξ_0 sequences for the generation of the samples at different sampling rates (samples per pixel). The figure shows that the ξ-based Z-Sampler is less noisy than the Sobol-based one for the shown scene.

Fig. 11. Two-dimensional and radial average spectral profiles of different dyadic sequences, showing (top) the frequency power spectrum of the point process, obtained by averaging 16K periodograms of 256 points, and (bottom) a periodogram of a randomly selected realization. We use the radial profiles advocated by Ahmed et al. [2022], where we average values for every distinct radius. The set for ξ-sequences includes all distinct sequences (up to XOR-scrambling) at 8-bit resolution. A randomly selected digital or ξ-sequence has a smoother periodogram than the XOR-scrambled Sobol sequence, but not than Burley's implementation of Owen-scrambling. However, the average of sufficiently many periodograms for ξ-sequences is smoother than both, and visually as smooth as Owen's scrambling.

Corollary B.1.
The Pascal matrix P is the unique upper unitriangular matrix such that the pair (I, P) is progressive. Proof. The uniqueness has just been proved in Lemma B.3. The pair (I, P) is progressive by Theorem 3.2 and Table 2, but let us give a direct proof here. Consider the corresponding integer Pascal matrix. Applying [Gessel and Viennot 1985, Lemma 9] repeatedly, we conclude that each shifted leading minor of that matrix equals 1. Hence by Lemma B.2 the pair (I, P) is progressive. □ Proof of Theorem 3.3. Assume that (C_x, C_y) is a progressive pair. Then both C_x and C_y are progressive matrices. Hence C_x = L_x U_x and C_y = L_y U_y for some upper unitriangular matrices U_x, U_y and lower unitriangular matrices L_x, L_y. By Lemma B.1, the pair (L_x^{-1} C_x U_x^{-1}, L_y^{-1} C_y U_x^{-1}) = (I, U_y U_x^{-1}) is progressive. By Corollary B.1, U_y U_x^{-1} = P. We get C_x = L_x U_x and C_y = L_y P U_x, as required. Conversely, each pair (L_x U, L_y P U) is progressive by Corollary B.1 and Lemma B.1. □ To count the number of possible characteristic matrices Φ = L_y P L_x^{-1}, we need the following results. Lemma B.4. We have P^2 = (PJ)^3 = I and PJP = JPJ.

The identity (PJ)^3 = I is a consequence of the previous two. □ Lemma B.5. For distinct pairs (L_x, L_y) of lower unitriangular matrices, the matrices L_y P L_x^{-1} are pairwise distinct. Proof. The matrix Φ = L_y P L_x^{-1} uniquely determines L_x and L_y because Φ J = L_y P L_x^{-1} J = (L_y · JPJ) · (P · J L_x^{-1} J) is an LU-decomposition of Φ J, which is unique. Here we used the identity P = JPJPJ (see Lemma B.4) and observed that for any upper unitriangular matrix U′ the matrix J U′ J is lower unitriangular. □ Corollary B.2. If N is a power of 2, then under notation (3) and (11) we have P C_LP^{-1} = J_LP P. Proof. First, analogously to the proof of Lemma B.4, we express C_LP^{-1} through P and J_LP.

Table 2.
Overview of constructions of particular digital dyadic nets and sequences. I and J are the identity and anti-diagonal matrices, respectively. P is the binary Pascal matrix. C_LP and J_LP are given by (3) and (11). k is the index of the generated point (x_k, y_k) in binary bit-vector form.
(Cf. [Hofer and Suzuki 2019, Theorem 1.2].) A pair of matrices (C_x, C_y) is progressive if and only if C_x = L_x U and C_y = L_y P U for some upper unitriangular matrix U and some lower unitriangular matrices L_x, L_y. Equivalently, (C_x, C_y) = (L_x U_x, L_y U_y) is progressive if and only if U_y U_x^{-1} = P, where we LU-decompose each matrix. The latter is equivalent to U_x U_y^{-1} = P because P = P^{-1}; thus the condition is symmetric in C_x and C_y.

Table 3.
The ratio of the star discrepancy of the first n points of the 256-point LP net for the random ordering to that for the sequence ordering, for n = 16, 32, ..., 256.
Ratio: 2.22 2.25 2.23 2.60 2.59 2.71 3.04 3.94 2.98 3.16 2.65 2.25 1.85 1.54 1.25 1.00

Table 4.
Speed performance, in millions of points per second, of ξ-sequences and other state-of-the-art sample generators.
Fig. 10. Discrepancy comparison of XOR-scrambled ξ-sequences to the unscrambled Sobol sequence. Hammersley and Larcher–Pillichshammer (LP) nets are shown for comparison. For Owen and ξ we measured the discrepancy over a randomly drawn 64K sample. The yellow line refers to the ξ_0-sequence. The dashed line ξ_q refers to the lowest measured discrepancy of the sampled sequences when the coordinates of the points are truncated to log_2(number of points) bits.
(Cf. [Faure and Tezuka 2002].) A pair of matrices (I, C_y) is dyadic if and only if all leading principal minors of C_y J are nonzero. Proof. Let C_y = (c_ij). The determinants of all the possible hybrid matrices (formed by k rows of I and m − k rows of C_y) are exactly the leading principal minors of C_y J, for k ≠ m. Here we applied the Laplace expansion along the first k rows and used that C_y J is obtained from C_y by reversing the order of the columns. □ Proof of Theorem 3.1. By Lemma A.1, (C_x, C_y) is dyadic if and only if (I, C_y C_x^{-1}) is dyadic. By Lemma A.2, the latter is dyadic if and only if C_y C_x^{-1} J is progressive. This means that C_y C_x^{-1} J = LU for some upper unitriangular matrix U and lower unitriangular matrix L. The latter is equivalent to C_y = LUJ C_x with C_x invertible. We give a new short elementary proof of Theorem 3.3. In contrast to the one by [Hofer and Suzuki 2019], it does not involve 3D nets and works entirely in 2D. It is based on properties of progressive pairs stated as lemmas below. Again, we start with the invariance under certain transformations, discovered by [Faure and Tezuka 2002]. Lemma B.1. If a pair of matrices (C_x, C_y) is progressive, U is an upper unitriangular matrix, and L_x, L_y are lower unitriangular matrices, then (L_x C_x U, L_y C_y U) is also progressive.