A family of matrix-tree multijections

For a natural class of $r \times n$ integer matrices, we construct a non-convex polytope which periodically tiles $\mathbb R^n$. From this tiling, we provide a family of geometrically meaningful maps from a generalized sandpile group to a set of generalized spanning trees which give multijective proofs for several higher-dimensional matrix-tree theorems. In particular, these multijections can be applied to graphs, regular matroids, cell complexes with a torsion-free spanning forest, and representable arithmetic matroids with a multiplicity one basis. This generalizes a bijection given by Backman, Baker, and Yuen and extends work by Duval, Klivans, and Martin.

which maps each B ∈ B(D) to a positive integer. In this context, we get the following theorem, which is a reframing of Theorem 8.1 from [11].

Theorem 1.1. |S(D)| = Σ_{B ∈ B(D)} m(B)^2.

When D is associated with a regular matroid, m(B) = 1 for all B ∈ B(D), and thus Theorem 1.1 implies that |S(D)| = |B(D)| (this is Theorem 4.6.1 from [21]). In 2017 (published in 2019), Backman, Baker, and Yuen defined a family of geometric bijections between S(D) and B(D) for the regular matroid case [2, 24]. However, their construction does not easily generalize to the case where not all bases have multiplicity 1.
Our main result is Theorem 6.10, which gives the analogue of a bijection for an arbitrary standard representative matrix. In particular, we define a family of geometrically meaningful maps f : S(D) → B(D) such that for any B ∈ B(D), we have |f^{−1}(B)| = m(B)^2. We call these maps sandpile multijections.
Our general construction is geometric, as in [2]. We associate each basis B with a parallelepiped of volume m(B)^2. These parallelepipeds do not intersect, and their union produces a non-convex polyhedron that periodically tiles R^{|E|}. Using our shifting vector, we associate m(B)^2 points of Z^{|E|} to each parallelepiped. Furthermore, we show that these points are all distinct in S(D).
For the sake of brevity, we restrict our attention to standard representative matrices in this paper. For a more complete story which explores the connection between different kinds of sandpile groups and focuses on orientable arithmetic matroids, which were recently defined in [23], see the first nine chapters of the author's dissertation [20]. This paper consists primarily of material from the seventh and eighth chapters. The ninth chapter shows how to obtain multijections on a larger class of matrices when the sandpile group is replaced with its Pontryagin dual.
In Section 2, we go over some notational conventions we will use throughout the paper. In Section 3, we give background on lattices and define standard representative matrices. In Section 4, we motivate our future results by constructing a standard representative matrix from a graph. In Section 5, we show how to construct a periodic tiling of R^n from any standard representative matrix. In Section 6, we use this tiling to construct a family of sandpile multijections. In Section 7, we demonstrate how to generate lower-dimensional tilings which produce equivalent multijections. In Section 8, we show how a choice of shifting vector corresponds to a choice of chamber from a hyperplane arrangement. In Section 9, we associate certain important points with {0, 1}^n vectors in the same equivalence class of S(D). Finally, in Section 10, we provide some open questions for further study.

Notational Conventions
We will write Z for the integers and R for the real numbers. We write [a, b] for the set {x ∈ Z | a ≤ x ≤ b} and [b] for [1, b]. We denote a vector of all zeros by 0. We use the variable D for an r × n integer matrix which, starting in Section 4, will always be a standard representative matrix (see Definition 3.9). We write D̄ and D̂ for the dual matrix and full matrix of D respectively (again, see Definition 3.9). We will always write the determinant of a square matrix A as det(A) and use | · | for set cardinality or absolute value. We also write A^T for the transpose of a matrix A and I_k for the k × k identity matrix. We will frequently be working with polyhedra embedded in R^k.

Proof. The first equality follows immediately from the fact that im_R(D̄^T) is generated by the rows of D̄, which also generate ker_R(D) by definition.
For the second equality, by Lemma 3.7, ker_R(D̄) is the orthogonal complement of im_R(D̄^T), which we established is equal to ker_R(D). By a second application of Lemma 3.7, ker_R(D) is the orthogonal complement of im_R(D^T). Since the composition of two orthogonal complements is the identity, we conclude that ker_R(D̄) = im_R(D^T).

We call D̄ the dual matrix of D and D̂ the full matrix of D. We will show in Lemma 3.11 that our notation for D̄ is consistent with Corollary 3.8.
Remark 3.10. The term standard representative matrix, which appears in [22, Section 2.2], is named for the fact that every representable matroid can be represented by a matrix of this form (after rearranging columns). However, it is worth noting that we can only represent oriented arithmetic matroids using a matrix of this form if they have a basis of multiplicity one (see [20, Corollary 4.3.13]). (1) In [20, Chapter 5], the set of representations for an arbitrary oriented arithmetic matroid is classified.
Notice that by Lemma 3.11, the sandpile group of D is the cokernel of the sandpile lattice of D.

Remark 3.14. These definitions come from the theory of arithmetic matroids. In [11], the authors work with cell complexes instead of standard representative matrices (although they note in Remark 4.2 that their ideas can be translated to an integer matrix context). Our bases correspond to what they call cellular spanning forests, basis multiplicity corresponds to the size of the torsion subgroup of a certain relative homology group, and the sandpile group corresponds to what they call the cutflow group. See [20, Section 6.6] for more discussion on the sandpile group of a cell complex and how this relates to the sandpile group of a standard representative matrix.
(1) We also need to restrict to oriented arithmetic matroids satisfying the strong GCD property, since otherwise not all oriented arithmetic matroids are representable (see [20, Section 4.2]). For this paper, whenever we mention oriented arithmetic matroids, we will always assume this property.

Algebraic Combinatorics, draft (28th April 2021)
Recall that the sandpile matrix-tree theorem for standard representative matrices (Theorem 1.1) says that |S(D)| = Σ_{B ∈ B(D)} m(B)^2. In the following example, we give a demonstration of this theorem.
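As a quick sanity check of this count, the following Python sketch computes Σ_B m(B)^2 for a small hypothetical standard representative matrix (the matrix here is an illustration chosen for this sketch, not the one from the paper's examples) and compares it with |det(D̂)|, where D̂ is the full matrix obtained by stacking D on its dual.

```python
from itertools import combinations

# Hypothetical running example (not the paper's matrix): D = (I_2 | M), M = (1, 2)^T.
D = [[1, 0, 1],
     [0, 1, 2]]
Dbar = [[-1, -2, 1]]   # dual matrix (-M^T | I_{n-r})
Dhat = D + Dbar        # full matrix: D stacked on top of Dbar
n, r = 3, 2

def det(A):
    """Integer determinant by Laplace expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

# m(B) is |det| of the r columns of D indexed by B; the theorem predicts
# that the sum of m(B)^2 over all bases equals |S(D)| = |det(Dhat)|.
total = sum(abs(det([[D[i][j] for j in B] for i in range(r)])) ** 2
            for B in combinations(range(n), r))
assert total == abs(det(Dhat)) == 6
```

Here the bases have multiplicities 1, 2, and 1, so the sum is 1 + 4 + 1 = 6, matching |det(D̂)|.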
Definition 3.16. Given a function m : T → Z_{>0}, an m-multijection between sets S and T is a map f : S → T such that |f^{−1}(t)| = m(t) for every t ∈ T. An m-multijection can also be thought of as a bijection between S and a multiset consisting of m(t) copies of each t ∈ T. In this paper, we give an explicit procedure for constructing m-multijections between S(D) and B(D), where the multiplicity assigned to each basis B is m(B)^2. To do this, we use a geometric construction, which also produces a periodic tiling of R^n.
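Under this definition, checking that a map is an m-multijection is purely a counting exercise: the multiset of values of f must match m. A minimal Python sketch (with a toy map, not one from the paper):

```python
from collections import Counter

def is_multijection(f, m):
    """f (a dict S -> T) is an m-multijection iff |f^{-1}(t)| = m(t) for all t."""
    return Counter(f.values()) == Counter(m)

# Toy example: m('a') = 1 and m('b') = 4, so the domain must have 5 elements.
f = {0: 'a', 1: 'b', 2: 'b', 3: 'b', 4: 'b'}
assert is_multijection(f, {'a': 1, 'b': 4})
assert not is_multijection(f, {'a': 2, 'b': 3})
```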

Graphs and Standard Representative Matrices
In this section, we show how to obtain a standard representative matrix from a graph G and one of its spanning trees. The results of this section will not be necessary for understanding future sections, but they are intended to provide some context for the relevance of standard representative matrices. For a more thorough analysis of the connection between standard representative matrices and other objects, see [20].
Throughout this section, we will fix a finite connected undirected graph G with edge set E(G) and set of spanning trees T(G) (i.e. maximal collections of edges containing no cycles). Let n = |E(G)| and r = |T| for every T ∈ T(G) (it is a classical result that all spanning trees of a graph contain the same number of edges). Furthermore, we will write the edges of G as {e_1, . . . , e_n} such that {e_1, . . . , e_r} forms a spanning tree, which we call T.
• A circuit of G is a minimal (by inclusion) subset of E(G) not contained in any spanning tree.
• A cocircuit of G is a minimal (by inclusion) subset of E(G) intersecting every spanning tree.
These definitions come from matroid theory. In the graphic context, circuits are also called cycles and cocircuits are also called bonds or minimal cuts.
• For any e ∈ E(G) ∖ T, the set of edges T ∪ {e} contains a unique circuit.
• For any e ∈ T, the set of edges (E(G) ∖ T) ∪ {e} contains a unique cocircuit.
• For any e ∈ E(G) ∖ T, the circuit contained in T ∪ {e} is called the fundamental circuit of e and is denoted C_e.
• For any e ∈ T, the cocircuit contained in (E(G) ∖ T) ∪ {e} is called the fundamental cocircuit of e and is denoted C*_e.
Choose an arbitrary orientation for the edges of G. Note that the orientation is for bookkeeping purposes, and one should not think of G as a directed graph. Each circuit of G corresponds to a cyclic set of edges (ignoring orientation). For e_j ∈ E(G) ∖ T and e_i ∈ T ∩ C_{e_j}, we say that e_i matches the orientation of C_{e_j} if the edges of C_{e_j} can be cyclically oriented in a way that matches the orientation of both e_i and e_j. We define an r × n matrix D in the following way: the first r columns form I_r, and for j > r, the entry D_{ij} is 1 if e_i ∈ C_{e_j} and e_i matches the orientation of C_{e_j}, −1 if e_i ∈ C_{e_j} and e_i does not match the orientation of C_{e_j}, and 0 otherwise. It follows immediately from this construction that the matrix D is always a standard representative matrix. Notice that the construction of D does not require information about the vertices of G. This property means that the construction is matroidal. From Definition 3.12, it is logical to define the sandpile group of G as a subgroup of the free abelian group on the edges of G. In [20, Proposition 4.1.8], we show that this definition does not depend on the choice of spanning tree T.
The usual definition of the sandpile group of a graph is as a subgroup of the free abelian group on the vertices of G. We will not define this group here (see e.g. [17]), but we will call it the vertex sandpile group of G. The following proposition follows from results in [1, 7].

Proposition 4.5. The boundary map between edges and vertices of G (with respect to the orientation we used to define D) induces an isomorphism between Z^{E(G)}/(im_Z(D^T) ⊕ ker_Z(D)) and the vertex sandpile group of G.
We can also define an integral basis for ker_Z(D) in terms of the fundamental cocircuits C*_e for e ∈ T.
Choose the same orientation on G that we used for defining D. Each cocircuit of G corresponds to a minimal set of edges which partition the vertices of G into subsets V_1 and V_2. For e_i ∈ T and e_j ∈ (E(G) ∖ T) ∩ C*_{e_i}, we say that e_j matches the orientation of C*_{e_i} if e_i and e_j are both oriented from V_1 to V_2 or both oriented from V_2 to V_1. We define an (n − r) × n matrix D̄ in the following way: for j ∈ [r], the entry D̄_{ij} is 1 if e_{i+r} ∈ C*_{e_j} and e_{i+r} matches the orientation of C*_{e_j}, −1 if e_{i+r} ∈ C*_{e_j} and e_{i+r} does not match the orientation of C*_{e_j}, and 0 otherwise; the last n − r columns form I_{n−r}.
We show in [20, Lemma 4.5.12] that D̄ is the dual matrix of D, so our notation is consistent with the notation given in Definition 3.9.
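The construction above can be carried out mechanically: expressing each non-tree column of the (oriented) vertex-edge incidence matrix in terms of the tree columns recovers the non-identity block of D, up to the sign conventions above. The sketch below does this for a small assumed graph (a triangle; the graph, labels, and orientation are illustrative assumptions, not an example from the paper) and checks that the rows of D and D̄ are orthogonal.

```python
from fractions import Fraction

# Assumed toy graph: vertices 1,2,3; oriented edges e1: 1->2, e2: 2->3,
# e3: 1->3; spanning tree T = {e1, e2}.  Incidence columns (tail -1, head +1):
inc = {1: [-1, 1, 0], 2: [0, -1, 1], 3: [-1, 0, 1]}

def solve2(c1, c2, b):
    """Solve x*c1 + y*c2 = b (3-vectors) by Cramer's rule on the first two rows."""
    d = Fraction(c1[0] * c2[1] - c1[1] * c2[0])
    x = Fraction(b[0] * c2[1] - b[1] * c2[0]) / d
    y = Fraction(c1[0] * b[1] - c1[1] * b[0]) / d
    assert all(x * u + y * v == t for u, v, t in zip(c1, c2, b))
    return x, y

# The coefficients of the non-tree column record its fundamental circuit.
x, y = solve2(inc[1], inc[2], inc[3])
D = [[1, 0, int(x)], [0, 1, int(y)]]   # D = (I_r | M)
Dbar = [[-int(x), -int(y), 1]]         # dual matrix (-M^T | I_{n-r})

assert D == [[1, 0, 1], [0, 1, 1]]
# Rows of D are orthogonal to the row of Dbar, as Lemma 3.7 requires.
assert all(sum(a * b for a, b in zip(row, Dbar[0])) == 0 for row in D)
```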
Remark 4.6. The construction of D given above can be applied to any regular matroid, and a version of this construction was used in [2]. We can also generalize this definition to any cell complex with a torsion-free spanning forest or representable arithmetic matroid with at least one multiplicity one basis. See [20] for more discussion of this generalization.

A Tiling of R^n
For the remainder of this paper, we will always let D = (I_r | M) be an r × n standard representative matrix. Furthermore, we let D̄ = (−M^T | I_{n−r}) be the dual matrix of D and let D̂, obtained by stacking D on top of D̄, be the full matrix of D. Recall from Definition 3.13 that B(D) is the set of r-element subsets of the columns of D with nonzero determinant and that, for B ∈ B(D), m(B) is the magnitude of the corresponding determinant. In this section, we will associate each B ∈ B(D) with a lattice parallelepiped and then show that the non-convex polytope formed by their union periodically tiles R^n. In the next section, we will show how to use this tiling to construct a family of multijections.
We think of B ∈ B(D) as a set {k_1, . . . , k_r} of column indices. These simultaneously describe a set of columns of D, D̄, or D̂. Because we are working in R^n, it will be useful to allow for a version of the sandpile group whose representatives are real vectors.
Definition 5.1. The continuous sandpile group S̃(D) of D is the group R^n/im_Z(D̂^T). We will also make heavy use of the following lemma, which follows immediately from the definitions of the sandpile group and continuous sandpile group of a standard representative matrix.
Lemma 5.2. Let D be an r × n standard representative matrix. Two vectors z, z′ ∈ Z^n (resp. R^n) are equivalent as elements of S(D) (resp. S̃(D)) if and only if z − z′ ∈ im_Z(D̂^T).
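Assuming the relevant lattice is im_Z(D̂^T) for the full matrix D̂, this equivalence can be tested by solving D̂^T a = z − z′ and checking that the solution is integral. A sketch for a small hypothetical matrix (an illustration, not the paper's example), using Cramer's rule:

```python
# Hypothetical D = (I_2 | M) with M = (1, 2)^T; Dhat stacks D on its dual.
Dhat = [[1, 0, 1], [0, 1, 2], [-1, -2, 1]]
DhatT = [list(col) for col in zip(*Dhat)]   # transpose

def det3(A):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
            - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
            + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

def equivalent(z1, z2):
    """z1 ~ z2 iff Dhat^T a = z1 - z2 has an integer solution a (Cramer's rule)."""
    b = [u - v for u, v in zip(z1, z2)]
    d = det3(DhatT)
    for i in range(3):
        Ai = [row[:] for row in DhatT]
        for k in range(3):
            Ai[k][i] = b[k]
        if det3(Ai) % d != 0:       # a_i = det(Ai)/d must be an integer
            return False
    return True

assert equivalent((0, 0, 0), (0, -2, 2))   # they differ by Dhat^T (1, 0, 1)^T
assert not equivalent((0, 0, 0), (1, 0, 0))
```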
We introduce some definitions and notation that can be found in [4].
• The fundamental parallelepiped of a square matrix A with column vectors {x_1, . . . , x_k} is the set of points {Σ_{i=1}^k a_i x_i | 0 ≤ a_i ≤ 1 for all i}.
• The half-open fundamental parallelepiped of a square matrix A with column vectors {x_1, . . . , x_k} is the set of points {Σ_{i=1}^k a_i x_i | 0 ≤ a_i < 1 for all i}.
• P_1(B) is the fundamental parallelepiped of D restricted to the columns in B.
• P_2(B) is the fundamental parallelepiped of D̄ restricted to the columns not in B.
• P(B) is the direct product of P_1(B) and P_2(B). We can also describe P(B) in the following way. For each column of D̂, if this column corresponds to an index of B, replace the last n − r entries with 0's. If this column does not correspond to an index of B, replace the first r entries with 0's. The fundamental parallelepiped of the resulting matrix is P(B). See Example 5.6.
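The "zeroed-column" description of P(B) makes its volume easy to compute by machine: the determinant of the resulting mixed matrix should have magnitude m(B)^2 (this is the content of Lemma 5.5, cited below). A sketch for a hypothetical matrix (an illustration, not the matrix from the paper's examples):

```python
from itertools import combinations

# Hypothetical D = (I_2 | M) with M = (1, 2)^T and dual Dbar = (-M^T | I_1).
D = [[1, 0, 1], [0, 1, 2]]
Dbar = [[-1, -2, 1]]
n, r = 3, 2

def det(A):
    """Integer determinant by Laplace expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def mixed_matrix(B):
    """Column j keeps its D-entries if j is in B and its Dbar-entries otherwise."""
    cols = [[D[i][j] for i in range(r)] + [0] * (n - r) if j in B
            else [0] * r + [Dbar[i][j] for i in range(n - r)]
            for j in range(n)]
    return [list(row) for row in zip(*cols)]  # transpose back to row-major

for B in combinations(range(n), r):
    mB = abs(det([[D[i][j] for j in B] for i in range(r)]))
    assert abs(det(mixed_matrix(B))) == mB ** 2  # volume of P(B) is m(B)^2
```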
Example 5.6. Consider the matrix D from Example 3.15. As we saw there, B(D) has 3 bases, one for every pair of columns. The associated parallelepipeds are given below. Figure 2 plots the three parallelepipeds in 3-dimensional space. The cube is P({1, 2}), the smaller of the two remaining parallelepipeds is P({1, 3}), and the larger is P({2, 3}). We will see in Corollary 5.11 that the union of these parallelepipeds periodically tiles R^3.
See Figure 2 for a plot of these three parallelepipeds. Notice that they only intersect at their boundaries. We show that this is true in general.

Proposition 5.7. For distinct B_1, B_2 ∈ B(D), the interiors of P(B_1) and P(B_2) do not intersect.

Proof. P(B_1) and P(B_2) have intersecting interiors if and only if P_1(B_1) and P_1(B_2) have intersecting interiors and P_2(B_1) and P_2(B_2) have intersecting interiors. Assume that P_1(B_1) and P_1(B_2) have intersecting interiors. Then a common interior point can be written in terms of the columns indexed by B_1 with coefficients a_i ∈ (0, 1) (setting a_i = 0 for i ∉ B_1) and in terms of the columns indexed by B_2 with coefficients b_i ∈ (0, 1) (setting b_i = 0 for i ∉ B_2). If we subtract the second sum from the first and define z_i = a_i − b_i, the resulting equation implies that z = (z_1, . . . , z_n) ∈ ker_R(D). Similarly, if P_2(B_1) and P_2(B_2) have intersecting interiors, the analogous construction with coefficients ā_i and b̄_i produces a vector z̄ with z̄_i = ā_i − b̄_i lying in ker_R(D̄) = im_R(D^T), where the last equality follows from Lemma 3.11. Lemma 3.7 says that im_R(D^T) and ker_R(D) are orthogonal. This means that z · z̄ = 0. For each i, there are 4 possibilities: if i ∈ B_1 ∩ B_2, then z̄_i = 0; if i ∉ B_1 ∪ B_2, then z_i = 0; if i ∈ B_1 ∖ B_2, then z_i > 0 and z̄_i < 0; and if i ∈ B_2 ∖ B_1, then z_i < 0 and z̄_i > 0. B_1 and B_2 are the same size and distinct, so the two mixed cases must each occur at least once. This means that z · z̄ < 0. This is a contradiction.
Definition 5.8. T(D), the tile associated with D, is the union ⋃_{B ∈ B(D)} P(B). Corollary 5.11 will justify why we call this non-convex polyhedron a tile.
The following corollary follows directly from Lemma 5.5, which gives the size of each P(B), and Proposition 5.7, which says that they don't intersect.

Corollary 5.9. The volume of T(D) is Σ_{B ∈ B(D)} m(B)^2.

Note that this sum is also equal to |S(D)| by Theorem 1.1.
When considering all of T(D), we can strengthen Proposition 5.7 to the following:

Proposition 5.10. Two distinct points of T(D) that are equivalent as elements of S̃(D) must both lie on the boundary of T(D).

Proof. First, we show that two points of T(D) can only be equivalent as elements of S̃(D) if they are each on the boundary of some P(B). For some B_1, B_2 ∈ B(D), let p_1 and p_2 be interior points of P(B_1) and P(B_2) respectively. Using the notation and reasoning from Proposition 5.7, we can write the first r entries of p_1 − p_2 in terms of the coefficients a_i and b_i and the last n − r entries in terms of the coefficients ā_i and b̄_i. By Lemma 5.2, p_1 and p_2 are equivalent as elements of S̃(D) if and only if p_1 − p_2 ∈ im_Z(D̂^T). Let s_i be the restriction of the i-th row of D̂ to the first r entries and s̄_i be the restriction of the i-th row of D̂ to the last n − r entries. By the same logic that we used for Proposition 5.7, the coefficients coming from the first r entries form an element of ker_R(D), while the coefficients coming from the last n − r entries form an element of im_R(D^T). Lemma 3.7 again tells us that their dot product is 0. For each i, there are the same 4 possibilities as before. In the two mixed cases, the two factors have different signs and the product is negative; in the other two cases the product is 0. Hence the dot product can only be 0 if we are always in case 1 or case 4 and z_i = 0 for all i. However, if z_i = 0 for all i, then p_1 = p_2. Thus, our claim holds by contradiction.
We showed that two distinct points p_1 and p_2 of T(D) that are equivalent as elements of S̃(D) must each lie on the boundary of some P(B). We now show by contradiction that they are both on the boundary of T(D).
Assume that p_1 is an interior point of T(D). Since T(D) is the union of nondegenerate parallelepipeds, there is some vector w ∈ R^n such that for all sufficiently small ε > 0, p_2 + εw is in T(D) but not on the boundary of any P(B). If we make ε small enough, p_1 + εw must be in T(D) as well, since p_1 is an interior point of T(D) by assumption. Moreover, p_1 + εw and p_2 + εw are equivalent as elements of S̃(D). We get a contradiction because both points are in T(D), but p_2 + εw is not on the boundary of any P(B). This means that p_1 and p_2 must both be on the boundary of T(D).
The next corollary shows that copies of T(D) can be used to periodically tile R^n.
Corollary 5.11. The set of translates T(D) + D̂^T(z_1, . . . , z_n)^T for all (z_1, . . . , z_n) ∈ Z^n covers all of R^n, and the translates only intersect at their boundaries.
Proof. Consider any point p ∈ R^n. By Lemma 5.2, the points which are equivalent to p as elements of S̃(D) are those of the form p + D̂^T(z_1, . . . , z_n)^T for (z_1, . . . , z_n) ∈ Z^n. Since these are exactly the translations applied to T(D), the condition that the translates do not intersect except at their boundaries follows directly from Proposition 5.10.
We also have to show that the translates cover all of R^n, given that they do not overlap except at their boundaries. We first note that Π°(D̂^T), the half-open fundamental parallelepiped of D̂^T, must tile R^n under the same translations, because for every p ∈ R^n, there is a unique solution p′ to D̂^T p′ = p (in particular, p′ = (D̂^T)^{−1} p). We can map each point of T(D) to a point in Π°(D̂^T) by translating it by an integer combination of the columns of D̂^T. Let t be this piecewise translation from T(D) to Π°(D̂^T). Each translation preserves the volume of the region we transform, and the only overlap is from the boundary of T(D), which is a set of volume 0. It follows that the volume of the image of t is equal to the volume of T(D). Since the volume of Π°(D̂^T) also equals the volume of T(D), it suffices to show that t is surjective. Let p′ be a point of Π°(D̂^T) that is not in the image of t. The preimage of p′ is the collection of points in the same equivalence class with respect to S̃(D). By assumption, none of these points are in T(D). Since T(D) is closed, this means that none of these points are limit points of T(D) either, so there is a neighborhood of p′ that is also not in the image of t. However, this neighborhood must have positive volume, which is a contradiction.
Example 5.12. The simplest case is when r = 1 and n = 2. Here, D is of the form (1 k) for some integer k. When k = 3, we get the pattern in Figure 3.
Remark 5.13. Because our tiling is of n-dimensional space, it is difficult to present more complicated examples. However, in Section 7, we will show that we can take an r-dimensional or (n − r)-dimensional slice of our tiling and get many of the same results. This will allow us to present more interesting tilings of 2-dimensional space (see Figure 8).

Constructing the Sandpile to Basis Multijections
In order to define our multijections, we will need T(D) and an appropriate direction vector in R^n.
Definition 6.1. A shifting vector w = (w_1, . . . , w_n) of D is a vector in R^n that is not in the span of a facet of P(B) for any B ∈ B(D).
In Section 8, we will show that a choice of shifting vector is equivalent to a choice of chamber from a certain hyperplane arrangement. We use the same notation as in the previous section, and D is still an r × n standard representative matrix.
It will sometimes be useful to split our shifting vector into two smaller vectors. Consider the vectors w = (w_1, . . . , w_r) ∈ R^r and w̄ = (w̄_1, . . . , w̄_{n−r}) ∈ R^{n−r}. We write (w, w̄) for their concatenation, which is a vector in R^n.

Lemma 6.2. (w, w̄) is a shifting vector if and only if, for all B ∈ B(D), w is not in the span of any facet of P_1(B) and w̄ is not in the span of any facet of P_2(B).

Proof. By definition, a point is in P(B) if and only if it is in P_1(B) when restricted to the first r coordinates and in P_2(B) when restricted to the last n − r coordinates. The lemma follows from the fact that z + εw is (z_1, . . . , z_r) + ε(w_1, . . . , w_r) when restricted to the first r coordinates and (z̄_1, . . . , z̄_{n−r}) + ε(w̄_1, . . . , w̄_{n−r}) when restricted to the last n − r coordinates.

Definition 6.3. Let w = (w, w̄) be a shifting vector.
• A point z is a w-representative of S̃(D) (resp. S(D)) if z ∈ R^n (resp. Z^n) and, for all sufficiently small ε > 0, we have z + εw ∈ T(D).
• A point z is w-associated with B ∈ B(D) if, for all sufficiently small ε > 0, we have z + εw ∈ P(B).

Lemma 6.4. A point (v, v̄) is w-associated with B if and only if, for all sufficiently small ε > 0, we have v + εw ∈ P_1(B) and v̄ + εw̄ ∈ P_2(B).

Proof. For any ε > 0, the first r entries of (v, v̄) + εw are given by v + εw, and the last n − r entries are given by v̄ + εw̄. The lemma follows from the fact that P_1(B) is P(B) restricted to its first r coordinates while P_2(B) is P(B) restricted to its last n − r coordinates.

Lemma 6.5. Every w-representative is w-associated with a unique B ∈ B(D).

Proof. Since w-representatives of S(D) are also w-representatives of S̃(D), it suffices to prove the result for S̃(D). Let p be a w-representative of S̃(D). Because T(D) = ⋃_{B ∈ B(D)} P(B), we know that p + εw ∈ P(B) for some B ∈ B(D). Since w is not in the span of any facet of P(B), p + εw must be in the interior of P(B). By Proposition 5.7, this is true for a unique B.

Proposition 6.6. For any shifting vector w, there is exactly one w-representative in R^n for each equivalence class of S̃(D) and exactly one w-representative in Z^n for each equivalence class of S(D).
Proof. The second result is a direct corollary of the first (and could also be proven with an enumerative argument). By Corollary 5.11, every point p ∈ R^n lies on some translate of T(D) by an integer linear combination of the rows of D̂. We can translate this point to a point on T(D) without changing the equivalence class with respect to S̃(D). If p maps to an interior point p′ of T(D), then by Proposition 5.10, this is the unique point on T(D) that is equivalent to p. Furthermore, since p′ is in the interior of T(D), p′ is always a w-representative of S̃(D) regardless of w.
If p maps to a boundary point of T(D), then by Proposition 5.10, any point of T(D) that is in the same S̃(D) equivalence class must also lie on the boundary of T(D). Label these points as {p_1, . . . , p_k}. We need to show that exactly one of these points is a w-representative.
By the condition that w is not in the span of any facet of T(D), for all sufficiently small ε > 0, the point p_i + εw does not lie on the boundary of T(D) for any i. If p_i + εw and p_j + εw are both in T(D) for i ≠ j, then these are two distinct points in the interior of T(D) that are equivalent as elements of S̃(D). This is impossible by Proposition 5.10.
We have shown uniqueness, so we just need existence. Because w is not in the span of any facet of T(D), we can choose ε > 0 so that all points between p and p + εw map to interior points of T(D). Let p′ be the point mapped to by p + εw. Then, p′ − εw must be equivalent to p with respect to S̃(D). By our condition on ε, we see that this point is a w-representative.

Proposition 6.7. For any shifting vector w and any B ∈ B(D), there are exactly m(B)^2 integer points that are w-associated with B.

To prove this result, we apply the following lemma from Ehrhart theory:

Lemma 6.8. The number of integer points in an integer translate of the half-open fundamental parallelepiped of an integer matrix is equal to its volume.

Proof of Proposition 6.7. For some B ∈ B(D), let {x_{k_1}, . . . , x_{k_r}} be the columns of D corresponding to B. Decompose w into the pair (w, w̄) with w ∈ R^r and w̄ ∈ R^{n−r}. A point v ∈ P_1(B) can be written as v = Σ_{i=1}^r a_i x_{k_i} with 0 ≤ a_i ≤ 1 for all i. Because the x_{k_i} are linearly independent (otherwise B would not be a basis), there is a unique way to write w in the form w = Σ_{i=1}^r b_i x_{k_i} such that each b_i ∈ R. By Lemma 6.2, w is not in the span of any facet of P_1(B). This means that b_i ≠ 0 for all i ∈ [r]. For any ε ∈ R, we have v + εw = Σ_{i=1}^r (a_i + εb_i) x_{k_i}. From here, we see that v is w-associated with P_1(B) if and only if a_i ∈ (0, 1] for b_i < 0 and a_i ∈ [0, 1) for b_i > 0. This region is the integer translation of a half-open fundamental parallelepiped with volume equal to the volume of P_1(B). By an analogous line of reasoning, the points which are w̄-associated with P_2(B) form the integer translation of a half-open fundamental parallelepiped with volume equal to the volume of P_2(B). It follows that the set of points that are w-associated with B is the direct product of these two regions: the integer translate of a half-open parallelepiped with volume equal to the volume of P(B).
By Lemma 6.8, the number of integer points in this region is equal to this volume, and the integer translation does not change the number of integer points. Finally, by Lemma 5.5, the volume is m(B)^2, completing the proof.
We now define a function f̃_w from S̃(D) to B(D), given a shifting vector w. For any s ∈ S̃(D), we first take the w-representative z of s (which is unique by Proposition 6.6). Then, we let f̃_w(s) = B, where B is the w-associated basis of z (which is unique by Lemma 6.5).

Definition 6.9. f_w is f̃_w (as defined above) but with its domain restricted to S(D).
The following theorem is the main result of this paper.

Theorem 6.10. For any shifting vector w, the map f_w : S(D) → B(D) is a multijection such that |f_w^{−1}(B)| = m(B)^2 for every B ∈ B(D).

Example 6.11. Consider the matrix from Example 5.6 with the shifting vector w = (1, 1, 1), where each w-representative is shorthand for "the equivalence class of S(D) containing this w-representative". We can confirm that f_w is a multijection by noting that each basis B is w-associated with exactly m(B)^2 of the representatives. If we use a different shifting vector, some of our representatives may change; for example, this happens for w = (−1, 2, −2). Note that interior points of P(B) are always w-associated with B, but boundary points depend on the shifting vector.
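The theorem can be verified by brute force for small cases. The sketch below uses a hypothetical matrix (an illustration, not the matrix from Example 5.6) with shifting vector w = (1, 1, 1): it scans a box of integer points, keeps those that land in the interior of some P(B) after a small shift, and counts how many are associated with each basis.

```python
from fractions import Fraction
from itertools import combinations, product

# Hypothetical D = (I_2 | M), M = (1, 2)^T, with dual Dbar = (-1, -2, 1).
D = [[1, 0, 1], [0, 1, 2]]
Dbar = [[-1, -2, 1]]
n, r = 3, 2
w = (1, 1, 1)                  # a shifting vector for this particular D
eps = Fraction(1, 10 ** 6)     # "sufficiently small" for data of this size

def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def solve(A, b):
    """Exact 3x3 linear solve by Cramer's rule over the rationals."""
    d = Fraction(det(A))
    sols = []
    for i in range(3):
        Ai = [row[:] for row in A]
        for k in range(3):
            Ai[k][i] = b[k]
        sols.append(Fraction(det(Ai)) / d)
    return sols

def mixed_matrix(B):
    """The matrix whose fundamental parallelepiped is P(B)."""
    cols = [[D[i][j] for i in range(r)] + [0] * (n - r) if j in B
            else [0] * r + [Dbar[i][j] for i in range(n - r)]
            for j in range(n)]
    return [list(row) for row in zip(*cols)]

def associated_basis(z):
    """The basis B with z + eps*w in the interior of P(B), or None."""
    p = [zi + eps * wi for zi, wi in zip(z, w)]
    for B in combinations(range(n), r):
        a = solve(mixed_matrix(B), p)
        if all(0 < ai < 1 for ai in a):
            return B
    return None

counts = {}
for z in product(range(-4, 5), repeat=3):   # a box large enough to contain T(D)
    B = associated_basis(z)
    if B is not None:
        counts[B] = counts.get(B, 0) + 1

# Each basis receives m(B)^2 representatives: m = 1, 2, 1 for these bases.
assert counts == {(0, 1): 1, (0, 2): 4, (1, 2): 1}
```

The total count, 6, matches |S(D)| = |det(D̂)| for this matrix, as Theorem 1.1 predicts.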

Lower-Dimensional Representatives
In Section 5, we showed how to construct a tiling of R^n, and then in Section 6, we used this tiling to produce a set of representatives for S(D) (see Theorem 6.10). In this section, we show how to use the tiling of R^n to produce a tiling of R^r or R^{n−r} that also (given a shifting vector) produces a set of representatives of S(D). The representatives associated with the tiling of R^r all have zero in their last n − r entries, while the representatives associated with the tiling of R^{n−r} all have zero in their first r entries. However, even though the representatives of S(D) change, the multijection does not.
One benefit of this alternate construction is that it is often easier to work in lower-dimensional space. In particular, we are now able to produce a wide variety of tilings of R^2 (see Figure 8). With our original map, all tilings of R^2 were similar to the one given in Example 5.12.
The main tool we use in this section is the following lemma.

Lemma 7.1. Let z = (z_1, . . . , z_r, z̄_1, . . . , z̄_{n−r})^T ∈ Z^n. Then, z is equivalent, with respect to S(D), to the vector whose first r entries are given by (z_1, . . . , z_r)^T + M(z̄_1, . . . , z̄_{n−r})^T and whose last n − r entries are zero. z is also equivalent, with respect to S(D), to the vector whose first r entries are zero and whose last n − r entries are given by (z̄_1, . . . , z̄_{n−r})^T − M^T(z_1, . . . , z_r)^T.

We also introduce two alternative integral bases for S(D) which will be useful when working in lower dimensions.
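These reductions amount to subtracting D̄^T z̄ (which clears the last n − r entries, since D̄ = (−M^T | I_{n−r})) or subtracting D^T z′ (which clears the first r entries); in coordinates, the surviving entries are z′ + M z̄ and z̄ − M^T z′ respectively. A sketch for a hypothetical M (an illustration, not from the paper):

```python
# For D = (I_r | M) and Dbar = (-M^T | I_{n-r}), subtracting Dbar^T zbar from
# z = (z', zbar) clears the last n - r entries, and subtracting D^T z' clears
# the first r ones.  Hypothetical M with r = 2, n - r = 1:
M = [[1], [2]]
zp, zbar = [5, 7], [4]         # z = (5, 7, 4) split into its two halves

# Equivalent vector supported on the first r coordinates: z' + M * zbar.
first = [zp[i] + sum(M[i][k] * zbar[k] for k in range(len(zbar)))
         for i in range(len(zp))]
# Equivalent vector supported on the last n - r coordinates: zbar - M^T * z'.
last = [zbar[k] - sum(M[i][k] * zp[i] for i in range(len(zp)))
        for k in range(len(zbar))]

assert first == [9, 15] and last == [-15]
```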
Proof. Consider the following matrices:

Recall from Definition 5.4 that for any B ∈ B(D), we have parallelepipeds P_1(B), P_2(B), and P(B), where P(B) is the direct product of P_1(B) and P_2(B). Consider the vectors w = (w_1, . . . , w_r) ∈ R^r and w̄ = (w̄_1, . . . , w̄_{n−r}) ∈ R^{n−r}, and let w = (w, w̄). Recall from Lemma 6.2 that (w, w̄) is a shifting vector if w is not in the span of any facet of P_1(B) and w̄ is not in the span of any facet of P_2(B).
By a slight adjustment of Proposition 6.7, one can show that there are m(B) integer vectors w-associated with P_1(B) and m(B) integer vectors w̄-associated with P_2(B). We now show how to construct an r-dimensional tile and an (n − r)-dimensional tile. For both constructions, we use a standard representative matrix D and a shifting vector (w, w̄) = (w_1, . . . , w_r, w̄_1, . . . , w̄_{n−r}). Figure 6 gives an example of the (n − r)-dimensional tile.
The following theorem says that the r-dimensional and (n − r)-dimensional tiles have many properties similar to those of T(D). This is the main result of this section.

Proof. The general strategy for every part of this proof is to apply Lemma 7.1 to results from Section 6 about T(D). The first 2 parts follow from Proposition 5.7 and Lemma 7.1. For the next 2 parts, Proposition 7.2 implies that two R^n vectors that end with n − r zeros are equivalent if and only if their difference, when restricted to the first r entries, is in im_Z(DD^T). Similarly, two R^n vectors that begin with r zeros are equivalent if and only if their difference, when restricted to the last n − r entries, is in im_Z(D̄D̄^T). The results follow from this observation as well as Corollary 5.11 and Lemma 7.1. Finally, for the last 2 parts, the integer points we obtain are exactly the w-representatives of T(D) translated by Lemma 7.1 so that either the first r or the last n − r coordinates are 0. Thus, we can just apply Theorem 6.10.

In Example 5.6, we gave a perspective drawing of the 3-dimensional T(D). In Example 6.11, we gave the set of w-representatives when w = (1, 1, 1). Here, we will show how to construct the two lower-dimensional tiles and find a set of w-representatives for each of them.
To construct the r-dimensional tile, we first look at P_2(B) for each B ∈ B(D). Because n − r = 1, these are intervals. Then, we multiply each of these intervals by (3, 2)^T and shift P_1(B) by the resulting amounts. The resulting tile is given in Figure 4.
Finally, to find a set of representatives for S(D), we take all points (z_1, z_2) ∈ Z^2 such that for all sufficiently small ε > 0, the point (z_1, z_2) + ε(1, 1) lies in the r-dimensional tile (where the shifting vector (1, 1) comes from the first two entries of w). Let f_w be the map that sends S(D) to B(D) by mapping the lattice points in Figure 5 to the bases associated with the parallelograms they are shifted into. Note that these are the same representatives that we get if we apply the first part of Lemma 7.1 to the representatives we obtained in Example 6.11 with the same shifting vector.
We can also find a set of representatives by using the (n − r)-dimensional tiling of R. For each B ∈ B(D), we find the set of lattice points that are mapped into P_1(B) by the shifting vector (1, 1). Finally, to find a set of representatives for S(D), we take all points z such that for all sufficiently small ε > 0, the point z + ε(1) lies in the (n − r)-dimensional tile. Let f_(w,w̄) be the map that sends S(D) to B(D) by mapping the lattice points in Figure 7 to the bases associated with the intervals they are shifted into. Note that these are the same representatives that we get if we apply the second part of Lemma 7.1 to the representatives we obtained in Example 6.11 with the same shifting vector. Figure 8 gives some examples of tiles in R^2 computed using Sage. On the left is the tile, with different colors indicating different bases; on the right are 9 copies of the tile to show how the tiling works.

Shifting vectors and hyperplane arrangements
In this section, we associate classes of shifting vectors producing the same multijection with chambers of a hyperplane arrangement. We also show that for a shifting vector w, each basis is w-associated with a unique corner point. In the next section, we will show that each corner point is equivalent, with respect to S(D), to a {0, 1}^n vector.
Recall that a standard representative matrix is a matrix of the form (I_r | M).

Definition 8.1. For a positive integer k, a central hyperplane in R^k is a (k − 1)-dimensional linear subspace of R^k. An affine hyperplane is a translated central hyperplane. We use the blanket term hyperplane when we allow both central and affine hyperplanes. For a hyperplane H and vector v ∈ R^k, we define the affine hyperplane H + v := {h + v | h ∈ H}. A hyperplane arrangement H is a collection of hyperplanes in R^k. A chamber of a hyperplane arrangement H is a connected component of the complement R^k ∖ ⋃_{H ∈ H} H.

Let S be a subset of [n] and x_S be the corresponding columns of D. We write span(x_S) for the subspace of R^r generated over R by the vectors in x_S. Let rk(S) be the dimension of the space span(x_S). We will primarily be working with the case where rk(S) = r − 1, in which case span(x_S) is a central hyperplane in R^r. The hyperplane arrangement H(D) consists of all hyperplanes of the form span(x_S) with rk(S) = r − 1.
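When r = 2, the hyperplanes spanned by sets of columns with rk(S) = r − 1 = 1 are just the lines spanned by single columns of D, so checking that a candidate vector avoids all of them (i.e. lies in a chamber) reduces to a cross-product test. A sketch for a hypothetical 2 × 3 matrix (an illustration, not the paper's example):

```python
# Columns of a hypothetical D = (I_2 | M) with M = (1, 2)^T; for r = 2 the
# arrangement H(D) consists of the lines spanned by the individual columns.
cols = [(1, 0), (0, 1), (1, 2)]

def in_chamber(w):
    """w lies in a chamber of H(D) iff it is not parallel to any column."""
    return all(a * w[1] - b * w[0] != 0 for a, b in cols)

assert in_chamber((1, 1))        # avoids all three lines
assert not in_chamber((2, 4))    # lies on the line spanned by (1, 2)
```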
Lemma 8.3. For any B ∈ B(D), a point p ∈ R^r lies in P_1(B) if and only if, for every x ∈ x_B, p lies between the parallel hyperplanes span(x_B ∖ x) and span(x_B ∖ x) + x.

Proof. Let x_B = {x_{k_1}, . . . , x_{k_r}}. Since B is a basis, we can write any point p ∈ R^r uniquely in the form

p = a_1 x_{k_1} + · · · + a_r x_{k_r},

with each a_i ∈ R.

Algebraic Combinatorics, draft (28th April 2021)
For each x_{k_i} ∈ x_B, span(x_B ∖ x_{k_i}) and span(x_B ∖ x_{k_i}) + x_{k_i} are parallel hyperplanes. Furthermore, for x_{k_j} ∈ x_B with j ≠ i, the vector x_{k_j} is parallel to both hyperplanes. This means that we can determine whether or not p is between span(x_B ∖ x_{k_i}) and span(x_B ∖ x_{k_i}) + x_{k_i} by considering only a_i. If a_i = 0, then p lies on the first hyperplane, while if a_i = 1, then p lies on the second hyperplane. It follows that p lies between the two hyperplanes precisely when 0 ≤ a_i ≤ 1. Since this is true for every i, we conclude that p lies in the region bounded by the hyperplanes precisely when 0 ≤ a_i ≤ 1 for all i ∈ [r]. This is the same condition that determines whether or not p ∈ P_1(B).
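As a quick sanity check of this membership criterion, the coordinates a_i can be computed exactly with Cramer's rule. The sketch below uses a hypothetical standard representative matrix D = (I_2 | M) with M = [[1],[2]] and the basis B = {1, 3}; these numbers are an illustration only, not an example from this paper.

```python
from fractions import Fraction

# Hypothetical basis columns x_B for B = {1, 3}: x_1 = (1, 0), x_3 = (1, 2).
x1, x3 = (1, 0), (1, 2)

def coords(p):
    """Coordinates (a_1, a_2) with p = a_1*x1 + a_2*x3, via Cramer's rule."""
    det = x1[0] * x3[1] - x1[1] * x3[0]      # det = 2, the multiplicity m(B)
    a1 = Fraction(p[0] * x3[1] - p[1] * x3[0], det)
    a2 = Fraction(x1[0] * p[1] - x1[1] * p[0], det)
    return a1, a2

def in_P1(p):
    """By the lemma, p lies in P_1(B) exactly when 0 <= a_i <= 1 for all i."""
    return all(0 <= a <= 1 for a in coords(p))

print(in_P1((1, 1)))   # True: (1,1) = (1/2)*x1 + (1/2)*x3
print(in_P1((3, 1)))   # False: a_1 = 5/2 lies outside [0, 1]
```

Because the computation uses exact rationals, boundary cases such as a_i = 0 or a_i = 1 (that is, points on the bounding hyperplanes) are decided exactly.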
Definition 8.4. Fix some B ∈ B(D). Let φ_B be the map from P_1(B) × x_B to {0, 1, 2} defined in the following way:

φ_B(p, x_{k_i}) = 0 if p ∈ span(x_B ∖ x_{k_i}), 1 if p ∈ span(x_B ∖ x_{k_i}) + x_{k_i}, and 2 otherwise.

This map is well-defined since a point cannot lie in two parallel hyperplanes.

A corner point of P_1(B) is a point p ∈ P_1(B) such that φ_B(p, x) ≠ 2 for all x ∈ x_B; equivalently, a point of the form

p = Σ_{i ∈ S} x_{k_i}

for some S ⊆ [r]. Since each x_{k_i} is in Z^r, the point p is also in Z^r.
We recover analogous results and definitions when we replace D with D̃, B(D) with B(D̃), r with n − r, and P_1(B) with P_2(B). In particular, we get a hyperplane arrangement H(D̃) whose hyperplanes are spanned by sets of n − r − 1 columns of D̃. Corner points of P_2(B) are defined analogously to corner points of P_1(B).
Definition 8.7. A corner point of P(B) is a vector in Z^n whose first r entries form a corner point of P_1(B) and whose last n − r entries form a corner point of P_2(B).
Consider the vectors w = (w_1, . . . , w_r) ∈ R^r and w̃ = (w̃_1, . . . , w̃_{n−r}) ∈ R^{n−r}. We write (w, w̃) for their concatenation, which is a vector in R^n.
Recall from Definition 6.1 that (w, w̃) is a shifting vector if and only if, for all B ∈ B(D), (w, w̃) is not in the span of any facet of P(B). By Lemma 6.2, this is equivalent to the condition that for all B ∈ B(D), w is not in the span of any facet of P_1(B) and w̃ is not in the span of any facet of P_2([n] ∖ B).

Lemma 8.8. The pair (w, w̃) is a shifting vector if and only if w does not lie on any hyperplane of H(D) and w̃ does not lie on any hyperplane of H(D̃).

Proof. By Lemma 8.3, each facet of P_1(B) is contained in the hyperplane span(x_B ∖ x) for some x ∈ x_B (or its translation). Furthermore, every hyperplane of this form is the span of a facet of P_1(B). It follows that w satisfies the conditions for a shifting vector if and only if w does not lie in any of the hyperplanes span(x_B ∖ x) for B ∈ B(D) and x ∈ x_B. We claim that these are exactly the hyperplanes that make up H(D). This is true because x_B ∖ x is always a set of r − 1 linearly independent columns of D, and every set of r − 1 linearly independent columns of D can be extended to form a basis. It is analogous to show that the spans of the facets of P_2([n] ∖ B) over all B ∈ B(D) correspond to the hyperplanes in H(D̃). The lemma follows.
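Lemma 8.8 turns the shifting-vector condition into a finite check. Below is a small Python sketch for a hypothetical 2 × 3 matrix D (so r − 1 = 1, and the hyperplanes of H(D) are the lines spanned by single nonzero columns); the matrix is an assumption for illustration only.

```python
# Hypothetical D = (I_2 | M) with M = [[1], [2]]; columns x_1=(1,0), x_2=(0,1), x_3=(1,2).
D = [[1, 0, 1],
     [0, 1, 2]]

def avoids_H_D(w):
    """w lies in a chamber of H(D) iff it lies on none of its hyperplanes.
    Here each hyperplane is the span of one nonzero column, so we test collinearity."""
    for j in range(len(D[0])):
        x = (D[0][j], D[1][j])
        if x != (0, 0) and x[0] * w[1] - x[1] * w[0] == 0:
            return False   # w lies on span(x_j)
    return True

print(avoids_H_D((1, 1)))   # True: (1,1) is collinear with no column
print(avoids_H_D((2, 4)))   # False: (2,4) lies on span(x_3) = span((1,2))
```

For the dual side of this toy example, n − r − 1 = 0, so H(D̃) consists of the single hyperplane {0} ⊂ R^1, and the condition on w̃ is simply w̃ ≠ 0.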
From Lemma 8.8, we see that if (w, w̃) is a shifting vector, then w must lie in a chamber of H(D) and w̃ must lie in a chamber of H(D̃).

Let B ∈ B(D), z = (z_1, . . . , z_r) ∈ Z^r, and z̃ = (z̃_1, . . . , z̃_{n−r}) ∈ Z^{n−r}. Recall from Definition 6.3 that for any v ∈ R^r (resp. ṽ ∈ R^{n−r}), v (resp. ṽ) is w-associated (resp. w̃-associated) with B if v + εw ∈ P_1(B) (resp. ṽ + εw̃ ∈ P_2(B)) for all sufficiently small ε > 0.

Proposition 8.9. For any shifting vector (w, w̃) and any choice of B ∈ B(D), there is a unique corner point of P_1(B) that is w-associated with B, a unique corner point of P_2(B) that is w̃-associated with B, and a unique corner point of P(B) that is (w, w̃)-associated with B.
Proof. The proof of this Proposition is similar to the proof of Proposition 6.7.
Let {x_{k_1}, . . . , x_{k_r}} be the columns of D corresponding to B. An integer point z ∈ P_1(B) can be written as

z = a_1 x_{k_1} + · · · + a_r x_{k_r}

with 0 ≤ a_i ≤ 1 for all i. Because the x_{k_i} are linearly independent (otherwise B would not be a basis), there is a unique way to write w in the form

w = b_1 x_{k_1} + · · · + b_r x_{k_r}

for b_i ∈ R. Because (w, w̃) is a shifting vector, b_i ≠ 0 for all i ∈ [r]. For any ε ∈ R, we have

z + εw = (a_1 + εb_1) x_{k_1} + · · · + (a_r + εb_r) x_{k_r}.

From here, we see that z is w-associated with B if and only if 0 < a_i ≤ 1 whenever b_i < 0 and 0 ≤ a_i < 1 whenever b_i > 0. Furthermore, z can only be a corner point if a_i ∈ {0, 1} for all i. Thus, the unique corner point w-associated with B is given by taking a_i = 0 for b_i > 0 and a_i = 1 for b_i < 0.
The proof is analogous for P_2(B) and w̃. From here, the fact that P(B) has a unique (w, w̃)-associated corner point follows from Lemma 6.4.

Definition 8.10. Two shifting vectors (w, w̃) and (w′, w̃′) are equivalent if w and w′ lie in the same chamber of H(D) and w̃ and w̃′ lie in the same chamber of H(D̃).

Proposition 8.11. Let (w, w̃) and (w′, w̃′) be shifting vectors. The following are equivalent:
(1) (w, w̃) and (w′, w̃′) are equivalent (in the sense of Definition 8.10).
(2) For every B ∈ B(D), the lattice points (w, w̃)-associated to B and the lattice points (w′, w̃′)-associated to B coincide.

In particular, the number of distinct sandpile multijections is at most the number of pairs of chambers of H(D) and H(D̃). This quantity is known to depend only on the oriented matroid represented by D (not on basis multiplicities) and can be calculated using Zaslavsky's Theorem (see [25]). Chambers of hyperplane arrangements are also what Backman, Baker, and Yuen use to define their bijections when restricting to regular matroids.

Corner points as {0, 1}^n vectors

Let D be a standard representative matrix and (w, w̃) be a shifting vector. We showed in Proposition 8.9 that there is a unique corner point of P_1(B) that is w-associated with B and a unique corner point of P_2(B) that is w̃-associated with B. Using ideas from the proof of Proposition 8.9, we can explicitly construct this corner point. We can also construct a {0,1}^n vector that is in the same sandpile group equivalence class as this corner point.
Let x_B = {x_{k_1}, . . . , x_{k_r}} be the columns of D corresponding to B and x̃_B̃ = {x̃_{k_1}, . . . , x̃_{k_{n−r}}} be the columns of D̃ corresponding to B̃ = E ∖ B. Because B is a basis, [x_{k_1} · · · x_{k_r}] and [x̃_{k_1} · · · x̃_{k_{n−r}}] are both invertible matrices. It follows that there is a unique vector a = (a_{k_1}, . . . , a_{k_r}) such that [x_{k_1} · · · x_{k_r}] a^T = w^T and a unique vector ã = (ã_{k_1}, . . . , ã_{k_{n−r}}) such that [x̃_{k_1} · · · x̃_{k_{n−r}}] ã^T = w̃^T. The shifting vector condition tells us that every entry of a and every entry of ã is nonzero.
Let v ∈ Z^r be the sum

v = Σ_{i : a_{k_i} < 0} x_{k_i},

and let ṽ ∈ Z^{n−r} be the sum

ṽ = Σ_{i : ã_{k_i} < 0} x̃_{k_i}.

Let p_{(B,(w,w̃))} be the concatenation (v, ṽ) ∈ Z^n. The following lemma is immediate from the proof of Proposition 8.9.
Lemma 9.1. p_{(B,(w,w̃))} is the unique corner point of P(B) that is (w, w̃)-associated with B.
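Lemma 9.1 makes the corner point directly computable from sign data. The following Python sketch carries this out for a hypothetical D = (I_2 | M) with M = [[1],[2]], D̃ = (−M^T | I_1), and B = {1, 3}; the matrices and the sign convention for D̃ are assumptions for illustration, not taken from this paper.

```python
from fractions import Fraction

# Assumed data: x_B = {x_1, x_3} for D = (I_2 | M), and the column of
# Dtil indexed by the complement {2} is (-2).
x1, x3 = (1, 0), (1, 2)

def corner(w, wtil):
    """p_{(B,(w,wtil))}: sum the columns whose coefficient in the
    expansion of w (resp. wtil) is negative, then concatenate."""
    det = x1[0] * x3[1] - x1[1] * x3[0]
    a1 = Fraction(w[0] * x3[1] - w[1] * x3[0], det)   # w = a1*x1 + a3*x3
    a3 = Fraction(x1[0] * w[1] - x1[1] * w[0], det)
    v = [0, 0]
    for a, x in ((a1, x1), (a3, x3)):
        if a < 0:
            v[0] += x[0]
            v[1] += x[1]
    atil = Fraction(wtil, -2)                         # wtil = atil * (-2)
    vtil = -2 if atil < 0 else 0
    return (v[0], v[1], vtil)

print(corner((1, -1), 1))   # a3 and atil negative: p = (x_3, -2) = (1, 2, -2)
print(corner((1, 1), -1))   # all coefficients positive: p = (0, 0, 0)
```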
We also construct the following point in {0,1}^n, which we call p̄_{(B,(w,w̃))}.
The i-th entry of p̄_{(B,(w,w̃))} is:
0 if i ∈ B and a_i > 0, or if i ∉ B and ã_i > 0;
1 if i ∈ B and a_i < 0, or if i ∉ B and ã_i < 0.
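Since only the signs of a and ã matter, this {0,1}^n vector is easy to compute. The Python sketch below uses toy data as an assumption (D = (I_2 | M) with M = [[1],[2]], D̃ = (−M^T | I_1), B = {1, 3}, so the complement is {2}); it is illustrative only.

```python
from fractions import Fraction

# Assumed toy data: columns x_1, x_3 of D for B = {1, 3}; the column of
# Dtil indexed by 2 (the complement of B) is (-2).
x1, x3 = (1, 0), (1, 2)

def pbar(w, wtil):
    """Entry i of the {0,1}^3 vector is 1 exactly when the coefficient
    a_i (for i in B) or atil_i (for i not in B) is negative."""
    det = x1[0] * x3[1] - x1[1] * x3[0]
    a1 = Fraction(w[0] * x3[1] - w[1] * x3[0], det)   # coefficient of x_1
    a3 = Fraction(x1[0] * w[1] - x1[1] * w[0], det)   # coefficient of x_3
    atil2 = Fraction(wtil, -2)                        # coefficient for index 2
    return (int(a1 < 0), int(atil2 < 0), int(a3 < 0))

print(pbar((1, -1), 1))   # a_3 < 0 and atil_2 < 0 give (0, 1, 1)
```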
Proposition 9.2. The points p_{(B,(w,w̃))} and p̄_{(B,(w,w̃))} are in the same equivalence class of S(D).

Proof. In the construction of p_{(B,(w,w̃))}, when we add x_{k_i} for k_i ≤ r, this adds 1 to the k_i-th coordinate. When we add x_{k_i} for k_i > r, we can subsequently add the (k_i − r)-th row of D̃ without changing the equivalence class in S(D). The net effect is that we add 1 to the k_i-th coordinate. Similarly, when we add x̃_{k_i} for k_i > r, this adds 1 to the k_i-th coordinate. When we add x̃_{k_i} for k_i ≤ r, we can subsequently add the k_i-th row of D, and the net effect is that we add 1 to the k_i-th coordinate. This procedure adds rows of D and D̃ to p_{(B,(w,w̃))} and produces the point p̄_{(B,(w,w̃))}.

Corollary 9.3. Let B ∈ B(D) and z be a {0,1}^n vector. There is a choice of shifting vector w such that the corner point of P(B) that is w-associated to B is equivalent to z with respect to S(D).
Proof. We can choose almost any a ∈ R^r and ã ∈ R^{n−r} with the correct sign pattern such that p̄_{(B,(w,w̃))} = z. The only restriction is that we need to make sure that (w, w̃) is not in the span of any facet, but these exceptions form a set of measure 0. We can always convert to a shifting vector without affecting the sign pattern of a or ã (since we already require these vectors to contain no zeros).
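The row-adding procedure behind Proposition 9.2 can be checked numerically on a toy example. The sketch below assumes D = (I_2 | M) with M = [[1],[2]], D̃ = (−M^T | I_1), and that equivalence in S(D) is modulo integer combinations of the rows of D and D̃; all specific vectors come from this assumed example with B = {1, 3}, w = (1, −1), w̃ = (1), not from the paper.

```python
# Rows of the assumed D followed by the row of the assumed Dtil.
rows = [(1, 0, 1), (0, 1, 2), (-1, -2, 1)]
p  = (1, 2, -2)   # corner point p_{(B,(w,wtil))} in this toy example
pb = (0, 1, 1)    # the corresponding {0,1}^3 vector pbar

# Following the proof: add row 2 of D (for the Dtil-column indexed by 2 <= r)
# and the row of Dtil (for the D-column x_3 with index 3 > r).
c = (0, 1, 1)     # integer coefficients on the rows above
shifted = tuple(p[i] + sum(cj * row[i] for cj, row in zip(c, rows))
                for i in range(3))
print(shifted == pb)   # True: p and pbar differ by rows of D and Dtil
```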
Remark 9.4. Consider the r-dimensional zonotope Z_D formed by the Minkowski sum of the columns of D. Every {0,1}^n vector z is associated with the point D · z^T. It follows that for every B ∈ B(D), the point D · p̄^T_{(B,(w,w̃))} is inside of Z_D. In [2], the authors use a zonotopal tiling argument to show that each p̄_{(B,(w,w̃))} is in a different equivalence class of S(D). Proposition 9.2 (along with results from Section 6) gives an alternative proof of this fact.

Further Questions
The main purpose of our map was to associate each equivalence class of the sandpile group to a basis. However, in constructing this map, we also give a representative for each equivalence class. In particular, this is the set of w-representatives.
Question 10.1. What are some properties of the w-representatives that we get from different choices of distinguished basis or shifting vector? Are they generalizations of any known sets of representatives of the graphical sandpile group (such as superstable or critical configurations)? What about the lower dimensional representatives from Section 7?
In [20, Chapter 9], the multijections in this paper are generalized to a larger class of objects. However, the sandpile group must be replaced with its Pontryagin dual. Note that the Pontryagin dual of the cokernel of a lattice generated by the rows of a matrix is the cokernel of the lattice generated by its columns. In the case of standard representative matrices, the sandpile group is canonically isomorphic to its Pontryagin dual. In general, the groups are isomorphic, but these isomorphisms are non-canonical.
Question 10.2. What are some properties of this Pontryagin dual sandpile group and why does it allow for more natural multijections?
In this paper, we focus on standard representative matrices, but the ideas can naturally be restated in terms of representable arithmetic matroids (more precisely, orientable arithmetic matroids with the strong GCD property), which is the framework used in [20]. However, it is essential for our definition that these matroids are representable.
Question 10.3. Is there a reasonable way to define the sandpile group of some class of non-representable matroids?