On symmetric association schemes and associated quotient-polynomial graphs

Let $\Gamma$ denote an undirected, connected, regular graph with vertex set $X$, adjacency matrix $A$, and ${d+1}$ distinct eigenvalues. Let ${\mathcal A}={\mathcal A}(\Gamma)$ denote the subalgebra of Mat$_X({\mathbb C})$ generated by $A$. We refer to ${\mathcal A}$ as the {\it adjacency algebra} of $\Gamma$. In this paper we investigate the algebraic and combinatorial structure of $\Gamma$ when the adjacency algebra ${\mathcal A}$ is closed under Hadamard multiplication. In particular, under this simple assumption, we show the following: (i) ${\mathcal A}$ has a standard basis $\{I,F_1,\ldots,F_d\}$; (ii) for every vertex there exists an identical distance-faithful intersection diagram of $\Gamma$ with $d+1$ cells; (iii) the graph $\Gamma$ is quotient-polynomial; and (iv) if we pick $F\in \{I,F_1,\ldots,F_d\}$, then $F$ has $d+1$ distinct eigenvalues if and only if span$\{I,F_1,\ldots,F_d\}=$span$\{I,F,\ldots,F^d\}$. We describe the combinatorial structure of quotient-polynomial graphs with diameter $2$ and $4$ distinct eigenvalues. As a consequence of the techniques used in the paper, we give an algorithm which computes the number of distinct eigenvalues of any Hermitian matrix using only elementary operations. When such a matrix is the adjacency matrix of a graph $\Gamma$, a simple variation of the algorithm allows us to decide whether $\Gamma$ is distance-regular or not. In this context, we also propose an algorithm to find which distance-$i$ matrices are polynomial in $A$, and to compute these polynomials.


Introduction
A matrix algebra is a vector space of matrices which is closed with respect to matrix multiplication. Let X denote a finite set and Mat_X(C) the set of complex square matrices with rows and columns indexed by X (this full algebra is also denoted C^{|X|×|X|}). The subalgebras of Mat_X(C) that are closed under (elementwise) Hadamard multiplication and contain the all-ones matrix J are known as coherent algebras. The concept was developed independently by Weisfeiler and Lehman in [71] and by Higman in [33,34]. A good introduction to the topic may be found in [41]. In the literature, a rich theory has been built up around this concept, and much more can be found in [37,38,40,59,60,61,62,70]. It is well known that every coherent algebra C is semisimple (see, for example, [31, Section 2]) and that it has a standard basis {N_0, N_1, . . . , N_r} consisting of the primitive idempotents of C viewed as a subalgebra of Mat_X(C) with respect to Hadamard multiplication (see [34]). Each basis matrix N_i of a coherent algebra C = ⟨N_0, N_1, . . . , N_r⟩ can be regarded as the adjacency matrix A_i = A(Γ_i) of a graph Γ_i = (X, R_i). Then Γ_i and R_i are called a basis graph and a basis relation, respectively, of the coherent algebra C. The basis relations of a coherent algebra give rise to a coherent configuration in the sense of [33].
A special subfamily of coherent configurations are commutative association schemes, also known as homogeneous coherent configurations [21]. Let R = {R_0, R_1, . . . , R_n} denote a set of nonempty subsets of X × X. For each i, let A_i ∈ Mat_X(C) denote the adjacency matrix of the (in general, directed) graph (X, R_i). The pair (X, R) is an association scheme with n classes if
(AS1) A_0 = I, the identity matrix.
(AS2) A_0 + A_1 + · · · + A_n = J, the all-ones matrix.
(AS3) For every i ∈ {0, 1, . . . , n} there exists i′ ∈ {0, 1, . . . , n} such that A_i^⊤ = A_{i′}.
(AS4) A_iA_j is a linear combination of A_0, A_1, . . . , A_n for 0 ≤ i, j ≤ n.
By (AS1) and (AS4) the vector space M spanned by the set {A_0, A_1, . . . , A_n} is an algebra; this is the Bose-Mesner algebra of (X, R). We say that (X, R) is commutative if M is commutative, and that (X, R) is symmetric if the matrices A_i are symmetric. A symmetric association scheme is commutative. The concept of (symmetric) association schemes can also be viewed as a purely combinatorial generalization of the concept of finite transitive permutation groups (famously described as "group theory without groups" [5]). The Bose-Mesner algebra was introduced in [7], and the monumental thesis of Delsarte [17] proclaimed the importance of commutative association schemes as a unifying framework for coding theory and design theory. There are a number of excellent articles and textbooks on the theory of (commutative) association schemes and Delsarte's theory; see, for instance, [4,8,18,22,39,50]. The following are some of the books which include accounts of commutative association schemes: [10,32,48,43]. As an example of a commutative association scheme, let Γ denote a distance-regular graph of diameter D. It is well known (and not hard to prove) that the vector space spanned by the distance-i matrices {A_0, A_1, . . . , A_D} of Γ is closed under both ordinary multiplication (A, B) → AB and Hadamard multiplication (A, B) → A • B (see, for example, [5, Chapter III] or [8, Chapter 4]). This is one of the main reasons why the theory of distance-regular graphs is so rich in the study of algebraic and combinatorial structures.
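As a small illustration of this double closure (a pure-Python sketch; the choice of the 5-cycle C5, a distance-regular graph of diameter 2, is ours), one can verify both products directly on the distance matrices:

```python
# Distance matrices of the 5-cycle C5: A0 = I, A1 = adjacency, A2.
# We check that they span a space closed under both the ordinary and
# the Hadamard product, as stated for distance-regular graphs.

n = 5
def dist(u, v):                      # graph distance in the cycle C5
    d = abs(u - v) % n
    return min(d, n - d)

A = [[[1 if dist(u, v) == i else 0 for v in range(n)] for u in range(n)]
     for i in range(3)]

def matmul(M, N):
    return [[sum(M[u][w] * N[w][v] for w in range(n)) for v in range(n)]
            for u in range(n)]

def lincomb(coeffs):                 # coeffs[0]*A0 + coeffs[1]*A1 + coeffs[2]*A2
    return [[sum(c * A[i][u][v] for i, c in enumerate(coeffs))
             for v in range(n)] for u in range(n)]

# ordinary product: A1·A1 = 2·A0 + A2, so the product stays in the span
assert matmul(A[1], A[1]) == lincomb([2, 0, 1])
# Hadamard product: the distance matrices are pairwise disjoint (0,1)-matrices
had = [[A[1][u][v] * A[2][u][v] for v in range(n)] for u in range(n)]
assert had == lincomb([0, 0, 0])
```

Here A1·A1 = 2A0 + A2 reflects the intersection numbers of C5, and the disjointness of the distance matrices makes the Hadamard closure immediate.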
In this paper we consider the following problem (we always assume that our graphs are finite, simple, and connected; see Section 2 for formal definitions). Problem 1.1 Let Γ denote a regular graph with vertex set X. Using the algebraic or combinatorial structure of Γ, is it possible to find a set {F 0 , F 1 , . . . , F d } ⊂ Mat X (C) (for some d ∈ N) such that the following hold?
(i) The F_i are mutually disjoint (0, 1)-matrices, that is, F_i • F_j = δ_ij F_i (0 ≤ i, j ≤ d).
(ii) There exists Ω ⊆ {0, 1, . . . , d} such that Σ_{α∈Ω} F_α = I, the identity matrix.
(iii) F_0 + F_1 + · · · + F_d = J, the all-ones matrix.
(iv) For every i there exists j such that F_i^⊤ = F_j (0 ≤ i, j ≤ d).
(v) span{F_0, F_1, . . . , F_d} is closed under ordinary matrix multiplication.
(vi) span{F_0, F_1, . . . , F_d} = span{I, F, . . . , F^d} for some F ∈ span{F_0, F_1, . . . , F_d}.
A basis {F_0, F_1, . . . , F_d} of some subalgebra C of the matrix algebra Mat_X(C) for which properties (i)-(v) of Problem 1.1 hold is known as the standard basis of C. As a consequence of property (v) we have that F_iF_j is a linear combination of F_0, F_1, . . . , F_d (0 ≤ i, j ≤ d). Our main results are Theorems 1.1-1.6 below.
Note the similarity between [8, Theorem 2.6.1] and our Theorem 1.1. As a consequence of Theorem 1.1, we see that if the adjacency algebra A of Γ is closed under Hadamard multiplication then it produces a symmetric association scheme. Property (ii) of Theorem 1.1 tells us that if we want to get property (ii) of Problem 1.1 for |Ω| > 1, we should consider a directed graph Γ. By Theorem 1.1(iv), we also need a directed graph to get non-symmetric F_i's. Using the technique from the proof of Theorem 1.1, in Subsection 3.1 we give an algorithm which yields the number of distinct eigenvalues of A without computing them.
The next question we want to answer is: what is the combinatorial structure of Γ for which the vector space A = span{I, A, . . . , A^d} is closed under Hadamard multiplication? Theorem 1.2 Let Γ denote a regular graph with d + 1 distinct eigenvalues. If the vector space A = span{I, A, . . . , A^d} is closed under Hadamard multiplication, then, for every vertex x, there exists an x-distance-faithful intersection diagram with d + 1 cells. Moreover, this intersection diagram is the same around every vertex.
For the converse of Theorem 1.2, see Theorem 1.3. The first author in [26] defined quotient-polynomial graphs as graphs for which the adjacency matrices of a walk-regular partition belong to the adjacency algebra A. In the same paper some combinatorial properties of these graphs were studied. In Section 5 we recall some old, and prove some new, properties of quotient-polynomial graphs. We also consider graphs which have the same distance-faithful intersection diagram around every vertex, and we propose a method for deciding whether their distance-i matrices A_i are polynomial in A. Theorem 1.3 Let Γ denote a graph with vertex set X and x-distance-faithful intersection diagram π_x, and assume that π_x has r + 1 cells P_i with P_0 = {x}: π_x = {P_0, P_1, . . . , P_r}. Let w_ij denote the number of i-walks (0 ≤ i ≤ r) from y to x for any y ∈ P_j (0 ≤ j ≤ r), and let P = [w_ij]_{0≤i,j≤r} denote the (r + 1) × (r + 1) matrix with entries w_ij. If Γ has the same x-distance-faithful intersection diagram around every x ∈ X, then Γ has exactly rank(P) distinct eigenvalues. Moreover, if rank(P) = r + 1 then Γ is a quotient-polynomial graph.
For the moment assume that Γ is a distance-regular graph with diameter D. Note that the intersection diagram of a distance partition around x of Γ has D + 1 cells and is the same for every x ∈ X (moreover, it is x-distance-faithful). So, as an immediate corollary of Theorem 1.3, the number of distinct eigenvalues of a distance-regular graph Γ is at most D + 1. Also note that the nonnegative integers w_ij from Theorem 1.3 can be computed from the x-distance-faithful intersection diagram.
In Theorem 1.4 we establish a connection between the structure of Γ and Problem 1.1 (see Section 5).
In Theorem 1.5 we consider quotient-polynomial graphs with diameter 2 and 4 distinct eigenvalues. Quotient-polynomial graphs with diameter 2 and 3 distinct eigenvalues are known as strongly-regular graphs. Theorem 1.5 Let Γ denote a regular connected graph with diameter 2 and 4 distinct eigenvalues. Then the vector space A = span{I, A, A^2, A^3} is closed under Hadamard multiplication if and only if either (i) or (ii) below holds.
(i) Any two nonadjacent vertices have a constant number of common neighbours, and the number of common neighbours of any two adjacent vertices takes precisely two values.
(ii) Any two adjacent vertices have a constant number of common neighbours, and the number of common neighbours of any two nonadjacent vertices takes precisely two values.
Note that Theorems 1.1, 1.4 and 1.6 give a solution of Problem 1.1. The paper is organized as follows: in Section 2 we recall some notation and definitions. In Section 3 we prove Theorem 1.1; in Subsection 3.1 we give an algorithm which computes the number of distinct eigenvalues of a Hermitian matrix without computing them, and in Subsection 3.2 we propose a simple algorithm to check distance-regularity. In Section 4 we prove Theorem 1.2. In Section 5 we re-prove some old and obtain some new results about quotient-polynomial graphs, and we prove Theorem 1.3. In Subsection 5.1 we give an algorithm which computes the polynomial p_i(t) such that A_i = p_i(A) (if such a polynomial exists). In Section 6 we prove Theorems 1.4 and 1.5. In Section 7 we prove Theorem 1.6. Finally, in Section 8 we propose some open problems.

Definitions and preliminaries
A graph (or an undirected graph) Γ is a pair (X, R), where X is a nonempty set and R is a collection of two-element subsets of X. The elements of X are called the vertices of Γ, and the elements of R are called the edges of Γ. When xy ∈ R, we say that vertices x and y are adjacent, or that x and y are neighbors. A graph is finite if both its vertex set and edge set are finite. If we allow an edge to start and end at the same vertex, then an edge with identical ends is called a loop, and a graph is simple if it has no loops and no two of its edges join the same pair of vertices. For any two vertices x, y ∈ X, a walk of length h from x to y is a sequence of vertices x = x_0, x_1, . . . , x_h = y such that x_{i−1} and x_i are adjacent for 1 ≤ i ≤ h. We say that Γ is connected if for any x, y ∈ X, there is a walk from x to y. From now on, we assume that Γ is finite, simple and connected.
For any x, y ∈ X, the distance between x and y, denoted dist(x, y), is the length of the shortest walk from x to y. The diameter D = D(Γ) is defined to be max{dist(x, y) | x, y ∈ X}. Let Γ = (X, R) be a graph with diameter D. For a vertex x ∈ X and any non-negative integer h not exceeding D, let Γ_h(x) denote the subset of vertices in X that are at distance h from x. Let Γ(x) = Γ_1(x) and Γ_{−1}(x) = Γ_{D+1}(x) := ∅. For any two vertices x and y in X at distance h, let c_h(x, y) = Γ_{h−1}(x) ∩ Γ(y) and b_h(x, y) = Γ_{h+1}(x) ∩ Γ(y). We say Γ is regular with valency k, or k-regular, if each vertex in Γ has exactly k neighbours. A graph Γ is called distance-regular if there are integers b_i, c_i (0 ≤ i ≤ D) which satisfy c_i = |c_i(x, y)| and b_i = |b_i(x, y)| for any two vertices x and y in X at distance i. Clearly such a graph is regular of valency k := b_0, with b_D = c_0 = 0 and c_1 = 1, and a_i := k − b_i − c_i is the number of neighbours of y in Γ_i(x) for x, y ∈ X with dist(x, y) = i. For more information about distance-regular graphs, we refer the reader to [16]. Some excellent articles that contain an algebraic approach to the theory of distance-regular graphs are [1,2,25,29,54,66]. An (n, k, λ, µ) strongly-regular graph is a distance-regular graph Γ = (X, R) of diameter 2 with |X| = n, b_0 = k, a_1 = λ and c_2 = µ.
A partition around x of Γ is a partition {P_0 = {x}, P_1, . . . , P_s} of the vertex set X, where s is a positive integer. The eccentricity of x, denoted by ε(x), is the maximum distance between x and any other vertex y of Γ. A distance partition around x is the partition {Γ_0(x), Γ(x), . . . , Γ_{ε(x)}(x)} of X. An x-distance-faithful partition {P_0, P_1, . . . , P_s} with s ≥ ε(x) is a refinement of the distance partition around x. An equitable partition of a graph Γ is a partition π = {P_1, P_2, . . . , P_s} of its vertex set into nonempty cells such that for all integers i, j (1 ≤ i, j ≤ s) the number c_ij of neighbours which a vertex in the cell P_i has in the cell P_j is independent of the choice of the vertex in P_i. We call the c_ij's the corresponding parameters.
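For illustration, the corresponding parameters of an equitable partition can be computed by brute force (a hypothetical helper, not taken from the paper); for example, the distance partition of the 6-cycle around a vertex is equitable:

```python
# Check whether a vertex partition of a graph is equitable by computing,
# for each pair of cells (P_i, P_j), the number of neighbours a vertex of
# P_i has in P_j; if that number depends on the chosen vertex, the
# partition is not equitable.

def equitable_parameters(adj, cells):
    """Return the matrix (c_ij) of corresponding parameters, or None if
    some cell violates the defining condition."""
    C = []
    for Pi in cells:
        row = []
        for Pj in cells:
            counts = {len(set(adj[v]) & set(Pj)) for v in Pi}
            if len(counts) != 1:          # not constant on the cell P_i
                return None
            row.append(counts.pop())
        C.append(row)
    return C

# 6-cycle as adjacency lists, and the distance partition around vertex 0
adj = {v: [(v - 1) % 6, (v + 1) % 6] for v in range(6)}
cells = [[0], [1, 5], [2, 4], [3]]
assert equitable_parameters(adj, cells) == [[0, 2, 0, 0],
                                            [1, 0, 1, 0],
                                            [0, 1, 0, 1],
                                            [0, 0, 2, 0]]
```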
The intersection diagram of an equitable partition π of a graph Γ is the collection of circles indexed by the cells of π, with lines between them. If there is no line between P_i and P_j, this means that there is no edge yz for any y ∈ P_i and z ∈ P_j. If there is a line between P_i and P_j, then a number on the line near the circle P_i denotes the corresponding parameter c_ij. A number above or below a circle P_i denotes the corresponding parameter c_ii (see Figure 1 for an example).

The adjacency algebra
Let C denote the complex number field, and let Γ denote a graph with vertex set X and diameter D.
For 0 ≤ i ≤ D, let A_i ∈ Mat_X(C) denote the (0, 1)-matrix with (x, y)-entry equal to 1 if dist(x, y) = i and 0 otherwise. We call A_i the distance-i matrix of Γ. We abbreviate A := A_1 and call this the adjacency matrix of Γ. Observe that A_0 = I and A_0 + A_1 + · · · + A_D = J, where I (respectively, J) denotes the identity matrix (respectively, the all-ones matrix) in Mat_X(C).
Let V = C^X denote the vector space over C consisting of column vectors whose coordinates are indexed by X and whose entries are in C. We call V the standard module. We endow V with the Hermitian inner product ⟨·, ·⟩_V that satisfies ⟨u, v⟩_V = u^⊤ v̄ for u, v ∈ V, where "⊤" denotes transpose and "¯" denotes complex conjugation. We observe that Mat_X(C) acts on V by left multiplication, and since A is a real symmetric matrix, A can be interpreted as a self-adjoint operator on V. This yields that V has an orthogonal basis consisting of eigenvectors of A (see, for example, [3, Chapter 7]). Assume that Γ has d + 1 distinct eigenvalues. For each eigenvalue λ_i (0 ≤ i ≤ d) of Γ, let U_i be the matrix whose columns form an orthonormal basis of its eigenspace V_i := ker(A − λ_i I), and let m_i := dim(V_i). The primitive idempotents of A are the matrices E_i := U_iU_i^⊤ (0 ≤ i ≤ d). The primitive idempotents satisfy a number of well-known properties, referred to below as (e-i)-(e-x).
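As a tiny worked example of the primitive idempotents (our own, chosen for illustration), take the complete graph K2 with adjacency matrix A = [[0, 1], [1, 0]] and eigenvalues 1 and −1. Then E_0 = (I + A)/2 and E_1 = (I − A)/2, and the defining properties can be verified in exact arithmetic:

```python
# Primitive idempotents of A = adjacency matrix of K2: projections onto
# the eigenspaces span{(1,1)} (eigenvalue 1) and span{(1,-1)} (eigenvalue -1).
from fractions import Fraction

half = Fraction(1, 2)
E0 = [[half, half], [half, half]]       # E0 = (I + A)/2
E1 = [[half, -half], [-half, half]]     # E1 = (I - A)/2

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

assert matmul(E0, E0) == E0 and matmul(E1, E1) == E1       # idempotent
assert matmul(E0, E1) == [[0, 0], [0, 0]]                  # mutually orthogonal
assert [[E0[i][j] + E1[i][j] for j in range(2)] for i in range(2)] \
       == [[1, 0], [0, 1]]                                 # E0 + E1 = I
assert [[E0[i][j] - E1[i][j] for j in range(2)] for i in range(2)] \
       == [[0, 1], [1, 0]]                                 # A = 1*E0 + (-1)*E1
```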
Proofs of properties (e-i)-(e-x) can be found, for example, in [55, Chapter 2]. Recall that the number of walks of length ℓ ≥ 0 between vertices u and v of Γ is the (u, v)-entry of A^ℓ, and that the eigenvalues of a real symmetric matrix are real numbers (see, for example, [67]). From this fact, together with (e-iv) and (e-viii), we obtain Corollary 2.1 (the existence, for Γ regular and connected, of the Hoffman polynomial H satisfying H(A) = J). Now, using the above notation, the vector space A = span{I, A, . . . , A^d} is an algebra, with the ordinary product of matrices and orthogonal basis {E_0, E_1, . . . , E_d}, called the adjacency algebra. Moreover, the vector space D = span{A_0, A_1, . . . , A_D} forms an algebra with the Hadamard product '•' of matrices, defined by (M • N)_uv = (M)_uv (N)_uv. We call D the distance •-algebra. Note that, when Γ is regular, I, A, J ∈ A ∩ D, and thus dim(A ∩ D) ≥ 3 assuming that Γ is not a complete graph (in this exceptional case, J = I + A). In this algebraic context, an important result is that Γ is distance-regular if and only if A = D, which is therefore equivalent to dim(A ∩ D) = d + 1 (and hence d = D); see, for example, [6,8,57]. A related concept, that of distance-polynomial graphs, was introduced by Weichsel. In general the algebras A and D are different from the algebra N = (⟨A_0, A_1, . . . , A_D⟩, +, ·) generated by the set of distance-i matrices {A_0, A_1, . . . , A_D} with respect to the ordinary product of matrices. Figure 2 shows a diagram with some inclusion relationships when A is closed under Hadamard multiplication.

The symmetric association scheme
In this section we prove Theorem 1.1. Let us call two (0, 1)-matrices B, C disjoint if B • C = 0. For the moment, let F denote a vector space of symmetric n × n matrices. In [8, Theorem 2.6.1(i)] it was proved that F has a basis of mutually disjoint (0, 1)-matrices if and only if F is closed under Hadamard multiplication. In [8, Theorem 2.6.1(iii)] it was proved that F is the Bose-Mesner algebra of an association scheme if and only if I, J ∈ F and F is closed under both ordinary and Hadamard multiplication. Thus, in some sense, our Theorem 1.1 is a re-proof of [8, Theorem 2.6.1] using a different technique. We emphasize that the notation and technique that we use in the proof of Theorem 1.1 are important for the application in Subsection 3.1, as well as for the rest of the paper.
It is not hard to see that the vector space A is isomorphic to the vector space of row vectors span{vec(A^0), vec(A^1), . . . , vec(A^d)}, where vec(M) denotes the row vector obtained by concatenating the rows of M. Let B denote the (d + 1) × |X|^2 matrix with rows vec(A^0), vec(A^1), . . . , vec(A^d). Using elementary row operations on B, we compute C as the reduced row echelon form of the matrix B. Note that the nonzero row vectors c_i (0 ≤ i ≤ d) of C are linearly independent. Finally, we can use the row vectors {c_i}_{i=0}^{d} to construct our matrices F_i, by letting F_i be the matrix with vec(F_i) = c_i. We claim that the set {F_0, F_1, . . . , F_d} has the required properties. By construction, it is routine to show that the matrices F_0, F_1, . . . , F_d are linearly independent.
(ii) Since {F_0, F_1, . . . , F_d} is a basis of the vector space A, which is closed under both ordinary multiplication and Hadamard multiplication, there exists Ω ⊆ {0, 1, . . . , d} such that Σ_{α∈Ω} F_α = I, and the entries of each I_α := F_α (α ∈ Ω) are zeros and ones. In a similar way as above, we can show that each I_α is a diagonal matrix. If |Ω| > 1 then we can pick α ∈ Ω and y, z ∈ X such that (I_α)_yy = 1 and (I_α)_zz = 0. Since A is generated by the single matrix A, it is commutative, that is, BC = CB for any B, C ∈ A; and since J ∈ A we have I_αJ = JI_α. If we compute the (y, z)-entry of I_αJ and JI_α we get (I_αJ)_yz = 1 and (JI_α)_yz = 0, a contradiction. The result follows.
(iii) Since Γ is a regular connected graph, we have J ∈ A. On the other hand, by (i), the matrices of the set {F_0, F_1, . . . , F_d} are mutually disjoint real symmetric (0, 1)-matrices, and the result follows.

The number of different eigenvalues of a Hermitian matrix
As shown in this subsection, the technique from the proof of Theorem 1.1 can be used to find the number of distinct eigenvalues of a symmetric (or Hermitian) matrix. The motivation for this algorithm is that, for the solution of some problems that deal with eigenvalues, we only need to know the number of distinct ones. Moreover, if we have a large matrix (or a set of large matrices), computing all the eigenvalues is time-consuming.
There is also the problem of distinguishing two different eigenvalues when we work with computer programs, since computers work only with finite-precision (rational) numbers. So, if we deal with a large number of eigenvalues, even when we compute them in the usual way, we always have the problem of distinguishing two of them, because their values are often close to each other up to some decimal place. Our method avoids this.
Our method is especially applicable in algebraic and spectral graph theory, since a symmetric (0, 1)-matrix represents the adjacency matrix of a graph. See also, for example, Corollary 5.11. For more information about algebraic and spectral graph theory we recommend [6,67].
As before, let X denote a set with |X| = n elements, Mat_X(C) the set of n × n matrices over C with rows and columns indexed by X, and A ∈ Mat_X(C) a Hermitian matrix. In this subsection we describe a simple algorithm to find the number d + 1 of distinct eigenvalues of A (without computing them). Notice that it suffices to find the dimension of the vector space A spanned by the powers of A. With this aim, we can consider the set {A^0, A^1, . . . , A^k} for some positive integer k. Then, as in the proof of Theorem 1.1, we construct the matrix B and compute C = (c_ij) as its reduced row echelon form. Notice that the nonzero row vectors c_i (0 ≤ i ≤ k) of C are linearly independent. Thus, we only need to find the smallest k such that c_k = 0 to conclude that A has d + 1 = k distinct eigenvalues. The problem with this approach is deciding which initial number k to pick. Of course, k = n will always work, but, in this case, we need to compute all A^i (0 ≤ i ≤ n), which is not the best choice if the number of distinct eigenvalues is small compared with n.
To overcome the above problem, we propose an algorithm based on the Gram-Schmidt method. Recall that this method produces a set of orthogonal (and, hence, linearly independent) vectors in an inner product space, which, in our case, is C^n = Mat_X(C), equipped with the scalar product
⟨M, N⟩_{C^n} = (1/n) sum(M • N̄),    (2)
where sum(M) denotes the sum of all entries of M (the term 1/n is a normalization factor to get ∥I∥_{C^n} = 1). Then, if we apply the method to the matrices I, A, A^2, . . ., we get a sequence A_0, A_1, . . ., where A_i is a polynomial of degree i in A for i = 0, . . . , d, the matrices A_0, . . . , A_d are orthogonal, and A_i = 0 for i > d. Consequently, we only need to apply the process until we reach the first zero matrix. Moreover, notice that, when computing A_{k+1}, instead of the power A^{k+1} we can orthogonalize the product A·A_k, which is again a polynomial of degree k + 1 in A. Then, the algorithm is as follows:
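The procedure just described can be sketched as follows (a pure-Python reconstruction under stated assumptions: real symmetric input and exact rational arithmetic; the function name is ours, and the paper's Algorithm 3.1 pseudocode is not reproduced here):

```python
# Count the distinct eigenvalues of a real symmetric matrix A by
# Gram-Schmidt orthogonalization of I, A*A_0, A*A_1, ... with respect to
# the scalar product <M, N> = (1/n) * sum(M o N)  (real case of (2)).
from fractions import Fraction

def counting_by_gram_schmidt(A):
    n = len(A)
    A = [[Fraction(x) for x in row] for row in A]

    def ip(M, N):                        # the scalar product, real symmetric case
        return sum(M[i][j] * N[i][j] for i in range(n) for j in range(n)) / n

    def matmul(M, N):
        return [[sum(M[i][k] * N[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    basis = []                           # the orthogonal sequence A_0, A_1, ...
    M = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]   # A_0 = I
    while True:
        for B in basis:                  # subtract projections onto A_0, ..., A_k
            c = ip(M, B) / ip(B, B)
            M = [[M[i][j] - c * B[i][j] for j in range(n)] for i in range(n)]
        if all(x == 0 for row in M for x in row):
            return len(basis)            # d + 1: number of distinct eigenvalues
        basis.append(M)
        M = matmul(A, M)                 # A * A_k: a polynomial of degree k + 1

# the 6-cycle C6 has spectrum {2, 1, -1, -2}: four distinct eigenvalues
n = 6
C6 = [[int((abs(i - j) % n) in (1, n - 1)) for j in range(n)] for i in range(n)]
assert counting_by_gram_schmidt(C6) == 4
```

Working over the rationals sidesteps the floating-point problem of distinguishing nearly equal eigenvalues mentioned above.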
Thus, if A is a Hermitian matrix such that A = span{I, A, . . . , A^d} is closed under the Hadamard product, we can use Algorithm 3.1 to compute the standard basis {F_0, F_1, . . . , F_d} of A by following the proof of Theorem 1.1(i). Just apply the algorithm to get a set {A_0, A_1, . . . , A_d} of non-zero matrices such that d + 1 is the number of distinct eigenvalues of A, and, starting from them, proceed as in the proof.
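This construction can be sketched in exact arithmetic as follows (pure Python; the helper names are ours). For the 6-cycle, a distance-regular graph, the reduced row echelon form of the stacked vectorized powers yields four disjoint (0, 1)-matrices summing to J, namely its distance matrices:

```python
# Construct the standard basis of A = span{I, A, ..., A^d} as in the proof
# of Theorem 1.1(i): stack vec(A^0), ..., vec(A^d) as rows of B, reduce to
# row echelon form, and reshape the nonzero rows into matrices F_i.
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form over Q; returns the nonzero rows."""
    R = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(R[0])):
        piv = next((i for i in range(r, len(R)) if R[i][c] != 0), None)
        if piv is None:
            continue
        R[r], R[piv] = R[piv], R[r]
        R[r] = [x / R[r][c] for x in R[r]]
        for i in range(len(R)):
            if i != r and R[i][c] != 0:
                f = R[i][c]
                R[i] = [a - f * b for a, b in zip(R[i], R[r])]
        r += 1
    return [row for row in R if any(row)]

def matmul(M, N):
    n = len(M)
    return [[sum(M[i][k] * N[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

n = 6                                    # adjacency matrix of the cycle C6
A = [[int((abs(i - j) % n) in (1, n - 1)) for j in range(n)] for i in range(n)]

P, rows = [[int(i == j) for j in range(n)] for i in range(n)], []
for _ in range(4):                       # C6 has d + 1 = 4 distinct eigenvalues
    rows.append([x for row in P for x in row])       # vec(A^k) as a row of B
    P = matmul(P, A)

F = [[row[i * n:(i + 1) * n] for i in range(n)] for row in rref(rows)]

assert len(F) == 4
assert all(x in (0, 1) for M in F for row in M for x in row)   # (0,1)-matrices
assert all(F[i][u][v] * F[j][u][v] == 0                        # mutually disjoint
           for i in range(4) for j in range(i + 1, 4)
           for u in range(n) for v in range(n))
assert [[sum(M[u][v] for M in F) for v in range(n)] for u in range(n)] \
       == [[1] * n for _ in range(n)]                          # they sum to J
```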
Remark 3.2 For application purposes, assume that the entries of A are integers, and we want to work only with integers in the whole procedure, avoiding numerical computations. Then, instead of the scalar product in (2), we can use the inner product ⟨A, B⟩_{C^n} = tr(AB) and change Algorithm 3.1 accordingly (modifying step 2 also, to avoid division).

Checking distance-regularity
In fact, if A is the adjacency matrix of a graph Γ with d + 1 distinct eigenvalues, the above inner product (2) is denoted ⟨·, ·⟩_Γ, and the obtained matrices A_0, A_1, . . . , A_d coincide, up to a multiplicative constant, with the so-called predistance matrices of Γ; see [30]. In turn, such matrices are obtained by evaluating at A the predistance polynomials p_0, . . . , p_d, introduced in [27]. In particular, if Γ is distance-regular, the predistance polynomials and predistance matrices are, respectively, the distance polynomials and distance matrices of Γ. If Γ has spectrum sp Γ = {λ_0^{m_0}, λ_1^{m_1}, . . . , λ_d^{m_d}}, where λ_0 > λ_1 > · · · > λ_d, the predistance polynomials p_0, p_1, . . . , p_d constitute an orthogonal sequence of polynomials (dgr(p_i) = i) with respect to the scalar product ⟨·, ·⟩_Γ, normalized in such a way that ∥p_i∥²_Γ = p_i(λ_0) (we know that p_i(λ_0) > 0 for every i = 0, . . . , d). As every sequence of orthogonal polynomials, the predistance polynomials satisfy a three-term recurrence of the form
x p_i = b_{i−1} p_{i−1} + a_i p_i + c_{i+1} p_{i+1},
where the constants b_{i−1}, a_i, and c_{i+1} are the Fourier coefficients of xp_i in terms of p_{i−1}, p_i, and p_{i+1}, respectively (and b_{−1} = c_{d+1} = 0). Moreover, p_0 + p_1 + · · · + p_d = H, the Hoffman polynomial of Corollary 2.1. Hence, if Γ is k-regular, we can apply Algorithm 3.1 to obtain the predistance matrices, after normalizing each A_i (i = 0, . . . , d) suitably. Some recent characterizations of distance-regularity are given in terms of the predistance polynomials and the matrices A_d and A_{d−1}: a regular graph Γ with d + 1 distinct eigenvalues and diameter D = d is distance-regular if and only if certain equalities involving these matrices hold. Each of these conditions assures the existence of all the distance matrices A_0 (= I), A_1 (= A), A_2, . . . , A_d, which is a well-known characterization of distance-regularity. More generally, in [12], a graph Γ is said to be k-partially distance-regular, for some k < d, if there exist the distance matrices A_i for i = 0, . . . , k.
For more details, see [15,12,24,28]. Now, as another possible application of Algorithm 3.1, we have the following result. Proposition 3.3 Let A_i be the matrices obtained by applying Algorithm 3.1, suitably normalized as in Subsection 3.1. If conditions (i)-(iii) hold, where (i) is A_{D+1} = 0 and A_D ≠ 0, then Γ is a distance-regular graph.
Proof. We will prove that A_d is the distance-d matrix of Γ. First, as we have already seen, (i) implies that D = d. Then, if u, v ∈ X are two vertices at distance dist(u, v) = d, we have that (A_d)_uv = (p_d(A))_uv = (H(A))_uv = (J)_uv = 1. Otherwise, assume that dist(u, v) = ℓ < d and (A_d)_uv = 1. Then, from (5) and (iii), it should be (A_ℓ + · · · + A_{d−1})_uv = 0. In particular, (A_ℓ)_uv = 0, a contradiction since A_ℓ = p_ℓ(A), where dgr(p_ℓ) = ℓ with nonzero leading coefficient, so (A_ℓ)_uv ≠ 0. Hence, if dist(u, v) < d, then (A_d)_uv = 0. Consequently, A_d is as claimed, and (DS2) gives the result. Notice that, in fact, if Γ is indeed distance-regular, all the normalized matrices A_0, A_1, . . . obtained by Algorithm 3.1 must be the corresponding distance matrices.

Algorithm 3.4
The following algorithm returns 'true' or 'false' depending on whether a regular graph is distance-regular or not.
Input: The adjacency matrix A of a regular graph Γ. Output: 'true' (Γ is distance-regular) or 'false'.
1. Compute the diameter D of Γ, and set A_0 = I, A_1 = A, and k = 1.
2. Compute the matrix A_{k+1} as in Algorithm 3.1 (orthogonalizing A·A_k against A_0, . . . , A_k), normalized as above.
3. If the normalized A_{k+1} is not a (0, 1)-matrix, then return false, and end the program.
4. If A_{k+1} ≠ 0, then set k := k + 1 and go to step 2.
5. If A_{k+1} = 0 and k < D, then return false, and end the program.
6. If A_{k+1} = 0 and k = D, then return true, and end the program.
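For comparison, distance-regularity can also be tested by the classical combinatorial characterization via the intersection numbers c_i and b_i (a pure-Python sketch of that well-known check, not of Algorithm 3.4 itself; the test graphs below are ours):

```python
# Direct combinatorial check: Γ is distance-regular iff, for every pair
# (x, y) at distance i, the numbers c_i = |Γ(y) ∩ Γ_{i-1}(x)| and
# b_i = |Γ(y) ∩ Γ_{i+1}(x)| depend only on i.
from collections import deque

def distances_from(adj, x):
    """Breadth-first search: distances from x to every vertex."""
    dist, q = {x: 0}, deque([x])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def is_distance_regular(adj):
    ci, bi = {}, {}                      # candidate intersection numbers
    for x in adj:
        dist = distances_from(adj, x)
        for y in adj:
            i = dist[y]
            c = sum(1 for z in adj[y] if dist[z] == i - 1)
            b = sum(1 for z in adj[y] if dist[z] == i + 1)
            if ci.setdefault(i, c) != c or bi.setdefault(i, b) != b:
                return False
    return True

# the 6-cycle is distance-regular; the triangular prism (3-regular) is not
cycle = {v: [(v - 1) % 6, (v + 1) % 6] for v in range(6)}
prism = {0: [1, 2, 3], 1: [0, 2, 4], 2: [0, 1, 5],
         3: [0, 4, 5], 4: [1, 3, 5], 5: [2, 3, 4]}
assert is_distance_regular(cycle) is True
assert is_distance_regular(prism) is False
```

Note that constancy of b_0 (the degree) already enforces regularity, so the check needs no separate regularity test.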
Remark 3.5 Here, a comment similar to Remark 1.1 is in order. Indeed, notice that in Proposition 3.3 and Algorithms 3.1 and 3.4 the normalization of the matrices A_i is not strictly necessary (and, in the algorithms, it is time-consuming). We only need to require that all nonzero entries of A_i have the same value, say c_i. Then, if eventually we want to get (0, 1)-matrices, we simply divide each A_i by c_i.

The distance-faithful intersection diagrams
In this section we prove Theorem 1.2. Proof of Theorem 1.2. Since Γ is a regular graph, by Theorem 1.1 A has the standard basis {F_0, F_1, . . . , F_d}. Let X denote the vertex set of Γ, pick two vertices x, u ∈ X, and define partitions π_x = {P_0(x), P_1(x), . . . , P_d(x)} and π_u = {P_0(u), P_1(u), . . . , P_d(u)} of X by P_i(x) = {y ∈ X | (F_i)_xy = 1} and P_i(u) = {y ∈ X | (F_i)_uy = 1} (0 ≤ i ≤ d). To prove the claim, we need to show that the following (i)-(iii) hold.
(i) All vertices in P_i(x) are at the same distance from x.
(iii) There exist numbers c_ij (0 ≤ i, j ≤ d) such that
(1) π_x is an equitable partition of Γ with corresponding parameters c_ij;
(2) π_u is an equitable partition of Γ with the same corresponding parameters c_ij.
(i) We will first show that for any z, w ∈ P i (x) we have (A ℓ ) xz = (A ℓ ) xw (0 ≤ ℓ ≤ d), that is, the number of walks of length ℓ from x to z is the same as the number of walks of length ℓ from x to w.
If Γ is a regular graph of valency k, then Aj = kj (where j is the all-ones column vector). This yields E_0 j = j and E_i j = 0 for 1 ≤ i ≤ d (see property (e-x) from page 7). Now, since F_i ∈ A = span{E_0, E_1, . . . , E_d}, we can write F_i = Σ_{h=0}^{d} β_h E_h for some scalars β_h. This implies F_i j = β_0 E_0 j = β_0 j, that is, the sum of the row entries of F_i is the same for every vertex. Therefore, since A is closed under ordinary multiplication, there exist scalars c_ih such that
AF_i = Σ_{h=0}^{d} c_ih F_h.    (6)
Pick y ∈ P_j(x). From the left side of (6) we have (AF_i)_yx = |Γ(y) ∩ P_i(x)|, and from the right side of (6) we have (Σ_{h=0}^{d} c_ih F_h)_yx = c_ij. Thus, π_x is an equitable partition of Γ with corresponding parameters c_ij. Similarly, pick v ∈ P_j(u). From one side of (6) we have (AF_i)_vu = |Γ(v) ∩ P_i(u)|, and from the other side of (6), (Σ_{h=0}^{d} c_ih F_h)_vu = c_ij. Therefore, π_u is also an equitable partition of Γ with corresponding parameters c_ij.

The quotient-polynomial graphs
In this section we recall some old, and prove some new, properties of quotient-polynomial graphs. Recall that, for every y, z ∈ X, (A ℓ ) yz (0 ≤ ℓ ≤ d) is the number of walks of length ℓ between vertices y and z.
Definition 5.1 Let Γ denote a graph with vertex set X and d + 1 distinct eigenvalues. For y, z ∈ X, the column vector w(y, z) ∈ C^{d+1} is defined as w(y, z) = ((A^0)_yz, (A^1)_yz, . . . , (A^d)_yz)^⊤. Remark 5.2 If we have an equitable partition π = {P_0, P_1, . . . , P_r} around y, P_0 = {y}, with intersection numbers b_ij, we can compute the vector w(y, z) (z ∈ X) from its quotient matrix B = (b_ij)_{0≤i,j≤r}. The reason is that (1/|P_j|)(B^ℓ)_{P_j,P_0} is the number of ℓ-walks (0 ≤ ℓ ≤ d) from z to y for any z ∈ P_j (0 ≤ j ≤ r) (see, for instance, [13]).
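The walk-count formula of Remark 5.2 can be verified on the 6-cycle (a pure-Python sketch; we use here the convention that the quotient matrix has (i, j)-entry equal to the number of neighbours in P_j of a vertex in P_i, under which no 1/|P_j| factor appears, while the paper's normalization of B may differ by such a factor):

```python
# For the distance partition of C6 around y = 0, the quotient matrix B
# satisfies (B^l)_{j,0} = (A^l)_{z,0} for every z in cell P_j: the number
# of l-walks from z to the base vertex can be read off the small matrix B.

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

def matpow(M, l):
    P = [[int(i == j) for j in range(len(M))] for i in range(len(M))]
    for _ in range(l):
        P = matmul(P, M)
    return P

n = 6                                     # adjacency matrix of the cycle C6
A = [[int((abs(i - j) % n) in (1, n - 1)) for j in range(n)] for i in range(n)]
cells = [[0], [1, 5], [2, 4], [3]]        # distance partition around y = 0
B = [[0, 2, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 2, 0]]

for l in range(6):
    Al, Bl = matpow(A, l), matpow(B, l)
    for j, Pj in enumerate(cells):
        for z in Pj:
            assert Al[z][0] == Bl[j][0]   # l-walks from z to the base vertex
```

The identity behind this is A·S = S·B for the characteristic matrix S of an equitable partition, which gives A^ℓ·S = S·B^ℓ.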
A partition R = {R_0, R_1, . . . , R_r} of X × X in which (y, z) and (y′, z′) belong to the same class if and only if w(y, z) = w(y′, z′) is called a walk-regular partition of X × X. Lemma 5.3 Let R = {R_0, R_1, . . . , R_r} denote a walk-regular partition of X × X. Then all pairs of vertices in a given R_i are at the same distance.

For each i (0 ≤ i ≤ r), let M_i denote the (0, 1)-matrix with rows and columns indexed by the vertices of Γ, and defined by (M_i)_yz = 1 if (y, z) ∈ R_i, and (M_i)_yz = 0 otherwise. The matrix M_i is called the adjacency matrix of the equivalence class R_i.
Remark 5.5 Note that we can always permute the indices of {R_0, R_1, . . . , R_r} of a walk-regular partition. So, if necessary and using Lemma 5.3, we can define a walk-regular partition by adding the following restriction on R: for any i ≤ j and (x, y) ∈ R_i, (u, v) ∈ R_j, we have dist(x, y) ≤ dist(u, v).
Lemma 5.6 Let Γ be a graph with vertex set X and a walk-regular partition R of X × X. Let A_i (0 ≤ i ≤ D) denote the distance-i matrix of Γ, and let M_i (0 ≤ i ≤ r) denote the adjacency matrices of the corresponding equivalence classes R_i. Then, for every i (0 ≤ i ≤ D), there exists an index set I_i ⊆ {0, 1, . . . , r} such that A_i = Σ_{j∈I_i} M_j.
Proof. Immediate from Lemma 5.3.
Definition 5.7 Let Γ denote a graph with vertex set X, d + 1 distinct eigenvalues, and adjacency algebra A. Let R = {R_0, R_1, . . . , R_r} denote the walk-regular partition of X × X and let M_i (0 ≤ i ≤ r) denote the adjacency matrices of the equivalence classes R_i. We say that Γ is a quotient-polynomial graph if M_i ∈ A for every i (0 ≤ i ≤ r). From this definition and Lemma 5.6 it follows that every distance-i matrix of Γ belongs to its adjacency algebra A.
Example 5.8 Let B ⊗ C denote the Kronecker tensor product of matrices B and C (for the definition and properties of the Kronecker tensor product see, for example, [42, Chapter 13] or [36, Chapter 4]). Let A and A′ denote the adjacency matrices of the graphs Γ and Γ′, respectively. The Kronecker product, Γ ⊗ Γ′, is the graph with adjacency matrix A ⊗ A′ (see [68]). For the corresponding intersection diagram of the graph Γ of this example, see Figure 3.
Definition 5.9 Let Γ denote a graph with d + 1 distinct eigenvalues. Given a walk-regular partition R = {R_0, R_1, . . . , R_r} of X × X, let w_ij be the common value of the number of i-walks (0 ≤ i ≤ d) from y to z for any (y, z) ∈ R_j (0 ≤ j ≤ r). Define W := (w_ij)_{0≤i≤d, 0≤j≤r}, let Z denote the reduced row echelon form of W, and, when W is invertible, define the polynomials p_i(t) (0 ≤ i ≤ r) by p_i(t) := Σ_{j=0}^{d} (W^{−1})_ij t^j, so that p_i(A) = M_i.
Theorem 5.10 Let Γ be a graph with vertex set X, d + 1 distinct eigenvalues, and let R = {R_0, R_1, . . . , R_r} denote a walk-regular partition of X × X. Then d ≤ r. Furthermore, let Z denote the matrix of Definition 5.9, and define W := {w(y, z) | y, z ∈ X}. Then the following are equivalent.
(i) d = r.
(ii) Γ is a quotient-polynomial graph.
(iii) rank(W) = r + 1.
(iv) W is a linearly independent set.
(v) Z = I.
Proof. Let M_j denote the adjacency matrix of the equivalence class R_j (0 ≤ j ≤ r). Since R is a walk-regular partition, for the scalars w_ij (0 ≤ i ≤ d, 0 ≤ j ≤ r) of Definition 5.9 we have A^i = Σ_{j=0}^{r} w_ij M_j (0 ≤ i ≤ d). This yields span{I, A, . . . , A^d} ⊆ span{M_0, M_1, . . . , M_r} as vector spaces, and hence d ≤ r.
Let W denote the matrix from Definition 5.9. Note that the elements of the set W are columns of the matrix W , and since R is a walk-regular partition, W has exactly r +1 elements.
Note that rank(W) = d + 1: otherwise, if rank(W) < d + 1, applying elementary row operations to the above system we would get A^d ∈ span{I, A, . . . , A^{d−1}}, a contradiction. To prove the equivalences among (i)-(v), we show the following chain of implications.
(i) ⇒ (ii), (v). If d = r then rank(W ) = d + 1 = r + 1, which means that Z = I and for every M i we have M i = p i (A). This yields M i ∈ A, and Γ is a quotient-polynomial graph.
Corollary 5.11 Let Γ denote a graph with d + 1 distinct eigenvalues and x-distance-faithful intersection diagram π with r + 1 cells. If Γ has the same x-distance-faithful intersection diagram around every vertex x, then Γ has at most r + 1 distinct eigenvalues. Moreover, if r = d then Γ is a quotient-polynomial graph.

Proof.
The same intersection diagram around every vertex corresponds to a walk-regular partition of X × X with r + 1 cells. The result now follows from Theorem 5.10.
In view of the proof of Theorem 5.10, the number of distinct eigenvalues of the distance-i matrix A_i (0 ≤ i ≤ d) is useful for deciding when Γ is not a quotient-polynomial graph.
Corollary 5.12 Let Γ denote a graph with vertex set X and d + 1 distinct eigenvalues. If, for some i ∈ {0, . . . , d}, the matrix A_i has more than d + 1 distinct eigenvalues, then Γ is not a quotient-polynomial graph.
Proof. Under the hypothesis, A_i cannot be written as a linear combination of the d + 1 •-idempotent (0, 1)-matrices {F_0, . . . , F_d} and, hence, A does not have a standard basis.
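This necessary condition is easy to check numerically. The sketch below (our own illustration; the tolerance-based eigenvalue count is an implementation choice) builds the distance matrices of the Petersen graph, which is distance-regular and hence quotient-polynomial, so no distance matrix should exceed d + 1 = 3 distinct eigenvalues.

```python
from itertools import combinations
import numpy as np

def petersen():
    # Petersen graph as the Kneser graph K(5,2): vertices are the 2-subsets
    # of {0,...,4}, two of them adjacent iff they are disjoint
    V = list(combinations(range(5), 2))
    return np.array([[1.0 if not set(u) & set(w) else 0.0 for w in V]
                     for u in V])

def distance_matrices(A):
    # recover dist(y, z) as the first i with (A^i)_yz > 0
    n = len(A)
    dist = np.where(np.eye(n, dtype=bool), 0, -1)
    P = np.eye(n)
    for i in range(1, n):
        P = P @ A
        dist[(P > 0) & (dist < 0)] = i
        if (dist >= 0).all():
            break
    return [(dist == i).astype(float) for i in range(dist.max() + 1)]

def num_distinct_eigs(M, tol=1e-6):
    return len(set(np.round(np.linalg.eigvalsh(M) / tol)))

A = petersen()
d_plus_1 = num_distinct_eigs(A)
counts = [num_distinct_eigs(Ai) for Ai in distance_matrices(A)]
# Corollary 5.12: if any count exceeds d + 1, the graph is not quotient-polynomial
passes_test = all(c <= d_plus_1 for c in counts)
```

Here counts = [1, 3, 3] for A_0 = I, A_1 = A, and A_2, so the necessary condition holds, as it must for a distance-regular graph.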

Comment 5.13
If Γ is a quotient-polynomial graph, then the polynomials p_i (0 ≤ i ≤ r) from Definition 5.9 are orthogonal with respect to the scalar product (3), as happens with the distance polynomials of a distance-regular graph. Indeed, for every i, j (0 ≤ i, j ≤ d) we have $\langle p_i, p_j\rangle = \tfrac{1}{|X|}\operatorname{tr}\big(p_i(A)p_j(A)\big) = \tfrac{1}{|X|}\operatorname{tr}(F_iF_j) = \delta_{ij}\,\tfrac{|R_i|}{|X|}$. Also, for the same polynomials p_i (0 ≤ i ≤ r), we have that Γ is a regular and connected graph if and only if $\sum_{i=0}^{r} p_i(A) = J$.
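For a concrete check of these identities, consider the Petersen graph: from $A^2 = 3I + 0\cdot A + 1\cdot(J - I - A)$ one gets p_0 = 1, p_1 = t, p_2 = t^2 − 3, and both the orthogonality (under the trace inner product, our reading of the scalar product (3)) and $\sum_i p_i(A) = J$ can be verified numerically. A minimal sketch:

```python
from itertools import combinations
import numpy as np

# Petersen graph as the Kneser graph K(5,2)
V = list(combinations(range(5), 2))
n = len(V)
A = np.array([[1.0 if not set(u) & set(w) else 0.0 for w in V] for u in V])

# k = 3, lambda = 0, mu = 1 give A^2 = 3I + (J - I - A), hence
# p0(t) = 1, p1(t) = t, p2(t) = t^2 - 3 with p_i(A) = A_i
I, J = np.eye(n), np.ones((n, n))
p = [I, A, A @ A - 3 * I]

sum_is_J = np.allclose(sum(p), J)     # regular and connected: sum p_i(A) = J
gram = [[np.trace(pi @ pj) / n for pj in p] for pi in p]
orthogonal = all(abs(gram[i][j]) < 1e-9
                 for i in range(3) for j in range(3) if i != j)
```

Both flags come out True; the off-diagonal Gram entries vanish because the Petersen graph is triangle-free (so tr A^3 = 0) and tr A^2 = 3n.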
Proof of Theorem 1.3. (If Γ has the same x-distance-faithful intersection diagram with r cells around every vertex, then Γ has exactly rank(P) distinct eigenvalues, where P = (w_ij)_(r+1)×(r+1). If rank(P) = r + 1 then Γ is a quotient-polynomial graph.) Using the intersection diagram π_x = {P_0, P_1, . . . , P_r} around x, we can consider the column vectors $w_j = (w_{0j}, w_{1j}, \ldots, w_{rj})^{\top}$ (0 ≤ j ≤ r), where w_ij denotes the number of i-walks (0 ≤ i ≤ r) from z to x for any z ∈ P_j (0 ≤ j ≤ r). Note that we do not know whether $w_i \ne w_j$ for distinct i, j. Now, pick a vertex u ∈ X (u ≠ x), consider the intersection diagram π_u = {P_0(u), P_1(u), . . . , P_r(u)}, and let w′_ij(u, v) denote the number of i-walks (0 ≤ i ≤ r) from v to u for any v ∈ P_j(u) (0 ≤ j ≤ r). Then, since Γ has the same intersection diagram around every vertex, the resulting set of vectors is the same as in (8); that is, for every i (0 ≤ i ≤ r) there exists exactly one h (0 ≤ h ≤ r) such that w_i = w′_h(u, v). Now we can define the matrices M_i ∈ Mat_(r+1)×(r+1)(C) accordingly. Also, since Γ has the same distance-faithful intersection diagram around every vertex, using this intersection diagram we can construct a walk-regular partition of X × X with r + 1 basis relations R_i. So, by Theorem 5.10, d ≤ r. By assumption, the matrix
$$P=\begin{pmatrix} w_{00} & w_{01} & \cdots & w_{0r}\\ w_{10} & w_{11} & \cdots & w_{1r}\\ w_{20} & w_{21} & \cdots & w_{2r}\\ \vdots & \vdots & & \vdots\\ w_{r0} & w_{r1} & \cdots & w_{rr} \end{pmatrix}$$
has rank equal to the number of distinct eigenvalues of Γ, so Γ has exactly rank(P) distinct eigenvalues. Moreover, if rank(P) = r + 1 then d = r and, by Theorem 5.10, Γ is a quotient-polynomial graph.

Algorithmic approach for deciding whether A_i is polynomial in A
In this subsection we give an algorithm which, for a given graph Γ, decides whether A_i (0 ≤ i ≤ D) is a polynomial (not necessarily of degree i) in A. If the answer is affirmative, the algorithm also computes that polynomial. Note that this procedure can be seen as a refinement of Algorithm 3.4, since it allows us to decide whether Γ is distance-polynomial (that is, A_i ∈ A for every i = 0, . . . , D).
Algorithm 5.14 Let A denote the adjacency matrix of Γ with d + 1 distinct eigenvalues and diameter D. Considering only the matrix Z (from Definition 5.9), we can determine which distance-i matrices are polynomial in A (see Example 4).
Input: The adjacency matrix A of Γ, or the intersection diagrams around every vertex.
Output: For each i (0 ≤ i ≤ D), a polynomial p_i such that A_i = p_i(A) (if such a polynomial exists).
1. Using the adjacency matrix A of Γ (or using the intersection diagrams around every vertex), compute the vectors w(y, z) for every y, z ∈ X (see Definition 5.1 and Remark 5.2).
2. Group the pairs (y, z) with equal vectors w(y, z); this gives the partition R = {R_0, R_1, . . . , R_r} of X × X and the matrix W of Definition 5.9.
3. Applying elementary row operations to W, compute the matrix Z and the polynomials p_i(t) of Definition 5.9.
4. If the sum of the rows j_1, j_2, . . . , j_m of Z is a (0, 1)-row vector whose nonzero entries lie exactly in the columns indexed by the classes R_{i_1}, R_{i_2}, . . . , R_{i_k} that make up the distance-i relation, then the adjacency matrix A_i is polynomial in A, and we have A_i = p_{j_1}(A) + p_{j_2}(A) + · · · + p_{j_m}(A). Otherwise, A_i is not polynomial in A.
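An equivalent way to carry out the decision in step 4, bypassing the Z-matrix bookkeeping, is to check directly whether A_i lies in span{I, A, . . . , A^d} by solving a least-squares system on vectorized matrices. The following sketch (our own variant, not Algorithm 5.14 verbatim) returns the coefficients of the polynomial when it exists:

```python
import numpy as np

def polynomial_in_A(A, Ai, tol=1e-8):
    """Return c with Ai = sum_l c[l] * A^l if Ai lies in
    span{I, A, ..., A^d}; return None otherwise."""
    n = len(A)
    d = len(set(np.round(np.linalg.eigvalsh(A), 6))) - 1
    cols, P = [], np.eye(n)
    for _ in range(d + 1):                  # vectorize I, A, ..., A^d
        cols.append(P.ravel())
        P = P @ A
    M = np.column_stack(cols)
    c, *_ = np.linalg.lstsq(M, Ai.ravel(), rcond=None)
    return c if np.allclose(M @ c, Ai.ravel(), atol=tol) else None

# Example: for the 5-cycle, A_2 = J - I - A, and indeed A_2 = A^2 - 2I
n = 5
A = np.zeros((n, n))
for v in range(n):
    A[v, (v + 1) % n] = A[v, (v - 1) % n] = 1
A2 = np.ones((n, n)) - np.eye(n) - A
c = polynomial_in_A(A, A2)                  # coefficients of p_2(t) = t^2 - 2
```

Since the powers I, A, . . . , A^d are linearly independent (the minimal polynomial has degree d + 1), the coefficient vector, when it exists, is unique.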

Example 5.15
Assume that Γ is the graph from Figure 4. Using the intersection diagram we can compute the adjacency matrix B ∈ Mat_8×8(C) of the intersection diagram and, using B, the numbers w_ij from Definition 5.9 (for example, the number $(B^{\ell})_{P_3,P_0}$ is the number $w_{\ell 3}$, 0 ≤ ℓ ≤ 7). Since we do not know the number of distinct eigenvalues, using Corollary 5.11 we know that Γ will not have more than 8 of them. So we can compute the matrices W and Z with 8 rows and 8 columns, together with the polynomials p_i(t) (0 ≤ i ≤ 6). Since rank(W) = 7, Γ has 7 distinct eigenvalues, which implies that the polynomial p_7(t) is not needed. Note that A_0 = p_0(A) and A_4 = p_6(A). Therefore, every distance-i matrix can be written as a polynomial in A.

Figure 4: 'Chordal ring' (12, 4) and its intersection diagram. This graph has the same intersection diagram around every vertex, and its adjacency algebra A is not closed with respect to the Hadamard product. Here R = {R_0, R_1, . . . , R_7} is the walk-regular partition and F_i (0 ≤ i ≤ 7) are the adjacency matrices of R_i (0 ≤ i ≤ 7).

Assume that Γ is a quotient-polynomial graph. Let F_i (0 ≤ i ≤ d) denote the adjacency matrix of the equivalence class R_i (0 ≤ i ≤ d) of a walk-regular partition R = {R_0, R_1, . . . , R_d} of X × X. By definition, {I = F_0, F_1, . . . , F_d} is a linearly independent set such that $F_i \circ F_j = \delta_{ij}F_i$ and $\sum_{i=0}^{d} F_i = J$. Moreover, since F_i ∈ A we have span{F_0, F_1, . . . , F_d} ⊆ A. Thus, the vector space A is closed under both ordinary and Hadamard multiplication.
Conversely, assume that the vector space A is closed under both ordinary and Hadamard multiplication. By Theorem 1.1, since Γ is a regular graph, the algebra A has the standard basis {I = F_0, F_1, . . . , F_d}. Then there exist scalars α_ij (0 ≤ i, j ≤ d) such that
$$A^i = \sum_{j=0}^{d} \alpha_{ij} F_j \qquad (0 \le i \le d). \qquad (10)$$
Now, by (10), if u, v, y, z ∈ X are vertices such that $(F_i)_{uv} = 1$ and $(F_i)_{yz} = 1$, then the number of walks of length ℓ from u to v is equal to the number of walks of length ℓ from y to z (0 ≤ ℓ ≤ d). This implies that the matrices F_i correspond to the basis relations R_i (0 ≤ i ≤ d), and that R = {R_0, R_1, . . . , R_d} is a walk-regular partition of X × X. Since F_i ∈ A, the result follows.

Now we prove Theorem 1.5. (A regular graph Γ with diameter 2 and 4 distinct eigenvalues is quotient-polynomial if and only if any two nonadjacent (respectively, adjacent) vertices have a constant number of common neighbours, and the number of common neighbours of any two adjacent (respectively, nonadjacent) vertices takes precisely two values.) The proof can be seen as a very nice application of the walk-regular partitions from Section 5.
Proof of Theorem 1.5.
(⇒) Now assume that F has d + 1 distinct eigenvalues, and let $\mathcal{F}$ denote the algebra generated by the set {I, F, . . . , F^d}.

The numbers $q^{h}_{ij}$ are called the Krein parameters of Γ with respect to the ordering E_0, E_1, . . . , E_d of its basis of primitive idempotents. An ordering E_0, E_1, . . . , E_d is a cometric (Q-polynomial) ordering if the following conditions are satisfied: (Q1) $q^{h}_{ij} = 0$ whenever any one of the indices i, j, h exceeds the sum of the remaining two, and (Q2) $q^{h}_{ij} > 0$ when 0 ≤ i, j, h ≤ d and any one of the indices equals the sum of the remaining two.
We say that Γ is a cometric (or Q-polynomial) quotient-polynomial graph when such an ordering exists. In the future, we plan to study algebraic and combinatorial properties of cometric quotient-polynomial graphs. This Q-polynomial concept is taken from the theory of commutative association schemes. A good introduction to the topic of Q-polynomial structures for association schemes and distance-regular graphs can be found in [19]. For a new technique (and approach) for computations in Bose–Mesner algebras, which also deals with the Q-polynomial case, we recommend [49, Section 3]. Fix a "base vertex" x ∈ X. For each i (0 ≤ i ≤ D), let $F^{*}_{i} = F^{*}_{i}(x)$ denote the diagonal matrix in Mat_X(C) with (y, y)-entries $(F^{*}_{i})_{yy} = (F_i)_{xy}$. The Terwilliger (or subconstituent) algebra T = T(x) of Γ with respect to x is the subalgebra of Mat_X(C) generated by $\{I, F_1, \ldots, F_d, F^{*}_{0}, F^{*}_{1}, \ldots, F^{*}_{D}\}$. By a T-module we mean a subspace W of V = C^X such that BW ⊆ W for all B ∈ T. Let W denote a T-module. Then W is said to be irreducible whenever W is nonzero and contains no T-modules other than 0 and W. In the future we plan to study irreducible T-modules of a quotient-polynomial graph Γ. This T-module concept is also taken from the theory of commutative association schemes [63,64,65]. For the most recent research on the use of the Terwilliger algebra in the study of P-polynomial association schemes (that is, using the Terwilliger algebra to study distance-regular graphs), see [11,44,45,46,47,51,52,53,58].
Another possible line of research would be the study of 'pseudo-quotient-polynomial graphs', defined by using weighted regular partitions; see [23].