Cyclic independence: Boolean and monotone

The present paper introduces a modified version of cyclic-monotone independence, which originally arose in the context of random matrices, and also introduces its natural analogue, called cyclic-Boolean independence. We investigate formulas for convolutions and limit theorems for sums of independent random variables, and we classify infinitely divisible distributions with respect to cyclic-Boolean convolution. Finally, we provide applications to the eigenvalues of the adjacency matrices of iterated star products of graphs and iterated comb products of graphs.


Introduction
The present paper takes its origin in the concept of cyclic-monotone independence, which appeared in the study of random matrices [5,20] and which deserves separate treatment; see [2] for further work. The term "cyclic-monotone independence" was coined in [5] because of its apparent similarity with monotone independence, except that it involves two linear functionals: a state and a tracial linear functional. It abstracts an asymptotic formula for the mixed moments, with respect to the non-normalized trace, of a random rotation of two sets A_N and B_N consisting of N × N deterministic matrices such that all mixed moments of A_N have finite limits with respect to the non-normalized trace as N tends to infinity, and all mixed moments of B_N have finite limits with respect to the normalized trace. More precisely, suppose that {A^N_i : 1 ≤ i ≤ k} and {B^N_i : 1 ≤ i ≤ k}, N = 1, 2, 3, ..., are families of N × N deterministic matrices that satisfy the following conditions: for any *-polynomial P(x_1, x_2, . . ., x_k) in non-commuting variables x_1, x_2, . . ., x_k over the field C without a constant term (e.g. P(x_1, x_2) = x_1^2 x_2 x_1^*), the limits of the corresponding traces exist. The resulting formula (1.1) shows some similarity with monotone independence, but they are not the same because the formula involves both the normalized trace and the non-normalized trace.
The present paper offers a simple operator model for cyclic-monotone independence realized on the tensor product of Hilbert spaces. This construction also uncovers the associativity of cyclic-monotone independence with respect to a state and a trace. In order to ensure associativity, we modify the definition of cyclic-monotone independence. The new definition consists of two conditions: one is basically the condition in [5, Definition 3.2] referring to both the state and the tracial linear functional, and the other is monotone independence with respect to the state (see Definition 7.2). The modified definition of cyclic-monotone independence shares the same spirit with c-monotone independence [10] (and c-freeness [3]) because they are all associative notions of independence referring to two linear functionals.
However, the relationship between the random matrix model and the operator model is not perfectly understood. Curiously, monotone independence does not appear in the random matrix model above, although it appears very naturally in the operator model. This is related to the fact that the random matrix model above is limited to two families of random matrices A_N and B_N, and hence the question of associativity is not relevant.
Our operator realization of (modified) cyclic-monotone independence also indicates that a similar construction works for Boolean independence, which therefore leads to a notion of cyclic-Boolean independence. We develop a general theory of these two independences: formulas for generating functions of sums of independent random variables, limit theorems, cyclic-Boolean cumulants, which are governed by cyclic-interval partitions, and infinitely divisible distributions with respect to cyclic-Boolean convolution. We do not know how to define cyclic-monotone cumulants, and therefore this question is not addressed in the present paper.
Moreover, the operator models for cyclic-Boolean and cyclic-monotone independences are directly connected to the star product of (rooted) graphs (see Section 2.6) and the comb product of (rooted) graphs (see Section 2.7). Specifically, the eigenvalues of their adjacency matrices can be analyzed by means of cyclic-Boolean independence and cyclic-monotone independence, respectively.
The techniques are motivated by the relations between the adjacency matrix, the spectrum, the characteristic polynomial and walk generating functions of a graph.These form the core subject of algebraic graph theory, which deals with various matrices, polynomials and generating functions and other invariants carrying information about graphs.
It was shown by Schwenk [19] (and later generalized by Godsil and McKay [9]) that the characteristic polynomial of the star product (or coalescence) and the comb product (or rooted product) of graphs only depends on the characteristic polynomials of the factors and the walk generating functions at the roots of the factors; he gave an explicit formula. Similar simple formulas hold for the generating function of closed walks starting at the root. While Schwenk's proofs are combinatorial, we will give algebraic proofs based on the Schur complement which can be generalized to arbitrary matrices and operators. Accardi, Ben Ghorbal and Obata [1] and Obata [15] initiated the application of monotone independence and Boolean independence to the asymptotic spectral analysis of adjacency matrices of iterated comb products and star products, respectively. The operator models for cyclic-monotone and cyclic-Boolean independences extend their work in the sense that the new framework also enables one to analyze refined properties of eigenvalues of the adjacency matrices. These generalizations are the subject of the present paper and illustrate the emergence of new notions of noncommutative independence, i.e., the cyclic-monotone and cyclic-Boolean ones.
To summarize, the main contributions of the present paper are: (1) the new notion of cyclic-Boolean independence (Section 3) and a modification of the definition of cyclic-monotone independence given in [5] (Section 7); (2) operator models for cyclic-Boolean independence (Section 3) and for cyclic-monotone independence (Section 7); (3) convolution formulas for the sum of independent random variables (Sections 4, 7) and their relationships to algebraic graph theory (Section 2); (4) limit theorems for sums of independent random variables (Sections 3, 7); (5) cyclic-Boolean cumulants and the relevant partition structure of cyclic-interval partitions (Section 5); (6) classification of infinitely divisible distributions for cyclic-Boolean convolution (Section 6); (7) analysis of the asymptotics of the eigenvalues of the adjacency matrices of iterated star products of graphs and iterated comb products of graphs (Sections 4, 7).
Recently, Collins, Leid and Sakuma found a different matrix model for monotone independence and cyclic-monotone independence [6]. So far the connection between their model and ours remains unclear.

Preliminaries
2.1. Adjacency matrix. Let Γ = (V, E) be a graph on a vertex set V = {v_1, v_2, . . ., v_d} with edge set E. We always consider finite undirected graphs without loops or multiple edges. An edge between two vertices u and v is denoted by uv. The adjacency matrix of Γ is the matrix A = (a_ij)_{i,j=1}^d with entries a_ij = 1 if v_i v_j ∈ E and a_ij = 0 otherwise. The spectrum of the graph Γ is the spectrum of its adjacency matrix. It consists of the eigenvalues λ_i of A, which are the roots of the characteristic polynomial φ_Γ(z) = det(zI − A). Alternatively, the eigenvalues of A are the poles of the (tracial) Cauchy transform
(2.1) g_Γ(z) = Tr((zI − A)^{−1}) = Σ_{i=1}^d 1/(z − λ_i).
The Cauchy transform and the characteristic polynomial are mutually related by the logarithmic derivative
(2.2) g_Γ(z) = φ′_Γ(z)/φ_Γ(z).
For the generalization of this identity to trace class operators it will be convenient to remove the moment of order zero and work with the "renormalized" Cauchy transform
(2.3) g̃_Γ(z) = Tr((zI − A)^{−1} − z^{−1}I) = Σ_{i=1}^d λ_i/(z(z − λ_i)).
2.2. Walk generating functions. Let (Γ, o) be a finite rooted graph, i.e., a graph on vertices v_1, v_2, . . ., v_d where we single out the vertex o = v_1 as the root of the graph. The number m_n of closed walks of length n starting at the root o is equal to ⟨A^n e_1, e_1⟩, where A is the adjacency matrix of Γ and e_1 is the vector (1, 0, 0, . . ., 0) ∈ C^d. Denote by M_Γ(z) = Σ_{n≥0} m_n z^n the walk generating function. One caution is in place here. To keep notation simple, here and below we do not explicitly write the root in subscripts, although the generating functions depend on the choice of the root.
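These identities are elementary to check numerically. The following sketch (the graph choice and helper names are ours, using numpy) verifies the Cauchy transform, its expression as a logarithmic derivative of the characteristic polynomial, and the renormalization for the path on three vertices:

```python
import numpy as np

# Path graph P3: v1 - v2 - v3.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
d = A.shape[0]
eigvals = np.linalg.eigvalsh(A)            # spectrum: -sqrt(2), 0, sqrt(2)

z = 2.5                                    # a point outside the spectrum

# Tracial Cauchy transform g(z) = Tr((zI - A)^{-1}) = sum_i 1/(z - lambda_i)
g = np.trace(np.linalg.inv(z * np.eye(d) - A))
assert np.isclose(g, np.sum(1.0 / (z - eigvals)))

# g(z) = phi'(z)/phi(z), with phi(z) = det(zI - A) the characteristic polynomial
phi = np.poly(A)                           # coefficients of det(zI - A)
assert np.isclose(g, np.polyval(np.polyder(phi), z) / np.polyval(phi, z))

# Renormalized Cauchy transform: remove the moment of order zero (the d/z term)
g_tilde = g - d / z
assert np.isclose(g_tilde, np.sum(eigvals / (z * (z - eigvals))))
```

The same computations apply verbatim to any finite graph; only the matrix A changes.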
It will be more convenient to work instead with the resolvent of A and with the Green function (evaluated at the root)
(2.5) G_Γ(z) = ⟨(zI − A)^{−1} e_1, e_1⟩
and its reciprocal F_Γ(z) = 1/G_Γ(z).
We can obtain a relation between the Green function (2.5) and the Cauchy transform (2.1) from the Schur complement.

Let
M = ( A  B ;  C  D )
be a block matrix and assume that D is invertible. Then the Schur complement [24] is defined as M/D = A − B D^{−1} C. It appears in Aitken's factorization, which is obtained by Gaussian elimination on the original matrix M. From this factorization we infer the following assertions: (1) M is invertible if and only if M/D is invertible; if this is the case, then the Banachiewicz inversion formula
(2.9) (M^{−1})_{11} = (M/D)^{−1}
holds, i.e., the upper left block of M^{−1} is the inverse of the Schur complement; (2) det M = det(M/D) det D.
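Both assertions about the Schur complement are easy to confirm numerically; the following sketch (block sizes and the random seed are our own choices) checks the Banachiewicz inversion formula and the determinant factorization:

```python
import numpy as np

rng = np.random.default_rng(0)
# Random block matrix M = [[A, B], [C, D]] with D (almost surely) invertible.
n, m = 2, 3
A = rng.normal(size=(n, n)); B = rng.normal(size=(n, m))
C = rng.normal(size=(m, n)); D = rng.normal(size=(m, m)) + 5 * np.eye(m)
M = np.block([[A, B], [C, D]])

S = A - B @ np.linalg.inv(D) @ C           # Schur complement M/D

# Banachiewicz inversion: the upper-left block of M^{-1} equals (M/D)^{-1}
Minv = np.linalg.inv(M)
assert np.allclose(Minv[:n, :n], np.linalg.inv(S))

# Schur determinant formula: det M = det(M/D) det D
assert np.isclose(np.linalg.det(M), np.linalg.det(S) * np.linalg.det(D))
```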

Relation between the Green function and the Cauchy transform of a general matrix.
Let A be a d × d matrix. We want to understand the relation between the functions g_A(z) = Tr((zI − A)^{−1}) and G_A(z) = ⟨(zI − A)^{−1} e_1, e_1⟩. To this end we partition the matrix into blocks of dimension 1 and d − 1,
(2.12) A = ( α  b ;  c  A_1 ),
and we conclude from the Banachiewicz inversion formula (2.9) that
(2.13) g_A(z) = g_{A_1}(z) − (d/dz) log G_A(z).
Figure 1. Star product of rooted graphs
After subtracting the unit matrix according to (2.3) we obtain the identity
(2.14) g̃_A(z) = g̃_{A_1}(z) − (d/dz) log (z G_A(z)),
which can be extended to trace class operators.
2.6. The star product and its adjacency matrix. The star product (or coalescence) of two rooted graphs (Γ_1, o_1) and (Γ_2, o_2) is the rooted graph obtained by gluing the two graphs at their roots. Its vertex set can be identified with the subset V = (V_1 × {o_2}) ∪ ({o_1} × V_2) of V_1 × V_2, with root o = (o_1, o_2). Two vertices (x_1, x_2) and (y_1, y_2) of V are connected by an edge if either x_1 y_1 ∈ E_1 and x_2 = y_2 = o_2, or x_1 = y_1 = o_1 and x_2 y_2 ∈ E_2. The cartesian product of the vertices corresponds to the tensor product ℓ²(V_1) ⊗ ℓ²(V_2) of the vector spaces, and the adjacency matrix takes the form A = A_1 ⊗ P_2 + P_1 ⊗ A_2, where P_i is the orthogonal projection of ℓ²(V_i) onto the one-dimensional subspace spanned by the delta function δ_{o_i}; see [12, Proposition 8.50]. The star product is associative and hence one may define by iteration the star product (Γ, o) of a sequence of rooted graphs (Γ_i, o_i), i = 1, . . ., N. Suppose further that those graphs are finite and simple. Then the vertex set V of Γ can be regarded as a subset of V_1 × · · · × V_N, and hence the adjacency matrix A_Γ can be regarded as an operator on ℓ²(V_1) ⊗ · · · ⊗ ℓ²(V_N) of the form
(2.15) A_Γ = Σ_{i=1}^N P_1 ⊗ · · · ⊗ P_{i−1} ⊗ A_i ⊗ P_{i+1} ⊗ · · · ⊗ P_N.
Figure 2. Comb product of rooted graphs
2.7. The comb product and its adjacency matrix. Given a graph Γ_1 = (V_1, E_1) and a rooted graph (Γ_2, o_2), the comb product (or rooted product) Γ = Γ_1 ▷ Γ_2 is defined by gluing a copy of Γ_2 to every vertex of Γ_1 at the root o_2. The vertex set is V = V_1 × V_2. If we further specify a root o_1 of Γ_1, then the natural root for the comb product is (o_1, o_2), which makes the comb product associative (but non-commutative) in the category of rooted graphs; see Fig. 2. The adjacency matrix now can be written as A = A_1 ⊗ P_2 + I ⊗ A_2, and by iteration one may define the comb product (Γ, o) of a sequence of rooted graphs (Γ_i, o_i), i = 1, . . ., N. The adjacency matrix A_Γ can then be regarded as an operator on ℓ²(V_1) ⊗ · · · ⊗ ℓ²(V_N), and it has the form
(2.16) A_Γ = Σ_{i=1}^N I ⊗ · · · ⊗ I ⊗ A_i ⊗ P_{i+1} ⊗ · · · ⊗ P_N;
see [12, Proposition 8.38].
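The tensor-product description of the star product adjacency matrix can be sanity-checked on a small example: the closed-walk moments at the root of the glued graph agree with those of the operator A_1 ⊗ P_2 + P_1 ⊗ A_2 at the vector δ_{o_1} ⊗ δ_{o_2}. The graphs below (K_2 and a path P_3, both rooted at vertex 0) are our own choice:

```python
import numpy as np

A1 = np.array([[0, 1], [1, 0]], float)                   # K2, root = vertex 0
A2 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)  # P3, root = endpoint 0

# The star product K2 * P3, glued at the roots, is the path a - root - b - c:
A_star = np.array([[0, 1, 1, 0],
                   [1, 0, 0, 0],
                   [1, 0, 0, 1],
                   [0, 0, 1, 0]], float)

# Tensor model: T = A1 (x) P2 + P1 (x) A2, Pi = projection onto delta_root
P1 = np.zeros((2, 2)); P1[0, 0] = 1
P2 = np.zeros((3, 3)); P2[0, 0] = 1
T = np.kron(A1, P2) + np.kron(P1, A2)

e_root = np.zeros(4); e_root[0] = 1                      # root of the glued graph
e_tensor = np.zeros(6); e_tensor[0] = 1                  # delta_root (x) delta_root

# Closed-walk moments at the root agree in both pictures.
for n in range(1, 9):
    m_graph = e_root @ np.linalg.matrix_power(A_star, n) @ e_root
    m_model = e_tensor @ np.linalg.matrix_power(T, n) @ e_tensor
    assert np.isclose(m_graph, m_model)
```

The operator T leaves the subspace spanned by the star product vertices invariant and acts there exactly as the glued graph, which is why all moments match.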
For comb products of identical rooted graphs, Accardi, Ben Ghorbal and Obata used the monotone independence satisfied by the summands in (2.16) in order to study the asymptotics of the Green function of Γ as N → ∞; see the original article [1, Theorem 5.1] or the book [12, Theorem 8.40]. On the other hand, for the star product, the summands in (2.15) are Boolean independent, which provides another type of asymptotics of the Green function; see the original article of Obata [15, Theorem 3.7] or the book [12, Theorem 8.53].
In the present paper we study the asymptotic behavior of eigenvalues or empirical eigenvalue distributions of A Γ for large N using the asymptotics of the characteristic polynomial φ Γ (z) or the Cauchy transform g Γ (z).
2.8. Identities for the star product. For the sake of notational convenience we denote by Γ_1 ⋆ Γ_2 the star product of two rooted graphs (Γ_1, o_1) and (Γ_2, o_2). The Green function of the star product satisfies the following relation, which follows from the decomposition (2.15) of the adjacency matrix into Boolean independent operators and the linearization formula for Boolean convolution in [23, Section 2]; see also [17] for another proof of the latter.
Proposition 2.1. For rooted graphs (Γ_1, o_1) and (Γ_2, o_2) the following formula holds:
(2.17) 1/G_{Γ_1 ⋆ Γ_2}(z) = 1/G_{Γ_1}(z) + 1/G_{Γ_2}(z) − z.
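The Boolean linearization of the reciprocal Green functions, in the form 1/G_{Γ_1⋆Γ_2}(z) = 1/G_{Γ_1}(z) + 1/G_{Γ_2}(z) − z, admits a minimal numerical check; the example graphs are our own choice:

```python
import numpy as np

def green(A, z, root=0):
    """Green function G(z) = <(zI - A)^{-1} e_root, e_root>."""
    d = A.shape[0]
    return np.linalg.inv(z * np.eye(d) - A)[root, root]

A1 = np.array([[0, 1], [1, 0]], float)                   # K2, root 0
A2 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)  # P3, root = endpoint 0
# K2 * P3 glued at the roots is the path a - root - b - c:
A_star = np.array([[0, 1, 1, 0],
                   [1, 0, 0, 0],
                   [1, 0, 0, 1],
                   [0, 0, 1, 0]], float)

z = 2.0 + 0.5j                                           # any point off the spectra
lhs = 1 / green(A_star, z)
rhs = 1 / green(A1, z) + 1 / green(A2, z) - z
assert np.isclose(lhs, rhs)
```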
The Cauchy transform of the star product is computed by the formula below.
Proposition 2.2. For rooted graphs (Γ_1, o_1) and (Γ_2, o_2) the following formula holds:
(2.18) g_{Γ_1 ⋆ Γ_2}(z) = g_{Γ_1}(z) + g_{Γ_2}(z) + (d/dz) log ( G_{Γ_1}(z) G_{Γ_2}(z) / G_{Γ_1 ⋆ Γ_2}(z) ).
Remark 2.3. Later we will give two alternative proofs in a more general setting; see Theorem 4.2.
Proof. The key is the simple identity
(2.19) φ_{(Γ_1 ⋆ Γ_2)\o}(z) = φ_{Γ_1\o_1}(z) φ_{Γ_2\o_2}(z)
for the star product, which follows from the fact that the removal of the root splits the graph into the two disjoint components Γ_1 \ o_1 and Γ_2 \ o_2. Using the Schur identity (2.11) we can rewrite (2.19) as
(2.20) φ_{Γ_1 ⋆ Γ_2}(z) G_{Γ_1 ⋆ Γ_2}(z) = φ_{Γ_1}(z) G_{Γ_1}(z) φ_{Γ_2}(z) G_{Γ_2}(z),
and taking the logarithmic derivative of (2.20) together with (2.2) yields (2.18).
Finally, the characteristic polynomial of the star product satisfies the following identity proved by Schwenk, for which we give an alternative proof.
Theorem 2.4 ([19]). For rooted graphs (Γ_1, o_1) and (Γ_2, o_2),
(2.21) φ_{Γ_1 ⋆ Γ_2}(x) = φ_{Γ_1}(x) φ_{Γ_2\o_2}(x) + φ_{Γ_1\o_1}(x) φ_{Γ_2}(x) − x φ_{Γ_1\o_1}(x) φ_{Γ_2\o_2}(x).
Proof. Equations (2.20) and (2.17) give rise to
φ_{Γ_1 ⋆ Γ_2}(x) = φ_{Γ_1}(x) G_{Γ_1}(x) φ_{Γ_2}(x) G_{Γ_2}(x) (1/G_{Γ_1}(x) + 1/G_{Γ_2}(x) − x).
Substituting the formula (2.11), G_{Γ_i}(x) = φ_{Γ_i\o}(x)/φ_{Γ_i}(x), into the above yields the desired formula.
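Schwenk's coalescence identity, in the form φ_{Γ_1⋆Γ_2} = φ_{Γ_1} φ_{Γ_2\o} + φ_{Γ_1\o} φ_{Γ_2} − x φ_{Γ_1\o} φ_{Γ_2\o}, can also be verified numerically; the graph choices below are ours:

```python
import numpy as np

def charpoly(A, x):
    """Evaluate phi(x) = det(xI - A)."""
    return np.linalg.det(x * np.eye(A.shape[0]) - A)

A1 = np.array([[0, 1], [1, 0]], float)                   # Gamma1 = K2, root 0
A2 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)  # Gamma2 = P3, root 0
# Their star product, glued at the roots, is the path a - root - b - c:
A_star = np.array([[0, 1, 1, 0],
                   [1, 0, 0, 0],
                   [1, 0, 0, 1],
                   [0, 0, 1, 0]], float)

x = 1.7
phi1, phi1o = charpoly(A1, x), charpoly(A1[1:, 1:], x)   # phi_{G1}, phi_{G1 \ o}
phi2, phi2o = charpoly(A2, x), charpoly(A2[1:, 1:], x)
lhs = charpoly(A_star, x)
rhs = phi1 * phi2o + phi1o * phi2 - x * phi1o * phi2o
assert np.isclose(lhs, rhs)
```

Deleting the root (index 0) from the factors corresponds to dropping the first row and column of each adjacency matrix.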
2.9. Identities for the comb product. In this section, the comb product is denoted simply by Γ_1 ▷ Γ_2, the root being omitted for simplicity, for a graph Γ_1 and a rooted graph (Γ_2, o_2). The relation between the Green functions is simple and follows from the decomposition (2.16) of the adjacency matrix together with Muraki's formula [14, Theorem 3.1]; see also [16, Theorem 3.2] for another proof of Muraki's formula.
Proposition 2.5. For rooted graphs (Γ_1, o_1) and (Γ_2, o_2) the following formula holds:
(2.22) G_{Γ_1 ▷ Γ_2}(z) = G_{Γ_1}(1/G_{Γ_2}(z)).
For the characteristic polynomial Schwenk proved the following relation by combinatorial arguments; we give an algebraic proof based on the simpler relation (2.22).
Theorem 2.6 ([19, Theorem 5]). Let Γ be a graph on d vertices and (H, o) be a rooted graph. Then
(2.23) φ_{Γ ▷ H}(x) = φ_{H\o}(x)^d φ_Γ( φ_H(x)/φ_{H\o}(x) ).
Proof. We proceed by induction. Fix an arbitrary vertex o of Γ as a root. Then removing the root o from Γ ▷ H splits off an extra copy of H \ o (cf. Fig. 2), and therefore φ_{(Γ ▷ H)\o}(x) = φ_{(Γ\o) ▷ H}(x) φ_{H\o}(x). We proceed with identities (2.11) and (2.22) to conclude by induction. Finally, formula (2.23) gives rise to an equivalent formula for the renormalized Cauchy transform,
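Both comb product identities can be checked numerically, assuming the rooted-product formula in the form φ_{Γ▷H} = φ_{H\o}^d φ_Γ(φ_H/φ_{H\o}) and the monotone Green function relation G_{Γ▷H}(z) = G_Γ(1/G_H(z)); the graph choices below are our own:

```python
import numpy as np

A1 = np.array([[0, 1], [1, 0]], float)                   # Gamma = K2 (d = 2), root 0
A2 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)  # H = P3, root = endpoint 0
P2 = np.zeros((3, 3)); P2[0, 0] = 1

# Comb product adjacency on l^2(V1) (x) l^2(V2): A = A1 (x) P2 + I (x) A2
A_comb = np.kron(A1, P2) + np.kron(np.eye(2), A2)

def charpoly(A, x):
    return np.linalg.det(x * np.eye(A.shape[0]) - A)

def green(A, z):
    return np.linalg.inv(z * np.eye(A.shape[0]) - A)[0, 0]

# Schwenk's rooted-product formula
x = 2.3
phiH, phiHo = charpoly(A2, x), charpoly(A2[1:, 1:], x)
assert np.isclose(charpoly(A_comb, x), phiHo ** 2 * charpoly(A1, phiH / phiHo))

# Monotone relation for the Green functions at the root (0, 0)
z = 2.3 + 0.4j
assert np.isclose(green(A_comb, z), green(A1, 1 / green(A2, z)))
```

Here the comb product of K_2 and P_3 is simply the path on six vertices, which makes the check easy to follow by hand as well.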
which can be rewritten as , where d is the number of vertices of H.

Cyclic-Boolean independence
In order to study the eigenvalues of the adjacency matrix of star product graphs, we will compute traces of powers of the adjacency matrix.These computations can be abstracted and formulated as a new notion of independence, which we call cyclic-Boolean independence.
3.1. Definition and example. The definition of cyclic-Boolean independence is motivated by the star product from Section 2.6, which can be extended to the general setting of Hilbert spaces as follows.
Example 3.1. Let H_i, i ∈ N, be Hilbert spaces with distinguished unit vectors ξ_i ∈ H_i, let P_i : H_i → H_i be the orthogonal projection onto Cξ_i, let T(H_i) be the *-algebra of trace-class operators on H_i, and let ϕ_i be the vector state on B(H_i) defined by ϕ_i(X) = ⟨Xξ_i, ξ_i⟩. Consider the tensor product H = H_1 ⊗ · · · ⊗ H_N with the vector state ϕ associated with ξ = ξ_1 ⊗ · · · ⊗ ξ_N, and the embeddings π_i : B(H_i) → B(H), π_i(X) = P_1 ⊗ · · · ⊗ P_{i−1} ⊗ X ⊗ P_{i+1} ⊗ · · · ⊗ P_N. The family of *-subalgebras {π_i(B(H_i))}_{i=1}^N is Boolean independent in (B(H), ϕ); e.g. see [12, Theorem 8.8]. Furthermore, we compute the mixed moments with respect to the trace. A key formula is (3.2), which computes the trace of a cyclically alternating product: for any cyclically alternating tuple (k_1, . . ., k_n), the trace factorizes into the vector states of the factors.
Let us promote this identity to an abstract concept.
Definition 3.2. Let A be a *-algebra over C, ϕ a positive linear functional on A and ω a positive tracial linear functional on A. The triplet (A, ϕ, ω) is called a cyclic non-commutative probability space (cncps). The distribution of a self-adjoint element a ∈ A is the data of the two moment sequences (ϕ(a^n))_{n≥1} and (ω(a^n))_{n≥1}.
Definition 3.3. A family {A_k}_{k∈K} of *-subalgebras of A is said to be cyclic-Boolean independent if (i) it is Boolean independent with respect to ϕ, and (ii) for any n ≥ 1, any cyclically alternating tuple (k_1, . . ., k_n) ∈ K^n and any choice of a_j ∈ A_{k_j}, the mixed moment factorizes as ω(a_1 a_2 · · · a_n) = ϕ(a_1)ϕ(a_2) · · · ϕ(a_n). A family of elements {a_k}_{k∈K} of A is said to be cyclic-Boolean independent if this is the case for {A_k}_{k∈K}, where A_k is the *-subalgebra generated by a_k without unit.
Example 3.4. Suppose that {a, b, c} is cyclic-Boolean independent in (A, ϕ, ω). Then, for instance, ω(abcabc) = ϕ(a)²ϕ(b)²ϕ(c)², since the tuple (a, b, c, a, b, c) is cyclically alternating. Another operator model occurs on star products of Hilbert spaces.
Example 3.5. Let H_i be separable Hilbert spaces with distinguished unit vectors ξ_i as above and H̊_i = (Cξ_i)^⊥. The star product of the Hilbert spaces H_i is the direct sum H = Cξ ⊕ ⊕_i H̊_i. Then each H_i can be identified with the subspace Cξ ⊕ H̊_i ⊆ H, and there is a canonical representation of B(H_i) on H which acts by simply annihilating the complement of H_i. More precisely, we decompose H as a direct sum H = H_i ⊕ H_i^⊥, where H_i^⊥ = ⊕_{j≠i} H̊_j, and define the representation π_i(X) = X ⊕ 0. Then the algebras A_i = π_i(B(H_i)) are Boolean independent with respect to the vacuum expectation ϕ = ⟨·ξ, ξ⟩, and moreover, the algebras A_i are cyclic-Boolean independent with respect to the trace. Indeed, let P_0 ∈ B(H) be the projection onto Cξ and P_i the projections onto H̊_i; then P_0, P_1, P_2, . . . form a partition of unity and by definition we have X = (P_0 + P_i)X(P_0 + P_i) for all X ∈ A_i. Let X_1 X_2 · · · X_n be a cyclically alternating product of trace class operators; then the trace factorizes as required. Next we show that any Boolean independent family can be represented on a star product space.
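A small finite-dimensional instance of this model can be simulated directly; the dimensions below (two-dimensional H̊_1 and H̊_2) and the helper names are our own choices. The check confirms the Boolean factorization of the vacuum expectation and the cyclic-Boolean factorization of the trace:

```python
import numpy as np

rng = np.random.default_rng(1)

# H = C xi (+) H1ring (+) H2ring, with dim H1ring = dim H2ring = 2.
# H_i = C xi (+) Hiring sits inside H; pi_i(X) acts as X there, 0 elsewhere.
idx = {1: [0, 1, 2], 2: [0, 3, 4]}

def pi(i, X):
    T = np.zeros((5, 5))
    T[np.ix_(idx[i], idx[i])] = X
    return T

def phi(T):
    return T[0, 0]          # vacuum state <T xi, xi>, with xi = e_0

X, Y, Z = (rng.normal(size=(3, 3)) for _ in range(3))

# Boolean independence: vacuum expectation of an alternating product factorizes.
w = pi(1, X) @ pi(2, Y) @ pi(1, Z)
assert np.isclose(phi(w), X[0, 0] * Y[0, 0] * Z[0, 0])

# Cyclic-Boolean independence: the trace of a cyclically alternating product
# also factorizes into vacuum expectations in this model.
assert np.isclose(np.trace(pi(1, X) @ pi(2, Y)), X[0, 0] * Y[0, 0])
```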

3.2. Construction of a cyclic-Boolean trace. Let (A, ϕ) be a noncommutative probability space, where A is a *-algebra and A_i are Boolean independent subalgebras. In the following assume that A is faithfully represented on a Hilbert space H and that the state ϕ is realized as a vector state ϕ(X) = ⟨Xξ, ξ⟩. One way to achieve this under certain conditions is the GNS construction.
Recall that the GNS representation consists of the Hilbert space H_ϕ obtained by completing the quotient space A/N_ϕ, where N_ϕ = {a ∈ A : ϕ(a*a) = 0}.

The action of the GNS representation is π_ϕ(a)[b]_ϕ = [ab]_ϕ.
Lemma 3.6. The GNS representation is faithful if and only if the state ϕ is nondegenerate in the sense that if ϕ(axb) = 0 for all a, b ∈ A, then x = 0.
If A is unital, then the state vector ξ = [1]_ϕ comes for free; otherwise the state must satisfy the Cauchy–Schwarz condition |ϕ(x)|² ≤ Cϕ(x*x) for some fixed constant C in order to allow a positive extension to the unitization of A; see [18, Theorem 4.5.11].
Assuming that A and the state ϕ are faithfully represented on some Hilbert space H, we identify A with a subalgebra of B(H) and we are now going to reconstruct the star product space from this data. Let H_0 = [ξ] = Cξ be the subspace spanned by ξ and P_0 the orthogonal projection onto it. Adjoining this projection to the algebra A and to each subalgebra A_i preserves Boolean independence, and without loss of generality we may assume that P_0 ∈ A_i for every i. Let now Å_i = ker ϕ ∩ A_i; then we can construct the components of the star product space as follows.
(ii) H̊_i ⊥ H̊_j for all i ≠ j.
Proof. It suffices to verify orthogonality on the dense subspaces Å_i ξ.
We now construct the decomposition under the assumption that P_0 ∈ A_i. Denote by P_i the projection onto H̊_i, by Â the subalgebra of A generated by (A_i)_{i∈I}, and let Ĥ = [Âξ] ⊆ H be the closed invariant subspace generated by ξ. Let further P̂ and P̂^⊥ be the respective projections onto the space Ĥ and its orthogonal complement Ĥ^⊥.
Proposition 3.8.
(i) For each i the space H_i is invariant under A_i, i.e., for X ∈ A_i,
(3.3) X(P_0 + P_i) = (P_0 + P_i)X(P_0 + P_i).
(ii) For i ≠ j the subspace H̊_j is annihilated by A_i, i.e., for X ∈ A_i,
(3.4) XP_j = P_j X = 0.
(iii) The space Ĥ is the closed linear span of the subspaces A i ξ, i.e., Proof.
(i) This is an immediate consequence of the definition.
(ii) It suffices to show that XP_j = 0, i.e., that X vanishes on H̊_j. We verify this on the dense subspace Å_j ξ. Indeed, let Y ∈ Å_j; then ∥XYξ∥² = ϕ(Y*X*XY) = ϕ(Y*)ϕ(X*X)ϕ(Y) = 0 by Boolean independence, since ϕ(Y) = 0. (iii) The space Ĥ is the closure of the span of the alternating words X_1 X_2 · · · X_n ξ with X_j ∈ A_{i_j} and i_j ≠ i_{j+1}. We claim that such a word satisfies the relation (3.5). We proceed by induction; the claim is obviously true for n = 1.
Corollary 3.9. Every X ∈ Â has the block decomposition
(3.6) X = P̂ X P̂ + P̂^⊥ X P̂^⊥,
and more precisely every X ∈ A_i has the block decomposition
(3.7) X = (P_0 + P_i)X(P_0 + P_i) + P̂^⊥ X P̂^⊥.
Theorem 3.10. The functional ω(X) = Tr(P̂ X P̂) is a semifinite trace on the algebra Â, and the subalgebras A_i ∩ T(H) are cyclic-Boolean independent with respect to ω.
Proof. ω is a trace on Â because P̂ is in the commutant of Â. Now let X_1 X_2 · · · X_n be a cyclically alternating product with X_j ∈ A_{i_j} for j = 1, 2, . . ., n; then the required factorization follows as in Example 3.5.
Remark 3.11. Conversely, assume that subalgebras A_1 and A_2 are cyclic-Boolean independent in a cncps (A, ϕ, ω). Assume further that A is generated by A_1 and A_2 and that there is a projection p ∈ A such that pap = ϕ(a)p for a ∈ A and ω(p) = ϕ(p) = 1. Then ϕ(x) = ω(px) for all x ∈ A.

Convolution and central limit theorem
4.1. Eigenvalues and the renormalized Cauchy transform. For a self-adjoint trace class operator a, the renormalized Cauchy transform is
g̃_a(z) = Tr((z − a)^{−1} − z^{−1}) = Σ_{i≥1} λ_i/(z(z − λ_i)),
where {λ_i}_{i≥1} is the multiset of eigenvalues of a. In particular, the non-zero eigenvalues of a can be detected from g̃_a as poles. If the Hilbert space is finite-dimensional, then we also have the formula g̃_a(z) = g_a(z) − dim(H)/z, and hence lim_{z→0} z g̃_a(z) = (the multiplicity of the eigenvalue zero) − dim(H).
Remark 4.1. By [5, Corollary 2.2], the tracial moments Tr(a^n) for all but finitely many natural numbers n determine the eigenvalues of a. So, for any p ∈ N, we can generalize the above setting to the Schatten class S_p by using a truncated generating function.
Let a and b be cyclic-Boolean independent in (A, ϕ, ω). It is known [23] (and will be shown in Remark 4.4 below) that the Green function of a + b can be computed via the formula
(4.3) 1/G_{a+b}(z) = 1/G_a(z) + 1/G_b(z) − z,
equivalently K_{a+b}(z) = K_a(z) + K_b(z), where
(4.4) K_a(z) = z − 1/G_a(z)
is the Boolean cumulant transform. The next theorem generalizes this identity to an analogous formula for the generating function g̃_{a+b}(z), which gives information on the eigenvalues of a + b.
Theorem 4.2. Let a and b be cyclic-Boolean independent elements. Then the renormalized Cauchy transform of their sum is
g̃_{a+b}(z) = g̃_a(z) + g̃_b(z) + (d/dz) log ( z G_a(z) G_b(z) / G_{a+b}(z) );
equivalently, the function
(4.5) h_a(z) = g̃_a(z) + (d/dz) log(z G_a(z))
satisfies h_{a+b} = h_a + h_b.
Remark 4.3. While h linearizes independent sums and is useful for analyzing convolutions, we will later introduce a modification which deserves to be called the cyclic-Boolean cumulant transform; see Section 5.
Algebraic proof. We expand the power (a + b)^n and regroup the resulting monomials into those ending in a and those ending in b; applying ω and using cyclic-Boolean independence yields a recursion. Multiplying the resulting identity by z^{−n−1} and summing over n yields the claimed formula.
Analytic proof in the setting of Example 3.5. Under the assumption that our *-algebras are represented as trace class operators on the star product Hilbert space equipped with the vacuum state ϕ = ⟨·ξ, ξ⟩ and the trace ω = Tr, we can use this decomposition and represent the involved operators as block operator matrices. In other words, (A + B)˚ = Å ⊕ B̊ is a direct sum, and therefore g̃_{(A+B)˚}(z) = g̃_Å(z) + g̃_B̊(z), and we conclude with the identity (2.14).
Remark 4.4. The idea of the above algebraic/analytic proofs can also be used to verify the known formula (4.3). For example, the Banachiewicz formula (2.9) applied to the decomposition (4.6) yields the Green function in the form G_{A+B}(z) = (1/G_A(z) + 1/G_B(z) − z)^{−1}; combining this with the corresponding formulas for G_A and G_B we obtain (4.3).

4.2. Examples from star product graphs. For a rooted graph (Γ, o), its N-fold star product (Γ^{⋆N}, o_N) has as adjacency matrix the sum of N cyclic-Boolean independent copies of the adjacency matrix of Γ; see (2.15). Therefore, Theorem 4.2 and (2.17) imply that
(4.7) g̃_{Γ^{⋆N}}(z) = N h_Γ(z) − (d/dz) log( z G_{Γ^{⋆N}}(z) ), with 1/G_{Γ^{⋆N}}(z) = N/G_Γ(z) − (N − 1)z.
Example 4.5 (star graph). The star graph S_N on N + 1 vertices is the N-fold star product of K_2 rooted at one of its endpoints. The eigenvalues of the adjacency matrix of K_2 are ±1, and hence its Green function is G_{K_2}(z) = z/(z² − 1), where the latter formula can be computed via (2.11). The renormalized Cauchy transform of (S_N, o_N) may then be calculated from (4.7), and the resulting Cauchy transform recovers the fact that the multiset of eigenvalues of the adjacency matrix of S_N is {√N, −√N} together with 0 of multiplicity N − 1.
Example 4.6 (friendship graph). The friendship graph F_N on 2N + 1 vertices is the N-fold star product of the triangle K_3. The renormalized Cauchy transform may again be calculated from (4.7). This recovers the fact that the multiset of eigenvalues of the adjacency matrix of F_N is {(1 + √(8N + 1))/2, (1 − √(8N + 1))/2} together with 1 of multiplicity N − 1 and −1 of multiplicity N.
4.3. Cyclic-Boolean central limit theorem. Since we have an appropriate linearization (4.5) for cyclic-Boolean convolution, we are able to determine the central limit law.
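The closed-form spectra of the star graph and the friendship graph recovered above are classical and easy to confirm numerically; the constructions below are our own sketch:

```python
import numpy as np

def star_graph(N):
    """Star S_N: one hub joined to N leaves (the N-fold star product of K2)."""
    A = np.zeros((N + 1, N + 1))
    A[0, 1:] = 1
    A[1:, 0] = 1
    return A

def friendship_graph(N):
    """F_N: N triangles glued at a common hub (N-fold star product of K3)."""
    A = np.zeros((2 * N + 1, 2 * N + 1))
    for k in range(N):
        i, j = 1 + 2 * k, 2 + 2 * k
        A[0, i] = A[i, 0] = A[0, j] = A[j, 0] = A[i, j] = A[j, i] = 1
    return A

N = 6
ev = np.sort(np.linalg.eigvalsh(star_graph(N)))
expected = np.sort(np.concatenate(([-np.sqrt(N), np.sqrt(N)], np.zeros(N - 1))))
assert np.allclose(ev, expected)

ev = np.sort(np.linalg.eigvalsh(friendship_graph(N)))
r = np.sqrt(8 * N + 1)
expected = np.sort(np.concatenate(([(1 - r) / 2, (1 + r) / 2],
                                   np.full(N - 1, 1.0), np.full(N, -1.0))))
assert np.allclose(ev, expected)
```

Note the large spectral gap in both families: two outlying eigenvalues of order √N, with the bulk pinned at 0 (resp. at ±1), as predicted by the convolution formula.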
Theorem 4.7. For each N ∈ N, let {a_i}_{i=1}^N be self-adjoint cyclic-Boolean independent random variables in a cncps (A_N, ϕ_N, ω_N). Assume that, for each fixed k ∈ N, the moments ϕ_N(a_i^k) do not depend on i or N, with ϕ_N(a_i) = 0 and ϕ_N(a_i²) = 1, and also that the tracial moments ω_N(a_i^k) do not depend on i or N. Then, for the normalized sum s_N = (a_1 + · · · + a_N)/√N, the limits of the distribution exist.
Proof. The Boolean central limit theorem [23] applies to the ϕ_N-distribution, and the linearization (4.5) handles the tracial part.
The limit law exhibits a large spectral gap:
Corollary 4.8. In addition to the setting of Theorem 4.7, suppose that A_N = T(H_N) for some Hilbert space H_N and ω_N = Tr_{H_N}. Let λ_N and μ_N be the largest and smallest eigenvalues of s_N, respectively. The following assertions hold: (i) the multiplicities of λ_N and μ_N are both one for sufficiently large N; (ii) λ_N converges to 1 and μ_N converges to −1 as N → ∞; (iii) the remaining eigenvalues accumulate around 0.
Corollary 4.10 applies this to the N-fold star product of a rooted graph (Γ, o); a normalization involving deg(o) appears because of the variance, where V is the vertex set of Γ.
Remark 4.11. In the setting of Corollary 4.10 it is already known that, according to the Boolean central limit theorem, the distribution of (deg(o)N)^{−1/2} A_N with respect to the vector state ϕ_N = ⟨·δ_o, δ_o⟩ converges weakly to (δ_{−1} + δ_1)/2. This fact entails an intuitive consequence of Corollary 4.10: the vector δ_o in the tensor product Hilbert space ℓ²(V)^{⊗N} is almost orthogonal to the subspace spanned by eigenvectors corresponding to small eigenvalues; equivalently, δ_o is almost contained in the two-dimensional subspace spanned by the eigenvectors corresponding to the eigenvalues near ±1. (i) For the star graph, the eigenvectors f_1 and f_2 corresponding to the eigenvalues ±√N can be written down explicitly, and hence the function δ_o, which corresponds to the vector (1, 0, 0, . . ., 0), is exactly contained in the subspace spanned by f_1 and f_2. (ii) For the friendship graph on 2N + 1 vertices, its adjacency matrix divided by √(2N) has eigenvalues near ±1 and a bulk accumulating at 0.
Cyclic-Boolean cumulants
We can modify h_a(z) by adding the Boolean cumulants so as to delete the term −nϕ(a^n) from h_n(a). We switch from g̃_a and G_a to the corresponding moment generating functions. The Boolean cumulant transform (4.4) is then expressed accordingly, and we introduce a new generating function c_a(z) with coefficients c_n(a). For general n ≥ 2, there exists a universal polynomial P_n(x_1, . . ., x_{n−1}), depending only on n, such that c_n(a) = ω(a^n) + P_n(ϕ(a), . . ., ϕ(a^{n−1})).

Cyclic-interval partitions.
Cyclic-Boolean independence gives rise to an exchangeability system and we can define and compute the (multivariate) cyclic-Boolean cumulants using the methods of [13,11].
The relevant partition structure turns out to be that of cyclic-interval partitions, which were already discussed in [7] in the search for notions of independence similar to the Boolean and monotone ones, but such that the algebra of scalars, C, is independent from any other algebra.
Before embarking on this we recall some basic concepts on set partitions.

(i) We write [k] = {1, 2, . . ., k} and denote by P(k) the set of all set partitions of [k]. (ii) A set of consecutive numbers {i, i + 1, . . ., j} ⊆ [k] is called an interval, and a set partition of [k] is called an interval partition if all its blocks are intervals. The set of the interval partitions is denoted by I(k). (iii) For set partitions σ, π ∈ P(k) we write σ ≤ π if every block of σ is a subset of a block of π. This makes P(k) a poset. The trivial set partition {[k]} is the maximum of P(k), which is denoted by 1_k. (iv) A tuple (i_1, . . ., i_k) ∈ N^k induces a unique equivalence relation ∼ on [k] by the requirement that p ∼ q holds if and only if i_p = i_q. The corresponding set partition is called the kernel set partition, denoted by κ(i_1, . . ., i_k).
We will see that the cyclic-Boolean cumulants c_π (defined in the next section) vanish identically unless π is a cyclic-interval partition, i.e., a set partition of [n] all of whose blocks are intervals on the cycle Z/nZ; the set of such partitions is denoted by CI(n). As already noticed in [7, Corollary 1], it is not difficult to see that the number of cyclic-interval partitions is |CI(n)| = 2^n − n. To see this, the most convenient picture of cyclic-interval partitions is obtained by actually drawing them on a circle as shown in Fig. 5. Then it is clear that a cyclic-interval partition is uniquely determined by the set of separators of the blocks. For the maximal partition 1_n this set is empty, while for all other cyclic-interval partitions there must be at least two separators.
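The count |CI(n)| = 2^n − n can be verified by brute force for small n; the enumeration below (our own helper names) generates all set partitions via restricted growth strings and keeps those whose blocks are runs of consecutive residues mod n:

```python
def set_partitions(n):
    """All set partitions of {0, ..., n-1} via restricted growth strings."""
    def rec(i, maxb, current):
        if i == n:
            yield list(current)
            return
        for b in range(maxb + 1):
            current.append(b)
            yield from rec(i + 1, max(maxb, b + 1), current)
            current.pop()
    for rgs in rec(0, 0, []):
        blocks = {}
        for i, b in enumerate(rgs):
            blocks.setdefault(b, []).append(i)
        yield list(blocks.values())

def is_cyclic_interval(block, n):
    """True iff the block is a run of consecutive residues mod n."""
    s = set(block)
    if len(s) == n:
        return True
    # a proper cyclic interval of size k has exactly k-1 successors inside
    return sum(1 for i in s if (i + 1) % n in s) == len(s) - 1

for n in range(1, 9):
    count = sum(1 for p in set_partitions(n)
                if all(is_cyclic_interval(b, n) for b in p))
    assert count == 2 ** n - n
```

For example, for n = 4 exactly three of the fifteen set partitions are excluded, namely those containing the "crossing" pair {1, 3} or {2, 4} as a block.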

Multivariate cumulants.
In order to avoid the discussion of positivity (see Remark 5.4 below), we notice that one can easily extend the definition of independence to a purely algebraic setting without positivity. Thus in this section we will focus on an algebraic cyclic probability space (A, ϕ, ω) without positivity structure, that is, A is an algebra over C, ϕ is a linear functional and ω is a tracial linear functional.
Take copies A_k of A and define the nonunital algebraic free product U. Let π^{(i)} : a → a^{(i)} denote the embedding of A into U as the i-th copy A_i. By universality we can define a pair of functionals (φ̂, ω̂) on U, prescribed on alternating tuples: for n ≥ 1 and an alternating tuple (k_1, . . ., k_n) ∈ N^n, the values are determined by the independence requirements. One can check that ω̂ is a trace on U, that ϕ = φ̂ ∘ π^{(i)} and ω = ω̂ ∘ π^{(i)}, and that the family of subalgebras {π^{(i)}(A)}_{i=1}^∞ is cyclic-Boolean independent in (U, φ̂, ω̂).

Remark 5.4 (Positivity). It is not clear under what conditions the product trace ω̂ preserves positivity. First observe that the trivial example ω = 0 shows that some conditions are necessary. Indeed, if ω = 0 and a, b ∈ A are self-adjoint, then ω̂((a^{(1)} + b^{(2)})²) = 2ϕ(a)ϕ(b), which can be negative, although both ϕ and ω are positive.
This example suggests that in order to expect positivity of ω̂ one should at least require ω(x*x) ≥ |ϕ(x)|² or ω(x*x) ≥ ϕ(x*x). Both conditions, however, are not promoted to the cyclic free product. Although the proof of [3, Theorem 2.2] adapts well to show ω̂(x*x) ≥ |φ̂(x)|² for x ∈ U_{11} or x ∈ U_{22}, where U_{ii} denotes the span of alternating words beginning and ending with a letter from the i-th copy, the following example shows that positivity fails on elements mixing these subspaces. Choose x, y, z ∈ A_1 and w ∈ U_{22} and put a = x + ywz. Note that ywz is alternating, and we compute ω̂(a*a). Now choose x, y, z, w such that the dominant terms cancel, which can be made negative by an appropriate rescaling of x.
The positivity condition ω(x*x) ≥ |ϕ(x)|² proves to be inappropriate as well. In the last specification, we further choose x so that |ϕ(x)|² < ϕ(x²); taking λ = −ϕ(x)ϕ(w)/(ϕ(x²) − |ϕ(x)|²) will make this value negative.
Both pairs (U, φ̂) and (U, ω̂) are exchangeability systems in the sense of [13, Definition 1.8], except that we are not assuming unitality, which however is not essential for the theory of cumulants. For the first pair we will get the Boolean cumulants B_π, which are well known; therefore we will focus on (U, ω̂) from now on. The exchangeability of (U, ω̂) means that, for any n ∈ N and a_1, . . ., a_n ∈ A, the value ω̂(a_1^{(i_1)} · · · a_n^{(i_n)}) ∈ C is determined by the kernel set partition π = κ(i_1, . . ., i_n). This value is denoted by ω_π(a_1, . . ., a_n), which then gives an n-linear functional ω_π. The cumulants are defined by Möbius inversion,
(5.3) c_π(a_1, . . ., a_n) = Σ_{ρ ≤ π} μ(ρ, π) ω_ρ(a_1, . . ., a_n),
where μ is the Möbius function for the poset P(n). By [11, Lemma 4.18 (ii)] or imitating the proof of [13, Proposition 4.11], we can prove that c_π = 0 if π ∈ P(n) \ CI(n). This vanishing property and the Möbius inversion of (5.3) imply that
(5.4) ω(a_1 · · · a_n) = Σ_{π ∈ CI(n)} c_π(a_1, . . ., a_n).
To compute the non-vanishing cumulants we distinguish three cases. (i) Let π ∈ CI(n) and first assume that π ∈ I(n). Now if π < 1_n then 1 ≁_π n and moreover 1 ≁_ρ n for any ρ satisfying ρ ≤ π. This allows us to replace ω̂ by φ̂ to obtain c_π = B_π; see [13, Proposition 4.11] for the last equality. (ii) Let us assume next that π ∈ CI(n) \ I(n). This means that π < 1_n and that 1 ∼_π n. In this case we cannot immediately replace ω by ϕ, but first must use the traciality of ω̂ and rotate the partition π into an element of I(n). Indeed, fix a cyclic permutation σ ∈ S_n such that σ • π ∈ I(n).
Then 1 ≁_{σ•π} n and also 1 ≁_{σ•ρ} n for any ρ ≤ π, and we have c_π(a_1, . . ., a_n) = B_{σ•π} applied to the correspondingly rotated tuple; notice that the rotation cannot be reversed now because the Boolean cumulants are non-tracial. (iii) Finally, if π = 1_n there is no direct formula, but we infer from the moment-cumulant formula (5.4) that
(5.5) c_{1_n}(a_1, . . ., a_n) = ω(a_1 · · · a_n) − Σ_{π ∈ CI(n), π < 1_n} c_π(a_1, . . ., a_n),
where for each π an appropriate cyclic permutation σ is chosen.
Remark 5.5. Note that for univariate cumulants the rotation does not change the value of the cumulant and we can write
c_n(a) = ω(a^n) − Σ_{π ∈ CI(n), π < 1_n} b_π(a),
where b_π(a) = B_π(a, a, . . ., a) and c_n(a) = c_{1_n}(a, a, . . ., a).
We can see that the definition of c_n(a) in Remark 5.5 coincides with that in (5.2). This can be confirmed from the definition and uniqueness of cumulants, but here we directly prove the formula (5.1) for c_n(a) = c_{1_n}(a, a, . . ., a) using the recurrence relation (5.5). Decomposing CI(n) into I(n) and CI(n) \ I(n), we obtain a recursion; multiplying it by z^n and taking the sum over n in (5.5) yields the claim.

6. Cyclic-Boolean infinite divisibility
This section is devoted to the definition and classification of infinite divisibility. Due to the lack of a precise notion of positivity (see Remark 5.4), we are not able to treat general *-algebras with a state and a tracial linear functional. Hence, we give the definition of infinitely divisible distributions in the special setting of operators on Hilbert spaces, where the linear functional ω is chosen to be the trace.

Definition 6.1. Let H be a Hilbert space and ϕ a state on B(H). An element a ∈ T(H)_sa is said to be cyclic-Boolean infinitely divisible if for any n ∈ N there exist a Hilbert space H_n, a state ϕ_n on B(H_n) and cyclic-Boolean i.i.d. elements a_1, . . ., a_n ∈ T(H_n)_sa such that a with respect to (ϕ, Tr_H) has the same distribution as a_1 + · · · + a_n with respect to (ϕ_n, Tr_{H_n}).
Suppose that a is a trace class selfadjoint operator and cyclic-Boolean ID. For each n ≥ 2, a equals in distribution the sum of cyclic-Boolean i.i.d. random variables a_{n,1}, . . ., a_{n,n}; let t = 1/n and write g_t = g_{a_{n,i}} and G_t = G_{a_{n,i}}. Let {λ_i}_{i∈I} be the set of mutually distinct eigenvalues of a and m_i the multiplicity of λ_i. Setting I_0 = {i ∈ I : Moreover, let E_a be the spectral decomposition of a and p_i = ϕ(E_a({λ_i})) ≥ 0; then we have For later use we also set By elementary calculus, we see that G_a has a unique zero in each interval between neighboring poles and has no other zeros (in particular, none off the real line). Hence the set {µ_j}_{j∈J} of zeros of G_a is contained in R and interlaces with {λ_i}_{i∈I}.
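The interlacing of the zeros {µ_j} and the poles {λ_i} of G_a admits a quick numerical sanity check. A minimal sketch (not from the paper; the eigenvalues and weights below are arbitrary illustrative choices):

```python
def G(z, lam, p):
    """Vacuum Green function G_a(z) = sum_i p_i / (z - lambda_i)."""
    return sum(pi / (z - li) for li, pi in zip(lam, p))

def zero_between(f, lo, hi, tol=1e-12):
    """Bisection for the unique zero of f on (lo, hi); G_a is strictly
    decreasing between neighboring poles, so the sign change is unique."""
    a, b = lo + 1e-9, hi - 1e-9
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

lam = [-2.0, -1.0, 1.0, 3.0]   # hypothetical distinct eigenvalues lambda_i
p = [0.1, 0.2, 0.3, 0.4]       # p_i = phi(E_a({lambda_i})), summing to 1
f = lambda z: G(z, lam, p)
# one zero mu in each gap between consecutive poles
mus = [zero_between(f, lam[k], lam[k + 1]) for k in range(len(lam) - 1)]
```

Each computed zero lies strictly between its two neighboring poles, as Lemma 6.2 and the interlacing statement require.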
Lemma 6.2. The factorization

Proof. When the set I is finite, the conclusion is easily proved since G_a is a rational function. We may then assume that I, and hence I_0, is an infinite set. We decompose the set {λ_i}_{i∈I_0} into the positive part and the negative part, where n_± ∈ N ∪ {0, ∞}. By our assumption, n_− or n_+ is infinite. We rewrite the function G_a in the form where The fact that n_− or n_+ is infinite implies that p(ε) > 0 for every ε > 0. Since G_a is a rational function, we have where µ_k^±(ε) is the unique zero of G_a on the interval between λ_k^± and λ_{k+1}^± for k = 1, 2, . . ., n_±(ε) − 1, and µ_{n_±(ε)}^±(ε) is the unique zero of G_a on the interval between 0 and λ_{n_±(ε)}^±. In order to pass to the limit in (6.1), first note the behavior of n_±(ε); this gives the desired formula.
Now we are able to characterize infinitely divisible distributions with respect to cyclic-Boolean convolution. First, notice that taking the logarithmic derivative in Lemma 6.2 yields that As before, let t = 1/n and write g_t = g_{a_{n,i}} and G_t = G_{a_{n,i}}. Let {λ_i(t)}_{i∈I_0(t)} be the set of (mutually distinct) non-zero poles of G_t. The set {λ_i(t)}_{i∈I_0(t)} is exactly the set of zeros of (1 − t)zG_a(z) + t. On the other hand, the set of zeros of G_t is exactly the set of zeros of G_a.
Now, for each non-zero real α the number lim_{z→α} (z − α)g_t(z) is non-negative: it is a positive integer if α is a non-zero eigenvalue of a_{n,1} and zero otherwise. Therefore, to cancel the negative coefficient −(1 − t) above, the only possibility is that each non-zero µ_j be a member of {λ_i}_{i∈I_0} ∪ {λ_i(t)}_{i∈I_0(t)}; but because of the interlacing of the zeros and poles of G_a, one sees that µ_j cannot be contained in {λ_i}_{i∈I_0} ∪ {λ_i(t)}_{i∈I_0(t)}, as it is a zero of G_a.
It is possible that µ_j = λ_i for some i ∈ I_0 \ I_0. In this case, however, for t > 0 sufficiently small (namely, n sufficiently large) the coefficient tm_i − (1 − t) is negative; therefore, we conclude that µ_j = 0 for all j ∈ J, or J = ∅, and hence #J = 0 or 1. This happens only if #I = 0, 1 or 2.
On the other hand, for i ∈ I_0 we have lim_{z→λ_i} (z − λ_i)g_t(z) = t(m_i − 1), which must be a non-negative integer for any t = 1/n. Therefore, we conclude that m_i = 1 for all i ∈ I_0. For i ∈ I_0 \ I_0 we have lim_{z→λ_i} (z − λ_i)g_t(z) = tm_i, which cannot be an integer for sufficiently small t, and hence I_0 = I_0. Now it remains to study the possible cases #I_0 = 0, 1 or 2.
The eigenvalues can be retrieved from the formula .
It is easy to see that the above cases are indeed cyclic-Boolean ID. Thus we arrive at the following.
Theorem 6.3. Let H be a Hilbert space and ϕ be a state on B(H). An element a ∈ T(H)_sa is cyclic-Boolean ID with respect to (ϕ, Tr_H) if and only if a has either (i) only zero eigenvalues (that is, a = 0), (ii) only one non-zero eigenvalue, of multiplicity one, or (iii) exactly two non-zero eigenvalues α, β, both of multiplicity one, with αβ < 0 and the distribution of a with respect to ϕ of the stated form. In the last case, for every n ≥ 2 an n-th root of a has two non-zero eigenvalues α_n, β_n given as the solutions to the stated equation, with the corresponding distribution with respect to the state.

Example 6.4. The matrix A = ( 0 1 ; 1 0 ) has spectrum {−1, 1}, and its distribution with respect to the unit vector e_1 = ᵗ(1, 0) is (δ_{−1} + δ_1)/2. By Theorem 6.
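The conditions of case (iii) can be checked numerically in a toy case. A sketch for the 2 × 2 matrix of Example 6.4, assuming (as in the proof of Theorem 7.12) that it is the adjacency matrix of K_2:

```python
import numpy as np

# Adjacency matrix of K_2 (the matrix of Example 6.4).
a = np.array([[0.0, 1.0], [1.0, 0.0]])
w, v = np.linalg.eigh(a)                 # eigenvalues ascending, eigenvectors in columns

# Non-zero eigenvalues and their weights p_i = |<v_i, e_1>|^2 with e_1 = (1, 0)^t.
nonzero = [x for x in w if abs(x) > 1e-12]
alpha, beta = nonzero
weights = v[0, :] ** 2                   # distribution of a with respect to e_1
```

Here there are exactly two non-zero eigenvalues of multiplicity one with αβ < 0, and the distribution with respect to e_1 puts mass 1/2 on each, matching case (iii).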
Note that this embedding does not preserve the trace class, and therefore the construction is restricted to finite-dimensional spaces. It is known that the family {σ_i(B(H_i))}_{i=1}^N is monotonically independent with respect to ϕ; see [12, Theorem 8.9].
In addition, we can compute moments with respect to the trace. Again formula (3.2) is crucial: for a cyclically alternating tuple (i_1, . . ., i_n) ∈ [N]^n and A_k ∈ B(H_{i_k}), if p ∈ [n] is such that i_{p−1} < i_p > i_{p+1} (with the conventions i_0 = i_n and i_{n+1} = i_1), then direct computations entail the factorization. This example can be abstracted in the following way.

Definition 7.2. Let (A, ϕ, ω) be a cncps, I be a toset, and Î := {−∞} ∪ I be an enlargement of I, where −∞ is the minimum of Î. An ordered family of *-subalgebras {A_i}_{i∈I} of A is said to be cyclic-monotone independent if (i) it is monotonically independent with respect to ϕ, that is, for any n ≥ 2 and any alternating tuple (i_1, . . ., i_n) ∈ I^n (namely (ii) for any n ≥ 2 and cyclically alternating tuple (i_1, . . ., i_n) ∈ I^n (namely

Definition 7.3. Let (A, ϕ, ω) be a cncps and I be a toset. An ordered family of elements {a_i}_{i∈I} of A is said to be cyclic-monotone independent if so is {A_i}_{i∈I}, where A_i is the *-algebra generated by a_i without unit.
Remark 7.5. Cyclic-monotone independence already appeared in the random matrix model in [5] (see also [20, 2] and Section 1), where independence was defined for a pair of *-subalgebras and only for ω.
For a random matrix model for monotone independence see [4].
In [5] the trace functional ω is unbounded, because it can diverge in the large-dimension limit, and therefore a domain for ω was specified. To avoid this problem, in the present paper we focus on finite-dimensional Hilbert spaces and ω = Tr.
It should be noticed that Example 7.1 does not provide an i.i.d. operator model even when H_i = K does not depend on i; for A ∈ B(K) the operators {σ_i(A)}_{i=1}^N are identically distributed with respect to ϕ, but not with respect to ω, because where d = dim(K). In fact we do not know of any non-trivial operator model for cyclic-monotone i.i.d. random variables, and for this reason we do not see any meaningful notions of cumulants or of infinitely divisible distributions.

7.2. Cyclic-monotone convolution. The convolution formula can be verified in several ways.
Theorem 7.6. Let (A, ϕ, ω) be a cncps and a, b ∈ A. Suppose that (a, b) is cyclic-monotone independent. We then have the formula for g_{a+b}: applying ω yields an identity, and multiplying this identity by z^{−n−1} and taking the summation over n yields g_{a+b}. Note here that the stated identity is used above.
and we obtain the block decompositions. We compute the resolvent via the Schur complement. To this end we first compute the lower resolvent in terms of the resolvents of Å and B̊, respectively; then, with the help of Banachiewicz' formula (2.9), the resolvent (7.3) can be written accordingly. Now we plug L^{−1} into Banachiewicz' formula (2.9) for (7.2), and a similar computation yields the companion formula. Therefore, the normalized traces converge as N → ∞ without rescaling of b_N. In order to describe the general situation, we need some concepts on ordered set partitions. For further information on ordered (kernel) set partitions the reader is referred to [11].
For an ordered set partition π of [k] there exists a unique packed word, i.e., a tuple i(π) = (i_1(π), . . ., i_k(π)) ∈ [|π|]^k such that π = ker(i(π)). Using this tuple we define the moment ω(π).

Example 7.9. If π = ({1, 3}, {2}) then i(π) = (1, 2, 1).

Thus the empirical eigenvalue distributions of b_N converge (in the sense of moments) to a probability measure whose k-th moment is the above limit. Of course the empirical eigenvalue distributions of the rescaled sum N^{−1/2}b_N converge weakly to δ_0, which means that the number of eigenvalues of N^{−1/2}b_N outside a fixed neighborhood of 0 is of order o(d_N). Combining this with the monotone CLT, which asserts that the vacuum spectral distribution of N^{−1/2}b_N converges weakly to an arcsine distribution, it turns out that the vacuum vector captures a relatively small number of the eigenvalues of N^{−1/2}b_N lying outside the neighborhood of 0.
The limit moments (7.5) depend on detailed information about the trace and vacuum moments of the original matrix a. This is in sharp contrast with the fact that, if ψ(a) = 0 and ψ(a^2) = 1, then the distribution of the rescaled sum N^{−1/2}b_N with respect to the vacuum state ϕ converges weakly to the same arcsine law.
We come back to the original model of comb product graphs in Section 2.7 (cf. Example 7.1), and compute the limit empirical eigenvalue distribution of the adjacency matrix of the iterated comb product of the complete graph K_2. Even for this simplest graph, the limit moments (7.5) are not explicit; they only satisfy a recurrence relation. Fortunately, we can describe the limit distribution with the help of work of Smyth [21], who defined a distribution function L^+ : [0, ∞) → [0, 1) (denoted F therein) characterized by the property that L^+ is strictly increasing, L^+(0) = 0 and for x > 0.
Let λ^+ be the distribution associated with L^+ and λ be the symmetrization of λ^+. It is known that L^+ is continuous and hence λ has no atoms.
Theorem 7.12. Let A_N be the adjacency matrix of the N-fold comb product of (K_2, o) with itself. Then the empirical eigenvalue distribution of A_N converges weakly to λ as N → ∞.
Proof. In the notation of this section, we are dealing with K = C^2, a = ( 0 1 ; 1 0 ), and ξ = ᵗ(1, 0).
The very definition of ω(π) shows that γ_{n,1} = 2 and ω(π) is either 0 or 2. Below we identify [p] with Z_p, regarded as points on a circle. For p ≥ 2, a maximal arc of a subset B ⊆ Z_p is a maximal cyclic interval I ⊆ Z_p contained in B.
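Maximal arcs can be enumerated directly. A sketch (the helper `maximal_arcs` is ours, not notation from the paper):

```python
def maximal_arcs(B, p):
    """Maximal arcs of B in Z_p: maximal cyclic intervals contained in B."""
    B = set(B)
    if len(B) == p:
        return [list(range(p))]          # the whole circle is a single arc
    arcs = []
    for s in sorted(B):
        if (s - 1) % p not in B:         # s is the starting point of a maximal arc
            arc, x = [s], s
            while (x + 1) % p in B:
                x = (x + 1) % p
                arc.append(x)
            arcs.append(arc)
    return arcs
```

For instance, B = {0, 1, 3, 5} inside Z_6 decomposes into the two maximal arcs {5, 0, 1} (wrapping around the circle) and {3}.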
Let n ≥ 2. We adopt the convention 0^0 = 1. Assuming the desired inequality holds up to n − 1 for some constant C > 1, one has We split the sum into the two parts 1 ≤ ℓ ≤ n − 4 and n − 3 ≤ ℓ ≤ n − 1 (the arguments below are valid even for 2 ≤ n ≤ 4 by setting the irrelevant terms to be 0). The first part is estimated as

(2.10) det M = det(M/D) det D holds.

2.4. Relation between the Green function and the characteristic polynomial. Let (Γ, o) be a rooted graph on vertices v_1, v_2, . . ., v_d. If we decompose its adjacency matrix A = ( 0 b* ; b D ) into block form with D = A_{Γ\o}, then the Green function (2.5) is the upper left entry of the inverse of the matrix M = zI − A = ( z −b* ; −b zI − D ) and coincides with the inverse of its Schur complement, which results in
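The Schur complement identity for the Green function can be verified numerically. A sketch, assuming a randomly generated rooted graph (not an example from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
# Random simple graph on 5 vertices; vertex 0 plays the role of the root o.
A = rng.integers(0, 2, size=(5, 5))
A = np.triu(A, 1)
A = A + A.T                              # symmetric 0/1 adjacency, zero diagonal

b = A[1:, 0]                             # edges incident to the root
D = A[1:, 1:]                            # adjacency matrix of Γ \ o
z = 7.0                                  # any z larger than the spectral radius

# Upper-left entry of (zI - A)^{-1} versus the inverse Schur complement.
G_direct = np.linalg.inv(z * np.eye(5) - A)[0, 0]
schur = z - b @ np.linalg.inv(z * np.eye(4) - D) @ b
```

The two quantities agree, illustrating that the Green function is the reciprocal of the Schur complement z − b*(zI − D)^{−1}b.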
and ϕ the vacuum state on B(H) defined by ξ. Let π_i : B(H_i) → B(H) be the *-homomorphism defined by (3.1)

4.1. Cyclic-Boolean convolution. Let (A, ϕ, ω) be a cncps. For a ∈ A the renormalized (tracial) Cauchy transform is the formal Laurent series g_a(z) = Σ_{n=1}^∞ ω(a^n)/z^{n+1}. By slight abuse of terminology, we call G_a the Green function (evaluated at the state ϕ) of a. It has the formal Laurent expansion G_a(z) = 1/z + Σ_{n=1}^∞ ϕ(a^n)/z^{n+1}, and we denote the reciprocal Green function by F_a(z) = 1/G_a(z). If a is a trace class operator on a Hilbert space and ω is the trace, then |Tr(a^n)| ≤ ‖a^{n−1}‖ Tr(|a|) ≤ ‖a‖^{n−1} Tr(|a|), and hence g_a(z) is absolutely convergent in {z ∈ C : |z| > ‖a‖}. Moreover, if a is selfadjoint then g_a has an analytic extension to C \ spec(a) by Lidskii's theorem (4.1)
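The absolute convergence of the Laurent series for |z| > ‖a‖ can be illustrated in finite dimensions, where the series sums to Tr((z − a)^{−1}) − d/z with d = dim H. A sketch with a random selfadjoint matrix (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
m = rng.standard_normal((4, 4))
a = (m + m.T) / 2                        # selfadjoint; trace class is automatic in finite dim
d = a.shape[0]

opnorm = np.linalg.norm(a, 2)            # operator norm ||a||
z = 2 * opnorm                           # well inside the region |z| > ||a||

# Partial sum of g_a(z) = sum_{n>=1} Tr(a^n) / z^{n+1} ...
series = sum(np.trace(np.linalg.matrix_power(a, n)) / z ** (n + 1)
             for n in range(1, 60))
# ... versus the closed form Tr((zI - a)^{-1}) - d/z.
closed = np.trace(np.linalg.inv(z * np.eye(d) - a)) - d / z
```

With the ratio ‖a‖/z = 1/2, sixty terms already match the closed form to high precision.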

Example 4.6 (Friendship graph). The friendship graph F_N is the graph with 2N + 1 vertices {0, . . ., 2N} in which 0 is connected to every other vertex and the only other edges are {2i − 1, 2i} for 1 ≤ i ≤ N. The friendship graph is the N-fold star product of the complete graph (K_3, o) with itself; see Figure 4.
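The spectrum of F_N can be computed directly from this description. A sketch (the closed form (1 ± √(8N + 1))/2 for the two extremal eigenvalues is a standard computation, not quoted from the paper):

```python
import numpy as np

def friendship(N):
    """Adjacency matrix of F_N: vertex 0 joined to all, plus edges {2i-1, 2i}."""
    A = np.zeros((2 * N + 1, 2 * N + 1))
    A[0, 1:] = A[1:, 0] = 1
    for i in range(1, N + 1):
        A[2 * i - 1, 2 * i] = A[2 * i, 2 * i - 1] = 1
    return A

N = 5
eigs = np.sort(np.linalg.eigvalsh(friendship(N)))
```

Numerically one finds eigenvalue −1 with multiplicity N, eigenvalue 1 with multiplicity N − 1, and the two simple extremal eigenvalues (1 ± √(8N + 1))/2; only the extremal pair escapes a neighborhood of {−1, 1}, consistent with the star-product analysis of this section.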
lim_{N→∞} dist(spec(s_N) \ {λ_N, µ_N}, 0) = 0.

Proof. Let s be a self-adjoint operator of rank two on a Hilbert space K having eigenvalues −1, 1, 0. Then Tr_K(s^n) = 2 for even n ≥ 2 and Tr_K(s^n) = 0 for odd n ≥ 1. The convergence of Tr_H(s_N^k) in Theorem 4.7 and [5, Proposition 2.8] imply that s_N → s in eigenvalues, and this concludes the argument.

Remark 4.9. As shown in the proof of Theorem 4.7, Tr_{H_N}(s_N^2) converges (in fact is equal) to α, which might not equal 2 = Tr_K(s^2). This difference of Hilbert-Schmidt norms is due to a large number of small eigenvalues of s_N and does not contradict the convergence of eigenvalues; see [5, Proposition 2.8, Proposition 2.10 and Remark 2.11]. Now we come back to the original model, the adjacency matrix of the star product of rooted graphs.

Corollary 4.10. Suppose that (Γ, o) is a rooted graph with deg(o) ≥ 1. Let A_N be the adjacency matrix of the N-fold star product graph (Γ, o) ⋆ (Γ, o) ⋆ · · · ⋆ (Γ, o). Let λ_N and µ_N be the largest and smallest eigenvalues of (deg(o)N)^{−1/2} A_N, respectively. The following assertions hold: (i) the multiplicities of λ_N and µ_N are both one for sufficiently large N; (ii) λ_N converges to 1 and µ_N converges to −1 as N → ∞; (iii) lim_{N→∞} dist(spec((deg(o)N)^{−1/2} A_N) \ {λ_N, µ_N}, 0) = 0.

Proof. This is a combination of Corollary 4.8, formula (2.15) and Example 3.1. The factor (deg(o)N)^{−1/2}

(5.1) which linearizes the convolution: c_{a+b}(z) = c_a(z) + c_b(z). The function c_a will be called the cyclic-Boolean cumulant transform of a, and the coefficients c_n(a) appearing in (5.2) c_a(z) = Σ_{n≥1} c_n(a) z^n are called the (univariate) cyclic-Boolean cumulants of a. The first two cumulants are c_1(a) = ω(a) and c_2(a) = ω(a^2) − ϕ(a)^2.

Definition 5.1. Let k ∈ N. We often use the notation [k] = {1, 2, . . ., k}. (i) A set partition of [k] is a set π = {B_1, B_2, . . ., B_p} of nonempty and disjoint subsets B_1, . . ., B_p of [k], called blocks, such that their union is [k]. The length |π| of a partition π is the number of blocks. The set of partitions of [k] is denoted by P(k). Set partitions are in one-to-one correspondence with equivalence relations: any set partition π ∈ P(k) determines an equivalence relation i ∼_π j on [k] by requiring that i, j belong to the same block of π; conversely, for an equivalence relation ∼ on [k] its equivalence classes determine disjoint subsets of [k] and hence a set partition. (ii) A subset of [k] of the form {i, i + 1, . .
two vertices (x 1 , x 2 ) and (y 1 , y 2 ) are connected by an edge if and only if either x 1 y 1 ∈ E 1 and x 2 = y 2 = o 2 , or x 1 = y 1 and x 2 y 2 ∈ E 2 .
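The comb product edge rule above can be implemented directly. A sketch, assuming the root o_2 is vertex 0; as a check, the comb product of K_2 with itself is the path on four vertices:

```python
import itertools

def comb(A1, A2, o2=0):
    """Adjacency matrix of the comb product of Γ1 with (Γ2, o2), following the
    edge rule: (x1,x2) ~ (y1,y2) iff (x1 ~ y1 and x2 = y2 = o2)
    or (x1 = y1 and x2 ~ y2)."""
    n1, n2 = len(A1), len(A2)
    verts = list(itertools.product(range(n1), range(n2)))
    A = [[0] * (n1 * n2) for _ in range(n1 * n2)]
    for i, (x1, x2) in enumerate(verts):
        for j, (y1, y2) in enumerate(verts):
            if (A1[x1][y1] and x2 == y2 == o2) or (x1 == y1 and A2[x2][y2]):
                A[i][j] = 1
    return A

K2 = [[0, 1], [1, 0]]
P = comb(K2, K2)        # comb product of (K_2, o) with itself
```

The resulting graph has three edges with degree sequence (1, 1, 2, 2), i.e., it is the path P_4, and iterating `comb` produces the N-fold comb products appearing in Theorem 7.12.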
Proposition 2.7. In the setting of Theorem 2.6, one has g_{Γ▷H}(z) = d g_Γ(z) + F_H(z) g_Γ(F_H(z)). Later we will give two more proofs in a more general setting; see Theorem 7.6.
3, A is cyclic-Boolean infinitely divisible with respect to (⟨ · e_1, e_1⟩_{C^2}, Tr_{C^2}).

7.1. Definition and example. We perform an investigation of cyclic-monotone independence in a spirit similar to cyclic-Boolean independence. To this end we start from a specific operator model inspired by the comb product of rooted graphs in Section 2.7.

Example 7.1. Let H_i, i ∈ N, be finite-dimensional Hilbert spaces with distinguished unit vectors ξ_i ∈ H_i, respectively. Let P_i : H_i → H_i be the orthogonal projection onto Cξ_i and ϕ_i be the vector state on B(H_i) defined by ξ_i. Let H = H_1 ⊗ · · · ⊗ H_N, ξ = ξ_1 ⊗ · · · ⊗ ξ_N and ϕ be the vacuum state on B(H) defined by ξ. This is the same setting as in Example 3.1 with the additional requirement of finite dimensionality. Analogously to the embedding (3.1) we introduce the embedding of B(H_i) into B(H):

Analytic proof in the setting of Example 7.1: Schur complement approach. Let A ∈ B(H_1) and B ∈ B(H_2) be operators with block decompositions with respect to Cξ_i ⊕ H̊_i as in (2.12), and let σ_1(A) = A ⊗ P_2 and σ_2(B) = I_1 ⊗ B act on H_1 ⊗ H_2 ≅ Cξ ⊕ H̊_1 ⊕ (H_1 ⊗ H̊_2) according to Example 7.1; i.e., if we denote by η_1 : H̊_1 → H_1 the embedding and η_1^* : H_1 → H̊_1 the projection, then σ_1

(ii) For a tuple i = (i_1, . . ., i_k) ∈ N^k, the ordered kernel set partition ker(i) ∈ OP(k) is defined as follows: first, pick the smallest value p_1 among i_1, . . ., i_k and define the subset B_1 = {j ∈ [k] : i_j = p_1}; secondly, pick the second smallest value p_2 among i_1, . . ., i_k and define the subset B_2 = {j ∈ [k] : i_j = p_2}; continuing this procedure until the end, we arrive at an ordered set partition (B_1, B_2, . . .), which is denoted by ker(i).
Definition 7.7. Let k ∈ N. (i) An ordered set partition of [k] is a tuple π = (B_1, B_2, . . ., B_p) of subsets of [k] such that {B_1, . . ., B_p} is a set partition of [k]; that is, B_1, . . ., B_p are non-empty and mutually disjoint subsets of [k], and their union is [k]. The length p of π is denoted by |π|. The set of ordered set partitions of [k] is denoted by OP(k).
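The construction of the ordered kernel set partition ker(i) can be sketched in code (1-based positions, as in the definition; the function name `ker` is ours):

```python
def ker(i):
    """Ordered kernel set partition of a tuple i: positions carrying equal
    values form one block, and blocks are listed by increasing value."""
    return [{j + 1 for j, v in enumerate(i) if v == p}
            for p in sorted(set(i))]
```

For instance, ker((1, 2, 1)) = ({1, 3}, {2}), recovering the packed word correspondence of Example 7.9.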