Combinatorics of the immaculate inverse Kostka matrix

The classical Kostka matrix counts semistandard tableaux and expands Schur symmetric functions in terms of monomial symmetric functions. The entries in the inverse Kostka matrix can be computed by various algebraic and combinatorial formulas involving determinants, special rim hook tableaux, raising operators, and tournaments. Our goal here is to develop an analogous combinatorial theory for the inverse of the immaculate Kostka matrix. The immaculate Kostka matrix enumerates dual immaculate tableaux and gives a combinatorial definition of the dual immaculate quasisymmetric functions S∗_α. We develop several formulas for the entries in the inverse of this matrix based on suitably generalized raising operators, tournaments, and special rim hook tableaux. Our analysis reveals how the combinatorial conditions defining dual immaculate tableaux arise naturally from algebraic properties of raising operators. We also obtain an elementary combinatorial proof that the definition of S∗_α via dual immaculate tableaux is equivalent to the definition of the immaculate noncommutative symmetric functions S_α via noncommutative Jacobi–Trudi determinants. A factorization of raising operators leads to bases of NSym interpolating between the S-basis and the h-basis, and bases of QSym interpolating between the S∗-basis and the M-basis. We also give t-analogues for most of these results using combinatorial statistics defined on dual immaculate tableaux and tournaments.

1. Introduction

1.1. The Kostka Matrix and its Inverse. The Kostka matrix and its inverse are central objects in the theory of symmetric functions. For each positive integer n, the Kostka matrix K_n has rows and columns indexed by integer partitions of n. The entry K_n(λ, µ) in row λ, column µ, counts the number of semistandard Young tableaux (SSYT) of shape λ and content µ. These are fillings of the cells in the diagram of λ with µ_1 copies of 1, µ_2 copies of 2, etc., such that every row is weakly increasing from left to right and every column is strictly increasing from bottom to top. Let Sym_n be the vector space (over any field F) of symmetric functions of degree n. Three bases of Sym_n are the monomial basis (m_λ), the Schur basis (s_λ), and the complete basis (h_λ). The Kostka matrix and its transpose connect these bases, as follows:

(1) s_λ = Σ_µ K(λ, µ) m_µ,   h_µ = Σ_λ K(λ, µ) s_λ.

The sums here range over integer partitions of n; we omit the subscript n in K_n when it is clear from context. Here and below, we assume familiarity with standard notation for partitions and symmetric functions, which can be found in references such as [11, 14, 15]. If we order integer partitions of n lexicographically, then the Kostka matrix is upper-triangular with diagonal entries equal to 1. Thus we have an inverse Kostka matrix K^{-1} = K_n^{-1} that provides inverse transition matrices to the ones above:

(2) m_λ = Σ_µ K^{-1}(λ, µ) s_µ,   s_λ = Σ_µ K^{-1}(µ, λ) h_µ.

There is a rich combinatorial theory for the inverse Kostka matrices in Sym. The following four methods (two algebraic formulas and two associated combinatorial models) are available for computing the entries K^{-1}(λ, µ).
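As a concrete check of these definitions, the following Python sketch (function names are ours, not the paper's) counts SSYT by brute force and reproduces the unitriangularity of K_3 with partitions in lexicographic order:

```python
from itertools import product

def ssyt_count(shape, content):
    """Count SSYT of the given partition shape and content: rows weakly
    increase left to right, columns strictly increase bottom to top."""
    cells = [(r, c) for r, length in enumerate(shape) for c in range(length)]
    vals = range(1, len(content) + 1)
    count = 0
    for filling in product(vals, repeat=len(cells)):
        T = dict(zip(cells, filling))
        if any(filling.count(v) != content[v - 1] for v in vals):
            continue  # wrong content
        rows_ok = all(T[r, c] <= T[r, c + 1] for (r, c) in cells if (r, c + 1) in T)
        cols_ok = all(T[r, c] < T[r + 1, c] for (r, c) in cells if (r + 1, c) in T)
        count += rows_ok and cols_ok
    return count

partitions3 = [(3,), (2, 1), (1, 1, 1)]   # lexicographic order
K3 = [[ssyt_count(lam, mu) for mu in partitions3] for lam in partitions3]
```

Here K3 comes out upper-triangular with unit diagonal, matching the triangularity claim above.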
(1) Determinant Formula. By the Jacobi–Trudi identity, s_λ = det(h_{λ_i − i + j}); expanding the determinant expresses K^{-1}(λ, µ) as a signed sum over permutations. (2) Special Rim Hook Model. Eğecioğlu and Remmel showed that K^{-1}(λ, µ) is a signed sum over special rim hook tableaux of shape µ and content λ. (3) Raising Operator Formula. We have s_µ = Π_{1≤i<j≤m}(I − R_{i,j}) h_µ. Here I is the identity operator and R_{i,j} acts on any h_β by incrementing β_i and decrementing β_j, but there are subtleties (see §2.1 for a full explanation). (4) Tournament Model. The raising operator formula translates into a combinatorial model for K^{-1}(λ, µ) as a signed sum of certain tournaments. Details appear in §4.1.
1.2. The Immaculate Kostka Matrix. Our goal in this paper is to extend the combinatorics of the inverse Kostka matrix from the space Sym to the spaces NSym and QSym. Here Sym is the self-dual Hopf algebra of symmetric functions, while NSym and QSym are the dual Hopf algebras of noncommutative symmetric functions and quasisymmetric functions, respectively. We will only require the vector space structure of these Hopf algebras, rather than the full machinery of the product, coproduct, and antipode map. For more information on NSym and QSym, see [7,13]. A variety of Schur-like bases have been studied in NSym and QSym, each of which leads to a possible version of the classical Kostka matrix [1,2,7,8]. We focus on the version arising from the immaculate basis (S α ) of NSym and the dual immaculate basis (S * α ) of QSym, first defined in [2]. These bases, which are indexed by integer compositions α, can be defined in two equivalent ways. There is a combinatorial approach based on tableaux, as well as an algebraic approach based on the noncommutative Jacobi-Trudi formula.
Algebraic Combinatorics, Vol. 4 #6 (2021)

The combinatorial approach needs the following definitions. A (strict) composition of n of length k is a list α = (α_1, α_2, ..., α_k) of k positive integers with n = α_1 + α_2 + ⋯ + α_k. Note that α_i = 0 is not allowed in a composition. Let Comp_n be the set of all compositions of n. Let Comp be the set of all compositions. The diagram of a composition α is an array of boxes, where row i consists of α_i left-justified boxes. Our convention is to number the rows from bottom to top and number the columns from left to right. A dual immaculate tableau of shape α is a filling T of the diagram of α with positive integers such that every row is weakly increasing from left to right, and column 1 (the leftmost column) is strictly increasing from bottom to top. The content of T is the list (f_1, f_2, ...) where f_i is the number of times i appears in the filling T. For each n, the immaculate Kostka matrix is the matrix K_n with rows and columns indexed by compositions of n, such that K_n(α, β) is the number of dual immaculate tableaux of shape α and content β. As before, we omit the subscript n when it is clear from context. Ordering compositions lexicographically, one readily checks that each matrix K_n is upper-triangular with 1s on the diagonal, so K_n is invertible over Z. For example, the rows and columns of K_4 are indexed by the eight compositions of 4, listed lexicographically as 4, 31, 22, 211, 13, 121, 112, 1111. Now let (M_α) be the monomial basis for QSym, and let (h_α) be the noncommutative complete basis for NSym. By definition, (M_α) and (h_α) are dual bases for QSym and NSym. This means that there is a bilinear pairing ⟨·,·⟩ : QSym × NSym → F (where F is the field of scalars) such that ⟨M_α, h_β⟩ = χ(α = β). Here and below, for any logical statement P, χ(P) = 1 if P is true, and χ(P) = 0 if P is false.
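The following Python sketch (our own naming) counts dual immaculate tableaux and verifies that K_4, with rows and columns in the order listed above, is upper-triangular with 1s on the diagonal. Since a row of a dual immaculate tableau is determined by its multiset of entries, we simply distribute the content among the rows and test the column-1 condition:

```python
from itertools import product

def dit_count(shape, content):
    """Count dual immaculate tableaux of the given shape and content:
    each row is a multiset (written in weakly increasing order), and the
    smallest entries of the rows strictly increase from bottom to top."""
    letters = [v for v, mult in enumerate(content, 1) for _ in range(mult)]
    k = len(shape)
    tableaux = set()
    for assign in product(range(k), repeat=len(letters)):
        if any(assign.count(r) != shape[r] for r in range(k)):
            continue  # row sizes do not match the shape
        rows = tuple(tuple(sorted(l for l, a in zip(letters, assign) if a == r))
                     for r in range(k))
        if all(rows[r][0] < rows[r + 1][0] for r in range(k - 1)):
            tableaux.add(rows)   # column 1 strictly increases
    return len(tableaux)

comps4 = [(4,), (3, 1), (2, 2), (2, 1, 1), (1, 3), (1, 2, 1), (1, 1, 2), (1, 1, 1, 1)]
K4 = [[dit_count(a, b) for b in comps4] for a in comps4]
```

One can check, for instance, that K_4((2,1,1), (1,1,1,1)) = 3: the three tableaux have bottom row {1,2}, {1,3}, or {1,4}.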
By analogy with (1), we define:

(3) S∗_α = Σ_β K(α, β) M_β,   h_β = Σ_α K(α, β) S_α.

These formulas explicitly define the dual immaculate functions S∗_α in QSym and implicitly define the immaculate functions S_α in NSym. Multiplying by the inverse immaculate Kostka matrix, this definition is equivalent to:

(4) M_α = Σ_β K^{-1}(α, β) S∗_β,   S_α = Σ_β K^{-1}(β, α) h_β.

Another equivalent version of these formulas is to define S∗_α by the M-expansion in (3) and then declare that (S_α) is the unique dual basis of NSym, meaning that ⟨S∗_α, S_β⟩ = χ(α = β) for all compositions α, β.
1.3. The Noncommutative Jacobi–Trudi Formula. The algebraic approach to defining S_α and S∗_α goes in the opposite direction. First we define S_β by a noncommutative version of the Jacobi–Trudi formula. Given any m × m matrix A with entries in a (possibly noncommutative) ring, let the top-to-bottom determinant of A be

det(A) = Σ_{σ ∈ S_m} sgn(σ) A(1, σ(1)) A(2, σ(2)) ⋯ A(m, σ(m)).

Here S_m is the set of permutations of {1, 2, ..., m}, and we multiply the chosen entries of A in order working from row 1 down to row m. For any composition β = (β_1, ..., β_m), we define

S_β = det(h_{β_i − i + j})_{1≤i,j≤m},

where h_0 = 1 and h_k = 0 for k < 0. Then we declare that (S∗_β) is the unique basis of QSym dual to the basis (S_β) of NSym.
Due to noncommutativity, such a top-to-bottom determinant can be nonzero even when the matrix has two equal rows.
To see that this approach is equivalent to the combinatorial approach, one must check that the h-expansion of S_β (as given by the Jacobi–Trudi formula) really does agree with the expansion in (4), obtained by inverting the matrix of dual immaculate tableau counts. The equivalence of the two definitions is known but non-trivial; we give a combinatorial proof of this fact later (Theorem 4.7). We take the combinatorial formulas (3) and (4) as the definition of (S_α) and (S∗_α) to be used here.
1.4. Overview of Results. Section 2 defines raising operators R_{i,j} on various abstract vector spaces and examines algebraic properties of the inverse Kostka operators Π_{i<j}(I − R_{i,j}) on each space. Section 3 develops combinatorial algorithms for computing these operators and their inverses based on manipulations of filled diagrams. Analysis of these algorithms leads naturally to expressions involving dual immaculate tableaux (Theorem 3.9) and analogous fillings of diagrams where rows of length zero may occur (Theorem 3.6). Applying raising operators in stages leads to bases of NSym interpolating between the S-basis and the h-basis, and dual bases of QSym interpolating between the S∗-basis and the M-basis (§3.7). Section 4 develops combinatorial models for the entries in the inverse of the dual immaculate Kostka matrix. We provide formulas involving tournaments (Theorem 4.3), transitive tournaments (Theorem 4.4), noncommutative determinants (Theorem 4.7), recursions (Theorem 4.9), and special rim hook tableaux (Theorem 4.12).
Section 5 defines t-analogues of inverse Kostka operators, dual immaculate tableaux, tournaments, and the associated bases of NSym and QSym. These t-analogues can be viewed as noncommutative and quasisymmetric versions of the Hall-Littlewood polynomials, of which there are several in the literature already [2,8,9].

2. Algebraic Development of Inverse Kostka Operators
In this section, we study the algebraic properties of inverse Kostka operators Π_{1≤i<j≤m}(I − R_{i,j}) acting on various abstract vector spaces. Later we specialize these results to obtain information about transition matrices in QSym, NSym, and Sym. Throughout this discussion we fix a positive integer m and a field F. All vector spaces use scalars coming from F (the theory also works for free Z-modules). I denotes the identity map on the vector space currently being considered. We set [m] = {1, 2, ..., m}.
2.1. Action on Lists of Integers. Let Z^m be the set of all ordered lists [a_1, ..., a_m] with each a_i ∈ Z. Let V_0 be the vector space having Z^m as a basis. Thus, by definition, every v ∈ V_0 is a finite formal linear combination of m-element lists of integers. For distinct i, j ∈ [m], define the raising operator R_{i,j} : V_0 → V_0 to be the linear map on V_0 that acts on basis vectors via

R_{i,j}([a_1, ..., a_m]) = [a_1, ..., a_i + 1, ..., a_j − 1, ..., a_m],

in which position i increases by 1 and position j decreases by 1. In the F-algebra of all linear operators on V_0, all these raising operators commute and obey the associative and distributive laws. We can now give a rigorous explanation of the raising operator formula stated in §1.1. Given a partition µ = [µ_1, ..., µ_m] with m parts, first apply the operator Π_{1≤i<j≤m}(I − R_{i,j}) to [µ_1, ..., µ_m] ∈ V_0 to obtain some linear combination of m-element lists. Then apply the evaluation map E : V_0 → Sym that sends each list α = [α_1, ..., α_m] to h_α = h_{α_1} ⋯ h_{α_m} (where h_0 = 1 and h_k = 0 for k < 0, and the h_k's commute). The resulting symmetric function is known to be the Schur function s_µ [14, p. 42].
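A short Python experiment (our own code) makes this concrete: since the R_{i,j} commute on V_0, the product Π_{i<j}(I − R_{i,j}) can be expanded over subsets of pairs, and the evaluation map kills any list with a negative entry. For µ = (2,1,0) this recovers s_{21} = h_2 h_1 − h_3, and for µ = (1,1,1) it recovers s_{111} = h_1^3 − 2 h_2 h_1 + h_3:

```python
from itertools import combinations

def schur_in_h(mu):
    """h-expansion of s_mu via prod_{i<j}(I - R_{i,j}) applied to the list mu.
    On V_0 the raising operators commute, so we expand over subsets of pairs."""
    m = len(mu)
    pairs = [(i, j) for i in range(m) for j in range(i + 1, m)]
    result = {}
    for k in range(len(pairs) + 1):
        for chosen in combinations(pairs, k):
            a = list(mu)
            for i, j in chosen:       # apply -R_{i,j}
                a[i] += 1
                a[j] -= 1
            if any(x < 0 for x in a):
                continue              # evaluation sends h_k to 0 for k < 0
            key = tuple(sorted((x for x in a if x > 0), reverse=True))
            result[key] = result.get(key, 0) + (-1) ** k
    return {h: c for h, c in result.items() if c}
```

Each key is a sorted list of h-subscripts (the h_k's commute), with its integer coefficient.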
Trying to analyze Π_{i<j}(I − R_{i,j}) in V_0 is problematic, however, for the following reason. Although each R_{i,j} is invertible (clearly R_{i,j}^{-1} = R_{j,i}), the operators I − R_{i,j} are not invertible on V_0: the geometric series Σ_{k≥0} R_{i,j}^k does not make sense on V_0, since applying it to a basis vector produces infinitely many nonzero terms. To remedy this, let V be the vector space with basis the set of lists [a_1, ..., a_m] of nonnegative integers, and redefine each raising operator R_{i,j} : V → V by letting R_{i,j}([a_1, ..., a_m]) be the list obtained by incrementing a_i and decrementing a_j if a_j > 0, and 0 if a_j = 0. Intuitively, if the old raising operator produces a list with a negative entry, that list is discarded from the answer. For example, R_{1,3}(−2[3,1,5] + 5[2,3,0]) = −2[4,1,4]. These raising operators (like any linear maps on a vector space) still obey the associative and distributive laws, but they no longer commute in general. For example, R_{1,2}(R_{2,3}([1,0,1])) = R_{1,2}([1,1,0]) = [2,0,0], whereas R_{2,3}(R_{1,2}([1,0,1])) = R_{2,3}(0) = 0. However, if i, j, a, b are four distinct indices, then R_{i,j} does commute with R_{a,b}. Similarly, for a fixed index j, the operators R_{1,j}, R_{2,j}, ..., R_{j−1,j} all commute. More generally, suppose we apply a sequence of raising operators R_{i_1,j_1}, ..., R_{i_s,j_s} to the input [a_1, ..., a_m], where for each j ∈ [m], the number of occurrences of j in j_1, ..., j_s is at most a_j. Then applying these raising operators to [a_1, ..., a_m], in any order, produces the same output. This holds since the exceptional case where a basis vector is sent to zero never applies. More specifically, the output is [b_1, ..., b_m], where b_k equals a_k plus the number of occurrences of k in i_1, ..., i_s, minus the number of occurrences of k in j_1, ..., j_s. This formula is unchanged by reordering the given raising operators.
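These modified operators are easy to experiment with. In the Python sketch below (our own helper), a linear combination is a dict from tuples to coefficients, and a basis list whose j-th entry is 0 is simply dropped:

```python
def R(i, j, v):
    """Truncated raising operator on V: increment position i, decrement
    position j (1-based), discarding basis lists whose j-th entry is 0."""
    out = {}
    for a, coeff in v.items():
        if a[j - 1] == 0:
            continue                  # this basis vector is sent to 0
        b = list(a)
        b[i - 1] += 1
        b[j - 1] -= 1
        out[tuple(b)] = out.get(tuple(b), 0) + coeff
    return {k: c for k, c in out.items() if c}
```

For instance, R(1, 3, {(3, 1, 5): -2, (2, 3, 0): 5}) returns {(4, 1, 4): -2}, matching the example above, and applying R(1,2) and R(2,3) to [1, 0, 1] in the two possible orders gives different results.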
Due to noncommutativity, we must take care in defining what Π_{i<j}(I − R_{i,j}) means. For each fixed j between 2 and m, define a linear map T_j : V → V by

T_j = (I − R_{j−1,j}) ∘ (I − R_{j−2,j}) ∘ ⋯ ∘ (I − R_{1,j}).

For definiteness we compose the factors I − R_{i,j} in the indicated order, although these factors do commute. Next, we define the inverse Kostka operator on V to be

T = T_2 ∘ T_3 ∘ ⋯ ∘ T_m.

So T_m acts first on an input vector, then T_{m−1}, etc., ending with T_2. We use this order to ensure that we get the expected answer for Sym when working in V rather than V_0 (see the discussion below (11) in §4.1).
Acting on V, it is no longer true that R_{i,j}^{-1} = R_{j,i}. In fact, each R_{i,j} is a locally nilpotent linear operator on V. This means that for each v ∈ V, there exists a positive integer N (depending on v) such that R_{i,j}^N(v) = 0. More specifically, starting with the basis vector [a_1, ..., a_j, ..., a_m] and applying R_{i,j} repeatedly, we obtain zero after N = a_j + 1 steps.
The local nilpotence makes it easy to invert I − R_{i,j} using the formal geometric series formula. Specifically,

(I − R_{i,j})^{-1} = Σ_{k≥0} R_{i,j}^k = I + R_{i,j} + R_{i,j}^2 + ⋯.

The sum on the right side has only finitely many nonzero terms when applied to any specific input vector v. We could also fix n and restrict attention to the subspace spanned by lists [a_1, ..., a_m] with a_1 + ⋯ + a_m = n. On this subspace, R_{i,j} is nilpotent of index n + 1, and (I − R_{i,j})^{-1} = I + R_{i,j} + R_{i,j}^2 + ⋯ + R_{i,j}^n. It is now routine to invert the full inverse Kostka operator on V. For fixed j, we have

T_j^{-1} = (I − R_{1,j})^{-1} ∘ (I − R_{2,j})^{-1} ∘ ⋯ ∘ (I − R_{j−1,j})^{-1}.

Expanding this with the distributive law, we see that T_j^{-1} is the sum of all terms of the form R_{1,j}^{e_1} R_{2,j}^{e_2} ⋯ R_{j−1,j}^{e_{j−1}}, where e_1, ..., e_{j−1} range over all nonnegative integers.
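The geometric series can be run directly in code. The following sketch (our own helpers) sums R_{i,j}^k until the terms vanish, and checks that the result really inverts I − R_{1,2} on a sample vector:

```python
def R(i, j, v):
    """Truncated raising operator on linear combinations (dict -> coeff)."""
    out = {}
    for a, c in v.items():
        if a[j - 1] > 0:
            b = list(a)
            b[i - 1] += 1
            b[j - 1] -= 1
            out[tuple(b)] = out.get(tuple(b), 0) + c
    return {k: c for k, c in out.items() if c}

def add(u, v):
    out = dict(u)
    for k, c in v.items():
        out[k] = out.get(k, 0) + c
    return {k: c for k, c in out.items() if c}

def inv_I_minus_R(i, j, v):
    """(I - R_{i,j})^{-1}(v) = v + R(v) + R^2(v) + ...; the sum is finite
    by local nilpotence."""
    total, term = {}, dict(v)
    while term:
        total = add(total, term)
        term = R(i, j, term)
    return total

v = {(2, 1, 0): 1}
w = add(v, {k: -c for k, c in R(1, 2, v).items()})   # w = (I - R_{1,2})(v)
```

Applying inv_I_minus_R(1, 2, ·) to w recovers v, as the local nilpotence argument predicts.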

Let W be the subspace of V spanned by the packed lists, i.e. those basis vectors [a_1, ..., a_m] in which every part equal to zero occurs to the right of all nonzero parts. The operators T and T_j on V do not always send inputs from W to outputs in W. To resolve this difficulty, we introduce a linear projection map P : V → W that acts on basis vectors by moving any zero parts to the right end. For example, P([2,0,3,0,1,0]) = [2,3,1,0,0,0] and P([2,4,1,3,0,0]) = [2,4,1,3,0,0]. We say that P acts on a list by packing the nonzero parts of its input on the left end. We now define our inverse Kostka operator on the space W by letting U = P ∘ (T|_W), and for each j between 2 and m, U_j = P ∘ (T_j|_W). One might hope that new copies of P could be inserted after every factor I − R_{i,j} without changing the operator. However, this is not true in general: we cannot indiscriminately insert new P's in the product of I − R_{i,j}'s defining U. But there are certain locations where P's may be inserted safely. Specifically, we show later that U = U_2 ∘ U_3 ∘ ⋯ ∘ U_m.
However, we no longer have simple formulas for U_j^{-1} based on the geometric series, due to the presence of the non-invertible projection map P. The next section shows how to invert each U_j, and U itself, using combinatorial models for these operators and their inverses.

3. Combinatorics of Inverse Kostka Operators
This section gives combinatorial models for the action of the operators T_j, U_j, T, and U on basis vectors of V and W. We visualize a basis vector [a_1, ..., a_m] of V by a generalized composition diagram that has a_i left-justified boxes in row i from the bottom. Working in V, we allow some rows to have zero boxes; but for basis vectors in W, all such rows must occur at the top of the figure.
3.1. Combinatorial Action of T_j and U_j. Fix j between 2 and m. We begin with a model for the action of T_j on a basis vector v = [a_1, ..., a_m] ∈ V. We create a signed linear combination of diagrams by making the following choices in all possible ways and adding the results. Start with the diagram of v with coefficient +1. First, to model I − R_{j−1,j}, either leave the diagram unchanged or move one box from row j to row j − 1 and change the sign. Second, to model I − R_{j−2,j}, either leave the diagram unchanged or move one box from row j to row j − 2 with a sign change. Continue until the choice for I − R_{1,j}, where we either leave the diagram unchanged or move one box from row j to row 1 and change the sign. During this process, if all boxes in row j are moved, then we must choose I (leave the diagram unchanged) from that point on.
The algorithm for T_j can be described more concisely, as follows. Starting with the diagram of v = [a_1, ..., a_m], decrease a_j by some amount d with 0 ≤ d ≤ a_j, and increment d distinct entries in the first j − 1 positions of v. Use the coefficient (−1)^d for this list, and add up all the lists that can be made from v in this way.
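This concise description translates directly into code; the sketch below (our own naming) applies T_j to a linear combination of lists:

```python
from itertools import combinations

def T(j, vec):
    """T_j = (I - R_{j-1,j}) ... (I - R_{1,j}): remove d boxes from row j,
    add one box to each of d distinct lower rows, with coefficient (-1)^d."""
    out = {}
    for a, coeff in vec.items():
        for d in range(min(a[j - 1], j - 1) + 1):
            for rows in combinations(range(j - 1), d):
                b = list(a)
                b[j - 1] -= d
                for r in rows:
                    b[r] += 1
                out[tuple(b)] = out.get(tuple(b), 0) + coeff * (-1) ** d
    return {k: c for k, c in out.items() if c}
```

For example, T(3, {(1,1,1): 1}) produces [1,1,1] − [2,1,0] − [1,2,0].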
The action of U_j = P ∘ (T_j|_W) is similar, but there are two modifications. First, U_j is only allowed to act on (linear combinations of) basis vectors in W. So any input v = [a_1, ..., a_m] to U_j must have all zero parts at the right end. Second, any output list w = ±[b_1, ..., b_m] produced by the algorithm for T_j must be replaced by its packed version P(w). Now v is already packed, and the choice process creating w from v can only create a new zero in position j. Therefore, packing is needed only in the terms where we make choices that move all a_j cells from row j down to lower rows; in those terms, the part of the diagram above row j falls into the empty row j at the end.
In the diagrams for v and U_3(v) in our examples (figures omitted), we place a minus sign in each cell moved by an R_{i,j} operator to keep track of the sign.
3.2. Combinatorial Action of T_j^{-1} and U_j^{-1}. The operator T_j^{-1} is given algebraically by the geometric series formula (7). Given a basis vector v = [a_1, ..., a_m] ∈ V, we can compute T_j^{-1}(v) by acting on diagrams as follows. Do the following steps in all possible ways and add the results. Starting with the diagram of [a_1, ..., a_m], choose nonnegative integers e_1, ..., e_{j−1} with sum at most a_j. For i = 1, 2, ..., j − 1 in turn, move e_i boxes from row j to row i. We mark each moved box with a + symbol, noting that there are no negative signs in the formula for T_j^{-1}. Now we introduce a combinatorial operator U′_j on W that will turn out to be U_j^{-1}. Start with the diagram of a packed basis vector v = [a_1, ..., a_m] ∈ W. As before, choose e_1, ..., e_{j−1} with sum at most a_j and move e_i boxes from row j to row i for each i < j. If there are still boxes left in row j, then we stop here and record the resulting object. However, for choices of e_i where e_1 + ⋯ + e_{j−1} = a_j, row j becomes empty and the higher rows fall down (as happened with U_j). When this occurs, we continue the process, choosing new e_1, ..., e_{j−1} with sum at most the number of boxes now in row j. We again move e_i boxes from row j to row i for each i < j. If the boxes now in row j are not all moved out, then we stop and record the resulting object. Otherwise, the rows above row j fall again and we continue recursively. This process must terminate after finitely many steps, since eventually we run out of boxes above row j.
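The operators U_j and U′_j can be tested mechanically. In the Python sketch below (all names ours), a vector is a dict from m-tuples to coefficients; pack moves zeros to the right, U applies the signed box-moving choices followed by packing, and Uprime implements the recursive falling process. Checking U′_3 ∘ U_3 = I on sample vectors anticipates the proof given next:

```python
from itertools import combinations

def sums_at_most(n, k):
    """All k-tuples of nonnegative integers with sum at most n."""
    if k == 0:
        yield ()
        return
    for first in range(n + 1):
        for rest in sums_at_most(n - first, k - 1):
            yield (first,) + rest

def pack(a):
    nz = [x for x in a if x > 0]
    return tuple(nz + [0] * (len(a) - len(nz)))

def U(j, vec):
    """U_j = P composed with T_j restricted to W."""
    out = {}
    for a, coeff in vec.items():
        for d in range(min(a[j - 1], j - 1) + 1):
            for rows in combinations(range(j - 1), d):
                b = list(a)
                b[j - 1] -= d
                for r in rows:
                    b[r] += 1
                b = pack(b)
                out[b] = out.get(b, 0) + coeff * (-1) ** d
    return {k: c for k, c in out.items() if c}

def Uprime(j, vec):
    """Candidate inverse of U_j: move e_1 + ... + e_{j-1} <= a_j boxes from
    row j to lower rows; if row j empties, higher rows fall and we recurse."""
    out = {}
    def go(a, coeff):
        for moves in sums_at_most(a[j - 1], j - 1):
            b = list(a)
            b[j - 1] -= sum(moves)
            for r, e in enumerate(moves):
                b[r] += e
            if sum(moves) < a[j - 1]:
                out[tuple(b)] = out.get(tuple(b), 0) + coeff
            else:                      # row j emptied: higher rows fall
                c = tuple(b[:j - 1] + b[j:] + [0])
                if c[j - 1] == 0:      # nothing fell in: record and stop
                    out[c] = out.get(c, 0) + coeff
                else:
                    go(c, coeff)
    for a, coeff in vec.items():
        go(a, coeff)
    return {k: c for k, c in out.items() if c}
```

All coefficients produced by Uprime are +1 per choice sequence, mirroring the absence of signs in the geometric series.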
We now show that the combinatorial operator just defined — call it U′_j — is the two-sided inverse of U_j. Proof. It suffices to prove that U′_j ∘ U_j = I (the identity map on W). The other identity U_j ∘ U′_j = I follows automatically, because for each n and m, U_j and U′_j restrict to linear maps on the finite-dimensional subspaces of W spanned by composition diagrams with n boxes and m rows. When we act on diagrams by a sequence of operators, it is helpful to annotate the diagrams by filling each box with its row number in the original input diagram. A box that moves down due to a −R_{i,j} term retains its number along with a minus sign (drawn above the number to save space). Other boxes that move or fall down keep their number with no sign. The next example illustrates this annotation process for the operator sequence U′_3 ∘ U_3 (with U_3 acting first). We now prove that the cancellation seen in the example holds in general. Fix j > 1. To prove U′_j ∘ U_j = I, it suffices to show that U′_j(U_j(w)) = w for all basis vectors w = [a_1, ..., a_m] of W. We show this by introducing a sign-reversing, shape-preserving involution on the set of annotated diagrams that appear in the calculation of U′_j(U_j(w)). Observe the following property of such diagrams: each row i < j contains (among other things) at most one box labeled −j, followed by zero or more boxes labeled j.
Given such a diagram z, find the least row index i < j such that a box labeled −j or j occurs in that row. If there is no such i, then neither U_j nor U′_j moved any boxes down, so we must have z = w. This diagram (which is positive) is the unique fixed point of the involution, corresponding to the output term w.
When i does exist, the involution acts as follows. If there is a −j in row i, change it to j. If there is no −j in row i, change the first j in row i to −j. One sees immediately that the new object is another valid annotated diagram arising in the computation of U′_j(U_j(w)), with sign opposite to that of z. It is also clear that performing this action twice restores the original object z, so we do have an involution. Thus all output objects except w itself cancel, as needed. The previous example illustrates this involution: the first object in row 1 is the fixed point [2311], and the remaining objects in row 1 match (in order) with the corresponding objects in row 2.
The identity T_j^{-1} ∘ T_j = I can be given a similar combinatorial proof (which we omit). Here the situation is simpler, since rows do not fall and there is no recursive continuation of the motion process in T_j^{-1}.
3.4. Tableau Description of the Kostka Operator T^{-1}. Next we use annotated diagrams to give a combinatorial formula for T^{-1} (the Kostka operator on V) in terms of tableau-like structures. For a filled generalized composition diagram D, the shape of D is the list recording the number of boxes in each row. Since T = T_2 ∘ ⋯ ∘ T_m with T_m acting first, T^{-1} = T_m^{-1} ∘ ⋯ ∘ T_2^{-1} with T_2^{-1} acting first. Therefore, we can compute the action of T^{-1} on a basis vector v = [a_1, ..., a_m] ∈ V as follows. Start with the annotated diagram for v, which has a_j boxes labeled j in row j for each j (rows of length zero may occur). Generate a collection of filled diagrams by executing the following loop and making all possible box motions. For j = 2, 3, ..., m in this order, move any number of boxes down from row j to the end of lower rows, keeping the labels in each box. Then T^{-1}(v) is the sum of the shapes of all filled diagrams generated in this way.
Using the annotated diagrams (figures omitted), we compute T^{-1}([2, 1, 2]). The first diagram arises by moving no boxes when j = 2 and no boxes when j = 3. The second diagram (of shape [311]) arises by moving no boxes when j = 2 and moving one box from row 3 to row 1 when j = 3. We see this same shape in the ninth diagram, which arises by moving a box from row 2 to row 1 when j = 2 and moving a box from row 3 to row 2 when j = 3. This explains why [311] has coefficient 2 in T^{-1}([2, 1, 2]).

The next theorem characterizes the filled diagrams that occur when we compute T^{-1}(v): they are precisely the filled diagrams D such that (a) every row of D is weakly increasing from left to right; (b) D has content v; and (c) for each j, every copy of j in D occurs in one of the rows 1 through j. Moreover, each such D is generated exactly once. Proof. First suppose D is one of the filled diagrams generated by the algorithm for computing T^{-1}(v). The initial diagram for v has content v, and the motion rules do not change the content, so (b) holds. Since occurrences of j are moved to the end of lower rows in increasing order of j, (a) holds. Since each copy of j begins in row j and optionally moves down to lower rows, (c) holds.
To finish the proof, we must show that any filled diagram D with properties (a), (b), (c) is produced in exactly one way by the algorithm computing T −1 (v). Given such a diagram D, we can produce D from the original filled diagram for v by the following choices. For j = 2, 3, . . . , m in order, move some of the a j copies of j in row j down to lower rows so that the frequency of j's in rows 1 through j matches the frequency in D. This is possible, since D has exactly a j copies of j that all occur in the first j rows. The choices made at each step are forced, so there is only one way to produce D.
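The loop description can be checked by machine. The sketch below (our own code) applies T_j^{-1} for j = 2, ..., m to a basis vector and tallies the resulting shapes; for v = [2, 1, 2], the shape [3, 1, 1] indeed appears with coefficient 2:

```python
def sums_at_most(n, k):
    """All k-tuples of nonnegative integers with sum at most n."""
    if k == 0:
        yield ()
        return
    for first in range(n + 1):
        for rest in sums_at_most(n - first, k - 1):
            yield (first,) + rest

def T_inverse(vec):
    """Apply T^{-1} by moving, for j = 2, ..., m in this order, any number
    of boxes from row j down to lower rows (all coefficients are +1)."""
    out = dict(vec)
    m = len(next(iter(vec)))
    for j in range(2, m + 1):
        new = {}
        for a, coeff in out.items():
            for moves in sums_at_most(a[j - 1], j - 1):
                b = list(a)
                b[j - 1] -= sum(moves)
                for r, e in enumerate(moves):
                    b[r] += e
                new[tuple(b)] = new.get(tuple(b), 0) + coeff
        out = new
    return out
```

The coefficient of each shape counts the fillings described in the theorem above.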
3.5. Combinatorial Action of T and U. We can give a similar combinatorial prescription for acting by T = T_2 ∘ ⋯ ∘ T_m on a basis vector v = [a_1, ..., a_m] ∈ V. Start with the diagram of v, consisting of a_j boxes in each row j. For j = m, m − 1, ..., 2 in this order, move some number d ≥ 0 of the boxes currently in row j into d distinct rows i_1, ..., i_d < j. Every box that moves causes a sign change in the coefficient of the final object produced. In this case, even if we fill each box with its original row number, there is not always enough information in the final filled diagram to reconstruct the choices that produced it. This is because a box might be moved to its final location in several steps as we proceed from j = m down to j = 2. We remedy this defect in §4.1 by giving a combinatorial model for T(v) based on tournaments. However, we can use the current model to prove the following technical fact mentioned at the end of §2.3: U = U_2 ∘ U_3 ∘ ⋯ ∘ U_m. We claim that the extra packing steps do not affect the signed objects that can be generated. Intuitively, this holds since the packing step after T_j does not affect the choices that can be made by later operators T_{j−1}, ..., T_2. The formal proof follows.
Since the input list v is already packed, we can assume without loss of generality that every a_i is positive. This is because the operators T_j and P ∘ T_j corresponding to any zero parts a_j = 0 at the right end of v act first and send v to itself. Later operators T_i and P ∘ T_i neither affect nor are affected by these trailing zero parts in v. For the induction step, assume the quoted statement holds for a fixed j between 2 and m, and prove this statement also holds with j replaced by j − 1. The quoted statement is then known to hold for j = 1, and it tells us (among other things) that the same signed, packed objects are obtained whether or not the intermediate packing steps are performed. This is exactly what we set out to prove.

When we characterize the filled diagrams appearing in the computation of U^{-1}(v), dual immaculate tableaux miraculously emerge. Recall (§1.2) that a dual immaculate tableau is a filling of a composition diagram so that rows weakly increase from left to right and the first (leftmost) column strictly increases from bottom to top. Theorem 3.9 asserts that U^{-1}(v) is the sum, over all dual immaculate tableaux D of content v, of the shape of D. Proof. First suppose D is one of the filled diagrams generated by the algorithm for computing U^{-1}(v). Note that the initial filled diagram for v is a dual immaculate tableau of content v. The action of each U_j^{-1} preserves the content. Also, a routine induction on j shows that U_j^{-1} acts only on diagrams such that all values below row j are less than all values in rows j and higher. It follows that each U_j^{-1} (hence U^{-1} itself) preserves the property of having weakly increasing rows. Moreover, a value in column 1 only changes when a row becomes empty and higher rows fall down into the gap. Since the initial values in column 1 form a strictly increasing sequence, this property is preserved throughout the algorithm. Thus, D is a dual immaculate tableau of content v.
Conversely, we now verify that every dual immaculate tableau D of content v arises in exactly one way during the computation of U^{-1}(v). Fix such a tableau D. Let the strictly increasing sequence of values in column 1 of D be a_1 < a_2 < ⋯ < a_k, where a_1 = 1, and let M be the maximum value in D. Because rows of D weakly increase, any occurrence in D of a value i ≤ a_j must occur in one of the first j rows. Consider how we could produce D from the filled diagram of v during the computation of U^{-1}. The first step (acting by U_2^{-1}) must terminate with a_2 residing in row 2, column 1, since none of the later U_j^{-1} steps can displace the first entry in row 2. Continuing in this way, the choices at every stage are forced. Figure 1 shows the unique choice sequence that produces D from the filled diagram for v. In Figure 1 the labels moving are in bold in each step.

Let NSym_m (resp. QSym_m) be the subspace of NSym (resp. QSym) consisting of homogeneous elements of degree m. Also let W_m be the subspace of W spanned by the lists [α] with α ∈ Comp_m. We can identify the vector space W_m with NSym_m by identifying the list [α] with the immaculate basis element S_α for all α ∈ Comp_m. Then U and U^{-1}, which map W_m into itself, are identified with linear maps on NSym_m. Comparing Theorem 3.11 to the definition (3), we see that U^{-1}(S_β) = Σ_α K(α, β) S_α = h_β for all β ∈ Comp_m. Thus, the Kostka operator U^{-1} maps the S-basis of NSym to the h-basis, and the inverse Kostka operator U maps the h-basis to the S-basis.
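Theorem 3.9 can be verified computationally. The sketch below (our own code) applies the inverse operators U_2^{-1}, U_3^{-1}, ... in order to [β] and compares the resulting coefficients with dual immaculate tableau counts; for β = (1, 1, 1) the expansion has coefficients K((3), β) = 1, K((21), β) = 2, K((12), β) = 1, K((111), β) = 1:

```python
from itertools import product

def sums_at_most(n, k):
    if k == 0:
        yield ()
        return
    for first in range(n + 1):
        for rest in sums_at_most(n - first, k - 1):
            yield (first,) + rest

def Uprime(j, vec):
    """The inverse U_j^{-1}, acting by the recursive box-moving process."""
    out = {}
    def go(a, coeff):
        for moves in sums_at_most(a[j - 1], j - 1):
            b = list(a)
            b[j - 1] -= sum(moves)
            for r, e in enumerate(moves):
                b[r] += e
            if sum(moves) < a[j - 1]:
                out[tuple(b)] = out.get(tuple(b), 0) + coeff
            else:                      # row j emptied: higher rows fall
                c = tuple(b[:j - 1] + b[j:] + [0])
                if c[j - 1] == 0:
                    out[c] = out.get(c, 0) + coeff
                else:
                    go(c, coeff)
    for a, coeff in vec.items():
        go(a, coeff)
    return {k: c for k, c in out.items() if c}

def U_inverse(beta):
    """U^{-1}([beta]), applying U_2^{-1} first, then U_3^{-1}, etc."""
    vec = {tuple(beta): 1}
    for j in range(2, len(beta) + 1):
        vec = Uprime(j, vec)
    return vec

def dit_count(shape, content):
    """Number of dual immaculate tableaux of the given shape and content."""
    letters = [v for v, mult in enumerate(content, 1) for _ in range(mult)]
    k = len(shape)
    found = set()
    for assign in product(range(k), repeat=len(letters)):
        if any(assign.count(r) != shape[r] for r in range(k)):
            continue
        rows = tuple(tuple(sorted(l for l, a in zip(letters, assign) if a == r))
                     for r in range(k))
        if all(rows[r][0] < rows[r + 1][0] for r in range(k - 1)):
            found.add(rows)
    return len(found)

expansion = U_inverse((1, 1, 1))
```

Every coefficient in the expansion agrees with the corresponding dual immaculate tableau count, as Theorem 3.9 predicts.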
We have seen that the linear map U^{-1} factors as U^{-1} = U_m^{-1} ∘ ⋯ ∘ U_3^{-1} ∘ U_2^{-1}, where each U_j^{-1} maps W_m into itself. Therefore, we obtain interpolating bases between the h-basis and the S-basis of NSym_m by applying the factors U_2^{-1}, U_3^{-1}, ..., one at a time, starting from the S-basis. The maps T_{m+1}, U_{m+1}, and their inverses act as the identity on a list of length m + 1 ending with a zero part. In particular, the value of quantities such as U(v) remains stable if we append trailing zero parts to v. By taking a suitable algebraic limit as m goes to infinity, we can extend our entire discussion to spaces spanned by lists of infinite length that have only finitely many nonzero parts. So for each positive integer i, we obtain a basis (S^{(i)}_α) of NSym. Dually, the dual inverse Kostka operator U∗ maps the S∗-basis of QSym_m to the M-basis, while the dual of U^{-1} maps the M-basis to the S∗-basis. By applying U∗ or its inverse in stages, we obtain interpolating bases between these two bases of QSym_m. As above, we can take a formal limit to get bases for the full space QSym, which are dual to the bases (S^{(i)}_α), meaning that the pairing of corresponding basis elements is χ(α = β) for all compositions α, β. By adjusting the proof of Theorem 3.9 to stop the computation of U^{-1} after U_i^{-1}, we can obtain a combinatorial expansion (9) of these basis elements in terms of dual immaculate tableaux.

Proof. Consider a typical object D generated when we start with the filled diagram of [β], act by U_2^{-1}, U_3^{-1}, ..., U_i^{-1} in this order, and then stop. If D has fewer than i rows of positive length, then (as in the proof of Theorem 3.9) D is a dual immaculate tableau of content β and some shape α with ℓ(α) < i. Moreover, all such tableaux arise exactly once in this computation. This accounts for the first sum in (9).
On the other hand, suppose D has at least i rows, and let j be the symbol in row i, column 1 of D. Using the combinatorial description of the operators U_k^{-1}, one readily sees that: i ≤ j ≤ m; the first i rows of D form a dual immaculate tableau counted by K(α, β_{≤j}) for some α with ℓ(α) = i, where β_{≤j} = (β_1, ..., β_j); row i + 1 of D must contain β_{j+1} copies of j + 1; row i + 2 of D must contain β_{j+2} copies of j + 2; and so on. Thus the overall shape of D is α · β_{>j}, the concatenation of α with β_{>j} = (β_{j+1}, β_{j+2}, ...). Each diagram D satisfying these conditions (for some such α and j) arises exactly once in the computation. This explains the second sum in (9).

4.1. Tournament Models for Inverse Kostka Operators.
Let us first consider the action of the operator T_0 = Π_{1≤i<j≤m}(I − R_{i,j}) on the vector space V_0 with basis Z^m. We can expand this product of operators using the generalized distributive law. For each factor I − R_{i,j}, we choose either I or −R_{i,j}, multiply together the chosen factors, and add all the results. We can use a tournament τ ∈ T_m to record all the choices made, where T_m denotes the set of functions τ assigning a value τ(i, j) ∈ {0, 1} to each pair (i, j) with 1 ≤ i < j ≤ m. Specifically, we let τ(i, j) = 0 if we pick I from the factor I − R_{i,j}, and we let τ(i, j) = 1 if we pick −R_{i,j} from this factor. We conclude that

(10) T_0([a_1, ..., a_m]) = Σ_{τ ∈ T_m} sgn(τ) [a_1 + ∆_1(τ), ..., a_m + ∆_m(τ)],

where sgn(τ) = (−1)^{Σ_{i<j} τ(i,j)} and ∆_k(τ) = Σ_{j>k} τ(k, j) − Σ_{i<k} τ(i, k). This holds since position k initially contains a_k, each chosen R_{i,j} with i = k increments position k, and each chosen R_{i,j} with j = k decrements position k. Identifying each list [c_1, ..., c_m] with h_{c_1} ⋯ h_{c_m} ∈ Sym, we obtain the tournament model for the inverse Kostka matrix mentioned at the end of §1.1.
Turning to the space V, the inverse Kostka operator T acts on nonnegative lists v = [a_1, ..., a_m] ∈ V by a similar formula. Note first that, by definition,

(11) T = T_2 ∘ T_3 ∘ ⋯ ∘ T_m = [(I − R_{1,2})] ∘ [(I − R_{2,3})(I − R_{1,3})] ∘ ⋯ ∘ [(I − R_{m−1,m}) ⋯ (I − R_{1,m})],

where the factors involving j = m act first. We claim that we can find T(v) in V by computing T_0(v) in V_0 (using formula (10)) and then discarding any output lists that have negative entries. This claim holds because of the order in which we apply the R_{i,j} operators in (11). The key point is that if a list entry becomes negative after applying some of the operators encoded by τ, then that entry must remain negative after applying all the operators encoded by τ. Thus, a list that is sent to zero at some intermediate stage (working in V) must also be sent to zero if we work in V_0 and only discard lists with negative entries at the very end.
Finally, we can describe how the inverse Kostka operator U = P ∘ (T|_W) acts on the space W. We use the same formula (11) given for T (applied to basis vectors [a_1, ..., a_m] ∈ W), but at the end we must apply P by packing all the output lists.
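Before turning to the example and theorems, here is a computational sanity check (our own code) of the whole tournament model: for each β we sum signs of tournaments whose packed, nonnegative output is α, and verify that the resulting matrix really is inverse to the dual immaculate tableau count matrix for n = 3:

```python
from itertools import product

def dit_count(shape, content):
    """Number of dual immaculate tableaux of the given shape and content."""
    letters = [v for v, mult in enumerate(content, 1) for _ in range(mult)]
    k = len(shape)
    found = set()
    for assign in product(range(k), repeat=len(letters)):
        if any(assign.count(r) != shape[r] for r in range(k)):
            continue
        rows = tuple(tuple(sorted(l for l, a in zip(letters, assign) if a == r))
                     for r in range(k))
        if all(rows[r][0] < rows[r + 1][0] for r in range(k - 1)):
            found.add(rows)
    return len(found)

def kinv_entry(alpha, beta):
    """Signed sum over tournaments on m = len(beta) vertices whose output
    list beta + Delta(tau) is nonnegative and packs to alpha."""
    m = len(beta)
    pairs = [(i, j) for i in range(m) for j in range(i + 1, m)]
    total = 0
    for bits in product((0, 1), repeat=len(pairs)):
        b = list(beta)
        for (i, j), t in zip(pairs, bits):
            b[i] += t            # tau(i,j) = 1 picks -R_{i,j}
            b[j] -= t
        if min(b) >= 0 and tuple(x for x in b if x) == alpha:
            total += (-1) ** sum(bits)
    return total

comps3 = [(3,), (2, 1), (1, 2), (1, 1, 1)]
K = [[dit_count(a, b) for b in comps3] for a in comps3]
Kinv = [[kinv_entry(a, b) for b in comps3] for a in comps3]
```

Multiplying K and Kinv gives the 4 × 4 identity matrix, as the tournament model predicts.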
Example 4.1. The following table computes $T([3,1,3])$ using tournaments. To visualize (10), we enter $a_1 = 3$, $a_2 = 1$, $a_3 = 3$ along the main diagonal of each tournament matrix $\tau \in T_3$. To find the output term $\pm[b_1, b_2, b_3]$ corresponding to $\tau$ via (10), we start with each diagonal entry $a_k$ and compute $b_k$ by adding the 1s to the right of $a_k$ in row $k$ and subtracting the 1s above $a_k$ in column $k$.

Let the difference vector of a tournament $\tau \in T_m$ be the list $\Delta(\tau) = [\Delta_1(\tau), \ldots, \Delta_m(\tau)]$, where
$$(12)\qquad \Delta_k(\tau) = \sum_{j > k} \tau(k,j) - \sum_{i < k} \tau(i,k) \quad\text{for } 1 \le k \le m.$$
Now suppose $\beta$ is a (strict) composition of $n$ with $m$ parts. We know from Theorem 3.11 that $U([\beta]) = \sum_{\alpha} K^{-1}(\alpha, \beta)\,[\alpha]$. It suffices to sum over compositions $\alpha$ of $n$ with at most $m$ parts, as is readily checked. For any such composition $\alpha$, let $T_m(\alpha, \beta)$ be the set of $\tau \in T_m$ such that $P([\beta] \oplus \Delta(\tau)) = [\alpha]$. We deduce the following combinatorial formula for the entries of the inverse of the immaculate Kostka matrix: $K^{-1}(\alpha, \beta)$ is the sum of $\operatorname{sgn}(\tau)$ over all $\tau \in T_m(\alpha, \beta)$, and moreover this sum can be restricted to the transitive tournaments in $T_m(\alpha, \beta)$.

Proof. It suffices to define an involution $I$ on $T_m$ such that for all transitive $\tau \in T_m$, $I(\tau) = \tau$; and for all non-transitive $\tau \in T_m$, $\operatorname{sgn}(I(\tau)) = -\operatorname{sgn}(\tau)$ and $\Delta(\tau) = \Delta(I(\tau))$. Fix a non-transitive $\tau \in T_m$, so that not all outdegrees in $\tau$ are distinct. Choose the minimum $r$ and then the minimum $s > r$ such that $d_r(\tau) = d_s(\tau)$. Define $\tau' = I(\tau)$ by interchanging the roles of $r$ and $s$ in $\tau$. In more detail, writing $r' = s$, $s' = r$, and $k' = k$ for all $k \ne r, s$ in $[m]$, we define $\tau'(i', j') = \tau(i, j)$ for all $i, j \in [m]$. Since $d_r(\tau) = d_s(\tau)$, the list of outdegrees for $\tau'$ is the same as the list for $\tau$. This implies that $\tau'$ is non-transitive and $I(\tau') = \tau$. Moreover, $\Delta(\tau') = \Delta(\tau)$ follows from (12). It is routine to check that $\operatorname{sgn}(\tau') = -\operatorname{sgn}(\tau)$ (see [11, p. 548] for details).
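The tournament formula for a full column of $K^{-1}$ can be checked mechanically. The sketch below uses our own function names and assumes that the packing map $P$ simply deletes the zero entries of a nonnegative list (the reading consistent with Example 4.1, where $[4,0,3]$ packs to $43$), while lists with a negative entry are sent to zero:

```python
from itertools import product

def delta(tau, m):
    """Difference vector (12): Delta_k = (1s right of the diagonal in row k)
    minus (1s above the diagonal in column k)."""
    return [sum(tau[(k, j)] for j in range(k + 1, m + 1))
            - sum(tau[(i, k)] for i in range(1, k))
            for k in range(1, m + 1)]

def inverse_kostka_column(beta):
    """Column beta of the inverse immaculate Kostka matrix:
    sum of sgn(tau) over tau in T_m(alpha, beta), for each alpha."""
    m = len(beta)
    pairs = [(i, j) for i in range(1, m + 1) for j in range(i + 1, m + 1)]
    col = {}
    for bits in product([0, 1], repeat=len(pairs)):
        tau = dict(zip(pairs, bits))
        out = [b + d for b, d in zip(beta, delta(tau, m))]
        if min(out) < 0:                          # such lists are sent to zero
            continue
        alpha = tuple(c for c in out if c > 0)    # assumed: P deletes zero entries
        col[alpha] = col.get(alpha, 0) + (-1) ** sum(bits)
    return {a: c for a, c in col.items() if c != 0}
```

For $\beta = [3,1,3]$ this reproduces the signed sums of Example 4.1; the two non-transitive tournaments give opposite contributions to $K^{-1}(412, 313)$, which therefore cancels to zero.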
Proof. We see that $W'$ does map $S_m$ into the claimed codomain $TT_m$ by definition of transitive tournaments, while $W$ does map $TT_m$ into the codomain $S_m$ by the characterization of transitive tournaments in terms of outdegrees. Given $w \in S_m$, let $\tau = W'(w)$; then $d_i(\tau)$ is the total number of symbols in $w$ that are less than $w_i$. Since $w$ is a rearrangement of $1, 2, \ldots, m$, the number of such symbols must be $w_i - 1$. Hence $d_i(\tau) + 1 = w_i$ for all $i$, so $W(\tau) = w$. As $w$ was arbitrary, $W \circ W'$ is the identity map on $S_m$. Since $S_m$ and $TT_m$ are both finite sets of size $m!$, we conclude that $W'$ is the two-sided inverse of $W$. To finish, we check that $\operatorname{sgn}(\tau) = \operatorname{sgn}(w)$. We compute
$$\operatorname{sgn}(\tau) = (-1)^{\sum_{i<j} \tau(i,j)} = (-1)^{\sum_{i<j} \chi(w_i > w_j)} = (-1)^{\operatorname{inv}(w)} = \operatorname{sgn}(w).$$
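The correspondence between permutations and transitive tournaments, and the sign computation, can be verified directly for small $m$; the following sketch (function names are ours) checks that $W(W'(w)) = w$ via outdegrees and that signs agree:

```python
from itertools import permutations

def W_prime(w):
    """W'(w): the transitive tournament with tau(i, j) = chi(w_i > w_j) for i < j."""
    m = len(w)
    return {(i, j): int(w[i - 1] > w[j - 1])
            for i in range(1, m + 1) for j in range(i + 1, m + 1)}

def outdegree(tau, m, i):
    """Number of j != i such that the edge between i and j points out of i."""
    return (sum(tau[(i, j)] for j in range(i + 1, m + 1))
            + sum(1 - tau[(j, i)] for j in range(1, i)))

def inv(w):
    """Number of inversions of the permutation w."""
    return sum(1 for i in range(len(w))
               for j in range(i + 1, len(w)) if w[i] > w[j])

# Check, for m = 4: d_i(W'(w)) + 1 = w_i for all i, and sgn(tau) = sgn(w).
m = 4
for w in permutations(range(1, m + 1)):
    tau = W_prime(w)
    assert all(outdegree(tau, m, i) + 1 == w[i - 1] for i in range(1, m + 1))
    assert (-1) ** sum(tau.values()) == (-1) ** inv(w)
```

Since both checks pass for every $w \in S_4$, this exercises the proof's two key facts exhaustively in a small case.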
We can now relate the tournament-based formulas for $K^{-1}$ to the noncommutative Jacobi–Trudi formula. The next theorem provides the promised combinatorial proof of the equivalence of the two definitions (4) and (6) for the immaculate basis $(S_\beta)$ of NSym.
Define $v^{(i)}$ as above, and let $w^{[i]}$ be the list $w$ with its $i$th entry deleted.

Special Rim Hook Tableaux.
In the particular case where β is a partition (meaning that the parts of β are weakly decreasing), we can give a formula for K −1 (α, β) involving special rim hook tableaux. We first review the analogous formula, due to Eğecioğlu and Remmel [6], for the inverse of the original Kostka matrix.
Recall that we draw the Ferrers diagram of an integer partition $\mu$ with the longest row at the bottom. A special rim hook of length $\ell$ in the diagram of $\mu$ is a sequence of $\ell$ cells that starts in the leftmost column and moves right or down at each step. The sign of a rim hook is $+1$ (resp. $-1$) if the rim hook occupies an odd (resp. even) number of rows. Given partitions $\lambda$ and $\mu$, a special rim hook tableau (SRHT) of shape $\mu$ and type $\lambda$ is a decomposition of the diagram of $\mu$ into a disjoint union of special rim hooks such that the weakly decreasing rearrangement of the list of rim hook lengths is $\lambda$. The sign of a SRHT is the product of the signs of its rim hooks. Eğecioğlu and Remmel [6] proved that $K^{-1}(\lambda, \mu)$ is the sum of the signs of all SRHT of shape $\mu$ and type $\lambda$. For more details and an abacus-based proof of this formula, see [11, §10.16].
The first three tableaux have sign $+1$ and types 433, 541, and 622, respectively. The next three tableaux have sign $-1$ and types 442, 532, and 631, respectively. We therefore obtain the six nonzero entries in column $\mu$ of $K^{-1}$. The same information can be found by taking the coefficients of $h_\lambda$ in the (commutative) Jacobi–Trudi expansion of the Schur function $s_{433}$:
$$s_{433} = h_{433} - h_{442} + h_{541} - h_{532} + h_{622} - h_{631}.$$
To state our formula for $K^{-1}(\alpha, \mu)$ where $\mu$ is a partition and $\alpha$ is a composition, we introduce the notions of total content and content for special rim hook tableaux. Let $S$ be a SRHT of partition shape $\mu = (\mu_1, \ldots, \mu_m)$, where some parts at the end might be zero. For $1 \le i \le m$, let the special rim hook starting in row $i$ have length $a_i$ and end in row $r_i$; if no rim hook starts in row $i$, let $a_i = 0$ and $r_i = i$. The total content of $S$ is the rearrangement of the rim hook lengths $a_1, \ldots, a_m$ produced as follows. Start with the empty list; for $i = 1, 2, \ldots, m$, insert $a_i$ into position $r_i$ of the current list. The content of $S$ is the strict composition obtained by deleting all zero parts from the total content of $S$. For the SRHT displayed above, we have $(a_1, \ldots, a_5) = (2, 4, 0, 3, 5)$ and $(r_1, \ldots, r_5) = (1, 1, 3, 3, 3)$, so the total content algorithm builds the lists $[2]$, $[4,2]$, $[4,2,0]$, $[4,2,3,0]$, and $[4,2,5,3,0]$. The examples show that we cannot compute the content of an SRHT by scanning the boxes in the diagram in a predetermined order and recording the lengths of the rim hooks as they are encountered.
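The insertion algorithm for the total content is short enough to state as code (names are ours):

```python
def total_content(a, r):
    """Total content of an SRHT: for i = 1..m, insert the rim hook length a_i
    at position r_i (1-indexed) of the current list."""
    lst = []
    for ai, ri in zip(a, r):
        lst.insert(ri - 1, ai)
    return lst

def content(a, r):
    """Content: delete the zero parts of the total content."""
    return [c for c in total_content(a, r) if c != 0]
```

With the data of the example above, `total_content([2, 4, 0, 3, 5], [1, 1, 3, 3, 3])` passes through the intermediate lists `[2]`, `[4, 2]`, `[4, 2, 0]`, `[4, 2, 3, 0]` and returns `[4, 2, 5, 3, 0]`, whose content is `[4, 2, 5, 3]`.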
Each SRHT shown above corresponds to a term in the noncommutative Jacobi–Trudi determinant for $S_\mu$. For example, the first SRHT corresponds to choosing $h_4$ from row 1, $h_2$ from row 2, $h_5$ from row 3, $h_3$ from row 4, and $h_0 = 1$ from row 5. As we prove in the next theorem, the total content of the SRHT agrees with the sequence of $h_k$'s multiplied together in top-to-bottom order. We remark in passing that using a left-to-right determinant expansion would have led to a simpler content rule, where we simply read the rim hook lengths from bottom to top; but this alternate determinant does not equal $S_\mu$ in general.

Proof. For all lists $v, w \in \mathbb{Z}^m$ such that $v$ is weakly decreasing, let $C(w, v)$ be the sum of the signs of all SRHT of shape $v$ and total content $w$. (This is zero if $v$ or $w$ has a negative entry.) We first prove that $C(w, v) = D(w, v)$ for all such lists $v, w$. It suffices to check that the quantities $C(w, v)$ satisfy the recursion and initial condition in Theorem 4.9(a). Note here that if $v \in \mathbb{Z}^m$ is weakly decreasing, then all the related lists $v^{(i)}$ appearing in (14) are also weakly decreasing. When $m = 1$, the initial condition $C(w, v) = \chi(w = v)$ holds because there is exactly one SRHT of shape $(v_1)$, which consists of a single positive rim hook of length $v_1$. Now suppose $m > 1$, and fix $v, w \in \mathbb{Z}^m$ with $v$ weakly decreasing. Consider a particular SRHT $S$ counted by $C(w, v)$. Let the special rim hook starting in row $m$ of $S$ end in row $i$. This rim hook has sign $(-1)^{m-i} = (-1)^{i+m}$. Deleting this rim hook and the cells it occupies, we obtain a smaller SRHT $S'$ such that $\operatorname{sgn}(S) = (-1)^{i+m} \operatorname{sgn}(S')$. One readily checks that since $S$ has shape $v$, $S'$ has shape $v^{(i)}$. Also, since $S$ has total content $w$, $S'$ has total content $w^{[i]}$. Finally, $w_i$ is the length of the deleted rim hook, which is $v_i + m - i$, since this rim hook starts in column 1 of row $m$ and ends in column $v_i$ of row $i$.
Conversely, by adding a rim hook of this form above a smaller SRHT, we see that every $S$ counted by $C(w, v)$ arises in this way from a unique choice of $i$ with $w_i = v_i + m - i$ and a unique $S'$ counted by $C(w^{[i]}, v^{(i)})$. Thus, recursion (14) holds with $D$ replaced by $C$. It follows by induction that $C(w, v) = D(w, v)$ for all $v, w \in \mathbb{Z}^m$ such that $v$ is weakly decreasing. Now for all compositions $\alpha$ and partitions $\mu$ where $\mu$ has $m$ parts, Theorem 4.9(b) expresses $K^{-1}(\alpha, \mu)$ as a sum of terms $D(w, \mu)$ over lists $w$ with $P(w) = [\alpha]$. Each SRHT of total content $w$ has content $P(w)$ (with trailing zeros deleted). So this expression for $K^{-1}(\alpha, \mu)$ reduces to the sum of the signs of all SRHT of shape $\mu$ and content $\alpha$, as needed.
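The recursion just verified can be implemented directly. In the sketch below (names ours) we assume, as in the proof above, that deleting the rim hook chosen at index $i$ removes row $i$ entirely while each row above it loses its leftmost cell, so $v^{(i)} = (v_1, \ldots, v_{i-1}, v_{i+1} - 1, \ldots, v_m - 1)$, and $w^{[i]}$ deletes the $i$th entry of $w$:

```python
def C(w, v):
    """Signed count of SRHT of weakly decreasing shape v and total content w,
    computed by deleting the rim hook starting in row m (length v_i + m - i,
    sign (-1)^(m+i)), as in the recursion of Theorem 4.9(a)."""
    m = len(v)
    if any(x < 0 for x in w) or any(x < 0 for x in v):
        return 0
    if m == 1:
        return int(w == v)
    total = 0
    for i in range(1, m + 1):
        if w[i - 1] == v[i - 1] + m - i:
            v_i = v[:i - 1] + [x - 1 for x in v[i:]]   # assumed reading of v^(i)
            w_i = w[:i - 1] + w[i:]                    # w with entry i deleted
            total += (-1) ** (m + i) * C(w_i, v_i)
    return total
```

For $v = (4,3,3)$, the six total contents read off from the determinant rows give the six signed types listed earlier for column $\mu = 433$.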
Remark 4.13. In contrast to the situation for partition diagrams, we have found no satisfactory way of decomposing composition diagrams into rim hooks to give combinatorial objects satisfying the recursion in Theorem 4.9. Starting with the diagram of $\beta \in \mathrm{Comp}_m$, one would have to remove $\beta_i + m - i$ cells (for some $i$) in a way that leaves the diagram of $\beta^{(i)}$. Various methods for drawing the diagram or removing these cells all seem to produce substructures consisting of diagrams and/or rim hooks that are disconnected.

t-Analogues
The original Kostka matrix (§1.1) has a $t$-analogue that gives the expansion of Schur symmetric functions in terms of the Hall–Littlewood symmetric polynomials $P_\mu$ [14, Ch. III]. Lascoux and Schützenberger [10] found a combinatorial formula for the entries of this matrix based on the charge statistic. Specifically, the $t$-analogue of the Kostka number $K(\lambda, \mu)$ is the sum of $t^{\operatorname{charge}(S)}$ over all semistandard tableaux $S$ of shape $\lambda$ and content $\mu$. For more details on charge, see [14, III.6] or [12, §3.3]. Carbonara [5] found a combinatorial formula for the inverse of the $t$-Kostka matrix as a sum over certain tournaments weighted by an appropriate power of $t$. See [12, §3.4] for a brief summary of this formula. In this section, we develop $t$-analogues of the inverse Kostka operators, the immaculate Kostka matrix, the inverse of the immaculate Kostka matrix, and related concepts. The basic idea is to replace each raising operator $R_{i,j}$ by $tR_{i,j}$, where $t$ is a formal variable, and trace the powers of $t$ through all the combinatorial constructions. With each application of a raising operator, the coefficient of the resulting object is multiplied by $t$. However, when boxes fall into a row due to application of $P$, no new $t$-factors are introduced.
The inverse of $[T_j]_t$ is given by formula (7) with the power $t^{e_{j-1} + \cdots + e_2 + e_1}$ inserted inside the sums on the right side. Theorem 3.3 holds for the $t$-analogues of $U_j$ and $U_j'$, with the same proof. We need only observe that the involution preserves the $t$-power (since changing the sign of a moved $j$ does not affect how many boxes are moved by a raising operator), and the fixed point $w$ is not multiplied by any $t$'s. Similarly, Lemma 3.7 is still true for the $t$-analogues. In Example 3.4, the seven positive diagrams shown have $t$-weights $1, t, t^2, t^2, t, t^2, t^2$ (respectively). The six negative diagrams have the same $t$-weights as their matches under the involution. This yields the $t$-analogues of parts (a), (b), (c) in Theorem 3.6. The power of $t$ for a given diagram $D$ is the number of times an entry $j$ in $D$ appears below row $j$. This holds since the only way a $j$ initially in row $j$ can move to a lower row is when it is moved there by a raising operator $tR_{i,j}$.
The $t$-analogue of the Kostka operator $U^{-1}: W \to W$ is more interesting since the action of $P$ (the falling operation) does not introduce new $t$'s. In this case, we know (§3.6) that $U^{-1}(v)$ is the formal sum of the shapes of all dual immaculate tableaux with content $v$. Given such a tableau $D$ with values $a_1 < a_2 < \cdots < a_k$ in column 1, we find the $t$-power of $D$ as follows. For $1 \le i \le k$, count the number of occurrences of $a_i$ in $D$ below row $i$. Add to these counts the number of occurrences of symbols $j$ in $D$ such that there is no $j$ in column 1 of $D$. Denote the total count by $\operatorname{wt}(D)$. For the tableau $D$ of Figure 1, the first column contains $1 < 2 < 3 < 5$, so the labels in bold in the diagram each contribute to the weight of $D$. Additionally, since there is no 4 in the first column of $D$, the two 4's in the diagram each contribute to the weight of $D$. Thus, $\operatorname{wt}(D) = 5$. This corresponds to the 5 raising operators used in Figure 1 to convert the filled diagram of $v = [1, 2, 2, 2, 3]$ to $D$.

Proof. Let $D$ be a dual immaculate tableau with content $v$, maximum entry $M$, and first column $a_1 < a_2 < \cdots < a_k$. It suffices to show that $\operatorname{wt}(D)$ is the number of times a raising operator is applied in producing $D$.

Example 5.7. From the tournaments in Example 4.1, we compute all entries in column $[3,1,3]$ of the $t$-analogue of $K^{-1}$: row 313 has $1$, row 322 has $-t$, row 412 has $t^2 - t$, row 421 has $t^2$, row 43 has $-t$, row 52 has $t^2$, row 511 has $-t^3$, and all other entries in this column are zero. The tournaments contributing to $K^{-1}(412, 313; t)$ are the two non-transitive tournaments in $T_3$, and we see that the $t$-powers of these two tournaments are unequal.
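Under the rule stated above, each chosen raising operator contributes one factor of $t$, and packing contributes none; assuming accordingly that a tournament $\tau$ carries the weight $t^{\sum_{i<j} \tau(i,j)}$, a column of the $t$-analogue of $K^{-1}$ can be computed as follows (a sketch with names of our own choosing, representing each entry as a dict mapping powers of $t$ to coefficients):

```python
from itertools import product

def t_inverse_kostka_column(beta):
    """Column beta of the t-analogue of K^{-1}: each tournament contributes
    (-t)^(number of chosen raising operators), i.e. each factor -tR_{i,j}
    supplies one -t; packing (deleting zero entries) introduces no t's."""
    m = len(beta)
    pairs = [(i, j) for i in range(1, m + 1) for j in range(i + 1, m + 1)]
    col = {}
    for bits in product([0, 1], repeat=len(pairs)):
        tau = dict(zip(pairs, bits))
        out = [beta[k - 1]
               + sum(tau[(k, j)] for j in range(k + 1, m + 1))
               - sum(tau[(i, k)] for i in range(1, k))
               for k in range(1, m + 1)]
        if min(out) < 0:                         # discarded lists contribute nothing
            continue
        alpha = tuple(c for c in out if c > 0)   # assumed packing rule
        e = sum(bits)                            # exponent of t for this tournament
        poly = col.setdefault(alpha, {})
        poly[e] = poly.get(e, 0) + (-1) ** e
    return {a: {e: c for e, c in p.items() if c != 0} for a, p in col.items()}
```

For $\beta = [3,1,3]$ this reproduces Example 5.7, including the entry $t^2 - t$ in row 412 coming from the two non-transitive tournaments with unequal $t$-powers.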