Tuesday, December 30, 2008
Los Vascos Grande Reserve 2006 Cabernet Sauvignon
Thursday, December 25, 2008
Hardys Oomoo Unwooded Chardonnay 2005
The Oomoo Unwooded offers a tight and full palate that carries pleasant, well balanced creamy Chardonnay tones set against a backdrop of tart but mellow citrussy flavours. Pale yellow with green hues, this wine displays lifted aromas of fresh peach, melon and fig with underlying citrus tones. There lies a perfect balance between sweet fruitiness and cakey, dry white qualities. The wine sits comfortably on the palate and is full flavoured. It shows characters of ripe peach, honeydew melon and lemon combined with a long elegant finish.
James Halliday’s 2005 Top 100 (selected)
Reds under $25
Hardys Oomoo Shiraz 2004 (94 points $12.95)
Hackersley Merlot 2004 (95 points $24)
Chalkers Crossing Hilltops Cabernet Sauvignon 2004 (95 points $24)
Ferngrove Majestic Cabernet Sauvignon 2003 (95 points $25)
Whites over $20
Peter Lehmann Reserve Riesling 2001 (96 points $24)
Hardys Eileen Hardy Chardonnay 2002 (96 points $38)
Leeuwin Estate Art Series Chardonnay 2002 (97 points $80)
Tyrrell’s Vat 1 Semillon 1999 (97 points $40)
Monday, December 22, 2008
Chateau Potensac 2004 Cru Bourgeois Exceptionnel
During recent decades the vineyard has been dominated by Cabernet Sauvignon, which accounts for about 60% of the vines, with approximately 25% Merlot and 15% Cabernet Franc, planted at an average of 8,000 vines/ha. With the planting of new Merlot vines there is naturally a swing towards that variety, and it is notable that the 2005 vintage included more Merlot than Cabernet Sauvignon (41% Merlot, 40% Cabernet Sauvignon, 19% Cabernet Franc) in the final blend. Yields are restricted to approximately 35 hl/ha; once harvested by hand, the fruit is fermented at a maximum temperature of 28ºC in stainless steel and concrete vats, with 15 to 18 days' maceration and regular pumping over.
Like Sociando-Mallet, Potensac is another chateau which illustrates how outdated the 1855 classification of Bordeaux has become. Potensac regularly turns out wines of classed growth quality, but has only a Cru Bourgeois designation, although in the Cru Bourgeois classification of 2003 (which subsequently collapsed following a legal challenge) it was accorded Cru Bourgeois Exceptionnel status, a short-lived recognition of the quality to be found here.
2004
Youthful hue, quite attractive, although this isn't matched on the nose, which is quite closed, with just a suggestion of dark fruits when the wine is worked hard. A cool style on the palate, quite well textured, with good tannic grip that is in keeping with the rest of the wine. An admirable style with good potential.
1996
A dark, claretty hue, with a cherry red rim. The nose is dark, smoky, with crisply bright yet deep, meaty fruit. More of the same on the palate, where the fruit has a precise, admirable presence with a warm, roasted yet fresh style. This has a very well defined structure, very upright, classic in nature, with a fine grip beneath. There is some bitterness to the fruit, which adds a delightful complexity.
Grape varieties: 46% Cabernet Sauvignon, 16% Cabernet Franc, 36% Merlot, 2% Carmenère.
Wednesday, December 17, 2008
Quotient Group - recall Weyl Group in SU(3)
Normal Subgroup and Equivalence
Let G be a group and N a subgroup of G. Then, N is called a normal subgroup of G if for each element g of G and each element n of N, the element gng^{-1} belongs to N. Note that if G is commutative, then every subgroup of G is automatically normal because gng^{-1} = n.
If G and H are groups and Φ : G --> H is a homomorphism, then ker Φ is a normal subgroup of G. Let e2 denote the identity element of H and suppose n is an element of ker Φ (i.e., Φ(n) = e2). Then Φ(gng^{-1}) = Φ(g)Φ(n)Φ(g)^{-1} = Φ(g)e2Φ(g)^{-1} = e2, so gng^{-1} again lies in ker Φ. Thus, ker Φ is a normal subgroup of G.
Suppose G is a group and N is a normal subgroup of G. Define two elements g and h to be equivalent if gh^{-1} ∈ N.
1. If g1 is equivalent to g2 and h1 is equivalent to h2, then g1h1 is equivalent to g2h2.
2. If g1 is equivalent to g2, then g1^{-1} is equivalent to g2^{-1}.
Quotient Group
If g is any element of G, let [g] denote the equivalence class containing g; that is, [g] is the subset of G consisting of all elements equivalent to g (including g itself).
Let G be a group and N a normal subgroup of G. The quotient group G/N is the set of all equivalence classes in G, with the product operation defined by [g][h] = [gh]. The elements of G/N are equivalence classes. The group product is defined by choosing one element g out of the first equivalence class, choosing one element h out of the second equivalence class, and then taking the product to be the equivalence class containing gh.
The idea behind the quotient group construction is that we make a new group out of G by setting every element of N equal to the identity. This then forces ng to be equal to g for every g ∈ G and n ∈ N. Now, ng and g are equivalent, since (ng)g^{-1} = n ∈ N. So setting elements of N equal to the identity forces equivalent elements to be equal. The condition that N be a normal subgroup guarantees that the group operation is still well defined after setting equivalent elements equal to each other.
If G is a group and N is a normal subgroup, then there is a homomorphism q of G onto the quotient group G/N given by q(g) = [g]. It follows from the definition of the product operation on G/N that q is indeed a homomorphism, and it clearly maps G onto G/N. If Φ : G --> H is a homomorphism, we have seen that ker Φ is a normal subgroup of G. If Φ maps G onto H, then it can be shown that H is isomorphic to the quotient group G/ker Φ.
Note: If G is a matrix Lie group, then G/N may not be. Even if G/N happens to be a matrix Lie group, there is no canonical procedure for finding a matrix representation of it.
Examples
1. The group of integers modulo n. In this case, G = Z and N = nZ (the set of integer multiples of n). To form the quotient group, we say that two elements of Z are equivalent if their difference is in N. Thus, the equivalence class of an integer i is the set of all integers that are congruent to i modulo n. (Both of these examples are checked numerically in the sketch after example 2 below.)
2. Take G = SL(n;C) and take N to be the set of elements of SL(n;C) that are multiples of the identity. The elements of N are the matrices of the form e^{2πik/n}I, k = 0, 1, ..., n-1. This is a normal subgroup of SL(n;C) because each element of N is a multiple of the identity and, thus, for any A ∈ SL(n;C), we have A(e^{2πik/n}I)A^{-1} = AA^{-1}(e^{2πik/n}I) = e^{2πik/n}I. The quotient group SL(n;C)/N is customarily denoted PSL(n;C), where P stands for "projective". It can be shown that PSL(n;C) is a simple group for all n ≥ 2; that is, PSL(n;C) has no normal subgroups other than {I} and PSL(n;C) itself.
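Here is a small Python/numpy sketch of both examples (illustrative only; the particular representatives, the choice n = 4, and the random matrix A are arbitrary). It checks that the product in Z/nZ does not depend on the chosen representatives, and that the scalar matrices e^{2πik/n}I have determinant 1 and are fixed by conjugation.

import numpy as np

# Example 1: in Z/nZ the product [g][h] = [g + h] is independent of the representatives.
n = 4
reps_of_3 = [3, 3 + n, 3 - 5 * n]           # several representatives of the class of 3
reps_of_2 = [2, 2 + 7 * n, 2 - n]           # several representatives of the class of 2
print({(g + h) % n for g in reps_of_3 for h in reps_of_2})   # {1}: always the class of 5 = 1 mod 4

# Example 2: the scalar matrices e^{2 pi i k/n} I lie in SL(n;C) and are fixed by conjugation.
I = np.eye(n, dtype=complex)
N = [np.exp(2j * np.pi * k / n) * I for k in range(n)]
print([np.isclose(np.linalg.det(Z), 1.0) for Z in N])        # all True: each has determinant 1

rng = np.random.default_rng(0)
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))   # an arbitrary invertible A
print([np.allclose(A @ Z @ np.linalg.inv(A), Z) for Z in N])         # all True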
Abstract Root Systems
Root System
A root system is a finite-dimensional real vector space E with an inner product <⋅,⋅>, together with a finite collection R of nonzero vectors in E satisfying the following properties:
1. The vectors in R span E.
2. If α is in R, then so is -α.
3. If α is in R, then the only multiples of α in R are α and -α.
4. If α and β are in R, then so is wα⋅β, where wα is the linear transformation of E defined by wα⋅β = β - 2(<β,α>/<α,α>)α, β ∈ E. Note: wα⋅α = -α.
5. For all α and β in R, the quantity 2<β,α>/<α,α> is an integer.
The map wα is the reflection about the hyperplane perpendicular to α; that is, wα⋅α = -α and wα⋅β = β for all β in E that are perpendicular to α. It should be evident that wα is an orthogonal transformation of E with determinant -1.
Since the orthogonal projection of β onto α is given by (<β,α>/<α,α>)α, the quantity 2<β,α>/<α,α> is twice the coefficient of α in this projection. Thus, condition 5 is equivalent to requiring that the projection of β onto α be an integer or half-integer multiple of α.
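A small numpy sketch (with an arbitrarily chosen α in R^2, purely as an illustration) of the reflection formula and the properties just listed:

import numpy as np

def reflect(alpha, beta):
    # w_alpha . beta = beta - 2(<beta,alpha>/<alpha,alpha>) alpha
    return beta - 2 * (beta @ alpha) / (alpha @ alpha) * alpha

alpha = np.array([1.0, 1.0])
perp  = np.array([1.0, -1.0])                       # perpendicular to alpha

print(np.allclose(reflect(alpha, alpha), -alpha))   # w_alpha . alpha = -alpha
print(np.allclose(reflect(alpha, perp), perp))      # vectors perpendicular to alpha are fixed

# w_alpha preserves inner products, i.e. it is an orthogonal transformation.
u, v = np.array([0.3, 2.0]), np.array([-1.5, 0.7])
print(np.isclose(reflect(alpha, u) @ reflect(alpha, v), u @ v))   # True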
Suppose (E,R) and (F,S) are root systems. Consider the vector space E ⊕ F, with the natural inner product determined by the inner products on E and F. Then, R ∪ S is a root system in E ⊕ F, called the direct sum of R and S.
A root system (E,R) is called reducible if there exists an orthogonal decomposition E = E1⊕ E2 with dim E1 > 0 and dim E2 > 0 such that every element in R is either in E1 or in E2. If no such decomposition exists, (E,R) is called irreducible.
Two root systems (E,R) and (F,S) are said to be equivalent if there exists an invertible linear transformation A : E --> F such that A maps R onto S and such that for all α ∈ R and β ∈ E, we have A(wα⋅β) = wAα⋅(Aβ), where wAα is the reflection in F about the hyperplane perpendicular to Aα. A map A with this property is called an equivalence. Note that the linear map A is not required to preserve inner products, but only to preserve the reflections about the roots.
Rank
The dimension of E is called the rank of the root system and the elements of R are called roots.
A rank-one root system R must consist of a pair {α, -α}, where α is a nonzero element of E; this root system is called A1.
In rank two, there are four possibilities: A1 x A1, A2, B2, and G2. In A1 x A1, the lengths of the horizontal roots are unrelated to the lengths of the vertical roots. In A2, all roots have the same length and the angle between successive roots is 60º. In B2, the longer roots are √2 times the length of the shorter roots and the angle between successive roots is 45º. In G2, the longer roots are √3 times the length of the shorter roots and the angle between successive roots is 30º.
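As a check on one of these cases, here is a Python sketch using the usual coordinates (±1,0), (0,±1), (±1,±1) for B2 (an assumed parametrisation, for illustration). It confirms the √2 length ratio, the 45º spacing, and axioms 4 and 5 of the definition above.

import numpy as np

B2 = [np.array(v, dtype=float) for v in
      [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (1, -1), (-1, 1), (-1, -1)]]

lengths = sorted({round(np.linalg.norm(a), 6) for a in B2})
print(lengths[1] / lengths[0])                  # 1.414... = sqrt(2)

angles = sorted(np.degrees(np.arctan2(a[1], a[0])) % 360 for a in B2)
print(np.diff(angles))                          # successive roots are 45 degrees apart

def reflect(alpha, beta):
    return beta - 2 * (beta @ alpha) / (alpha @ alpha) * alpha

# Axiom 4: R is closed under the reflections; axiom 5: 2<beta,alpha>/<alpha,alpha> is an integer.
print(all(any(np.allclose(reflect(a, b), c) for c in B2) for a in B2 for b in B2))
print(all(float(2 * (b @ a) / (a @ a)).is_integer() for a in B2 for b in B2))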
Weyl Group
If (E, R) is a root system, then the Weyl group W of R is the subgroup of the orthogonal group of E generated by the reflections wα, α ∈ R. By property 4, each wα maps R into itself, and indeed onto itself, since every β ∈ R satisfies β = wα⋅(wα⋅β). It follows that every element of W maps R onto itself. Since the roots span E, a linear transformation of E is determined by its action on R, and since R is finite there are only finitely many such actions. This shows that the Weyl group is a finite subgroup of O(E) and may be regarded as a subgroup of the permutation group on R.
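A Python sketch (using the standard planar picture of A2 as six equal-length roots 60º apart, an assumed realisation rather than the (a1, a2) coordinates used elsewhere in these notes) that generates the group from the reflections and counts its elements:

import numpy as np
from itertools import product

# Standard planar picture of A2: six roots of equal length, 60 degrees apart.
A2 = [np.array([np.cos(k * np.pi / 3), np.sin(k * np.pi / 3)]) for k in range(6)]

def reflection_matrix(alpha):
    # Matrix of the reflection about the line perpendicular to alpha.
    a = alpha / np.linalg.norm(alpha)
    return np.eye(2) - 2 * np.outer(a, a)

gens = [reflection_matrix(a) for a in A2]

# Generate the group: keep multiplying generators into it until nothing new appears.
group = [np.eye(2)]
changed = True
while changed:
    changed = False
    for g, w in product(gens, list(group)):
        new = g @ w
        if not any(np.allclose(new, h) for h in group):
            group.append(new)
            changed = True

print(len(group))   # 6 -- the Weyl group of A2, the permutation group on three elements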
Monday, December 8, 2008
Semisimple Lie Group
There are three equivalent characterizations of semisimple Lie algebras. The first: a Lie algebra is semisimple if it is isomorphic to a direct sum of simple Lie algebras. The second: a complex Lie algebra is semisimple if it is isomorphic to the complexification of the Lie algebra of a compact simply-connected group; for example, sl(n;C) ≅ su(n)C is semisimple. The third: a Lie algebra g is semisimple if and only if it has the complete reducibility property, that is, if and only if every finite-dimensional representation of g decomposes as a direct sum of irreducibles.
Recall that a group or Lie algebra is said to have the complete reducibility property if every finite-dimensional representation of it decomposes as a direct sum of irreducible invariant subspaces. A connected compact matrix Lie group always has this property. It follows that the Lie algebra of a compact simply-connected matrix Lie group also has the complete reducibility property, since there is a one-to-one correspondence between the representations of the compact group and those of its Lie algebra. Because there is a one-to-one correspondence between the representations of a real Lie algebra and the complex-linear representations of its complexification, we see also that if a complex Lie algebra g is isomorphic to the complexification of the Lie algebra of a compact simply-connected group, then g has the complete reducibility property. We have applied this reasoning to sl(2;C) (the complexification of the Lie algebra of SU(2)) and to sl(3;C) (the complexification of the Lie algebra of SU(3)).
Complex semisimple Lie algebras are complex Lie algebras that are isomorphic to the complexification of the Lie algebra of a compact simply-connected matrix Lie group.
Definition
If g is a complex Lie algebra, then an ideal in g is a complex subalgebra h of g with the property that for all X in g and H in h, we have [X, H] in h.
A complex Lie algebra g is called indecomposable if the only ideals in g are g and {0}. A complex Lie algebra g is called simple if g is indecomposable and dim g ≥ 2.
A complex Lie algebra is called reductive if it is isomorphic to a direct sum of indecomposable Lie algebras. A complex Lie algebra is called semisimple if it is isomorphic to a direct sum of simple Lie algebras. Note that a reductive Lie algebra is a direct sum of indecomposable algebras, which are either simple or one-dimensional commutative. Thus, a reductive Lie algebra is one that decomposes as a direct sum of a semisimple algebra and a commutative algebra.
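A small numpy illustration (a sketch of this last remark, not a proof) of the reductive case gl(n;C): any matrix splits into a traceless part lying in the semisimple piece sl(n;C) plus a scalar matrix lying in the one-dimensional commutative centre.

import numpy as np

n = 3
rng = np.random.default_rng(1)
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))   # an element of gl(n;C)

scalar_part    = (np.trace(X) / n) * np.eye(n)    # lies in the commutative centre C.I
traceless_part = X - scalar_part                  # lies in sl(n;C)

print(np.isclose(np.trace(traceless_part), 0))            # True
print(np.allclose(scalar_part + traceless_part, X))       # True: the decomposition recovers X
print(np.allclose(scalar_part @ X, X @ scalar_part))      # True: scalar matrices commute with everything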
The following table lists complex Lie algebras that are either reductive (but not semisimple) or semisimple.
sl(n;C) (n≥2) semisimple
so(n;C) (n≥3) semisimple
so(2;C) reductive
gl(n;C) (n≥1) reductive
sp(n;C) (n≥1) semisimple
All of the above listed semisimple algebras are actually simple, except for so(4;C), which is isomorphic to sl(2;C) ⊕ sl(2;C). Every complex simple Lie algebra is isomorphic to one of sl(n;C), so(n;C) (n≠4), sp(n;C), or to one of the five "exceptional" Lie algebras conventionally called G2, F4, E6, E7, and E8.
For real Lie algebras, the corresponding table is:
su(n) (n≥2) semisimple
so(n) (n≥3) semisimple
so(2) reductive
sp(n) (n≥1) semisimple
so(n,k) (n+k ≥3) semisimple
so(1,1) reductive
sp(n;R) (n≥1) semisimple
sl(n;R) (n≥2) semisimple
gl(n;R) (n≥1) reductive
In each case, the complexification of the listed Lie algebra is isomorphic to one of the complex Lie algebras in the above table. Note that the Heisenberg group, the Euclidean group, and the Poincare group are neither reductive nor semisimple.
Friday, December 5, 2008
Highest Weight
If we have a representation with a weight μ = (m1, m2), then by applying the root vectors X1, X2, X3, Y1, Y2, Y3 we can obtain new weights of the form μ + α, where α is the corresponding root (recall that π(H1)π(Zα)v = (m1+ a1)π(Zα)v). If π(Zα)v = 0, however, this argument produces nothing, and μ + α need not be a weight. This is in analogy to the classification of the representations of sl(2;C): in each irreducible representation of sl(2;C), π(H) is diagonalizable, and there is a largest eigenvalue of π(H). Two irreducible representations of sl(2;C) with the same largest eigenvalue are equivalent. The highest eigenvalue is always a non-negative integer, and, conversely, for every non-negative integer m, there is an irreducible representation with highest eigenvalue m.
Let α1 = (2, -1) and α2 = (-1, 2) be the roots introduced in "Weights & Roots". Let μ1 and μ2 be two weights. Then μ1 is higher than μ2 if μ1 - μ2 can be written in the form μ1 - μ2 = aα1 + bα2 with a ≥ 0 and b ≥ 0. If π is a representation of sl(3;C), then a weight μ0 for π is said to be a highest weight if for all weights μ of π, μ ≤ μ0.
Note that the relation "higher than" is only a partial ordering: it can happen that μ1 is neither higher nor lower than μ2. For example, {0, α1 - α2} has no highest element. Moreover, the coefficients a and b do not have to be integers, even if both μ1 and μ2 have integer entries. For example, (1,0) is higher than (0,0) since (1,0) = (2/3)α1 + (1/3)α2.
Theorem of Highest Weight
The theorem of the highest weight is the main theorem regarding the irreducible representations of sl(3;C).
1. Every irreducible representation π of sl(3;C) is the direct sum of its weight spaces; that is π(H1) and π(H2) are simultaneously diagonalizable in every irreducible representation.
2. Every irreducible representation of sl(3;C) has a unique highest weight μ0, and two equivalent irreducible representations have the same highest weight.
3. Two irreducible representations of sl(3;C) with the same highest weight are equivalent.
4. If π is an irreducible representation of sl(3;C), then the highest weight μ0 of π is of the form μ0 = (m1, m2) with m1 and m2 being non-negative integers.
An ordered pair (m1, m2) with m1 and m2 being non-negative integers is called a dominant integral element. The theorem says that the highest weight of each irreducible representation of sl(3;C) is a dominant integral element and, conversely, that every dominant integral element occurs as the highest weight of some irreducible representation.
However, if μ has integer coefficients and is higher than zero, this does not necessarily mean that μ is dominant integral. For example, α1 = (2, -1) is higher than zero but is not dominant integral. The condition for a weight to be a possible highest weight is that m1 and m2 be non-negative integers.
The dimension of the irreducible representation with highest weight (m1, m2) is
1/2(m1 + 1)(m2 + 1)(m1 + m2 + 2).
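A one-line Python check of this formula for a few small highest weights (the labels in the comment are the familiar SU(3) representations, given for orientation):

def dim(m1, m2):
    # Dimension of the irreducible representation of sl(3;C) with highest weight (m1, m2).
    return (m1 + 1) * (m2 + 1) * (m1 + m2 + 2) // 2

print(dim(0, 0), dim(1, 0), dim(0, 1), dim(1, 1), dim(3, 0))
# 1 3 3 8 10 -- the trivial, standard, dual, adjoint and 10-dimensional representations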
Weyl Group in SU(3)
The set of equivalence classes of representations of sl(3;C) is invariant under the adjoint action of SU(3). Let π be a finite-dimensional representation of sl(3;C) acting on a vector space V and let Π be the associated representation of SU(3) acting on the same space. For any A ∈ SU(3), we can define a new representation πA of sl(3;C), acting on the same vector space V, by setting πA(X) = π(AXA^{-1}). Since the adjoint action of A on sl(3;C) is a Lie algebra automorphism, πA is a representation of sl(3;C), and Π(A) is an intertwining map between (π, V) and (πA, V). In this sense the adjoint action of SU(3) is a symmetry of the set of equivalence classes of representations of sl(3;C).
The two-dimensional subspace h of sl(3;C) spanned by H1 and H2 is called a Cartan subalgebra. In general, the adjoint action of A ∈ SU(3) will not preserve the space h and so the equivalence of π and πA does not tell us anything about the weights of π. However, there are elements A in SU(3) for which AdA does preserve h. These elements make up the Weyl group for SU(3) and give rise to a symmetry of the set of weights of any representation π.
Let N be the subgroup of SU(3) consisting of those A ∈ SU(3) such that AdA(H) is an element of h for all H in h. And let Z be the subgroup of SU(3) consisting of those A ∈ SU(3) such that AdA(H) = H for all H ∈ h. The Weyl group of SU(3), denoted by W, is the quotient group N/Z.
The group Z consists precisely of the diagonal matrices inside SU(3), namely the matrices of the form A = (e^{iθ},0,0 : 0,e^{iΦ},0 : 0,0,e^{-i(θ+Φ)}) for θ and Φ in R. The group N consists of precisely those matrices A ∈ SU(3) such that for each k = 1, 2, 3, there exist l ∈ {1,2,3} and θ ∈ R such that Aek = e^{iθ}el. Here e1, e2, e3 is the standard basis for C3. The Weyl group W = N/Z is isomorphic to the permutation group on three elements.
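A numpy sketch (using one concrete permutation matrix and one concrete diagonal matrix, chosen purely for illustration) of these descriptions of N and Z:

import numpy as np

H1 = np.diag([1.0, -1.0, 0.0])
H2 = np.diag([0.0, 1.0, -1.0])

# An element of N: a cyclic permutation matrix (an even permutation, so det = 1).
A = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
print(np.isclose(np.linalg.det(A), 1.0), np.allclose(A @ A.conj().T, np.eye(3)))   # A is in SU(3)

for H in (H1, H2):
    AdA_H = A @ H @ A.conj().T          # Ad_A(H) = A H A^{-1}; A is unitary, so A^{-1} = A*
    print(np.allclose(AdA_H, np.diag(np.diag(AdA_H))))   # Ad_A(H) is again diagonal, i.e. in h

# An element of Z: a diagonal matrix in SU(3) fixes every H in h.
Z = np.diag(np.exp(1j * np.array([0.3, 0.5, -0.8])))     # phases sum to zero, so det(Z) = 1
print(np.allclose(Z @ H1 @ Z.conj().T, H1), np.allclose(Z @ H2 @ Z.conj().T, H2))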
In order to show that the Weyl group is a symmetry of the weights of any finite-dimensional representation of sl(3;C), we need to adopt a less basis-dependent view of weights. If a vector v is an eigenvector of π(H1) and π(H2), then it is also an eigenvector of π(H) for any element H of the space h spanned by H1 and H2. Furthermore, the eigenvalues depend linearly on H: if π(H)v = λ1v and π(J)v = λ2v, then π(aH + bJ)v = (aπ(H) + bπ(J))v = (aλ1 + bλ2)v. We therefore have the following basis-independent notion of weight: a linear functional μ ∈ h* is called a weight for π if there exists a nonzero vector v in V such that π(H)v = μ(H)v for all H in h. Such a vector v is called a weight vector with weight μ. So a weight is just a collection of simultaneous eigenvalues of all the elements H of h, which depend linearly on H and therefore define a linear functional on h. The reason for adopting this basis-independent approach is that the action of the Weyl group does not preserve the basis {H1, H2} for h.
In other words, the Weyl group is a group of linear transformations of h. This means that W acts on h, and we denote this action by w⋅H. We can then define an associated action on the dual space h*: for μ ∈ h* and w ∈ W, we define w⋅μ to be the element of h* given by (w⋅μ)(H) = μ(w^{-1}⋅H).
Wednesday, December 3, 2008
Weights & Roots
There is a one-to-one correspondence between the finite-dimensional complex representations Π of SU(3) and the finite-dimensional complex-linear representations π of sl(3;C). This correspondence is determined by the property that Π(e^X) = e^{π(X)} for all X ∈ su(3) ⊂ sl(3;C). The representation Π is irreducible if and only if the representation π is irreducible.
Simultaneous Diagonalization
Suppose that V is a vector space and A is some collection of linear operators on V. Then a simultaneous eigenvector for A is a nonzero vector v ∈ V such that for all A ∈ A, there exists a constant λA with Av = λAv. The numbers λA are the simultaneous eigenvalues associated to v. For example, consider the space D of all diagonal nxn matrices. For each k = 1,...,n, the standard basis element ek is a simultaneous eigenvector for D. For each diagonal matrix A, the simultaneous eigenvalue associated to ek is the k-th diagonal entry of A.
If A is a simultaneously diagonalisable family of linear operators on a finite-dimensional vector space V, then the elements of A commute.
If A is a commuting collection of linear operators on a finite-dimensional vector space V and each A ∈ A is diagonalizable, then the elements of A are simultaneously diagonalizable.
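A numpy sketch (with two made-up commuting operators built from a common eigenbasis, for illustration only) of this second statement:

import numpy as np

rng = np.random.default_rng(2)
S = rng.standard_normal((3, 3))                  # columns will serve as the common eigenvectors
D1, D2 = np.diag([1.0, 2.0, 3.0]), np.diag([5.0, -1.0, 0.5])

A = S @ D1 @ np.linalg.inv(S)
B = S @ D2 @ np.linalg.inv(S)
print(np.allclose(A @ B, B @ A))                 # the two diagonalizable operators commute

# Each column of S is a simultaneous eigenvector of A and B.
for k in range(3):
    v = S[:, k]
    print(np.allclose(A @ v, D1[k, k] * v), np.allclose(B @ v, D2[k, k] * v))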
Basis for sl(3;C)
Every finite-dimensional representation of sl(2;C) or sl(3;C) decomposes as a direct sum of irreducible invariant subspaces. Consider the following basis for sl(3;C):
H1 = (1,0,0 : 0,-1,0 : 0,0,0), H2 = (0,0,0 : 0,1,0 : 0,0,-1),
X1 = (0,1,0 : 0,0,0 : 0,0,0), X2 = (0,0,0 : 0,0,1 : 0,0,0), X3 = (0,0,1 : 0,0,0 : 0,0,0),
Y1 = (0,0,0 : 1,0,0 : 0,0,0), Y2 = (0,0,0 : 0,0,0 : 0,1,0), Y3 = (0,0,0 : 0,0,0 : 1,0,0).
The span of {H1, X1, Y1} is a subalgebra of sl(3;C) which is isomorphic to sl(2;C) (as can be seen by ignoring the third row and the third column of each matrix); similarly for {H2, X2, Y2}. Thus, the following commutation relations hold (a numerical check of the relations is sketched after the lists below):
[X1, Y1] = H1, [X2, Y2] = H2,
[H1, X1] = 2X1, [H2, X2] = 2X2,
[H1, Y1] = -2Y1, [H2, Y2] = -2Y2.
The other commutation relations among the basis elements that involve at least one of H1 and H2 are:
[H1, H2] = 0;
[H1, X1] = 2X1, [H1, Y1] = -2Y1,
[H2, X1] = -X1, [H2, Y1] = Y1;
[H1, X2] = -X2, [H1, Y2] = Y2,
[H2, X2] = 2X2, [H2, Y2] = -2Y2;
[H1, X3] = X3, [H1, Y3] = -Y3,
[H2, X3] = X3, [H2, Y3] = -Y3;
Finally, the remaining commutation relations are:
[X1, Y1] = H1,
[X2, Y2] = H2,
[X3, Y3] = H1 + H2;
[X1, X2] = X3, [Y1, Y2] = -Y3,
[X1, Y2] = 0, [X2, Y1] = 0;
[X1, X3] = 0, [Y1, Y3] = 0,
[X2, X3] = 0, [Y2, Y3] = 0;
[X2, Y3] = Y1, [X3, Y2] = X1,
[X1, Y3] = -Y2, [X3, Y1] = -X2.
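These relations are easy to verify numerically. Here is a numpy sketch using the matrices written out above, spot-checking a handful of the brackets:

import numpy as np

H1 = np.array([[1, 0, 0], [0, -1, 0], [0, 0, 0]])
H2 = np.array([[0, 0, 0], [0, 1, 0], [0, 0, -1]])
X1 = np.array([[0, 1, 0], [0, 0, 0], [0, 0, 0]])
X2 = np.array([[0, 0, 0], [0, 0, 1], [0, 0, 0]])
X3 = np.array([[0, 0, 1], [0, 0, 0], [0, 0, 0]])
Y1, Y2, Y3 = X1.T, X2.T, X3.T

def bracket(A, B):
    return A @ B - B @ A

print(np.array_equal(bracket(H1, H2), np.zeros((3, 3))))     # [H1, H2] = 0
print(np.array_equal(bracket(X1, Y1), H1))                   # [X1, Y1] = H1
print(np.array_equal(bracket(H1, X1), 2 * X1))               # [H1, X1] = 2X1
print(np.array_equal(bracket(X3, Y3), H1 + H2))              # [X3, Y3] = H1 + H2
print(np.array_equal(bracket(X1, X2), X3))                   # [X1, X2] = X3
print(np.array_equal(bracket(X2, Y3), Y1))                   # [X2, Y3] = Y1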
Weights of sl(3;C)
A strategy for classifying the representations of sl(3;C) is to simultaneously diagonalize π(H1) and π(H2). Since H1 and H2 commute, π(H1) and π(H2) also commute, and so there is at least a chance that they can be simultaneously diagonalized.
If (π, V) is a representation of sl(3;C), then an ordered pair μ = (m1, m2) ∈ C2 is called a weight of π if there exists v ≠ 0 in V such that π(H1)v = m1v, π(H2)v = m2v. A nonzero vector v satisfying this is called a weight vector corresponding to the weight μ. If μ = (m1, m2) is a weight, then the space of all vectors v satisfying π(H1)v = m1v, π(H2)v = m2v is the weight space corresponding to the weight μ. The multiplicity of a weight is the dimension of the corresponding weight space. Equivalent representations have the same weights and multiplicities.
If π is a representation of sl(3;C), then all of the weights of π are of the form μ = (m1, m2) with m1 and m2 being integers.
Roots of sl(3;C)
An ordered pair α = (a1, a2) ∈ C2 is called a root if
1. a1 and a2 are not both zero, and
2. there exists a nonzero Z ∈ sl(3;C) such that [H1, Z] = a1Z, [H2, Z] = a2Z. The element Z is called a root vector corresponding to the root α.
Recall that adX(Y) = [X, Y], e^{adX} = Ad(e^X), and the adjoint mapping is AdA(X) = AXA^{-1}. Condition 2 above says that Z is a simultaneous eigenvector for adH1 and adH2. This means that Z is a weight vector for the adjoint representation with weight (a1, a2). By condition 1, the roots are precisely the nonzero weights of the adjoint representation.
There are six roots of sl(3;C). They form a "root system", called A2.
α Z
(2, -1) X1
(-1, 2) X2
(1, 1) X3
(-2, 1) Y1
(1, -2) Y2
(-1, -1) Y3
It is convenient to single out the two roots corresponding to X1 and X2 and give them special names: α1 = (2, -1), α2 = (-1, 2). These are called the positive simple roots. They have the property that every root can be expressed as a linear combination of α1 and α2 with integer coefficients:
(2, -1) = α1
(-1, 2) = α2
(1, 1) = α1 + α2
(-2, 1) = -α1
(1, -2) = -α2
(-1, -1) = -α1 - α2 .
Let α = (a1, a2) be a root and Zα a corresponding root vector in sl(3;C). Let π be a representation of sl(3;C), μ = (m1, m2) a weight for π, and v ≠ 0 a corresponding weight vector. Then
π(H1)π(Zα)v = (m1+ a1)π(Zα)v,
π(H2)π(Zα)v = (m2+ a2)π(Zα)v.
This follows from the commutation relations: π(H1)π(Zα)v = π(Zα)π(H1)v + π([H1, Zα])v = m1π(Zα)v + a1π(Zα)v, and similarly for H2. Thus, either π(Zα)v = 0 or π(Zα)v is a new weight vector with weight
μ + α = (m1+ a1, m2+ a2).