Tuesday, December 30, 2008

Los Vascos Grande Reserve 2006 Cabernet Sauvignon

While the Cabernet Sauvignon 2006 is light and balanced with good persistent tannins, the Grand Reserve is full bodied and dry. It is big in style with powerful and juicy plum and dark cherry fruit flavours on the palate. Well balanced with good complexity and a long finish that has layers of spice and subtle hints of cocoa.

Best decanted for an hour to really open up the wine before drinking.


Thursday, December 25, 2008

Hardys Oomoo Unwooded Chardonnay 2005

This is the inaugural vintage of Oomoo Unwooded Chardonnay produced at the historic Tintara Winery in McLaren Vale. It represents all the great qualities of a McLaren white with its reliability and abundance of ripe fruit flavours.

The Oomoo Unwooded offers a tight and full palate that carries pleasant, well balanced creamy Chardonnay tones set against a backdrop of tart but mellow citrussy flavours. Pale yellow with green hues, this wine displays lifted aromas of fresh peach, melon and fig with underlying citrus tones. There lies a perfect balance between sweet fruitiness and cakey, dry white qualities. The wine sits comfortably on the palate and is full flavoured. It shows characters of ripe peach, honeydew melon and lemon combined with a long elegant finish.

James Halliday’s 2005 Top 100 (selected)
Reds under $25
Hardys Oomoo Shiraz 2004 (94 points $12.95)
Hackersley Merlot 2004 (95 points $24)
Chalkers Crossing Hilltops Cabernet Sauvignon 2004 (95 points $24)
Ferngrove Majestic Cabernet Sauvignon 2003 (95 points $25)

Whites over $20
Peter Lehmann Reserve Riesling 2001 (96 points $24)
Hardys Eileen Hardy Chardonnay 2002 (96 points $38)
Leeuwin Estate Art Series Chardonnay 2002 (97 points $80)
Tyrrell’s Vat 1 Semillon 1999 (97 points $40)

Monday, December 22, 2008

Chateau Potensac 2004 Cru Bourgeois Exceptionnel

The vineyards of Potensac are located in Ordonnac, in the Médoc appellation, and incorporate the vines of three properties managed as a single entity, these being Potensac, Gallais-Bellevue and Lassalle. There is a rigorous selection for the grand vin Chateau Potensac, with about 40-45% of the crop going to the second wine, which today is bottled as La Chapelle de Potensac, although Chateau Lassalle has also been used as a second label in the past.

During recent decades the vineyard has been slightly dominated by Cabernet Sauvignon, which accounts for about 60% of the vines, with approximately 25% Merlot and 15% Cabernet Franc in addition, planted at an average 8000 vines/ha. But with the purchase of new Merlot vines there is naturally a swing towards this variety, and it is notable that the 2005 vintage included more Merlot than Cabernet (41% Merlot, 40% Cabernet Sauvignon, 19% Cabernet Franc) in the final blend. Yields are restricted to approximately 35 hl/ha, and once harvested by hand the fruit is fermented at a maximum temperature of 28ºC in stainless steel and concrete vats, with 15 to 18 days maceration and constant pumping over.

Like Sociando-Mallet, Potensac is yet another chateau which illustrates the defunct nature of the 1855 classification of Bordeaux. Potensac regularly turns out wines of classed growth quality, but has only a Cru Bourgeois designation, although in the Cru Bourgeois classification of 2003 (which subsequently collapsed following a legal challenge), it was accorded Cru Bourgeois Exceptionnel status, a short-lived recognition of the quality to be found here.

2004
Youthful hue, quite attractive, although this isn't matched on the nose which is quite closed down, with just a suggestion of some dark fruits when the wine is worked hard. A cool style, quite well textured, good tannic grip, but it is in keeping with the rest of the wine. An admirable style with good potential.

1996
A dark, claretty hue, with a cherry red rim. The nose is dark, smoky, with crisply bright yet deep, meaty fruit. More of the same on the palate, where the fruit has a precise, admirable presence with a warm, roasted yet fresh style. This has a very well defined structure, very upright, classic in nature, with a fine grip beneath. There is some bitterness to the fruit, which adds a delightful complexity.

Grape varietals: 46% Cabernet Sauvignon, 16% Cabernet Franc, 36% Merlot, 2% Carmenère.

Wednesday, December 17, 2008

Quotient Group - recall Weyl Group in SU(3)

Normal Subgroup and Equivalence

Let G be a group and N a subgroup of G. Then, N is called a normal subgroup of G if for each element g of G and each element n of N, the element gng⁻¹ belongs to N. Note that if G is commutative, then every subgroup of G is automatically normal because gng⁻¹ = n. 


If G and H are groups and Φ : G --> H is a homomorphism, then ker Φ is a normal subgroup of G. Let e2 denote the identity element of H and suppose n is an element of ker Φ (i.e., Φ(n) = e2). Then Φ(gng⁻¹) = Φ(g)Φ(n)Φ(g)⁻¹ = Φ(g)e2Φ(g)⁻¹ = e2. Thus, ker Φ is a normal subgroup of G. 


Suppose G is a group and N is a normal subgroup of G. Define two elements g and h to be equivalent if gh⁻¹ ∈ N.

1. If g1 is equivalent to g2 and h1 is equivalent to h2, then g1h1 is equivalent to g2h2.

2. If g1 is equivalent to g2, then g1⁻¹ is equivalent to g2⁻¹.


Quotient Group

If g is any element of G, let [g] denote the equivalence class containing g; that is, [g] is the subset of G consisting of all elements equivalent to g (including g itself). 


Let G be a group and N a normal subgroup of G. The quotient group G/N is the set of all equivalence classes in G, with the product operation defined by [g][h] = [gh]. The elements of G/N are equivalence classes. The group product is defined by choosing one element g out of the first equivalence class, choosing one element h out of the second equivalence class, and then defining the product to be the equivalence class containing gh. 


The idea behind the quotient group construction is that we make a new group out of G by setting every element of N equal to the identity. This then forces ng to be equal to g for every g ∈ G and n ∈ N. Indeed, ng and g are equivalent, since (ng)g⁻¹ = n ∈ N. So setting elements of N equal to the identity forces equivalent elements to be equal. The condition that N be a normal subgroup guarantees that the group operations are still well defined after setting equivalent elements equal to each other. 


If G is a group and N is a normal subgroup, then there is a homomorphism q of G into the quotient group G/N given by q(g) = [g]. It follows from the definition of the product operation on G/N that q is indeed a homomorphism, and it clearly maps G onto G/N. If Φ : G --> H is a homomorphism, we have seen that ker Φ is a normal subgroup of G. If Φ maps G onto H, then it can be shown that H is isomorphic to the quotient group G/ker Φ. 

Note: If G is a matrix Lie group, then G/N may not be. Even if G/N happens to be a matrix Lie group, there is no canonical procedure for finding a matrix representation of it. 


Examples

1. Group of integers modulo n. In this case, G = Z and N = nZ (the set of integer multiples of n). To form the quotient group, we say that two elements of Z are equivalent if their difference is in N. Thus, the equivalence class of an integer i is the set of all integers congruent to i modulo n.


2. Take G = SL(n;C) and take N to be the set of elements of SL(n;C) that are multiples of the identity. The elements of N are the matrices of the form e^(2πik/n)I, k = 0, 1, ..., n-1. This is a normal subgroup of SL(n;C) because each element of N is a multiple of the identity and, thus, for any A ∈ SL(n;C), we have A(e^(2πik/n)I)A⁻¹ = AA⁻¹(e^(2πik/n)I) = e^(2πik/n)I. The quotient group SL(n;C)/N is customarily denoted PSL(n;C), where P stands for "projective". It can be shown that PSL(n;C) is a simple group for all n ≥ 2; that is, PSL(n;C) has no normal subgroups other than {I} and PSL(n;C) itself. 
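To make the construction concrete, here is a minimal sketch in Python of example 1, the quotient group Z/nZ; it represents each equivalence class by a canonical representative and checks that the product [i][j] = [i + j] (written additively, since the group operation on Z is addition) is well defined. The names n, cls and table are illustrative choices, not from the post.

```python
# A minimal sketch (plain Python): the quotient group Z/nZ of example 1.

n = 4  # N = nZ, the normal subgroup of integer multiples of n

def cls(i):
    """Equivalence class [i]: all integers congruent to i modulo n, canonically represented."""
    return i % n

# The product [i][j] = [i + j] is well defined: replacing i by i + a*n and j by
# j + b*n only changes the sum by a multiple of n, i.e. by an element of N.
elements = [cls(i) for i in range(n)]
table = [[cls(i + j) for j in elements] for i in elements]

print("Cayley table of Z/%dZ:" % n)
for row in table:
    print(row)

# Sanity check: [0] is the identity and [n - i] is the inverse of [i].
assert all(cls(i + (n - i)) == 0 for i in elements)
```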

Abstract Root Systems

Root System

A root system is a finite-dimensional real vector space E with an inner product <,>, together with a finite collection R of nonzero vectors in E satisfying the following properties:

1. The vectors in R span E.

2. If α is in R, then so is -α.

3. If α is in R, then the only multiples of α in R are α and -α.

4. If α and β are in R, then so is wαβ, where wα is the linear transformation of E defined by wαβ = β - 2(<β,α>/<α,α>)α for all β ∈ E. Note: wαα = -α.

5. For all α and β in R, the quantity 2<β,α>/<α,α> is an integer.


The map wα is the reflection about the hyperplane perpendicular to α; that is, wαα = -α and wαβ = β for all β in E that are perpendicular to α. It should be evident that  wα is an orthogonal transformation of E with determinant -1. 


Since the orthogonal projection of β onto α is given by (<β,α>/<α,α>)α, the quantity 2<β,α>/<α,α> is twice the coefficient of α in this projection. Thus, condition 5 is equivalent to saying that the projection of β onto α is an integer or half-integer multiple of α. 


Suppose (E,R) and (F,S) are root systems. Consider the vector space E ⊕ F, with the natural inner product determined by the inner products on E and F. Then, R ∪ S is a root system in E ⊕ F, called the direct sum of R and S. 


A root system (E,R) is called reducible if there exists an orthogonal decomposition E = E1 ⊕ E2 with dim E1 > 0 and dim E2 > 0 such that every element of R is either in E1 or in E2. If no such decomposition exists, (E,R) is called irreducible.


Two root systems (E,R) and (F,S) are said to be equivalent if there exists an invertible linear transformation A : E --> F such that A maps R onto S and such that for all α ∈ R and β ∈ E, we have A(wαβ) = wAα(Aβ). A map A with this property is called an equivalence. Note that the linear map A is not required to preserve inner products, but only to preserve the reflections about the roots. 


Rank

The dimension of E is called the rank of the root system and the elements of R are called roots.


A rank-one root system R must consist of a pair {α, -α}, where α is a nonzero element of E; this root system is called A1. 


In rank two, there are four possibilities: A1 x A1, A2, B2, and G2. In A1 x A1, the lengths of the horizontal roots are unrelated to the lengths of the vertical roots. In A2, all roots have the same length and the angle between successive roots is 60º. In B2, the length of the longer roots is √2 times the length of the shorter roots and the angle between successive roots is 45º. In G2, the length of the longer roots is √3 times the length of the shorter roots and the angle between successive roots is 30º. 


Weyl Group

If (E, R) is a root system, then the Weyl group W of R is the subgroup of the orthogonal group of E generated by the reflections wα, α ∈ R. By assumption, each wα maps R into itself, and indeed onto itself, since each α ∈ R satisfies α = wα(wαα). It follows that every element of W maps R onto itself.

Since the roots span E, a linear transformation of E is determined by its action on R.  This shows that the Weyl group is a finite subgroup of O(E) and may be regarded as a subgroup of the permutation group on R.
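As a concrete illustration of these definitions, here is a small NumPy sketch that builds one standard realization of the A2 root system (all roots the same length, 60º apart; the specific coordinates are an assumed choice, not from the post), forms the reflections wα, generates the Weyl group from the two simple reflections, and checks that it has order 6 and maps the root set onto itself.

```python
# A numerical sketch (NumPy) of the A2 root system and its Weyl group.
import numpy as np

a1 = np.array([1.0, 0.0])
a2 = np.array([-0.5, np.sqrt(3) / 2])            # simple roots at 120 degrees
roots = [a1, a2, a1 + a2, -a1, -a2, -(a1 + a2)]  # the six roots of A2

def reflection(alpha):
    """Matrix of w_alpha: beta -> beta - 2(<beta,alpha>/<alpha,alpha>) alpha."""
    alpha = np.asarray(alpha, dtype=float)
    return np.eye(2) - 2.0 * np.outer(alpha, alpha) / alpha.dot(alpha)

# Generate the Weyl group from the two simple reflections.
gens = [reflection(a1), reflection(a2)]
group = [np.eye(2)]
changed = True
while changed:
    changed = False
    for g in list(group):
        for s in gens:
            h = s @ g
            if not any(np.allclose(h, k) for k in group):
                group.append(h)
                changed = True

print("order of the Weyl group:", len(group))    # 6, the permutation group S3

# Every Weyl group element maps the set of roots onto itself.
def is_root(v):
    return any(np.allclose(v, r) for r in roots)
assert all(is_root(w @ r) for w in group for r in roots)
```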

Monday, December 8, 2008

Semisimple Lie Group

There are three equivalent characterizations of semisimple Lie algebras. The first: a semisimple Lie algebra is one which is isomorphic to a direct sum of simple Lie algebras. The second: a semisimple Lie algebra is one isomorphic to the complexification of the Lie algebra of a compact simply-connected group; for example, sl(n;C) ≅ su(n)c is semisimple. The third: a Lie algebra g is semisimple if and only if it has the complete reducibility property, that is, if and only if every finite-dimensional representation of g decomposes as a direct sum of irreducibles. 


Recall that a group or Lie algebra is said to have the complete reducibility property if every finite-dimensional representation of it decomposes as a direct sum of irreducible invariant subspaces. A connected compact matrix Lie group always has this property. It follows that the Lie algebra of a compact simply-connected matrix Lie group also has the complete reducibility property, since there is a one-to-one correspondence between the representations of the compact group and those of its Lie algebra. Because there is a one-to-one correspondence between the representations of a real Lie algebra and the complex-linear representations of its complexification, we see also that if a complex Lie algebra g is isomorphic to the complexification of the Lie algebra of a compact simply-connected group, then g has the complete reducibility property. We have applied this reasoning to sl(2;C) (the complexification of the Lie algebra of SU(2)) and to sl(3;C) (the complexification of the Lie algebra of SU(3)).


Complex semisimple Lie algebras are complex Lie algebras that are isomorphic to the complexification of the Lie algebra of a compact simply-connected matrix Lie group.


Definition

If g is a complex Lie algebra, then an ideal in g is a complex subalgebra h of g with the property that for all X in g and H in h, we have [X, H] in h. 


A complex Lie algebra g is called indecomposable if the only ideals in g are g and {0}. A complex Lie algebra g is called simple if g is indecomposable and dim g ≥ 2.


A complex Lie algebra is called reductive if it is isomorphic to a direct sum of indecomposable Lie algebras. A complex Lie algebra is called semisimple if it is isomorphic to a direct sum of simple Lie algebras. Note that a reductive Lie algebra is a direct sum of indecomposable algebras, which are either simple or one-dimensional commutative. Thus, a reductive Lie algebra is one that decomposes as a direct sum of a semisimple algebra and a commutative algebra. 


The following table lists the complex Lie algebras that are either reductive (not semisimple) or semisimple.

sl(n;C) (n≥2) semisimple
so(n;C) (n≥3) semisimple 
so(2;C) reductive 
gl(n;C) (n≥1) reductive 
sp(n;C) (n≥1) semisimple

All of the above listed semisimple algebras are actually simple, except for so(4;C), which is isomorphic to sl(2;C) ⊕ sl(2;C). Every complex simple Lie algebra is isomorphic to one of sl(n;C), so(n;C) (n≠4), sp(n;C), or to one of the five "exceptional" Lie algebras conventionally called G2, F4, E6, E7, and E8.

For real Lie algebra,

su(n) (n≥2) semisimple 
so(n) (n≥3)  semisimple 
so(2) reductive 
sp(n) (n≥1) semisimple 
so(n,k) (n+k ≥3) semisimple 
so(1,1) reductive 
sp(n;R) (n≥1) semisimple 
sl(n;R) (n≥2) semisimple
gl(n;R) (n≥1) reductive


In each case, the complexification of the listed Lie algebra is isomorphic to one of the complex Lie algebras in the above table. Note that the Heisenberg group, the Euclidean group, and the Poincare group are neither reductive nor semisimple. 

Friday, December 5, 2008

Highest Weight

If we have a representation with a weight μ = (m1, m2), then by applying the root vectors X1, X2, X3, Y1, Y2, Y3 we can get new weights of the form μ + α, where α is the corresponding root (recall that π(H1)π(Zα)v = (m1 + a1)π(Zα)v). If π(Zα)v = 0, then μ + α is not necessarily a weight. This is analogous to the classification of the representations of sl(2;C): in each irreducible representation of sl(2;C), π(H) is diagonalizable, and there is a largest eigenvalue of π(H). Two irreducible representations of sl(2;C) with the same largest eigenvalue are equivalent. The highest eigenvalue is always a non-negative integer, and, conversely, for every non-negative integer m, there is an irreducible representation with highest eigenvalue m. 


Let α1 = (2, -1) and α2 = (-1, 2) be the roots introduced in "Weights & Roots". Let μ1 and μ2 be two weights. Then, μ1 is higher than μ2 if μ1 - μ2 can be written in the form μ1 - μ2 = aα1 + bα2 with a ≥ 0 and b ≥ 0. If π is a representation of sl(3;C), then a weight μ0 for π is said to be a highest weight if for all weights μ of π, μ ≤ μ0.


Note that the relation "higher" is only a partial ordering: it can happen that μ1 is neither higher nor lower than μ2. For example, {0, α1 - α2} has no highest element. Moreover, the coefficients a and b do not have to be integers, even if both μ1 and μ2 have integer entries. For example, (1,0) is higher than (0,0) since (1,0) = (2/3)α1 + (1/3)α2.


Theorem of Highest Weight

The theorem of highest weight is a main theorem regarding the irreducible representation of sl(3;C).

1. Every irreducible representation π of sl(3;C) is the direct sum of its weight spaces; that is π(H1) and  π(H2) are simultaneously diagonalizable in every irreducible representation.

2. Every irreducible representation of sl(3;C) has a unique highest weight μ0, and two equivalent irreducible representations have the same highest weight.

3. Two irreducible representations of sl(3;C) with the same highest weight are equivalent.

4. If π is an irreducible representation of sl(3;C), then the highest weight μ0 of π is of the form μ0 = (m1, m2) with m1 and m2 being non-negative integers. 


An ordered pair (m1, m2) with m1 and m2 being non-negative integers is called a dominant integral element. The theorem says that the highest weight of each irreducible representation of sl(3;C) is a dominant integral element and, conversely, that every dominant integral element occurs as the highest weight of some irreducible representation. 


However, if μ has integer entries and is higher than zero, this does not necessarily mean that μ is dominant integral. For example, α1 = (2, -1) is higher than zero but is not dominant integral. The condition for a weight to be a highest weight is that m1 and m2 be non-negative integers.

The dimension of the irreducible representation with highest weight (m1, m2) is

1/2(m1 + 1)(m2 + 1)(m1 + m2 + 2).
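The formula is easy to tabulate; a one-function Python sketch (the function name dim_sl3_irrep is just an illustrative choice) reproduces the familiar dimensions 1, 3, 8 and 10.

```python
# A sketch of the dimension formula for the sl(3;C) irrep with highest weight (m1, m2).

def dim_sl3_irrep(m1: int, m2: int) -> int:
    """Dimension 1/2 (m1+1)(m2+1)(m1+m2+2); the product is always even, so // is exact."""
    return (m1 + 1) * (m2 + 1) * (m1 + m2 + 2) // 2

print(dim_sl3_irrep(0, 0))  # 1   trivial representation
print(dim_sl3_irrep(1, 0))  # 3   standard representation
print(dim_sl3_irrep(1, 1))  # 8   adjoint representation
print(dim_sl3_irrep(3, 0))  # 10
```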

Weyl Group in SU(3)

The representations of sl(3;C) are invariant under the adjoint action of SU(3). Let π be a finite-dimensional representation of sl(3;C) acting on a vector space V and let Π be the associated representation of SU(3) acting on the same space. For any A ∈ SU(3), we can define a new representation πA of sl(3;C), acting on the same vector space V, by setting πA(X) = π(AXA⁻¹). Since the adjoint action of A on sl(3;C) is a Lie algebra automorphism, this is a representation of sl(3;C). Π(A) is an intertwining map between (π, V) and (πA, V). We say that the adjoint action of SU(3) is a symmetry of the set of equivalence classes of representations of sl(3;C). 


The two-dimensional subspace h of sl(3;C) spanned by H1 and H2 is called a Cartan subalgebra. In general, the adjoint action of A ∈ SU(3) will not preserve the space h, and so the equivalence of π and πA does not tell us anything about the weights of π. However, there are elements A in SU(3) for which AdA does preserve h. These elements make up the Weyl group for SU(3) and give rise to a symmetry of the set of weights of any representation π. 


Let N be the subgroup of SU(3) consisting of those A ∈ SU(3) such that AdA(H) is an element of h for all H in h, and let Z be the subgroup of SU(3) consisting of those A ∈ SU(3) such that AdA(H) = H for all H ∈ h. The Weyl group of SU(3), denoted by W, is the quotient group N/Z. 


The group Z consists precisely of the diagonal matrices inside SU(3), namely the matrices of the form A = diag(e^(iθ), e^(iΦ), e^(-i(θ+Φ))) for θ and Φ in R. The group N consists precisely of those matrices A ∈ SU(3) such that for each k = 1, 2, 3, there exist l ∈ {1,2,3} and θ ∈ R such that Aek = e^(iθ)el. Here e1, e2, e3 is the standard basis for C3. The Weyl group W = N/Z is isomorphic to the permutation group on three elements. 


In order to show that the Weyl group is a symmetry of the weights of any finite-dimensional representation of sl(3;C), we need to adopt a less basis-dependent view of weights. If a vector v is an eigenvector of π(H1) and π(H2), then it is also an eigenvector of π(H) for any element H of the space h spanned by H1 and H2. Furthermore, the eigenvalues depend linearly on H: if π(H)v = λ1v and π(J)v = λ2v, then π(aH + bJ)v = (aπ(H) + bπ(J))v = (aλ1 + bλ2)v. We therefore have the following basis-independent notion of weight: a linear functional μ ∈ h* is called a weight for π if there exists a nonzero vector v in V such that π(H)v = μ(H)v for all H in h. Such a vector v is called a weight vector with weight μ. So a weight is just a collection of simultaneous eigenvalues of all the elements H of h, which depend linearly on H and therefore define a linear functional on h. The reason for adopting this basis-independent approach is that the action of the Weyl group does not preserve the basis {H1, H2} for h.


In other words, the Weyl group is a group of linear transformations of h. This means that W acts on h, and we denote this action by wH. We can also define an associated action on the dual space h*: for μ ∈ h* and w ∈ W, we define wμ to be the element of h* given by (wμ)(H) = μ(w⁻¹H). 

Wednesday, December 3, 2008

Weights & Roots

There is a one-to-one correspondence between the finite-dimensional complex representations Π of SU(3) and the finite-dimensional complex-linear representations π of sl(3;C). This correspondence is determined by the property that Π(e^X) = e^(π(X)) for all X ∈ su(3) ⊂ sl(3;C). The representation Π is irreducible if and only if the representation π is irreducible. 


Simultaneous Diagonalization

Suppose that V is a vector space and A is some collection of linear operators on V. Then a simultaneous eigenvector for A is a nonzero vector v ∈ V such that for all A ∈ A, there exists a constant λA with Av = λAv. The numbers λA are the simultaneous eigenvalues associated to v. For example, consider the space D of all diagonal nxn matrices. For each k = 1, ..., n, the standard basis element ek is a simultaneous eigenvector for D. For each diagonal matrix A, the simultaneous eigenvalue associated to ek is the k-th diagonal entry of A.


If A is a simultaneously diagonalisable family of linear operators on a finite-dimensional vector space V, then the elements of A commute.

If A is a commuting collection of linear operators on a finite-dimensional vector space V and each A ∈ A is diagonalizable, then the elements of A are simultaneously diagonalizable. 
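A quick NumPy sketch of these two statements: two matrices built from a common eigenbasis commute, and the columns of that common basis are simultaneous eigenvectors. The particular numbers are arbitrary illustrations.

```python
# Two commuting, diagonalizable matrices built from a common eigenbasis P (NumPy).
import numpy as np

rng = np.random.default_rng(0)
P = rng.normal(size=(3, 3))            # columns form a common eigenbasis (generically invertible)
D1 = np.diag([1.0, 2.0, 3.0])
D2 = np.diag([5.0, 5.0, -1.0])

A = P @ D1 @ np.linalg.inv(P)
B = P @ D2 @ np.linalg.inv(P)

# The elements of a simultaneously diagonalizable family commute.
assert np.allclose(A @ B, B @ A)

# Each column of P is a simultaneous eigenvector; the simultaneous eigenvalues
# are the corresponding diagonal entries of D1 and D2.
for k in range(3):
    v = P[:, k]
    assert np.allclose(A @ v, D1[k, k] * v)
    assert np.allclose(B @ v, D2[k, k] * v)
print("columns of P are simultaneous eigenvectors")
```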


Basis for sl(3;C)

Every finite-dimensional representation of sl(2;C) or sl(3;C) decomposes as a direct sum of irreducible invariant subspaces. Consider the following basis for sl(3;C):

H1 = (1,0,0 : 0,-1,0 : 0,0,0), H2 = (0,0,0 : 0,1,0 : 0,0,-1),

X1 = (0,0,0 : 1,0,0 : 0,0,0), X2 = (0,0,0 : 0,0,0 : 0,1,0), X3 = (0,0,0 : 0,0,0 : 1,0,0),

Y1 = (0,1,0 : 0,0,0 : 0,0,0), Y2 = (0,0,0 : 0,0,1 : 0,0,0), Y3 = (0,0,1 : 0,0,0 : 0,0,0).


The span of {H1, X1, Y1} is a subalgebra of sl(3;C) which is isomorphic to sl(2;C) (this can be seen by ignoring the third row and the third column in each matrix), and similarly for {H2, X2, Y2}. Thus, the following commutation relations exist:

[X1, Y1] = H1, [X2, Y2] = H2,

[H1, X1] = 2X1, [H2, X2] = 2X2,

[H1, Y1] = -2Y1, [H2, Y2] = -2Y2.

The other commutation relations among the basis elements which involve at least one of H1 and H2 are:

[H1, H2] = 0;

[H1, X1] = 2X1, [H1, Y1] = -2Y1,

[H2, X1] = -X1, [H2, Y1] = Y1;

[H1, X2] = -X2, [H1, Y2] = Y2,

[H2, X2] = 2X2, [H2, Y2] = -2Y2;

[H1, X3] = X3, [H1, Y3] = -Y3,

[H2, X3] = X3, [H2, Y3] = -Y3;

Finally, the remaining commutation relations:

[X1, Y1] = H1,

[X2, Y2] = H2,

[X3, Y3] = H1 + H2;

[X1, X2] = X3, [Y1, Y2] = -Y3,

[X1, Y2] = 0, [X2, Y1] = 0;

[X1, X3] = 0, [Y1, Y3] = 0,

[X2, X3] = 0, [Y2, Y3] = 0;

[X2, Y3] = Y1, [X3, Y2] = X1,

[X1, Y3] = -Y2, [X3, Y1] = -X2.
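All of the listed relations can be checked numerically. The sketch below uses the common convention X1 = E12, X2 = E23, X3 = E13 (ones above the diagonal) with Y1, Y2, Y3 their transposes, which is the convention that reproduces the relations as written; treat the explicit matrices as an assumption.

```python
# A NumPy check of the commutation relations listed above.
import numpy as np

def E(i, j):
    M = np.zeros((3, 3)); M[i - 1, j - 1] = 1.0; return M

H1 = np.diag([1.0, -1.0, 0.0]);  H2 = np.diag([0.0, 1.0, -1.0])
X1, X2, X3 = E(1, 2), E(2, 3), E(1, 3)
Y1, Y2, Y3 = E(2, 1), E(3, 2), E(3, 1)

def br(A, B):                      # the bracket [A, B] = AB - BA
    return A @ B - B @ A

checks = [
    (br(H1, H2), np.zeros((3, 3))),
    (br(X1, Y1), H1), (br(X2, Y2), H2), (br(X3, Y3), H1 + H2),
    (br(H1, X1), 2 * X1), (br(H2, X1), -X1),
    (br(H1, X3), X3),     (br(H2, X3), X3),
    (br(X1, X2), X3),     (br(Y1, Y2), -Y3),
    (br(X2, Y3), Y1),     (br(X3, Y2), X1),
    (br(X1, Y3), -Y2),    (br(X3, Y1), -X2),
]
assert all(np.allclose(lhs, rhs) for lhs, rhs in checks)
print("all listed commutation relations hold")
```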


Weights of sl(3;C)

A strategy for classifying the representations of sl(3;C) is to simultaneously diagonalize π(H1) and π(H2). Since H1 and H2 commute, π(H1) and π(H2) will also commute and so there is at least a chance that π(H1) and π(H2) can be simultaneously diagonalized. 

If (π, V) is a representation of sl(3;C), then an ordered pair μ = (m1, m2) ∈ C2 is called a weight of π if there exists v ≠ 0 in V such that π(H1)v = m1v, π(H2)v = m2v. A nonzero vector v satisfying this is called a weight vector corresponding to the weight μ. If μ = (m1, m2) is a weight, then the space of all vectors v satisfying π(H1)v = m1v, π(H2)v = m2v is the weight space corresponding to the weight μ. The multiplicity of a weight is the dimension of the corresponding weight space. Equivalent representations have the same weights and multiplicities. 


If π is a representation of sl(3;C), then all of the weights of π are of the form μ = (m1, m2) with m1 and m2 being integers.


Roots of sl(3;C)

An ordered pair α = (a1, a2) ∈ C2 is called a root if 

1. a1 and a2 are not both zero, and

2. there exists a nonzero Z ∈ sl(3;C) such that [H1, Z] = a1Z, [H2, Z] = a2Z. The element Z is called a root vector corresponding to the root α. 


Recall that adX(Y) = [X, Y], e^(adX) = Ad(e^X), and the adjoint mapping is AdA(X) = AXA⁻¹. Condition 2 above says that Z is a simultaneous eigenvector for adH1 and adH2. This means that Z is a weight vector for the adjoint representation, with weight (a1, a2). By condition 1, the roots are precisely the nonzero weights of the adjoint representation. 


There are six roots of sl(3;C). They form a "root system", called A2.

α            Z

(2, -1)    X1

(-1, 2)    X2

(1, 1)     X3

(-2, 1)    Y1

(1, -2)    Y2

(-1, -1)   Y3

It is convenient to single out the two roots corresponding to X1 and X2 and give them special names: α1 = (2, -1), α2 = (-1, 2). α1 and α2 are called the positive simple roots. They have the property that every root can be expressed as a linear combination of α1 and α2 with integer coefficients:

(2, -1) = α1

(-1, 2) = α2

(1, 1) = α1 + α2

(-2, 1) = -α1

(1, -2) = -α2

(-1, -1) = -α1 - α2 .

Let α = (a1, a2) be a root and Zα a corresponding root vector in sl(3;C). Let π be a representation of sl(3;C), μ = (m1, m2) a weight for π, and v ≠ 0 a corresponding weight vector. Then

π(H1)π(Zα)v = (m1+ a1)π(Zα)v,

π(H2)π(Zα)v = (m2+ a2)π(Zα)v.

Thus, either π(Zα)v = 0 or π(Zα)v is a new weight vector with weight

μ + α = (m1+ a1, m2+ a2) .
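A quick check of this statement in the defining (standard) representation of sl(3;C), where π is the identity map: e2 is a weight vector with weight (-1, 1), and applying the root vector X1 (root α1 = (2, -1)) produces a weight vector with weight (1, 0) = (-1, 1) + α1. Matrix conventions are as in the sketch after the commutation relations, i.e. an assumption for illustration.

```python
# Weight shifting by a root vector in the standard representation of sl(3;C) (NumPy).
import numpy as np

H1 = np.diag([1.0, -1.0, 0.0]); H2 = np.diag([0.0, 1.0, -1.0])
X1 = np.zeros((3, 3)); X1[0, 1] = 1.0          # X1 = E12, root vector for alpha1 = (2, -1)

e2 = np.array([0.0, 1.0, 0.0])                 # weight vector with weight (-1, 1)
assert np.allclose(H1 @ e2, -1 * e2) and np.allclose(H2 @ e2, 1 * e2)

v = X1 @ e2                                    # new weight vector (here e1)
assert np.allclose(H1 @ v, 1 * v) and np.allclose(H2 @ v, 0 * v)
print("weight (-1, 1) shifted by alpha1 = (2, -1) to (1, 0)")
```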

Saturday, November 29, 2008

Chateau Paradis Casseuil 2006

Château Paradis Casseuil gets its name from the combination of the registered name of the main parcel of vineyards called “Vines of Paradise” and Casseuil county. Taken under Domaines Barons de Rothschild (Lafite)’s wing in 1984, Château Paradis Casseuil then included 14 hectares of vines. In 1989, the estate grew by 9 hectares, and chais were included in the heart of the Sainte Foy la Longue vineyard. The Château Paradis Casseuil chais, located in the heart of the Sainte Foy la Longue vineyard are used for producing red wines. White wines are made at Château Rieussec, benefiting from the technical capacities of that great estate.

Lively and intense ruby color. Fine nose, red fruit and slight licorice aromas. The attack is supple (slight sensation of sweetness) with silky tannins.

2007
After a wet winter, the high temperatures in March and April helped give a good start to the vegetation. The following months were moderate until August. The fine weather settled early September encouraging the ripening of the grapes.

Beautiful crimson colour. A fresh nose with a touch of redcurrant and mint. The first impression on the palate is pleasant, frank with intense fruit. This wine can be appreciated now or kept for a few years when it will have reached its peak.

2006
The winter of 2005/2006 was dry and cold and the spring months were mild with little rain. August was disconcerting – cool and wet – and at the beginning of September ripeness levels were very low. However, the weather then became summery allowing the grapes to finish ripening well.

Beautiful straw yellow colour with hints of green. Discreet on the nose, but when swirled, the aromas are revealed with a fine lime bouquet. The attack is supple and the finish is fresh.

2005
The end of 2004 and the first few months of 2005 were dry. Moreover, maturation took place in perfect conditions.

Pale yellow colour. Notes of citrus fruit, mainly grapefruit, on the nose.

First impressions on the palate are full, lively and well rounded, leading to a silky finish marked by hints of hazelnuts.

2004
The year was marked by stormy weather until July, with no effects on the vines.
The beginning of the year was warmer than in 2003, but from March onwards the trend was reversed and an average drop of 2°C was noted. Rainfall was about the average for the past three vintages, with a dry June. July and August were very damp. Maturation was therefore slow but at harvest time the grapes for the dry whites were ripe.

Pale yellow colour. Very open and fresh on the nose: aromas of white flowers and violets.

Delicate first impressions on the palate. Notes of fresh fruit and Granny Smith apples.

Region : Sainte Foy la Longue, Médoc
Grape Varietals : Cabernet Sauvignon 50%, Merlot 45% and Cabernet franc 5%
Average wine production : 12 000 cases per year.


Thursday, November 27, 2008

Use of SU(2) & SO(3)

Every Lie group homomorphism gives rise to a Lie algebra homomorphism. In the case of a simply-connected matrix Lie group G, a Lie algebra homomorphism also gives rise to a Lie group homomorphism. In fact, for a simply-connected matrix Lie group, there is a natural one-to-one correspondence between the representations of G and the representations of the Lie algebra g. Each of the representations πm of su(2) was constructed from the corresponding representation Πm of the group SU(2).


SU(2) is simply connected but SO(3) is not (SU(2) can be thought of (topologically) as the three-dimensional sphere S3 sitting inside R4. It is well known that S3 is simply connected). There exists a Lie group homomorphism Φ which maps SU(2) onto SO(3) and which is two-to-one. Therefore, SU(2) and SO(3) are almost isomorphic.


Consider the space V of all 2x2 complex matrices which are self-adjoint (i.e., A* = A) and have trace zero. This is a three-dimensional real vector space with the following basis: A1 = (0,1 : 1,0); A2 = (0,-i : i,0); A3 = (1,0 : 0,-1). Define the inner product on V by <A, B> = 1/2 trace(AB); then {A1, A2, A3} is an orthonormal basis for V. Next we identify V with R3. Suppose U is an element of SU(2) and A is an element of V. Consider UAU⁻¹: trace(UAU⁻¹) = trace(A) = 0 and (UAU⁻¹)* = UAU⁻¹, so UAU⁻¹ is again in V. The map A --> UAU⁻¹ is linear, so we can define a linear map ΦU of V to itself by ΦU(A) = UAU⁻¹. Given A, B ∈ V, <ΦU(A), ΦU(B)> = <A, B>. Thus, ΦU is an orthogonal transformation of V.


Once we identify V with R3 using the above orthonormal basis, we may think of ΦU as an element of O(3). Since ΦU1U2 = ΦU1ΦU2, we see that Φ (the map U --> ΦU) is a homomorphism of SU(2) into O(3). SU(2) is connected, Φ is continuous, and ΦI is equal to I, which has determinant one. It follows that Φ must map SU(2) into the identity component of O(3), namely SO(3). However, the map Φ is not one-to-one, since for any U ∈ SU(2), ΦU = Φ-U. In fact, Φ is a two-to-one map of SU(2) onto SO(3) (recall that every element of O(3) has determinant ± 1).


Take the basis E1 = 1/2(i,0 : 0,-i); E2 = 1/2(0,-1 : 1,0); E3 = 1/2(0,i : i,0) for su(2) and the basis F1 = (0,0,0 : 0,0,1 : 0,-1,0); F2 = (0,0,-1 : 0,0,0 : 1,0,0); F3 = (0,1,0 : -1,0,0 : 0,0,0) for so(3). Then we have [E1, E2] = E3, [E2, E3] = E1 and [E3, E1] = E2, and similarly with the E's replaced by the F's. Thus the linear map Φ : su(2) --> so(3) which takes Ei to Fi is a Lie algebra isomorphism.
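The bracket relations can be verified directly. In the sketch below the two bases are written out explicitly with one particular sign convention (an assumption, since the flattened matrix notation above is ambiguous), chosen so that [E1, E2] = E3, [E2, E3] = E1, [E3, E1] = E2 and likewise for the F's.

```python
# A NumPy check that a basis of su(2) and a basis of so(3) satisfy the same brackets.
import numpy as np

E1 = 0.5 * np.array([[1j, 0], [0, -1j]])
E2 = 0.5 * np.array([[0, 1], [-1, 0]], dtype=complex)
E3 = 0.5 * np.array([[0, 1j], [1j, 0]])

F1 = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=float)
F2 = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], dtype=float)
F3 = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)

def br(A, B):
    return A @ B - B @ A

for (A, B, C) in [(E1, E2, E3), (E2, E3, E1), (E3, E1, E2)]:
    assert np.allclose(br(A, B), C)
for (A, B, C) in [(F1, F2, F3), (F2, F3, F1), (F3, F1, F2)]:
    assert np.allclose(br(A, B), C)
print("E_i and F_i satisfy the same commutation relations")
```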


Let σm = πm ∘ Φ⁻¹ be the irreducible complex representations of the Lie algebra so(3) (m ≥ 0). If m is even, then there is a representation Σm of the group SO(3) such that Σm(exp X) = exp(σm(X)) for all X in so(3). If m is odd, then there is no such representation of SO(3).


Representations of su(2) ≅ so(3) in Physics

Representations of su(2) ≅ so(3) in physics are labeled by the parameter l = m/2. In terms of this notation, a representation of so(3) comes from a representation of SO(3) if and only if l is an integer. The representations with l an integer are called "integer spin"; the others are called "half-integer spin." Consider the path in SO(3) consisting of rotations by angle 2πt in the (x, y)-plane, which comes back to the identity when t = 1. However, this path is not homotopic to the constant path.


If one defines Σm along the constant path, then one gets the value Σm(I) = I, as expected. If m is odd and one defines Σm along the path of rotations in the (x, y)-plane, then one gets the value Σm(I) = -I. There is no way to define Σm (m odd) as a "single-valued" representation of SO(3).


An electron is a "spin-1/2" particle, which means that it is described in quantum mechanics in a way that involves the representation σ1 of so(3). In quantum mechanics, one finds statements to the effect that performing a 360º rotation on the wave function of the electron gives back the negative of the original wave function. This reflects the fact that if one attempts to construct the nonexistent representation Σ1 of SO(3), then when defining Σ1 along a path of rotations in some plane, one gets Σ1(I) = -I.


A Unitary Representation of SO(3)

Consider the unit sphere S2 ⊂ R3, with the usual surface measure Ω. Any R ∈ SO(3) maps S2 into S2. For each R, we can define Π2(R) acting on L2(S2, dΩ) by [Π2(R)f](x) = f(R⁻¹x). Then, Π2 is a unitary representation of SO(3). Here, L2(S2, dΩ) has a very nice decomposition as the orthogonal direct sum of finite-dimensional invariant subspaces. This decomposition is the theory of "spherical harmonics" in physics.

Representations of SU(2) & su(2)

su(2) ≅ so(3), and the representations of so(3) are important in the computation of angular momentum. By studying the representation theory of su(2) we learn:

(1) how to use commutation relations to determine the representations of a Lie algebra.

(2) how to determine the representations of semisimple Lie algebras, e.g., su(3).


Some Representations of SU(2)

By definition, an element U of SU(2) is a linear transformation of C2. Let z denote the pair (z1, z2) in C2. Then, we may define a linear transformation Πm(U) on the space Vm by the formula [Πm(U)f](z) = f(U⁻¹z). The inverse is necessary in order to make Πm a representation. Here Vm is the space of homogeneous polynomials of degree m, i.e., functions of the form f(z1, z2) = a0 z1^m + a1 z1^(m-1) z2 + a2 z1^(m-2) z2^2 + ... + am z2^m.
 

Therefore, [Πm(U)f](z1, z2) = ∑ ak ((U⁻¹)11 z1 + (U⁻¹)12 z2)^(m-k) ((U⁻¹)21 z1 + (U⁻¹)22 z2)^k, where the sum runs over k = 0, ..., m. Πm(U)f is again a homogeneous polynomial of degree m. Thus, Πm(U) maps Vm into Vm. Moreover, Πm(U1)[Πm(U2)f](z) = Πm(U1U2)f(z), so Πm is a (finite-dimensional complex) representation of SU(2).


The Lie algebra representation associated to Πm can be computed as πm(X) = d/dt Πm(e^(tX)) |t=0. So (πm(X)f)(z) = d/dt f(e^(-tX)z) |t=0. Let z(t) be the curve e^(-tX)z. We have z(0) = z and (dz/dt)|t=0 = -Xz. The curve z(t) can also be written as z(t) = (z1(t), z2(t)), with zi(t) ∈ C. By the chain rule, πm(X)f = (∂f/∂z1)(dz1/dt)|t=0 + (∂f/∂z2)(dz2/dt)|t=0. We have

πm(X)f = -(∂f/∂z1)(X11 z1 + X12 z2) - (∂f/∂z2)(X21 z1 + X22 z2). 


Every finite-dimensional complex representation of the Lie algebra su(2) extends uniquely to a complex-linear representation of the complexification of su(2), and the complexification of su(2) is sl(2;C). Therefore, the representation πm of su(2) given above extends to a representation of sl(2;C). 


Consider H = (1,0 : 0,-1). Then

(πm(H)f)(z) = -(∂f/∂z1)z1 + (∂f/∂z2)z2,

i.e., πm(H) = -z1(∂/∂z1) + z2(∂/∂z2). Applying this to a basis element z1^k z2^(m-k), we have

πm(H) z1^k z2^(m-k) = -k z1^k z2^(m-k) + (m-k) z1^k z2^(m-k) = (m-2k) z1^k z2^(m-k).

Thus, z1^k z2^(m-k) is an eigenvector for πm(H) with eigenvalue (m-2k). In particular, πm(H) is diagonalizable. 


Let X and Y be the elements X = (0,0 : 1,0), Y = (0,1 : 0,0) in sl(2;C). We have πm(X) = -z2(∂/∂z1) and πm(Y) = -z1(∂/∂z2). Applying these to the basis element z1^k z2^(m-k) gives

πm(X) z1^k z2^(m-k) = -k z1^(k-1) z2^(m-k+1),

πm(Y) z1^k z2^(m-k) = -(m-k) z1^(k+1) z2^(m-k-1).
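These action formulas can be checked symbolically; here is a short SymPy sketch for one concrete choice of m and k (an illustrative spot check, not a proof).

```python
# SymPy check of the action of pi_m(H), pi_m(X), pi_m(Y) on z1^k z2^(m-k).
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
m, k = 5, 2
f = z1**k * z2**(m - k)

piH = lambda g: -z1 * sp.diff(g, z1) + z2 * sp.diff(g, z2)
piX = lambda g: -z2 * sp.diff(g, z1)
piY = lambda g: -z1 * sp.diff(g, z2)

assert sp.simplify(piH(f) - (m - 2 * k) * f) == 0
assert sp.simplify(piX(f) - (-k) * z1**(k - 1) * z2**(m - k + 1)) == 0
assert sp.simplify(piY(f) - (-(m - k)) * z1**(k + 1) * z2**(m - k - 1)) == 0
print("pi_m(H), pi_m(X), pi_m(Y) act on z1^k z2^(m-k) as stated")
```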

To show that πm is irreducible, it suffices to show that every nonzero invariant subspace of Vm is equal to Vm. Let W be such a subspace. Since W is assumed nonzero, there is at least one nonzero element w in W. Then w can be written uniquely in the form 

w = a0 z1^m + a1 z1^(m-1) z2 + a2 z1^(m-2) z2^2 + ... + am z2^m

with at least one of the ak's nonzero. Let k0 be the smallest value of k for which ak ≠ 0 and consider πm(X)^(m-k0) w. Since πm(X) lowers the power of z1 by 1, this kills every term in w except the one containing ak0 z1^(m-k0) z2^k0, and πm(X)^(m-k0) z1^(m-k0) z2^k0 = (-1)^(m-k0) (m-k0)! z2^m. Since W is assumed invariant, W must contain z2^m. Furthermore, πm(Y)^k z2^m is a nonzero multiple of z1^k z2^(m-k) for all 0 ≤ k ≤ m, and these elements form a basis for Vm, so in fact W = Vm. Therefore, the representation πm is an irreducible representation of sl(2;C). 


Irreducible Representations of su(2)

Every finite-dimensional complex representation π of su(2) extends to a complex-linear representation of the complexification of su(2), namely sl(2;C). Studying the irreducible representations of su(2) is equivalent to studying the irreducible representation of sl(2;C). Passing to the complexified Lie algebra makes computations easier, in that there is a nice basis for sl(2;C) that has no counterpart among the bases of su(2).


We can use commutation relations to determine the representation of a Lie algebra. Consider the following basis for sl(2;C) and commutation relations:

H = (1,0 : 0,-1);  X = (0,0 : 1,0);  Y = (0,1 : 0,0)

[H, X] = 2X,  [H, Y] = -2Y,  [X, Y] = H.

If V is a (finite-dimensional complex) vector space and A, B, and C are operators on V satisfying

[A, B] = 2B,  [A, C] = -2C,  [B, C] = A, then

because of the skew symmetry and bilinearity of brackets, the linear map π : sl(2;C) --> gl(V) satisfying π(H) = A, π(X) = B, π(Y) = C will be a representation of sl(2;C).


We call π(X) the "raising operator", because it has the effect of raising the eigenvalue of π(H) by 2, and we call π(Y) the "lowering operator". Indeed, since [π(H), π(X)] = π([H, X]) = 2π(X), if u is an eigenvector of π(H) with eigenvalue α ∈ C, then

π(H)π(X)u = π(X)π(H)u + 2π(X)u

= π(X)(αu) + 2π(X)u = (α + 2)π(X)u.

Either π(X)u = 0 or π(X)u is an eigenvector for π(H) with eigenvalue α + 2. More generally, π(H)π(X)^n u = (α + 2n)π(X)^n u. Similarly, since [π(H), π(Y)] = -2π(Y), we have π(H)π(Y)u = (α - 2)π(Y)u.


An operator on a finite-dimensional space can have only finitely many distinct eigenvalues. Therefore, there is some N ≥ 0 such that π(X)^N u ≠ 0 but π(X)^(N+1) u = 0.

Define u0 = π(X)^N u and λ = α + 2N. Then,

π(H)u0 = λu0, π(X)u0 = 0. 


Define uk = π(Y)^k u0, for k ≥ 0. Thus, we have π(H)uk = (λ - 2k)uk. Since π(H) can have only finitely many eigenvalues, the uk's cannot all be nonzero.

For k = 1,

π(H)u1 = π(H)π(Y)u0 = (λ - 2)π(Y)u0 = (λ - 2)u1.

π(X)u1 = π(X)π(Y)u0 = (π(Y)π(X) + π(H))u0 = π(H)u0 = λu0 (as π(X)u0 = 0)

π(Y)u1 = π(Y)π(Y)u0 = π(Y)^2 u0 = u2  

Suppose, as an induction hypothesis, that π(X)uk = [kλ - k(k-1)]uk-1. Then π(X)uk+1 = π(X)π(Y)uk

= (π(Y)π(X) + π(H))uk = π(Y)π(X)uk + (λ - 2k)uk

= π(Y)[kλ - k(k-1)]uk-1 + (λ - 2k)uk  = [kλ - k(k-1) + (λ - 2k)]uk

= [(k+1)λ - (k+1)k]uk  


Because π(H) can have only finitely many eigenvalues, the uk's cannot all be nonzero. Let m be the largest integer such that um ≠ 0, so that um+1 = π(Y)^(m+1) u0 = 0. Then 0 = π(X)um+1 = [(m+1)λ - (m+1)m]um = (m+1)(λ - m)um. Since um ≠ 0 and m + 1 ≠ 0, we must have λ = m, where m is a non-negative integer. 


In summary, given a finite-dimensional irreducible representation π of sl(2;C) acting on a space V, there exists an integer m ≥ 0 and nonzero vectors u0, ..., um such that

π(H)uk = (m - 2k)uk, 

π(Y)uk = uk+1 (k < m),

π(Y)um = 0, 

π(X)uk = [km - k(k-1)]uk-1 (k > 0),

π(X)u0 = 0.

The vectors u0, ..., um must be linearly independent, since they are eigenvectors of π(H) with distinct eigenvalues. Moreover, the (m+1)-dimensional span of u0, ..., um is explicitly invariant under π(H), π(X), and π(Y), hence under π(Z) for all Z ∈ sl(2;C). Since π is irreducible, this space must be all of V. 
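The summary formulas can be turned directly into (m+1)x(m+1) matrices, and one can check numerically that they really do satisfy the sl(2;C) relations [H, X] = 2X, [H, Y] = -2Y, [X, Y] = H. A NumPy sketch (the helper name sl2_irrep is just an illustrative choice):

```python
# Build pi(H), pi(X), pi(Y) in the basis u_0, ..., u_m and verify the brackets.
import numpy as np

def sl2_irrep(m):
    H = np.diag([m - 2 * k for k in range(m + 1)]).astype(float)
    X = np.zeros((m + 1, m + 1))
    Y = np.zeros((m + 1, m + 1))
    for k in range(m):
        Y[k + 1, k] = 1.0                      # pi(Y) u_k = u_{k+1}, pi(Y) u_m = 0
    for k in range(1, m + 1):
        X[k - 1, k] = k * m - k * (k - 1)      # pi(X) u_k = [km - k(k-1)] u_{k-1}, pi(X) u_0 = 0
    return H, X, Y

def br(A, B):
    return A @ B - B @ A

for m in range(6):
    H, X, Y = sl2_irrep(m)
    assert np.allclose(br(H, X), 2 * X)
    assert np.allclose(br(H, Y), -2 * Y)
    assert np.allclose(br(X, Y), H)
print("the formulas define a representation of sl(2;C) for each m")
```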


The (m+1)-dimensional representation πm described above must therefore be equivalent to π. This can be seen explicitly by introducing the following basis for Vm:

uk = [πm(Y)]^k (z2)^m = (-1)^k (m!/(m-k)!) z1^k z2^(m-k)  (k ≤ m).

In other words, Vm has a basis u0, ..., um on which πm(H), πm(X), and πm(Y) act according to the formulas above; these operators have the right commutation relations to form a representation of sl(2;C), and this representation is irreducible. 

Wednesday, November 19, 2008

Complexification

Complexification of a Real Lie Algebra

The complexification of a finite-dimensional real vector space V, denoted Vc, is the space of formal linear combinations v1 + iv2, with v1, v2 ∈ V. 


Let g be a finite-dimensional real Lie algebra and gc its complexification (as a real vector space). Then, the bracket operation on g has a unique extension to gc which makes gc into a complex Lie algebra. The complex Lie algebra gc is called the complexification of the real Lie algebra g


Isomorphisms of Complex Lie Algebra

The Lie algebras gl(n;C), sl(n;C), so(n;C), and sp(n;C) are complex Lie algebras. In addition, there are the following isomorphisms of complex Lie algebras:

gl(n;R)c ≅ gl(n;C)

u(n)c ≅ gl(n;C)

su(n)c ≅ sl(n;C)

sl(n;R)c ≅ sl(n;C)

so(n)c ≅ so(n;C)

sp(n;R)c ≅ sp(n;C)

sp(n)c ≅ sp(n;C)

(u(n) is the space of all nxn complex skew-self-adjoint matrices. If X is any nxn complex matrix, then X = (X - X*)/2 + i(X + X*)/(2i). Thus, X can be written as a skew-self-adjoint matrix plus i times a skew-self-adjoint matrix, and every X in gl(n;C) can be written uniquely as X1 + iX2, with X1 and X2 in u(n). It follows that u(n)c ≅ gl(n;C). If X has trace zero, then so do X1 and X2, which shows su(n)c ≅ sl(n;C).)
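The decomposition in the parenthetical remark is easy to verify numerically; a NumPy sketch for a random complex matrix:

```python
# X in gl(n;C) splits as X = X1 + i X2 with X1, X2 skew-self-adjoint (in u(n)).
import numpy as np

rng = np.random.default_rng(1)
n = 3
X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

X1 = (X - X.conj().T) / 2
X2 = (X + X.conj().T) / (2j)

assert np.allclose(X1 + 1j * X2, X)          # X = X1 + i X2
assert np.allclose(X1.conj().T, -X1)         # X1 is skew-self-adjoint
assert np.allclose(X2.conj().T, -X2)         # X2 is skew-self-adjoint
print("X decomposes as a u(n) element plus i times a u(n) element")
```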


Note that u(n)c ≅ gl(n;R)c ≅ gl(n;C). However, u(n) is not isomorphic to gl(n;R), except when n = 1. The real algebras u(n) and gl(n;R) are called real forms of the complex Lie algebra gl(n;C). 


In physics, we do not always clearly distinguish between a matrix Lie group and its Lie algebra, or between a real Lie algebra and its complexification. For example, some references in the literature to SU(2) actually refer to the complexified Lie algebra sl(2;C). 


Representation and Complexification

Let g be a real Lie algebra and gc its complexification. Then, every finite-dimensional complex representation π of g has a unique extension to a complex-linear representation of gc, also denoted π and given by π(X + iY) = π(X) + iπ(Y) for all X, Y ∈ g. Furthermore, π is irreducible as a representation of gc if and only if it is irreducible as a representation of g.


If π is a complex representation of the real Lie algebra g acting on the complex vector space V, then saying that π is irreducible means that there is no nontrivial invariant complex subspace W ⊂ V. Even though g is a real Lie algebra, when considering complex representations of g, we are interested only in complex invariant subspaces.


Credit Default Swaps

Nov 18 (Reuters) - A report on Bloomberg said the cost of protecting Berkshire's debt against default using credit default swaps (CDS) has risen from 140 (two months ago) to 388 basis points, or $388,000 a year to protect $10 million for five years. BRK-A shares fall 4% to $91,700 after the report.

Sunday, November 16, 2008

Chateau Sainte Barbe 2004 Merlot

Blind Test
After a blind tasting of 350 Bordeaux wines by the Belgian wine tasting committee of Revue Vino magazine, Merlot Sainte Barbe 2004 and Chateau Sainte Barbe 2003 & 2004 were selected amongst the finalists, and Chateau Sainte Barbe was awarded 2 Bacchus.

The wine's taste is fruity with harmonious aromas of ripe grapes, plums and white flowers. On the palate (roof of the mouth), it combines a well-balanced structure with a velvety (closely woven fabric of silk), fine and voluptuous flavour. Evolution: 3 to 5 years

Region: Gironde, right bank of the Garonne, Bordeaux
Grape Varieties: 100% Merlot



Reference
"In 2004, observing how some parcels were reacting to our husbandry, we decided to produce a still higher quality wine, called 'Cuvée VSP' lowering the yield to an extreme 4 bunches of grapes per vine, using picking boxes for hand harvesting and making malolactic fermentation directly in 100% new oak barrels.

The results have exceeded even our expectations."



Complex wine with an aroma of blackberries, blackcurrants, chocolate, and espresso. Medium to full bodied with rich ripe tannins.

Thursday, November 13, 2008

Generating Representations

One way of generating representations is to take some representations one knows and combine them in some fashion. There are three standard methods of obtaining new representations from old, namely, direct sums of representations, tensor products of representations, and dual representations.


Direct Sum of Representations

Let G be a matrix Lie group and let Π1, Π2, ..., Πm be representations of G acting on vector spaces V1, V2, ..., Vm. Then, the direct sum of Π1, Π2, ..., Πm is a representation Π1 ⊕ Π2 ⊕ ... ⊕ Πm of G acting on the space V1 ⊕ V2 ⊕ ... ⊕ Vm, defined by [Π1 ⊕ Π2 ⊕ ... ⊕ Πm(A)](v1, v2, ..., vm) = (Π1(A)v1, Π2(A)v2, ..., Πm(A)vm) for all A ∈ G. 


Similarly, if g is a Lie algebra and π1, π2, ..., πm are representations of g acting on V1, V2, ..., Vm, then we define the direct sum of π1, π2, ..., πm, acting on V1 ⊕ V2 ⊕ ... ⊕ Vm, by [π1 ⊕ π2 ⊕ ... ⊕ πm(X)](v1, v2, ..., vm) = (π1(X)v1, π2(X)v2, ..., πm(X)vm) for all X ∈ g.
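Concretely, a direct sum acts block-diagonally. The sketch below combines two toy representations of the group of invertible 2x2 matrices (the defining representation and the determinant, chosen purely for illustration) and checks that the block-diagonal construction still respects the group product.

```python
# A NumPy sketch of a direct sum of two representations of the same group.
import numpy as np

def Pi1(A):                                   # defining representation on C^2
    return A

def Pi2(A):                                   # one-dimensional representation: the determinant
    return np.array([[np.linalg.det(A)]])

def direct_sum(A):
    """Block-diagonal matrix of (Pi1 ⊕ Pi2)(A) acting on V1 ⊕ V2."""
    M1, M2 = Pi1(A), Pi2(A)
    n1, n2 = M1.shape[0], M2.shape[0]
    out = np.zeros((n1 + n2, n1 + n2))
    out[:n1, :n1] = M1
    out[n1:, n1:] = M2
    return out

A = np.array([[1.0, 2.0], [0.0, 3.0]])
B = np.array([[0.0, 1.0], [-1.0, 1.0]])

# The direct sum is again a representation: it respects the group product.
assert np.allclose(direct_sum(A @ B), direct_sum(A) @ direct_sum(B))
print("(Pi1 ⊕ Pi2)(AB) = (Pi1 ⊕ Pi2)(A) (Pi1 ⊕ Pi2)(B)")
```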


Tensor Products of Representations

Consider an element u of U and an element v of V; the "product" of these two is denoted u⊗v. The space U⊗V is then the space of linear combinations of such products, i.e., of elements of the form a1 u1⊗v1 + a2 u2⊗v2 + ... + an un⊗vn. The product is not necessarily commutative (v⊗u lives in a different space) but it is bilinear: (u1 + au2)⊗v = u1⊗v + a u2⊗v and u⊗(v1 + av2) = u⊗v1 + a u⊗v2. 


If U and V are finite-dimensional real or complex vector spaces, then a tensor product (W, ϕ) of U and V is a vector space W, together with a bilinear map ϕ : U x V --> W, with the following property: if ψ is any bilinear map of U x V into a vector space X, then there exists a unique linear map ψ̃ of W into X such that ψ(u, v) = ψ̃(ϕ(u, v)). In this way, bilinear maps on U x V turn into linear maps on W. Suppose e1, e2, ..., en is a basis for U and f1, f2, ..., fm is a basis for V. Then {ϕ(ei, fj) | 1 ≤ i ≤ n, 1 ≤ j ≤ m} is a basis for W. In this case, {ei⊗fj | 1 ≤ i ≤ n, 1 ≤ j ≤ m} is a basis for U⊗V and dim(U⊗V) = (dim U)(dim V). 


The tensor product (W, ϕ) is unique up to canonical isomorphism. That is, if (W1, ϕ1) and (W2, ϕ2) are two tensor products, then there exists a unique vector space isomorphism Φ : W1 --> W2 such that Φ(ϕ1(u, v)) = ϕ2(u, v) for all u ∈ U and v ∈ V. 


The defining property of U⊗V is called the universal property of tensor products. Suppose that ψ(u, v) is some bilinear expression in (u, v). Then, the universal property says precisely that there is a unique linear map T (= ψ̃) such that T(u⊗v) = ψ(u, v). Let A : U --> U and B : V --> V be linear operators. Then, there exists a unique linear operator from U⊗V to U⊗V, denoted A⊗B, such that (A⊗B)(u⊗v) = (Au)⊗(Bv) for all u ∈ U and v ∈ V. Moreover, (A1⊗B1)(A2⊗B2) = (A1A2)⊗(B1B2). 
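With the standard bases, the matrix of A⊗B is the Kronecker product, so the defining property and the product rule can be checked numerically with NumPy's kron (the random matrices below are arbitrary illustrations).

```python
# (A⊗B)(u⊗v) = (Au)⊗(Bv) and (A1⊗B1)(A2⊗B2) = (A1A2)⊗(B1B2), via np.kron.
import numpy as np

rng = np.random.default_rng(2)
A, A2 = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
B, B2 = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
u, v = rng.normal(size=2), rng.normal(size=3)

assert np.allclose(np.kron(A, B) @ np.kron(u, v), np.kron(A @ u, B @ v))
assert np.allclose(np.kron(A, B) @ np.kron(A2, B2), np.kron(A @ A2, B @ B2))
print("tensor product identities verified numerically")
```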


There are two approaches to define tensor of representations. 

(A) Starts with a representation of a group G acting on a space V and a representation of another group H acting on a space U, and produces a representation of the product group G x H acting on the space U⊗V.


(B) Starts with two different representations of the same group G, acting on spaces U and V, and produces a representation of G acting on U⊗V.


(A) Let G and H be matrix Lie groups. Let Π1 be a representation of G acting on a space U and let Π2 be a representation of H acting on a space V. Then, the tensor product of Π1 and Π2 is a representation Π1⊗Π2 of G x H acting on U⊗V defined by Π1⊗Π2(A, B) = Π1(A)⊗Π2(B) for all A ∈ G and B ∈ H. Let π1⊗π2 denote the associated representation of the Lie algebra of G x H, namely g ⊕ h. Then, for all X ∈ g and Y ∈ h, π1⊗π2(X, Y) = π1(X)⊗I + I⊗π2(Y).  


(B) Let G be a matrix Lie group and let Π1 and Π2 be representations of G acting on spaces V1 and V2. Then, the tensor product of Π1 and Π2 is a representation of G acting on V1⊗V2 defined by Π1⊗Π2(A) = Π1(A)⊗Π2(A) for all A ∈ G. The associated Lie algebra representation satisfies π1⊗π2(X) = π1(X)⊗I + I⊗π2(X) for all X ∈ g. 


Suppose Π1 and Π2 are irreducible representations of a group G. If we regard Π1⊗Π2 as a representation of G, it may no longer be irreducible. If it is not irreducible, one can attempt to decompose it as a direct sum of irreducible representations. This process is called Clebsch-Gordan theory. In physics, the problem of analyzing tensor products of representations of SU(2) is called "addition of angular momentum."


Dual Representations

A linear functional on a vector space V is a linear map of V into C. If v1, v2, ..., vn is a basis for V, then for each set of constants a1, a2, ..., an, there is a unique linear functional ϕ such that ϕ(vk) = ak. If V is a finite-dimensional complex vector space, then the dual space to V, denoted V*, is the set of all linear functionals on V. This is also a vector space and its dimension is the same as that of V. If A is a linear operator on V, let Atr denote the dual or transpose operator on V*, defined by (Atrϕ)(v) = ϕ(Av) for all ϕ ∈ V*, v ∈ V. Note that the matrix of Atr is the transpose of the matrix of A and not the conjugate transpose. If A and B are linear operators on V, then (AB)tr = Btr Atr. 


Suppose G is a matrix Lie group and Π is a representation of G acting on a finite-dimensional vector space V. Then, the dual representation Π* to Π is the representation of G acting on V* given by Π*(g) = [Π(g⁻¹)]tr. Similarly, if π is a representation of a Lie algebra g acting on a finite-dimensional vector space V, then π* is the representation of g acting on V* given by π*(X) = -π(X)tr. The dual representation is also called the contragredient representation. Since the transpose is an order-reversing operation, we cannot simply define Π*(g) = Π(g)tr; this would not be a representation. 
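As a sanity check, here is a NumPy sketch showing that g --> [Π(g⁻¹)]tr is a homomorphism for the defining representation of invertible 2x2 matrices, while the plain transpose g --> Π(g)tr reverses the order of products and so is not.

```python
# The dual of the defining representation: g -> (g^{-1})^T is a homomorphism.
import numpy as np

def dual(g):
    return np.linalg.inv(g).T

rng = np.random.default_rng(3)
g, h = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))   # generically invertible

assert np.allclose(dual(g @ h), dual(g) @ dual(h))         # a homomorphism
assert not np.allclose((g @ h).T, g.T @ h.T)               # plain transpose reverses order
print("Pi*(g) = [Pi(g^{-1})]^tr defines a representation")
```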

Sunday, November 9, 2008

Los Vascos 2006 Cabernet Sauvignon

Under the direct control of Domaines Barons de Rothschild (Lafite), Los Vascos, one of the oldest wine estates in Chile, is located in the Colchagua valley. Planted on pre-phylloxera Bordeaux rootstock, Cabernet Sauvignon is the grape that made the estate famous.

Cabernet Sauvignon 2006 has a very red-fruit nose with chocolate and bay leaf touches. It is fresh in the mouth, light, very fruity and balanced, with good persistent tannins well blended into the wine. Highly concentrated, with strawberry and cherry fruit notes and marked spices. International Wine Cellar gave the 2006 Cabernet Sauvignon an 88-point rating. Other years: 2004 (better), 2005.

Region: VALLE CENTRAL, Chile
Sub-Region: RAPEL
Grape Varieties: CABERNET SAUVIGNON

Friday, November 7, 2008

Representation Theory

Let G be a matrix Lie group. Then, a finite-dimensional complex representation of G is a Lie group homomorphism Π : G --> GL(V), where V is a finite-dimensional complex vector space (with dim(V) ≥ 1). We may think of a representation as a linear action of a group on a vector space, i.e., for every g ∈ G there is an operator Π(g) acting on the vector space. 


If g is a real or complex Lie algebra, then a finite-dimensional complex representation of g is a Lie algebra homomorphism π of g into gl(V). If Π is a representation of a matrix Lie group G with Lie algebra g, there is a unique representation π of g acting on the same space V such that Π(e^X) = e^(π(X)) for all X ∈ g. The representation π can be computed as π(X) = d/dt Π(e^(tX)) |t=0 and satisfies π(AXA⁻¹) = Π(A) π(X) Π(A)⁻¹ for all X ∈ g and all A ∈ G.  
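A numerical sketch of these two formulas for one concrete representation, Π(A) = A ⊗ A of invertible 2x2 matrices, whose Lie algebra representation is π(X) = X ⊗ I + I ⊗ X (the choice of Π is an illustrative assumption, not from the post):

```python
# pi(X) = d/dt Pi(e^{tX})|_{t=0} and pi(AXA^{-1}) = Pi(A) pi(X) Pi(A)^{-1}, checked numerically.
import numpy as np
from scipy.linalg import expm

def Pi(A):                       # the representation Pi(A) = A tensor A
    return np.kron(A, A)

def pi(X):                       # its Lie algebra representation
    I = np.eye(2)
    return np.kron(X, I) + np.kron(I, X)

X = np.array([[0.0, 1.0], [-2.0, 0.5]])
t = 1e-6
numerical = (Pi(expm(t * X)) - Pi(np.eye(2))) / t      # finite-difference derivative at t = 0
assert np.allclose(numerical, pi(X), atol=1e-4)

A = np.array([[1.0, 2.0], [0.0, 1.0]])
assert np.allclose(pi(A @ X @ np.linalg.inv(A)),
                   Pi(A) @ pi(X) @ np.linalg.inv(Pi(A)))
print("both identities hold for Pi(A) = A tensor A")
```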

If Π or π is a one-to-one homomorphism, then the representation is called faithful.


Let Π be a finite-dimensional real or complex representation of a matrix Lie group G, acting on a space V. A subspace W of V is called invariant if Π(A)w ∈ W for all w ∈ W and all A ∈ G. An invariant subspace W is called nontrivial if W ≠ {0} and W ≠ V. A representation with no nontrivial invariant subspaces is called irreducible.


Let G be a connected matrix Lie group with Lie algebra g. Let Π be a representation of G and π the associated representation of g. Then, Π is irreducible if and only if π is irreducible.


Let G be a matrix Lie group, let H be a Hilbert space, and let U(H) denote the group of unitary operators on H. Then, a homomorphism Π : G --> U(H) is called a unitary representation of G if Π satisfies the following (strong) continuity condition: if An, A ∈ G and An --> A, then Π(An)v --> Π(A)v for all v ∈ H. A unitary representation with no nontrivial closed invariant subspaces is called irreducible.  


The terms invariant, nontrivial, and irreducible are defined analogously for representations of Lie algebra.


Equivalence

Let G be a matrix Lie group, let Π be a representation of G acting on the space V, and let Σ be a representation of G acting on the space W. A linear map ϕ : V --> W is called an intertwining map of representations if ϕ(Π(A)v) = Σ(A)ϕ(v) for all A ∈ G and v ∈ V. The analogous property defines intertwining maps of representations of a Lie algebra. 

If ϕ is an intertwining map of representations and, in addition, ϕ is invertible, then ϕ is said to be an equivalence of representations. If there exists an equivalence between V and W, then the representations are said to be equivalent. 


Let G be a connected matrix Lie group, let Π1 and Π2 be representations of G, and let π1 and π2 be the associated Lie algebra representations. Then, π1 and π2 are equivalent if and only if Π1 and Π2 are equivalent.


Schur's Lemma

1. Let V and W be irreducible real or complex representations of a group or Lie algebra and let  ϕ : V --> W be an intertwining map. Then either ϕ = 0 or ϕ is an isomorphism.

2. Let V be an irreducible complex representation of a group or Lie algebra and let ϕ : V --> V be an intertwining map of V with itself. Then ϕ = λI for some λ ∈ C.

3. Let V and W be irreducible complex representations of a group or Lie algebra and let ϕ1, ϕ2 : V --> W be nonzero intertwining maps. Then, ϕ1 = λϕ2 for some λ ∈ C. 


Applications of Representation Theory

Studying the representations of a group G (or of a Lie algebra) can give information about the group (or Lie algebra) itself. For example, if G is a finite group, then associated to G is something called the group algebra. The structure of this group algebra can be described very nicely in terms of the irreducible representations of G. 

One of the chief applications of representation theory is to exploit symmetry. If a system has symmetry, then the set of symmetries will form a group, and understanding the representations of the symmetry group allows one to use symmetry to simplify the problem. For example, if an equation has rotational symmetry, then the space of solutions will be invariant under rotations. Thus, the space of solutions will constitute a representation of the rotation group SO(3). If one knows what all of the representations of SO(3) are, this can help in narrowing down what the space of solutions can be.