Saturday, December 1, 2007

Thermodynamics Marches On

On our last day I wanted to argue that, though thermodynamics has proven to be one of science's greatest achievements, it is far from complete. One area of study, in its infancy and built on the foundations of thermodynamics, is complex systems research. Perhaps in the next few decades a Fourth or Fifth law will be proposed?

Questions about the final and/or course?

Thursday, November 29, 2007

Gibbs Phase Rule

The maximum possible number of phases that can coexist in equilibrium (closed system) was a problem that Gibbs spent several years working on. In fact, he is said to have invented the Gibbs function and the chemical potential in order to solve this problem, a result we now know as the Gibbs phase rule.

From thermal and material equilibrium arguments, we can arrive, as Gibbs did, at the result F= C - P + 2, where F is the number of independent intensive variables, C is the number of components (chemically independent entities) and P is the number of phases present. For example, in a system in which we have neon in the gas phase, we can immediately write C=1 [Ne] and P=1 [gas], hence F=2. This means we have two intensive variables, usually T and P, which can be independently varied within the gas phase.

As soon as another phase appears at equilibrium, the number of degrees of freedom goes down. For example, if neon were to condense we would now have P=2 [gas, liquid] leading to F=1: we can independently vary either T or P and the other will adjust in order to stay at equilibrium.

When two or more of our substances are tethered together in a chemical equilibrium, we get an additional restriction. We often use the relation C = S - R to account for this reduction in the number of components, where S is the number of chemical species and R the number of independent reactions (and other constraints) relating them.
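
The counting in the phase rule is simple enough to sketch in a few lines of Python (a toy illustration of F = C - P + 2, nothing more):

```python
def degrees_of_freedom(components, phases):
    """Gibbs phase rule: F = C - P + 2."""
    f = components - phases + 2
    if f < 0:
        raise ValueError("more phases than the phase rule allows at equilibrium")
    return f

# Neon gas alone: C=1, P=1 -> F=2 (T and P independently variable)
print(degrees_of_freedom(1, 1))  # 2
# Neon gas + liquid coexisting: C=1, P=2 -> F=1
print(degrees_of_freedom(1, 2))  # 1
```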

Several interesting applications of the phase rule can be found in geochemistry, biophysics and nuclear chemistry.

Tuesday, November 27, 2007

Curvature, Clouds and Capillarity

One of the fundamental results from the Laplace-Young derivation is that the pressure on the concave side of a curved liquid-vapor surface is greater than that on the convex side. Lord Kelvin immediately saw the impact a curved surface would have on the equilibrium vapor pressure (when compared to a flat surface).

For example, consider a spherical droplet, which puts the liquid on the concave side of the interface (and hence at higher pressure than the vapor on the convex side). Because this curvature acts essentially as an applied pressure, it in turn increases the vapor pressure above the surface (see the previous lecture on how external pressure increases the escape tendency of the liquid), leading to the Kelvin equation:

ln(Pdrop/Pbulk) = 2γM/(rρRT)
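
To get a feel for the numbers, here is a quick sketch of the Kelvin equation for water droplets (the γ, M and ρ values are handbook-ish and only for illustration):

```python
from math import exp

def kelvin_ratio(gamma, M, rho, r, T, R=8.314):
    """P_drop/P_bulk from the Kelvin equation: ln(ratio) = 2*gamma*M/(r*rho*R*T)."""
    return exp(2 * gamma * M / (r * rho * R * T))

# water at 298 K: gamma ~ 0.072 N/m, M = 0.018 kg/mol, rho = 1000 kg/m^3
for r in (1e-6, 1e-8, 1e-9):
    ratio = kelvin_ratio(0.072, 0.018, 1000, r, 298)
    print(f"r = {r:.0e} m  ->  P_drop/P_bulk = {ratio:.3f}")
```

Micron-sized droplets barely notice the curvature, but at nanometer radii the vapor pressure is enhanced severalfold, which is the point made below about microdroplets evaporating.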

One example of its meteorophysicochemical consequences: Moist air rises naturally from the surface of the earth and at some altitude experiences a combination of pressure and temperature that makes condensation favorable (in terms of chemical potentials, μG > μL). The natural tendency then is the spontaneous formation of microdroplets; however, the vapor pressure is so large for small radii that these nascent droplets tend to evaporate immediately. Eventually this enhanced evaporative effect is overcome by spontaneous coagulation into larger droplets, further aided by solid aerosols in the atmosphere (serving as nucleation sites). All of this, of course, leads to cloud formation, whose different types are partly dictated by the external pressure on the system of microdroplets.

For liquid droplets deposited on, say, glass, the internal cohesive forces struggle against possible adhesive forces with the surface. This interplay can be seen clearly as a function of the contact angle the liquid-gas interface makes with the surface. Angles close to 0° correspond to liquids that are considered good wetters while those close to 180° are nonwetting.
Something we should all strive for methinks.

Contact angles are seen again in the phenomenon of capillarity, the spontaneous rising of a liquid with appreciable adhesive forces up a hollow tube (or other porous medium). Simple physics leads to the formula
h = 2γcosθc/(ρgr), which can easily be used to measure the surface tension of a simple fluid from its capillary rise height. Examples of this phenomenon can be seen all around us.

Monday, November 26, 2007

The Liquid-Gas Interface; Droplets and Cavities

On the Monday before Thanksgiving break, we applied the concepts of chemical potential to an interesting system, the liquid-gas interface. We first briefly discussed the difference between vaporization and boiling [which occurs only in open systems] before showing how the vapor pressure of a liquid varies as a function of external, applied pressure (due to either mechanical forces or a secondary inert gas present).

We turned our attention away from phase transitions and developed a little further the notion of surface tension, seen before in the differential form of surface work: dw = γda. Surface tension of liquids is a function of intermolecular forces and, for larger molecules, mechanical tangling. Molasses, for example, has enormous surface tension due to the long alkyl chains.

Further, we saw through the Guggenheim-Katayama equation that the surface tension decreases with increasing temperature. This can easily be demonstrated experimentally by looking at the relative ease of floating needles or razor blades on the surface of cold versus hot water.

We can ask ourselves why droplets of water (or other liquids) tend to be spherical, especially in the absence of gravity, a result easily obtained by considering the Helmholtz energy. We can further obtain the Laplace-Young equation which demonstrates that, for a sphere, the pressure inside is always greater than the outside (a result that is general for either droplets or cavities). It is easier, for example, to create large cavities in liquids than small ones (which is why we add boiling stones when we drive off organic solvents -- to prevent such bumping). Large droplets form more easily than small ones, which is why cloud formation typically needs dust particles suspended in the air.

Thursday, November 15, 2007


Make sure your calculator can do factorials.

Wednesday, November 14, 2007

Phase Transitions and Chemical Potentials

A substance's phase diagram tells us at what temperatures and pressures we will observe the six common phase transitions [fusion, freezing, vaporization, condensation, sublimation, deposition]. The triple point, where all three common phases are in mutual equilibrium, is a convenient phase transition marker: below it, you will typically see sublimation/deposition as temperature is changed and above it, vaporization/condensation. When more than one crystalline form exists in the solid phase, more than one triple point is possible.

To truly understand phase transitions, we can plot the chemical potential μ vs T. Since (∂μ/∂T)P=–Sm, it is apparent that these plots trend downwards, and the slope cusps at the transition temperature. Where the solid and liquid lines intersect will be the substance's melting point (μS=μL). In fact, we can take this statement as the thermodynamic definition of the melting point. Analogously, the boiling point is that temperature at which the chemical potential of the liquid phase equals that of the vapor phase.

Pressure effects on the transition temperatures can be seen from the relationship (∂μ/∂P)T=Vm. Clearly an increase in pressure will similarly increase the chemical potential, and the magnitude of this change is proportional to that phase's molar volume. This leads to a boiling point elevation and freezing point elevation for most substances; in water, however, whose molar volume of the solid is greater than that of the liquid phase, we see a freezing point depression as pressure is increased (as at the bottom of the ocean).

Since chemical potentials are equal for two phases in equilibrium, we are able to quickly derive the Clapeyron equation [dP/dT = ∆transS/∆transV = ∆transH/(T∆transV)] and, with the added assumptions of an ideal vapor and negligible condensed-phase volume, its sister the Clausius-Clapeyron equation [d(lnP)/dT = ∆transH/(RT²)]. These equations allow us to easily obtain, in mathematical form, general formulas for the SL, LG and SG coexistence curves.
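
The integrated Clausius-Clapeyron relation makes a nice back-of-the-envelope vapor pressure estimator. A sketch for water (assuming ∆vapH constant at ~40.7 kJ/mol, which is only roughly true over a wide temperature range):

```python
from math import exp

def vapor_pressure(P1, T1, T2, dH_vap, R=8.314):
    """Integrated Clausius-Clapeyron: ln(P2/P1) = -(dH/R)*(1/T2 - 1/T1),
    assuming dH_vap constant and an ideal vapor."""
    return P1 * exp(-(dH_vap / R) * (1.0 / T2 - 1.0 / T1))

# water boils at 373 K under 1 atm; estimate its vapor pressure at 353 K (80 C)
print(f"{vapor_pressure(1.0, 373, 353, 40700):.3f} atm")  # ~0.475 atm
```

The experimental value at 80 °C is about 0.47 atm, so the constant-∆H approximation holds up well over this modest range.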

Chemical potential is a BIG idea in chemical thermodynamics – more for you to love! – and esplaining phase transitions is another eye-watering demonstration of the mad POWER of thermo.

Monday, November 12, 2007

K's and Phase Transitions

On Friday's lecture, we discussed how the equilibrium constant K for a given reaction might change as we vary either the temperature or the pressure. From the Gibbs-Helmholtz equation, (∂(∆G/T)/∂T)P = –∆H/T², and the fact that ∆G° = –RT lnK, it is straightforward to show that (∂(lnK)/∂T)P = ∆H°/RT².

To see the effect of pressure, we look at the derivative
(∂(∆G°)/∂P)T. Since the reference point ∆G° is defined at a fixed pressure (1 bar), it is independent of pressure: the derivative is zero, and so the dependence of K on P is likewise zero.

From these two results we conclude:

a) K is dependent only on temperature changes
b) this response is very sensitive (it is ln K that changes with T) and
c) the sign of the response is dictated by ∆H° (LeChatelier's principle in disguise)
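
Point (b) can be felt numerically with the integrated van't Hoff relation. A sketch, assuming ∆H° constant over the range (the -92 kJ/mol exothermic value is hypothetical, chosen just to show the sensitivity):

```python
from math import exp

def K_ratio(T1, T2, dH_std, R=8.314):
    """Integrated van't Hoff: ln(K2/K1) = -(dH/R)*(1/T2 - 1/T1),
    assuming dH_std constant over [T1, T2]."""
    return exp(-(dH_std / R) * (1.0 / T2 - 1.0 / T1))

# hypothetical exothermic reaction, dH = -92 kJ/mol: heat from 298 K to 500 K
ratio = K_ratio(298, 500, -92000)
print(f"K(500)/K(298) = {ratio:.2e}")  # collapses by roughly 7 orders of magnitude
```

Because it is ln K that responds linearly in 1/T, a modest temperature change moves K by orders of magnitude, and the sign of ∆H° (here negative, so K falls on heating) is LeChatelier in disguise.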

Lastly, we began a formal discussion on phase diagrams/changes, our last chapter of the quarter, with the Ehrenfest classification of phase transitions. Some phase changes exhibit a discontinuity in the Sm vs T plot (as well as the Vm vs T plot). Since Sm is the first derivative of Gm, we call these first-order transitions. In other phase changes, the discontinuity does not arise until a plot of CP,m vs T; this is the second derivative of Gm so we call these second-order transitions. In general, an nth-order phase transition would have a discontinuity in the nth derivative of Gm, but only first- and second-order transitions are seen in practice. Though now outdated, the Ehrenfest classification is a useful way to think about the differences in the thermodynamic variables seen as systems undergo phase changes.

On Wednesday, phase diagrams and coexistence curves before hitting the Clausius-Clapeyron equation.

Wednesday, November 7, 2007

Chemical Potential Leads to Chemical Equilibrium

In today's lecture, we showed how the chemical potential μi (aka the partial molar gibbs energy) naturally leads to the way we have written equilibrium constants since we first began studying chemistry. Along the way, we learned why solids and liquids are typically not included in the equilibrium expression [aka mass-action expression]: because their chemical potentials do not appreciably vary from their standard states, unless subjected to significant pressures.

A very important equation in chemical thermodynamics is ∆G=∆G° + RT lnQ, where Q is the reaction quotient. Recall from previous courses that this parameter tells us how far away from equilibrium we are and which direction a process will go to get there. This relationship is also a jumping off point for electrochemistry and kinetics, both of which we will examine in detail next quarter. Coupled with that is the equally important ∆G° =– RT lnK.

Treating real gases, using the van der Waals equation, in a brute-force mathematical way would have destroyed the elegance of our equilibrium constant, so, instead, a new function that captures the nonideality of gases was introduced: fugacity. We define the fugacity through the chemical potential: μ = μo + RT ln f. The fugacity can be calculated through methods outlined in class and, in principle, would replace the partial pressures in the equilibrium constant.

A note on the rest of the quarter: My plan is, before the last day, to get through Chapter 8, which is phase changes and diagrams, with one added half-lecturette on the Gibbs Phase Rule. Even though Chapter 9 is in the syllabus/schedule, we will cover that next quarter in Chem 352 (when we do solutions outright). This is, in fact, how I always teach this series, I had just forgotten that Chapter 9 was solutions.

Monday, November 5, 2007

Gibbs-Helmholtz Equation and The Chemical Potential

How ∆G for a process/reaction varies with temperature is an important question in chemistry as it will dictate whether processes become more or less spontaneous as we change the temperature. Moreover, this relationship is the framework for the temperature-dependence of the equilibrium constant K (to be discussed next lecture).

The Gibbs-Helmholtz equation has several forms -- I prefer the following:

[∂(∆G/T)/∂(1/T)]P = ∆H

We can use this equation, as was done in class, to calculate ∆G at a new temperature as long as we know its value at another temperature, along with ∆H. We can also easily predict, by the sign of ∆H, which direction to take the temperature to increase or decrease ∆G.
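
That calculation is short enough to sketch directly. Integrating the Gibbs-Helmholtz equation with ∆H assumed constant gives ∆G2/T2 = ∆G1/T1 + ∆H(1/T2 - 1/T1); the numbers below are hypothetical, just to show the mechanics:

```python
def dG_at_T2(dG1, T1, T2, dH):
    """Integrated Gibbs-Helmholtz (dH assumed constant over [T1, T2]):
    dG2/T2 = dG1/T1 + dH*(1/T2 - 1/T1)."""
    return T2 * (dG1 / T1 + dH * (1.0 / T2 - 1.0 / T1))

# hypothetical exothermic reaction: dG = -33.0 kJ at 298 K, dH = -92 kJ
dG_350 = dG_at_T2(-33000, 298, 350, -92000)
print(f"dG(350 K) = {dG_350/1000:.1f} kJ")  # about -22.7 kJ
```

As the sign of ∆H predicts, heating an exothermic reaction makes ∆G less negative, i.e. less spontaneous.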

Missing from all of our previous work this quarter has been allowing n, the number of moles, to vary. In developing these ideas that are central to chemistry, we define the notion of a partial molar function (in terms of any extensive variable Y) as [∂Y/∂ni]T,P,nj=Yi. In other words, how does Y respond to changes in ni, while every other variable is constant.

A particularly valuable partial molar function is the partial molar gibbs energy, which is usually called the chemical potential µi. We demonstrated how matter spontaneously moves from high to low chemical potential until material equilibrium is established, at which points the chemical potentials are equal. This last point will lead the way to developing a framework for quantifying chemical equilibria.

Sunday, November 4, 2007

More on Spontaneity/Equilibrium, Making Diamonds

During the last lecture of Week 7, we completed "remixing" the Second Law in terms of more convenient system variables. Before, we might have said that, for spontaneous processes, the entropy of the universe tends towards a maximum; equilibrium is reached when ∆Suniv for any further change is zero. Now we can look entirely at system functions and say that the gibbs energy G of the system tends towards a minimum when T, P are constant (with the same argument applying for the helmholtz energy A when T, V are constant).

Calculating ∆rxnG and ∆rxnA for chemical reactions is straightforward if we remember that Hess' Law works for all extensive thermodynamic properties. Typically, however, only ∆fH°, ∆fG° and S° are tabled in appendices, but the other thermodynamic functions can readily be calculated from these.

As an example showing how to use one of the eight fundamental relations, we looked at the Superman problem: How much pressure is necessary to convert graphite to diamond? Using the equation (∂∆G/∂P)T=∆V and densities of graphite and diamond, we calculated 14.8 kbar, a pressure trivially attained by the Last Son of Krypton.
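
The Superman calculation can be sketched in a few lines. Setting ∆G(P) = ∆G° + ∆V(P - P°) to zero and solving for P (assuming ∆V constant with pressure); the ∆G° and density values below are handbook-ish, and the answer is sensitive to them, so landing near rather than exactly on the class value of 14.8 kbar is expected:

```python
def pressure_to_equilibrate(dG_std, Vm_reactant, Vm_product, P_std=1e5):
    """Solve dG(P) = dG_std + dV*(P - P_std) = 0 for P, with dV held constant."""
    dV = Vm_product - Vm_reactant
    return P_std - dG_std / dV

M = 0.012011                    # kg/mol of carbon
Vm_graphite = M / 2260          # molar volumes from handbook-ish densities (kg/m^3)
Vm_diamond  = M / 3515
P = pressure_to_equilibrate(2900.0, Vm_graphite, Vm_diamond)  # dG ~ +2.9 kJ/mol
print(f"{P/1e8:.1f} kbar")      # ~15 kbar
```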

Rather than mining diamonds (which has political and human costs), we can now make synthetic diamonds. What a great topic for, say, undergraduate seminar.

Friday, November 2, 2007

Mathfest 2007

If you like math, the abstract bone-crushing kind that makes your eyes all puppy-doggy, then Wednesday was the best thermodynamic day of your life.

Not only did we introduce the last two members of the big four: A [helmholtz energy] and G [gibbs energy], but, more importantly, we derived the four fundamental differentials:

dU = TdS - PdV [the fundamental differential]
dH = TdS +VdP
dA = -SdT - PdV
dG = -SdT + VdP

From each of these differentials, two fundamental equations and one Maxwell relation can be found. These 12 expressions, coupled to the 1st and 2nd Laws, are what Thermodynamics rests on. Except for chemical potential, nearly nothing new in terms of concepts will be presented for the rest of the quarter. The Four Laws, Maxwell relations, the four fundamental differentials and eight fundamental relations have now been developed -- the task from here is to extend them to real-world applications. We will especially do just that when addressing phase changes.

We also showed how ∆A and ∆G could be related to total work and nonexpansion work, respectively, under certain conditions. The gibbs energy change is especially important in chemistry, when T and P are constant, which leads to criteria for spontaneity and equilibrium.

Tuesday, October 30, 2007

351 q3 moved to Friday

I won't make a habit of succumbing to the mob but I am moving the quiz to Friday. Hopefully this change doesn't screw anyone over. Tomorrow, Chapter 6, in which we will take thermo to a whole new level.

The Plight of Boltzmann+ Four Laws

By the time the first lecture of Week 7 had finished, we had discussed all four laws of thermodynamics, in mathematical and verbal forms, established the probabilistic nature of entropy and worked through ten types of entropy calculations. In fact, nearly all of the framework has now been constructed for Chapter 6, in which everything comes to fruition as we discuss spontaneity and equilibrium.

Backtracking, last Friday we explored the work of [my idol] Ludwig Boltzmann, who beat his head against the scientific establishment as he spearheaded the development of statistical mechanics. Bypassing all the "entropy is disorder" nonsense, here we can see that the physical underpinnings of entropy are probabilistic and that entropy is a measure of the number of microstates accessible to a system [which may arise as translational, rotational, vibrational, electronic, nuclear, configurational, etc]. In other words, entropy is a metric related to the number of ways that energy can be dispersed (into these microstates). Now that you have been equipped with this interpretation of absolute entropy, you can successfully point-and-laugh at all those who persist in utilizing the now-debunked disorder interpretation.

On Monday, we finished [finally] our set of ten processes:

01. cyclic process
02. reversible adiabatic
03. reversible isothermal
04. reversible phase change (at constant T, P)
05. reversible change of state [ideal gas]
06. irreversible change of state [ideal gas]
07. change of state [general] (two versions: T,V and T,P)
08. mixing of ideal gases A and B (also ideal solutions)
09. irreversible phase change (at constant T, P)
10. chemical reactions

These 10 processes cover nearly every situation of interest in chemistry.

Finally we elucidated the Third Law of Thermodynamics, that S → 0 as T → 0. As indicated in class, this is a restatement of what we saw in the Carnot engine, that absolute zero cannot be attained (although we've gotten way way down there, to 450 pK, where matter acts truly bizarre because of the dominance of quantum over thermal effects).

The absolute entropy of real matter, incidentally, usually approaches a nonzero S0, the residual entropy, which is a loose measurement of the strength of low-temperature intermolecular forces.

I hope it is clear by now that the Laws of Thermodynamics, in essence, establish a logical code from which nearly all energy transfer (and, hence, all phenomena) can be described. Turning this framework into usable results is not always easy, however.

Interestingly, a Fourth Law of Thermodynamics is often proposed, the Onsager reciprocal relations which we will not cover until pchem 2.

And now, the Four Laws of Thermodynamics, translated for Sanitation Engineers:

0th: There is shit.
1st: You can't get rid of it.
2nd: It gets deeper.
3rd: A nice empty trashcan is wishful thinking.

Thursday, October 25, 2007

The Second Law Is Better Than The Zeroth Law By Two Units

On Wednesday, we finally made it to possibly the most powerful and darkly beautiful statement in all of thermodynamics: the Second Law. But, before that, we examined how we might calculate some entropy changes using the Clausius relation [dS = dqrev/T], underscoring how this equation only works while tracing reversible paths. The irreversible heat transfer from a hot to cold reservoir can, for example, be broken into three reversible -- and calculable -- steps. It is this simple system that leads us to the 2nd Law conjecture:
∆Suniv ≥ 0 [= for equil/reversible, > for spont/irreversible]
Unfortunately, abuses of this statement are many and it takes but a moment's googling to find them. For many scientists, it holds a special place:
"The law that entropy always increases -- the second law of thermodynamics -- holds, I think, the supreme position among the laws of Nature. If someone points out to you that your pet theory of the universe is in disagreement with Maxwell's equations -- then so much the worse for Maxwell's equations. If it is found to be contradicted by observation -- well, these experimentalists do bungle things sometimes. But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation." – Sir Arthur Eddington [1928]
And to reiterate some general statements from class, which will hopefully help you more fully grasp spontaneity and reversibility:
All spontaneous processes are irreversible.
For all irreversible processes, ∆Suniv > 0.
All reversible processes are at equilibrium.
For all reversible processes, ∆Suniv = 0.
Some common misstatements about the Second Law:
All systems tend to greater disorder.
False: Not only is "disorder" a poorly constructed idea and overly dependent on human interpretation, the underlying premise is wrong.
All systems tend towards greater entropy.
False: Some systems tend towards greater entropy while others don't. The restriction rests on ∆Suniv, not on ∆S.
Before moving onto specific applications of the Clausius relation, we paused, for bookkeeping's sake, to mention the Zeroth Law: If systems A and B are in equilibrium, and systems B and C are in equilibrium, then systems A and C are in equilibrium. Not only does this establish that a state property common to them must be equal (the temperature), it also allows for the possibility that two systems could be in equilibrium without being in direct contact.

Lastly we began our march of calculating entropy changes for ten processes, finishing four:
01. cyclic
02. reversible adiabatic
03. reversible isothermal
04. reversible phase transition [at const T, P]

On Friday, we will finish this list of ten, visit the revolutionary work of Boltzmann and, if time permits, elucidate the Third Law of Thermodynamics.

Tuesday, October 23, 2007

The Direction of Spontaneous Change

We began the first lecture of Week 6 with an examination of the inadequacy of the First Law to sufficiently describe thermodynamic events. For example, it does not preclude a penny, say, from absorbing thermal energy from a table and turning it into gravitational work (that is, springing up off the table). The Boltzmann formula will demonstrate that the probability of this event is nonzero but exceedingly small (so unimaginably improbable that perhaps we should call it impossible?) But the point is that, macroscopically, we see a definite directionality to energy transfer:
First Law - limits the magnitude of energy transfer
Second Law - limits the direction of energy transfer
Before discussing how it was first discovered, we first needed to correct some misconceptions about entropy, one of the most thoroughly mangled concepts in all of science:
Entropy is not equal to disorder, nor is it a measure of disorder (whatever that means scientifically).
One of the most common [bad] examples demonstrating the alleged relationship between entropy and disorder is a deck of cards. When we shuffle an "ordered" deck of cards, we always see it become "disordered". The problem with this language is that there is no way to quantify order, especially since every outcome is equally probable. We have simply defined A,2,3,4 .. Q,K of each suit as being the ordered state -- but that definition is arbitrary. Nature should not -- and does not -- depend on such human definitions. Other [bad] examples include blaming messy desks and cluttered rooms on this "law of entropy."

Entropy is a measure of the tendency of energy to disperse, rather than being localized.
When we connect it directly to the number of accessible microstates (ala Boltzmann) we will understand the probabilistic basis of entropy more fully.

Through his theoretical work on heat engine efficiency, the French engineer Sadi Carnot was our first thermodynamicist. His memoirs, lost for twenty years and posthumously rescued by college friend Benoit Clapeyron, inspired the work of Clausius and Thomson [Kelvin], both of whom essentially triggered the thermodynamic revolution. Carnot's greatest achievement was to demonstrate that heat flow could be harnessed and transmogrified into usable work by engines, but with a maximum efficiency less than 100%. Indeed, this maximum efficiency is dependent only on the reservoir temperatures and not on the material used in the engine, nor on the actual steps of each cycle. It is a thermodynamic limit imposed on us by nature, who has decreed that heat is a form of energy rather than a transferred substance.
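
Carnot's efficiency limit depends only on the two reservoir temperatures, which makes it a one-liner. A quick sketch (the 500 K / 300 K reservoirs are just an illustrative pair):

```python
def carnot_efficiency(T_hot, T_cold):
    """Maximum efficiency of any heat engine between two reservoirs (temperatures in K):
    eta = 1 - T_cold/T_hot."""
    if T_cold >= T_hot:
        raise ValueError("T_hot must exceed T_cold")
    return 1.0 - T_cold / T_hot

# hypothetical steam-engine-ish reservoirs
print(f"{carnot_efficiency(500, 300):.0%}")  # 40%
```

Note that 100% is approached only as T_cold goes to absolute zero, one way of seeing why absolute zero is unattainable.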

[Note: In today's lecture, I believe that I inadvertently flipped the subscripts for the temperatures in the adiabatic formulas. Consult Engel-Reid for consistency]

Looking further at the results of Carnot we see, as Clausius did, a hidden state function, one that sums to zero as we go around a cycle. From this fact we can back out the relationship dS = dqrev/T [the Clausius equation], introduced at the end of the hour but forming the basis of Wednesday's lecture to come.

Sunday, October 21, 2007

The Second Law

On Monday we will start Chapter 5 and move towards the Second Law of Thermodynamics, one of the most important statements in all of science. First I'll argue why the First Law is incomplete as a description of thermodynamics then dive headfirst into the Carnot cycle.

Wednesday, October 17, 2007

End of the First Law Era

How do we obtain reaction energies from reaction enthalpies, or vice versa? How do we determine a reaction enthalpy at a nonstandard temperature given a value at 298K? These two questions take us to the end of Chapter Four and the "First Law Era".

It is straightforward to derive the relation ∆rxnU° = ∆rxnH° – RT∆νgas, which can be used to interconvert between reaction energy and enthalpy. When using this equation, we must keep in mind (a) that we are implicitly assuming all gases are ideal and (b) that, for liquids and solids, molar enthalpies and internal energies are approximately equal. In principle, we could jam in a real gas equation of state and create a ghastly version that is more general or, as is always done, use this version anyway and take the hit in accuracy. Assumption (b) is rather good because, unless we encounter extreme pressures, the PV contribution to the molar enthalpy of a condensed phase is tiny, so Hm ≈ Um.
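
The interconversion is a one-liner. A sketch using the combustion of propane (the -2220 kJ/mol value is a handbook-ish number for illustration):

```python
def delta_U(delta_H, dnu_gas, T=298.15, R=8.314):
    """dU = dH - R*T*dnu_gas, assuming ideal gases and neglecting
    the PV terms of condensed phases."""
    return delta_H - R * T * dnu_gas

# C3H8(g) + 5 O2(g) -> 3 CO2(g) + 4 H2O(l): dnu_gas = 3 - 6 = -3
# dH ~ -2220 kJ/mol for this combustion
print(f"{delta_U(-2220e3, -3)/1000:.1f} kJ/mol")  # about -2212.6 kJ/mol
```

The RT∆νgas correction here is only ~7 kJ out of ~2200, a typical size: noticeable, but small compared to the reaction enthalpy itself.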

Adjusting to a nonstandard temperature is rather important since many (most?) reactions do not actually occur at 25°C and the difference is often quite significant. Once the heat capacity CP,m of each species is known as a function of temperature, we can calculate ∆rxnH°(T2) = ∆rxnH°(T1) + ∫∆rxnCP° dT.

To make integration life easier, the heat capacities are fit to simple polynomials of T:

CP,m = a + bT + cT² + dT³ + e/T² [Shomate] or
CP,m = a + bT + c/T²
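
With a polynomial fit in hand, the temperature adjustment integrates analytically. A sketch using a hypothetical two-term fit ∆CP = ∆a + ∆b·T (all numbers below are made up, just to show the mechanics):

```python
def adjust_dH(dH_T1, T1, T2, da, db):
    """dH(T2) = dH(T1) + integral of dCp dT from T1 to T2,
    for the hypothetical fit dCp = da + db*T (J/(mol K))."""
    return dH_T1 + da * (T2 - T1) + 0.5 * db * (T2**2 - T1**2)

# hypothetical: dH = -92.2 kJ at 298 K, dCp = -45.0 + 0.010*T J/(mol K)
print(f"{adjust_dH(-92200, 298, 500, -45.0, 0.010)/1000:.1f} kJ")  # about -100.5 kJ
```

The shift of several kJ over 200 K illustrates the point above: the correction is often far from negligible.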

Exam 1 on Thursday. If you are nervous, just remember that if you got through organic chemistry, calculus and physics, you can do this. I have posted solutions to the two questions I assigned from Chapter 4. I also updated the study sheet that had some equations missing and an error in one of the thermodynamic equations of state.

Tuesday, October 16, 2007

Thermochemistry and Hess' Law

The first lecture of Week 5 found us tackling thermochemistry, that subset of thermodynamics describing heat transfer that accompanies chemical reactions. In constant-volume calorimeters (often closed vessels), the heat transfer q we measure is ∆rxnU, whereas in constant-pressure calorimeters (open vessels), q will be ∆rxnH. (On Wednesday, we will see a simple method to interconvert them). Since reactions are typically performed at constant temperature and pressure, any results give us important information about the energy stored in chemical bonds.

Using the properties of state functions, we can predict the heat transfer under these two conditions using Hess' Law and, since constant-pressure conditions are more common in chemical systems, we tend to focus on rxnH rather than rxnU. The norm is to cast all reactions as simple sums of formation reactions, each of which represents the formation of 1 mole of a substance from constituent elements in their standard states/phases. Hess' Law is particularly powerful in thermochemistry because it applies equally well for any extensive state property. Note that this is our second usage of the ∆ symbol (the first being the familiar ∆Y = Yfinal - Yinitial). Whenever the subscript appears on the ∆ itself, as in ∆combY, we are calculating the sum of the products minus the sum of the reactants, each multiplied by appropriate stoichiometric coefficients. Hopefully this operation is still familiar from general chemistry.
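
The products-minus-reactants bookkeeping is easy to sketch. The formation enthalpies below are handbook-ish values used only for illustration:

```python
# standard formation enthalpies, kJ/mol (handbook-ish values for illustration)
dfH = {"CH4(g)": -74.8, "O2(g)": 0.0, "CO2(g)": -393.5, "H2O(l)": -285.8}

def reaction_enthalpy(reactants, products):
    """Hess' law: sum over products minus sum over reactants,
    each formation enthalpy weighted by its stoichiometric coefficient."""
    total = lambda side: sum(nu * dfH[sp] for sp, nu in side.items())
    return total(products) - total(reactants)

# CH4(g) + 2 O2(g) -> CO2(g) + 2 H2O(l)
dH = reaction_enthalpy({"CH4(g)": 1, "O2(g)": 2}, {"CO2(g)": 1, "H2O(l)": 2})
print(f"{dH:.1f} kJ/mol")  # about -890 kJ/mol
```

Because Hess' law holds for any extensive state property, the same function works unchanged with a table of ∆fG° or S° values.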

Next lecture, we will finish thermochemistry and begin to tackle entropy, one of the most important and poorly understood concepts in all of science. If your intrepid instructor has the backbone to trudge through the quiz 2 carnage, he may be able to return them on Wednesday. I will admit that the uncharacteristic dearth of questions/comments/emails/office visits so far this quarter (quickly approaching the 50%-done mark) had lulled me to mistakenly believe that this class was further along the thermodynamic path than it actually was. Hopefully quiz 2 will be a valuable learning experience for many as we lumber towards Thursday (and remember, no class on Friday).

Monday, October 15, 2007

exam 1 on the horizon

The pchem fun will commence Thursday evening, 6:30 pm in Fisher Science North [53-213]. I will try to create something worthy enough for you and your finely honed instruments of pchem carnage...

Friday, October 12, 2007

Heat Capacity Difference & the Joule-Thomson Effect

Previously in this course we pulled from the thermodynamic heavens that, for an ideal gas, CP - CV = nR. Our first task in today's lecture was to find an expression for the heat capacity difference of any system and, in doing so, we showed from where the ideal gas relationship arises. Midstream in the derivation we paused to explain why CP and CV values are nearly identical for condensed phases (solids and liquids) under normal conditions, another result that was simply asserted before. Remember that the goal of pchem is to explain all of chemistry using simple models, so we like to pull relations from nowhere as rarely as possible.

In Wednesday's lecture, we had expanded on how the internal energy U varies with T and V, obtaining the internal pressure concept [(∂U/∂V)T] in the process. Today we performed an analogous treatment of the enthalpy H, finding how it varies with T and P, making it the third time this quarter we've associated U with V and H with P. Engel-Reid does not use this terminology but our results (Equations 3.20, 3.44) are commonly called the thermodynamic equations of state. Note that these are different from plain-vanilla equations of state, which tie together physical properties of a system, like P, T and V. The derivative (∂H/∂P)T will be rather important when we consider reactions and other processes that do not occur at 1 bar (for example, reactions occurring in the troposphere or several miles into the earth's mantle).

Finishing up Chapter 3, we then discussed the Joule-Thomson effect and coefficient, μJT=(∂T/∂P)H, which measures the temperature response of a substance (usually a gas) to changes in pressure at constant enthalpy. Our intuition suggests that gases cool upon expansion, which is usually true and can be seen explicitly from the ideal gas law. But real gases are often unpredictable and several (like hydrogen and helium) have negative JT coefficients at standard conditions, meaning that they increase in temperature upon expansion, which is particularly important when handling tanks of hydrogen gas. The throttling apparatus devised by Joule and Thomson to attain isenthalpic conditions (no heat flows, and the pressure-volume work on the two sides of the throttle balances so that H is conserved) is particularly clever.

Monday will bring us to the [hopefully] familiar topic of thermochemistry [Chapter 4, which we will finish on Wednesday] and quiz 2, with exam 1 looming just over the horizon.

Thursday, October 11, 2007

P07 on hw.3

There is a typo in P07 on hw.3, which is Engel-Reid problem P3.8. The question should be asking for V as a function of T [not P] and beta, which should likewise be assumed to be independent of temperature.

Wednesday, October 10, 2007


In today's lecture we introduced our last mathematical identity for partial derivatives, the cycle rule (also called the triple product rule and the cyclic chain rule). It will be employed in several contexts this quarter, but one useful application is expressing a single partial derivative in terms of the product of two others. Surprising relationships often arise when using this rule (which is valid for any three arbitrary thermodynamic state functions). In some mathematical derivations, it is especially helpful for getting rid of a variable that is difficult to hold constant, such as H or U.
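The cycle rule is easy to check numerically. Here is a sketch (my own values) using the ideal gas, where each of the three partial derivatives is known analytically:

```python
# Numeric check of the cycle rule (dP/dT)_V * (dT/dV)_P * (dV/dP)_T = -1
# for an ideal gas, P = nRT/V.
R = 8.314  # J K^-1 mol^-1
n, T, P = 2.0, 350.0, 2.0e5
V = n * R * T / P

dP_dT_at_V = n * R / V           # from P = nRT/V
dT_dV_at_P = P / (n * R)         # from T = PV/nR
dV_dP_at_T = -n * R * T / P**2   # from V = nRT/P

print(dP_dT_at_V * dT_dV_at_P * dV_dP_at_T)  # -1.0
```

Note the perhaps-surprising -1: the naive "cancel the differentials" guess of +1 is wrong because a different variable is held constant in each factor.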

Several thermodynamic partial derivatives go by special names (we have already met the heat capacities CP and CV). Today we discussed the thermal expansion coefficient, the isothermal compressibility and the internal pressure (all, in principle, functions rather than values).

Given an equation of state, we could analytically find functions for these parameters simply by calculating the partial derivative. On the other hand, since every thermodynamic partial derivative implies an experiment, we could also uncover these relationships in the lab when an equation of state is not available (which is the case for most systems other than gases). Tables 3.1 and 3.2 show some experimental results at standard conditions for various solids and liquids. (Note that Table 3.1 is mislabeled as isothermal coefficient rather than thermal expansion coefficient).

To further clarify what the internal pressure parameter is actually measuring, we calculated it for both an ideal gas and a van der Waals gas. As expected, since the internal energy U of an ideal gas is a function of T only, its internal pressure is zero. Another way of looking at it: for a gas's internal energy to change with volume, there must be intermolecular forces between its particles. The vdW gas gave a nonzero answer, an²/V² -- precisely the correction term for the pressure in the van der Waals equation!
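A minimal sketch of that vdW calculation, using the thermodynamic equation of state πT = T(∂P/∂T)V - P (the vdW constants are rough nitrogen values, assumed for illustration):

```python
# Internal pressure of a vdW gas via pi_T = T*(dP/dT)_V - P; the result
# should equal the pressure-correction term a*n^2/V^2.
R = 8.314             # J K^-1 mol^-1
a, b = 0.1370, 3.87e-5  # rough N2 vdW constants, SI units (assumed)
n, T, V = 1.0, 300.0, 0.01

P = n * R * T / (V - n * b) - a * n**2 / V**2   # vdW equation of state
dP_dT_at_V = n * R / (V - n * b)                # the vdW (dP/dT)_V
pi_T = T * dP_dT_at_V - P

print(pi_T, a * n**2 / V**2)   # both give the vdW correction term
```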

Now that the Box of Mathemagics has been assembled, onward to more relations! Friday will see us relating CP and CV for any arbitrary substance and hopefully mine won't be the only tears of mathematical joy being shed...

Monday, October 8, 2007

State Functions, Euler's Criterion

After welcoming in Week 4 today, we finished our cyclic process question: isothermal expansion followed by adiabatic compression, ending in isochoric cooling. Once again we saw, through calculation rather than assertion, that q and w are path functions -- and nonzero -- while U and H are state functions, making ∆U=∆H=0 for the cycle. In the process we found that the area enclosed by the cycle on the P-V diagram was negative, giving positive net work (done on the system by the surroundings).

Our second problem addressed how we might calculate ∆H for any case in which the heat capacity is temperature-dependent. As in the ideal gas case, we can simply integrate the expression dH=CPdT; but unlike that example, the heat capacity is not constant. Still, the procedure is nearly identical. This is an important step towards generalizing our thermodynamic approach to systems beyond ideal gases.
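The procedure can be sketched in a few lines. Here I use an illustrative linear heat capacity (my coefficients, chosen to match the hw.2 form) and check the analytic integral against a numerical one:

```python
# Delta H = integral of Cp(T) dT from T1 to T2 for a T-dependent Cp,
# done analytically and with the trapezoid rule (exact for a linear Cp).
c0, c1 = 20.9, 0.042   # J K^-1 mol^-1; illustrative coefficients
T1, T2 = 298.15, 500.0

# Analytic: c0*(T2 - T1) + (c1/2)*(T2^2 - T1^2)
dH_analytic = c0 * (T2 - T1) + 0.5 * c1 * (T2**2 - T1**2)

# Numeric check
N = 1000
h = (T2 - T1) / N
Cp = [c0 + c1 * (T1 + i * h) for i in range(N + 1)]
dH_numeric = h * (sum(Cp) - 0.5 * (Cp[0] + Cp[-1]))

print(dH_analytic, dH_numeric)   # J/mol; the two agree
```

The numerical route matters in practice: tabulated heat capacities often come as data points rather than tidy polynomials.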

Chapter 3 lays much of the mathematical foundation for the rest of the quarter. Hopefully you are beginning to see the importance of state functions and we will build heavily on this idea. But before today, I had simply asserted which properties were state functions and which were path functions. Now, with the introduction of Euler's Criterion for exactness, we have a mathematical litmus test: the differential dZ is exact if and only if Z is a state function. Later, we will turn this principle on its head (in the form of the Maxwell relations) and generate a handful of remarkable thermodynamic relations that are far from obvious.
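The litmus test in action (my own illustration): for dZ = M dT + N dV, exactness requires (∂M/∂V)T = (∂N/∂T)V. For an ideal gas, đq_rev = CV dT + (nRT/V) dV fails the test, while dS = (CV/T) dT + (nR/V) dV passes it.

```python
# Euler's criterion applied to an ideal gas: q is a path function, S is not.
n, R = 1.0, 8.314
Cv = 1.5 * n * R         # monatomic ideal gas
T, V = 300.0, 0.02       # illustrative state point

# dq_rev = Cv dT + (nRT/V) dV:  M = Cv, N = nRT/V
dM_dV = 0.0              # Cv has no V dependence
dN_dT = n * R / V        # nonzero!
print(dM_dV == dN_dT)    # False -> dq_rev is inexact, q is a path function

# dS = (Cv/T) dT + (nR/V) dV:  M = Cv/T, N = nR/V
dM_dV_S = 0.0            # Cv/T has no V dependence
dN_dT_S = 0.0            # nR/V has no T dependence
print(dM_dV_S == dN_dT_S)  # True -> dS is exact, S is a state function
```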

On Wednesday, backwards sixes will fly as we begin to build our Big Box of Mathemagics.

Sunday, October 7, 2007

P10 on hw.2

Problem 10 on hw.2 will be postponed until we do thermochemistry, Chapter 4.

This Forum

I decided to start this blog because it seemed like a good, central spot for course information, questions and clarifications about lecture/homework and a place to expand on or summarize things we do in class. I've never used this type of forum for a class before so I have no idea if it is even useful or worth my effort. So far, no one has asked any questions or added any comments -- maybe it's because we aren't far enough along or maybe this isn't the right venue. Maybe it is just easier to watch than participate. Maybe it's because reading/participation isn't assigned or I haven't placed any point values on it (which is against my philosophy, incidentally).

I've decided to keep this blog going until at least Exam 1 and then will reevaluate whether it is worthwhile. An interesting sidenote is that I've noticed from my webcounter that lots of students from other universities seem to be accessing information from this site. Perhaps that is meritorious in its own right.

Saturday, October 6, 2007

Isobars, Isochores, Isotherms and Adiabats

In Friday's lecture, we finished our uberproblem, obtaining expressions for q, w, ∆U and ∆H for the following reversible ideal gas processes: isobaric, isochoric, isothermal and adiabatic. Not only are the equations themselves useful, they demonstrate something that had previously been simply asserted, that q and w are path-dependent while ∆U and ∆H are not. We also saw (for the second time this quarter) the connections between U and constant-volume and H and constant-pressure conditions.

It is important, however, to realize that these results are secondary to the calculation process itself. That is, what we get as the answer is somewhat less important than how we get it. Though we are focusing on the ideal gas as our system, the logic underlying these calculations applies just as readily to real gases and other substances, though the actual mathematics will be a bit messier.

We also developed some mathematical relations important to reversible adiabatic processes and saw how the heat capacity ratio γ = CP/CV naturally arises in those equations, which tie together two of the three simultaneously changing variables (P, V, T).
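Those adiabat relations are easy to verify numerically. A minimal sketch (values invented) for a monatomic ideal gas, using TV^(γ-1) = constant and PV^γ = constant:

```python
# Reversible adiabatic expansion of a monatomic ideal gas.
R = 8.314
n, gamma = 1.0, 5.0 / 3.0     # monatomic: gamma = Cp/Cv = 5/3
T1, V1 = 300.0, 0.010
V2 = 0.020                    # adiabatic doubling of the volume

T2 = T1 * (V1 / V2) ** (gamma - 1)    # from T*V^(gamma-1) = const
P1 = n * R * T1 / V1
P2 = n * R * T2 / V2

print(T2)                              # gas cools on adiabatic expansion
print(P1 * V1**gamma, P2 * V2**gamma)  # the two products match
```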

On Monday, a new homework set [hw.3] as we venture into Chapter 3 and its elegant elegance.

Wednesday, October 3, 2007

Equipartition, Internal Energy, Heat Capacity

The equipartition theorem is an idea we're borrowing from statistical thermodynamics to predict expressions for the internal energy U for ideal gases. Many textbooks introduce this concept alongside gases but Engel-Reid has squirreled it away in Chapter 14. To restate, for each quadratic term in the classical energy expression for a given molecule, there is a contribution of 1/2 kT to the average molecular energy. Implementing this rule is straightforward for any given structure if we just remember that each translational and rotational degree of freedom contributes 1/2 kT and each vibrational mode contributes a full kT (1/2 kT kinetic plus 1/2 kT potential).

As mentioned in class, the equipartition theorem ultimately fails when applied to gases that are nonideal -- of course, applying it to something like liquids or solids is heresy. Moreover it is inadequate when quantum effects are important, for example when the temperature is too low to significantly activate vibrational modes or if particular bonds are too stiff to vibrate at normal temperatures.

For ideal gases, we found that U=U(T); that is, internal energy is a function only of temperature for a given sample. In fact it is a linear function of T, meaning that U = (constant)T. Since the heat capacity at constant volume, CV, is the derivative of U with respect to T, it will always be a constant for an ideal gas. The same can be said for CP, which should be apparent from the relationship CP = CV + nR (or, in molar form, CP,m = CV,m + R). I pulled this relationship out of thin air because we need it now but it will be proven when we are knee-deep in Chapter 3. Again, it is true only for ideal gases and applying it to any other system would be incorrect. Do not fret because more complicated systems (in fact, every system) will be addressed after we are finished laying down the mathematical foundations in Chapters 2 & 3.
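The equipartition bookkeeping above can be sketched in a few lines (the helper name is my own): each translation or rotation contributes R/2 to CV,m, each vibration a full R, once every mode is classically active.

```python
# High-temperature classical Cv,m from equipartition counting.
R = 8.314  # J K^-1 mol^-1

def cv_m_equipartition(n_atoms, linear=True):
    """Classical Cv,m (J K^-1 mol^-1) for an ideal-gas molecule."""
    trans = 3
    rot = 0 if n_atoms == 1 else (2 if linear else 3)
    vib = 3 * n_atoms - trans - rot         # leftover degrees of freedom
    return (0.5 * trans + 0.5 * rot + 1.0 * vib) * R

print(cv_m_equipartition(1))                 # monatomic: 3/2 R
print(cv_m_equipartition(2))                 # diatomic: 7/2 R (vib active)
print(cv_m_equipartition(3, linear=False))   # bent triatomic: 6 R
# Cp,m then follows from Cp,m = Cv,m + R for an ideal gas
```

Remember the caveat above: these are high-temperature limits, and real diatomics near room temperature sit closer to 5/2 R because the vibration is quantum-frozen.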

A few notes on Problem 02 on hw.2 [P2.6 in Engel-Reid]: The notation for CP,m probably looks weird -- unfortunately it is becoming the standard way to write such equations in textbooks. Here's how I might write the same function:
CP,m = 20.9 + 0.042T ( T in K, CP,m in J K-1 mol-1)
It is important for you to understand why this gas is necessarily nonideal, despite the fact that the question states otherwise. In fact, for reasons stated above, it is not even possible to calculate a value for ∆U, which is also important for you to understand.

Lastly, are you now able to predict CP,m for an ideal diatomic gas adsorbed onto the surface of a metal (so that it is constrained to two dimensions)?

Quiz 2

I'm looking at the schedule and am thinking about moving quiz 2 from October 12 [Friday] to October 15 [Monday]. Would that be met with approval, annoyance or the same grey ambivalence you have towards this blog?

Monday, October 1, 2007


In the half hour before today's heartwarming quiz, we explored two conditions that change the First Law into simple statements about heat transfer. First, under constant-volume conditions -- where PV-work vanishes and we assume no other kind of work is possible -- the infinitesimal change in the internal energy dU was shown to equal dqV.

If we define a new thermodynamic function, called the enthalpy, as H=U+PV, we quickly find, under constant pressure and reversible conditions, that its infinitesimal change is equal to dqP. These relations allow us to further connect the heat capacities CP and CV to H and U respectively. This is only the first of many instances in which we will see the pressure|enthalpy and volume|internal energy links this quarter.
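The constant-pressure result takes only two lines (sketched here in my notation, assuming PV-work only and reversibility, so Pext = P):

```latex
dH = dU + P\,dV + V\,dP = (dq - P\,dV) + P\,dV + V\,dP = dq + V\,dP
```

At constant pressure the last term vanishes, leaving dH = dqP.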

A comment on where we are so far. I am attempting to demonstrate how thermodynamics can be used to systematically calculate work and heat transfer, for any process, before moving on to their directionality. Since work and heat transfer are not directly measurable, we need some way to infer what's going on using directly measurable properties, like pressure, volume and temperature. We'll find out that these variables are not sufficiently rich to fully describe thermodynamic behavior, so we'll soon add other functions, like entropy and Gibbs energy, to our growing toolbox.

Saturday, September 29, 2007

Work, Heat Capacity and Reversibility

In Lecture 6 we fleshed out the ideas of work and heat a little more, showing how to take an infinitesimal quantity like dw and turn it into a macroscopic, measurable value. In the process, we distinguished between external and internal pressure, developed the heat capacity parameter and reminded ourselves that both heat and work are path functions.

Then we calculated general expressions for the work performed in two processes: free expansion into a vacuum and compression/expansion against constant external pressure, both irreversible processes. After defining reversibility, we addressed a third process: reversible, isothermal compression/expansion of a gas.
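The three results can be collected in a short sketch (helper names my own, illustrative numbers):

```python
# Work (J) for the three expansion processes from lecture.
import math
R = 8.314  # J K^-1 mol^-1

def w_free_expansion():
    return 0.0                        # P_ext = 0, so no work at all

def w_const_pext(P_ext, V1, V2):
    return -P_ext * (V2 - V1)         # irreversible: w = -P_ext * (V2 - V1)

def w_rev_isothermal(n, T, V1, V2):
    return -n * R * T * math.log(V2 / V1)   # reversible, isothermal

n, T, V1, V2 = 1.0, 300.0, 0.010, 0.020
print(w_free_expansion())                 # 0.0
print(w_const_pext(1.0e5, V1, V2))        # -1000 J
print(w_rev_isothermal(n, T, V1, V2))     # about -1729 J: most negative
```

Note the ordering: for a given expansion, the reversible path extracts the maximum possible work from the system.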

Quiz 1 on Monday. Any questions?

Thursday, September 27, 2007

Mathematics, Reality and Energy

The fact that mathematics works so well to depict the phenomena of thermodynamics, and other physical behavior, is -- to me at least -- nothing short of remarkable. This correlation between math and reality is so familiar, so apparently self-evident, that we take it for granted, not stopping to think whether or not it is even valid. Of course the pragmatist might argue that it obviously works well enough, otherwise we wouldn't have been able to do things like land people on the moon, program computers or construct MRIs. But a fundamental part of pchem is examining the inherent assumptions of a given model and analyzing their validity. We will not likely spend any more time on this concept, unless it slips out of my philosophically-knotted brain and into lecture. I do think anyone majoring in one of the physical sciences should be exposed to this question at least once -- hopefully I've yanked the carpet out from under your brain just a little. I will end this thread with some quotes:

The study of physics has driven us to the positivist conception of physics. We can never understand what events are, but must limit ourselves to describing the pattern of events in mathematical terms: no other aim is possible .... the final harvest will always be a sheaf of mathematical formulae. (Sir James Jeans)
How can it be that mathematics, a product of human thought independent of experience, is so admirably adapted to the objects of reality? (Albert Einstein)
Mathematics has the completely false reputation of yielding infallible conclusions. Its infallibility is nothing but identity. Two times two is not four, but it is just two times two, and that is what we call four for short. But four is nothing new at all. And thus it goes on and on in its conclusions, except that in the higher formulas the identity fades out of sight. (Johann Wolfgang Von Goethe)

Lecture Five found us [re]examining the concept of energy, momentarily reflecting on the fact that it is, at heart, a defined rather than measured quantity. So the principle of the Conservation of Energy is, simply, the statement that humanity has stumbled onto some number that happens to never change for the universe. This is a direct consequence of the fact that the laws of physics do not appear to change with time.

This conservation law leads directly to the development of the First Law: dU = dq + dw. We reminded ourselves of the differences between state and path functions -- a distinction which will be key to later developments. Further, we developed the idea of work and generalized force, enumerating six or seven examples to be used in subsequent lectures.

Monday, September 24, 2007

Compression Factors and Corresponding States

In Lecture 4 we introduced Z, the compression factor, a useful parameter that relates the molar volume of a real gas to that expected for ideal behavior. Regions where Z is greater than or less than one reflect the relative strengths of repulsive (finite-volume) and attractive (intermolecular) forces, respectively. After discussing three general trends of Z vs P plots, we defined the Boyle temperature as the temperature at which the initial slope of Z vs P is equal to zero.
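Here is a small numeric illustration (my own numbers, rough nitrogen vdW constants assumed) of Z for a van der Waals gas, with the molar volume found by simple fixed-point iteration:

```python
# Compression factor Z = P*Vm/(R*T) for a vdW gas.
R = 8.314               # J K^-1 mol^-1
a, b = 0.1370, 3.87e-5  # rough N2 vdW constants, SI units (assumed)

def z_vdw(T, P, iters=200):
    Vm = R * T / P                      # start from the ideal-gas volume
    for _ in range(iters):              # iterate Vm = RT/(P + a/Vm^2) + b
        Vm = R * T / (P + a / Vm**2) + b
    return P * Vm / (R * T)

print(z_vdw(300.0, 1.0e5))   # very close to 1 at 1 bar
print(z_vdw(300.0, 2.0e7))   # Z > 1 at high P: repulsions dominate
```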

In order to connect this concept to stuff we already know, we saw how to relate a particular equation of state to the Boyle temperature. After manipulating the equation of state so that it fits into the Z=PV/nRT construct, we take the derivative with respect to P, take the limit as P approaches 0 and see what we get. This procedure was straightforward with the virial equation (expanded in terms of P), while the van der Waals equation (and others) requires a little more work. Still, we developed a procedure that connects these empirical real gas parameters to measurable and important quantities like the Boyle temperature.
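For the van der Waals case, carrying out that recipe gives the compact result TB = a/(Rb). A one-line sketch (the constants are rough literature vdW values for N2, assumed for illustration):

```python
# Boyle temperature of a vdW gas: the initial slope of Z vs P,
# (b - a/RT)/RT, vanishes at T_B = a/(R*b).
R = 8.314  # J K^-1 mol^-1

def boyle_T_vdw(a, b):
    return a / (R * b)

# N2 (SI units, assumed): a = 0.1370 Pa m^6 mol^-2, b = 3.87e-5 m^3 mol^-1
print(boyle_T_vdw(0.1370, 3.87e-5))   # a few hundred kelvin
```

The vdW prediction is only semi-quantitative, but it shows how two empirical constants pin down an observable temperature.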

Lastly we introduced van der Waals' Principle of Corresponding States, the hypothesis that fluid behavior could be described by a single equation in which material-dependent parameters like a, b and B do not appear, being replaced by reduced thermodynamic variables. Of considerable historical importance, this analysis helped guide early scientists in cryogenic work and represents a universal equation for substances that do not possess highly directional interactions (e.g. polarity).

Question 09 in hw.1 has a typo: Instead of alpha and beta (the symbols from our previous textbook), it should read beta and kappa.

We will spend perhaps 15 minutes finishing up Chapter 7 on Wednesday [delaying the discussion of fugacity until later] before hopping back to Chapter 2 and Thermodynamics proper.

Saturday, September 22, 2007

Only 9 weeks (+10 +10) of pchem left!

So far in pchem: We finally met each other and went through the syllabus (including Thursday exams and face the query). We discussed systems [open, closed, isolated], variables [intensive and extensive], equilibrium, SI units and equations of state.

The second lecture focused on the four assumptions of the ideal gas law. This led us into the van der Waals equation, which attempts to correct for nonzero molecular volume and intermolecular attractive forces, briefly introduced in Section 1.5 and further developed in Chapter 7.

Isotherms and partial derivatives were the main topics in Lecture 3, particularly in correlating real gas equations of state (which are largely empirical) to observable parameters like the critical point. Figure 7.2 is a clearer example of real gas isotherms than what I chickenscratched on the board. It was pointed out to me after class that I was missing a 2 in that derivation, so be sure to work through it yourself (no peeking at Example Problem 7.1).

To get a better feel for critical points, visit Table 7.2 in Appendix A for values (Table 7.4 across the page has a bunch of real gas parameters) and Wikipedia for a general background. If you want more information on partial derivatives you can either visit Appendix B.6 or, again, Wikipedia.

If you are feeling rusty in your maths, I highly recommend this book by Barrante. It basically distills down an 800-page calculus book to a paperback full of parts useful in pchem.

For this coming week we will examine the compression factor and the law of corresponding states, then dive right into thermodynamics proper. To placate you, I have posted solutions and have updated the schedule with quiz dates.