The context... a colleague and I are studying non-equilibrium switching using a concept called 'Brownian ratchets' that has been well studied in non-equilibrium statistical physics over the years. The potential beneficiary of this study is the chip industry, in a very broad way, as it is worried about rapidly increasing thermal budgets (chips are becoming very hot). We are simply trying to examine the physics of Brownian ratchets in a device context.

A popular model for heat dissipation in binary switching looks at a two-well, one-barrier geometry, with a gate controlling the barrier and a drain controlling the overall directionality. Each raising and lowering of the barrier at the end of the compute cycle dissipates energy irreversibly (during the reset step, where information is erased), leading to a dissipation of N*alpha*kT*ln2 per operation, where kT is the thermal energy, N is the number of electrons (assumed independent), and alpha is a non-ideality factor that depends on various things such as capacitance ratios.
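To get a feel for the numbers, here is a back-of-the-envelope evaluation of that dissipation formula. The values of N and alpha below are illustrative placeholders I have made up for the example, not measured device parameters:

```python
import math

# Estimate of the N*alpha*kT*ln2 dissipation per switching operation.
# N and alpha are hypothetical illustration values, not device data.
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K

kT_ln2 = k_B * T * math.log(2)   # minimum erasure cost per independent bit

N = 10_000    # hypothetical number of independent electrons per switch
alpha = 100   # hypothetical non-ideality factor (capacitance ratios, etc.)

E_per_op = N * alpha * kT_ln2
print(f"kT*ln2 at 300 K : {kT_ln2:.3e} J")   # ~2.9e-21 J
print(f"Energy per op   : {E_per_op:.3e} J")
```

The point of the exercise is simply that kT*ln2 itself is tiny; it is the N and alpha prefactors that make real chips hot.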

Now, there are a number of things one can do to play with this without breaking fundamental principles. You could try to make the bits correlated so that N goes down. You could play with alpha using clever engineering, since it is a non-ideality factor (circuit theorists have their own constraints on what alpha should be, but that is a practical limit, not a fundamental one). You could try to do reversible computing, say, by rotating spins -- ideally no dissipation there, if you do it the right way. And you could try to play with non-equilibrium physics.

The analysis in the two-well, one-barrier model is based on invoking equilibrium Boltzmann statistics. What is not clear is what happens during the non-equilibrium transition phase -- say, with the application of a voltage gradient, or if you switch before equilibrium is reached. This is one of the directions we are exploring, and understanding the physics of that non-equilibrium transient is the aim of the study.
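For reference, the equilibrium baseline that the standard analysis rests on is just a Boltzmann occupation factor. A minimal sketch (the 0.0259 eV thermal energy assumes 300 K; the energy offsets are arbitrary examples):

```python
import math

# Equilibrium Boltzmann statistics for the two-well model: with an energy
# offset dE between the wells (set, say, by the drain), the probability of
# finding the electron in the higher ("wrong") well follows from the
# Boltzmann weights of the two wells. This is the equilibrium baseline;
# the non-equilibrium transient during switching is what it does NOT capture.
k_B_T = 0.0259  # thermal energy at 300 K, in eV

def wrong_well_probability(dE_eV: float) -> float:
    """Equilibrium probability of occupying the well that is dE_eV higher."""
    return 1.0 / (1.0 + math.exp(dE_eV / k_B_T))

for dE in (0.0, 0.1, 0.3, 0.5):
    print(f"dE = {dE:.1f} eV -> p_wrong = {wrong_well_probability(dE):.2e}")
```

At zero offset the two wells are equally likely (p = 0.5, i.e. the bit is random); a few tenths of an eV of offset makes the error probability astronomically small -- provided the system has had time to equilibrate, which is precisely the assumption we want to probe.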

What is our practical strategy? We are employing quantum transport equations that are very well established and the workhorse of our research group. These equations are fundamental and have produced dozens of precise quantitative agreements with experiments (in fact, we are very particular about calibrating our results against multiple experiments). They interpolate all the way from the fully quantum to the fully classical limit (e.g., Ohm's law for electrons or Fourier's law for heat), from scattering-free (coherent) to scattering-dominated (incoherent) regimes, and from atomistic to continuum descriptions. Given a specific device, we can then find out unambiguously what the equations tell us about its performance.
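To illustrate what "interpolating from quantum to classical" means in the simplest possible setting, here is a toy Landauer-style conductance formula (this is a textbook illustration, not our group's actual transport code, and the mode counts and lengths are made-up examples):

```python
# Landauer conductance of a conductor with M modes, length L, and mean free
# path lam:  G = G0 * M * lam / (lam + L).
# For L << lam this gives the ballistic (coherent) value G0*M, independent
# of length; for L >> lam it reduces to Ohm's law, G proportional to 1/L.
e = 1.602176634e-19   # elementary charge, C
h = 6.62607015e-34    # Planck constant, J*s
G0 = 2 * e**2 / h     # conductance quantum, ~77.5 microsiemens

def conductance(M: int, L: float, lam: float) -> float:
    """Landauer conductance: M modes, length L (m), mean free path lam (m)."""
    return G0 * M * lam / (lam + L)

ballistic = conductance(M=10, L=1e-9, lam=1e-6)   # L << lam: close to G0*M
ohmic_1 = conductance(M=10, L=1e-6, lam=10e-9)    # L >> lam: ohmic regime
ohmic_2 = conductance(M=10, L=2e-6, lam=10e-9)    # doubling L roughly halves G
print(ballistic, ohmic_1, ohmic_2)
```

One compact formula recovers both the coherent quantum limit and Ohm's law as special cases -- the full transport equations do the same thing, but for real device geometries and with scattering treated properly.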

Past work of ours showed that we can circumvent the textbook limit on switching steepness (technically known as the subthreshold swing) if we sidestep the assumptions that led to that limit in the first place. So there are a lot of games one can play without breaking fundamental principles.
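For the curious, that textbook limit is a one-line calculation. For a conventional transistor whose channel is filled from the Boltzmann tail of the carrier distribution, the subthreshold swing is bounded by S = (kT/q)*ln(10), roughly 60 mV per decade of current at room temperature -- and devices that sidestep the Boltzmann-tail assumption can in principle beat it:

```python
import math

# The ~60 mV/decade subthreshold-swing limit at room temperature:
# S = (kT/q) * ln(10), the gate voltage needed to change the current
# by a factor of 10 when the current follows a Boltzmann exponential.
k_B = 1.380649e-23   # Boltzmann constant, J/K
q = 1.602176634e-19  # elementary charge, C

def subthreshold_swing_mV_per_decade(T: float) -> float:
    """Boltzmann-limited subthreshold swing at temperature T (K), in mV/decade."""
    return (k_B * T / q) * math.log(10) * 1e3

print(f"{subthreshold_swing_mV_per_decade(300):.1f} mV/decade")
```

Note that the limit scales with temperature, which is one reason cryogenic operation is sometimes floated as a workaround -- with its own obvious cost.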

Of course, more often than not, there are trade-offs, such as between heat, speed, and accuracy -- e.g., you could reduce power dissipation if you are willing to live with higher error rates. And the biggest damper of them all could be simple economics -- a solution that looks great on paper but is simply not cost-effective to implement! So we do not have an answer yet. We expect to study the underlying science, not to work miracles.

That's about it... but then, cooling laptops as hot as the sun through the power of thinking or by breaking the 2nd law sounds fancier ... doesn't it?

PS: Personally, I prefer to make my statements through refereed publications vetted by peer review -- it may not sound as sensational in lay terms, but there it is clear what we do and what we don't, without opening things up to re-interpretation and speculation. So if you are really interested in knowing what we are doing at a solid, technical level, instead of at a 'hand-waving' non-technical level, I invite you to visit our research webpage.