Wednesday, August 29, 2007

Invited talk suggestions, APS March Meeting 2008

Along with Eric Isaacs, I am co-organizing a focus topic at the March Meeting of the APS this year on "Fundamental Challenges in Transport Properties of Nanostructures". The description is:
This focus topic will address the fundamental issues that are critical to our understanding, characterization and control of electronic transport in electronic, optical, or mechanical nanostructures. Contributions are solicited in areas that reflect recent advances in our ability to synthesize, characterize and calculate the transport properties of individual quantum dots, molecules and self-assembled functional systems. Resolving open questions regarding transport in nanostructures can have a huge impact on a broad range of future technologies, from quantum computation to light harvesting for energy. Specific topics of interest include: fabrication or synthesis of nanostructures involved with charge transport; nanoscale structural characterization of materials and interfaces related to transport properties; advances in the theoretical treatment of electronic transport at the nanoscale; and experimental studies of charge transport in electronic, optical, or mechanical nanostructures.
The sorting category is 13.6.2, if you would like to submit a contributed talk. Until Friday August 31, we're still soliciting suggestions for invited speakers for this topic, and I'd like to hear what you out there would like to see. If you've got a suggestion, feel free either to post it below in the comments or to email it to me, including the name of the suggested speaker and a brief description of why you think they'd be appropriate. The main restriction is that suggested speakers can't have given an invited talk at the 2007 meeting. Beyond that, while talks by senior people can be illuminating, this is also a great opportunity for postdocs or senior students to present their work to a broad audience. Obviously space is limited, and I can make no promises, but suggestions would be appreciated. Thanks.

Tuesday, August 28, 2007

Quantum impurities from Germany II

A recurring theme at the workshop in Dresden last week was quantum impurities driven out of equilibrium. In general this is an extremely difficult problem! One of the approaches discussed was that of Natan Andrei's group, presented here and here. I don't claim to understand the details, but schematically the idea is to remap the general problem into a scattering language. You set up the nonequilibrium aspect (in the case of a quantum dot under bias, this corresponds to setting the chemical potentials of the leads at unequal values) as a boundary condition. By recasting things this way, you can use a clever ansatz to find eigenstates of the scattering form of the problem, and if you're sufficiently clever you can do this for different initial conditions and map out the full nonequilibrium response. Entropy production and the eventual relaxation of the charge carriers far from the dot happen "at infinity". Andrei gives a good (if dense) talk, and this formalism seems very promising, though it also seems like actually calculating anything for a realistic system requires really solving for many-body wavefunctions for that system.
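For concreteness, here is a minimal sketch of the kind of setup being treated (my own schematic paraphrase in standard notation, not Andrei's formalism): the dot plus two leads is an Anderson-type impurity model, and the bias enters not as a term in the Hamiltonian but as a boundary condition specifying how the incoming scattering states from each lead are populated.
\[
H = \sum_{\alpha=L,R}\sum_{k\sigma}\epsilon_{k}\,c^{\dagger}_{\alpha k\sigma}c_{\alpha k\sigma}
+ \epsilon_{d}\sum_{\sigma} d^{\dagger}_{\sigma}d_{\sigma}
+ U\, n_{d\uparrow}n_{d\downarrow}
+ \sum_{\alpha k\sigma}\left(t_{\alpha}\,c^{\dagger}_{\alpha k\sigma}d_{\sigma}+\mathrm{h.c.}\right),
\]
\[
\text{incoming states from lead }\alpha\text{ filled according to } f_{\alpha}(\epsilon)=\frac{1}{e^{(\epsilon-\mu_{\alpha})/k_{B}T}+1},
\qquad \mu_{L}-\mu_{R}=eV .
\]
The hard part, as I understand it, is doing this self-consistently for the interacting many-body scattering states rather than for single particles.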

Tuesday, August 21, 2007

Quantum impurities from Germany

I'm currently at a workshop on quantum impurity problems in nanostructures and molecular systems, sponsored by the Max Planck Institute for Complex Systems here in Dresden. A quantum impurity problem is defined by a localized subsystem (the impurity) with some specific quantum numbers (e.g. charge; spin) coupled to nonlocal degrees of freedom (e.g. a sea of delocalized conduction electrons; spin waves; phonons). The whole coupled system of impurity (or impurities) + environment can have extremely rich properties that are very challenging to deduce, even if the individual subsystems are relatively simple.

A classic example is the Kondo problem, with a localized impurity site coupled via tunneling to ordinary conduction electrons. The Coulomb repulsion is strong enough that the local site can really be occupied by only one electron at a time. However, the total energy of the system can be reduced if the localized electron undergoes high-order virtual processes in which it pops into the conduction electron sea and back. The result is an effective magnetic exchange between the impurity site and the conduction electrons, as well as an enhanced density of states at the Fermi level for the conduction electrons. The ground state of this coupled system involves correlations among many electrons and is a net spin singlet. Like many impurity problems, the Kondo problem can't be solved by perturbation theory.
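Schematically (this is just the standard textbook form, not anything specific to the workshop talks), once those virtual charge fluctuations are integrated out you're left with an antiferromagnetic exchange between the impurity spin and the local conduction-electron spin density, and the emergent energy scale is exponentially small in the coupling:
\[
H_{K} = \sum_{k\sigma}\epsilon_{k}\,c^{\dagger}_{k\sigma}c_{k\sigma} + J\,\mathbf{S}\cdot\mathbf{s}(0),
\qquad
k_{B}T_{K} \sim D\,e^{-1/(2\rho_{0}J)},
\]
where \(\mathbf{S}\) is the impurity spin, \(\mathbf{s}(0)\) is the conduction-electron spin density at the impurity, \(\rho_{0}\) is the density of states at the Fermi level, \(D\) is the bandwidth, and the numerical factors in the exponent depend on conventions. That essential singularity in \(J\) is exactly why straightforward perturbation theory in the exchange coupling fails below the Kondo temperature \(T_{K}\).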

The point is, with nanostructures it is now possible to implement all kinds of impurity problems experimentally. What is really exciting is the prospect of using these kinds of tunable model systems to study strong correlation physics (e.g. quantum phase transitions in heavy fermion compounds; non-Fermi liquid "bad metals") in a very controlled setting, or in regimes that are otherwise hard to probe (e.g., impurities driven out of equilibrium). This workshop is about 70 or 80 people, a mix of theorists and experimentalists, all interested in this stuff. When I get back I'll highlight a couple of the talks.

Thursday, August 16, 2007

Superluminality

Today this blurb from the New Scientist caused a bit of excitement around the web. While it sounds at first glance like complete crackpottery, and is almost certainly a case of terrible science journalism, it does involve an interesting physics story that I first encountered back when I was looking at grad schools. I visited Berkeley as a prospective student and got to meet Ray Chiao, who asked me how long it takes a particle with energy E to tunnel through a rectangular barrier of energetic height U > E and thickness d. He went to get a glass of water, and wanted me to give a quick answer when he got back a couple of minutes later. Well, if I wasn't supposed to do a real calculation, I figured there were three obvious guesses: (1) \(d/c\); (2) \(d/(\hbar k/m)\), where \(k = \sqrt{2m(U-E)}/\hbar\) - basically solving for the (magnitude of the imaginary) classical velocity and using that; (3) 0.

It turns out that this tunneling time controversy is actually very subtle. When you think about it, it's a funny question from the standpoint of quantum mechanics: you're asking, of the particles that successfully traversed the barrier, how long were they in the classically forbidden region? This has a long, glorious history that is discussed in detail here. Amazingly, the answer is that the tunneling velocity (d divided by the tunneling time) can exceed c, the speed of light in vacuum, depending on how the time is defined. For example, you can consider a Gaussian wave packet incident on a barrier and ask how fast the packet makes it through. There will be some (smaller than incident) transmitted wavepacket, and if you look at how long it takes the center of the transmitted wave packet to emerge from the barrier after the center of the incident packet hits the barrier, you can get superluminal speeds for the center of the wavepacket. (You can build up these distributions statistically by doing lots of single-photon counting experiments.) Amazingly, you can actually have a situation where the exiting pulse peak leaves the barrier before the entering pulse peak hits the barrier. This would correspond to negative (average) velocity (!), and has actually been demonstrated in the lab. (A standard estimate of the tunneling group delay is sketched at the end of this post.)

So, shouldn't this bother you? Why doesn't this violate causality and break special relativity? The conventional answer is that no information is actually going faster than light here. The wavepackets we've been considering are all smooth, analytic functions, so the very leading tail of the incident packet contains all the information. Since that leading tail is, in Gaussian packets anyway, infinite in extent, all that's going on here is some kind of pulse re-shaping: the exiting pulse is, in some sense, just a modified version of information that was already present there. It all comes down to how one defines a signal velocity, as opposed to a phase velocity, group velocity, energy velocity, or any of the other concepts dreamed up by Sommerfeld back in the early 20th century when people first worried about this.

Now, this kind of argument from analyticity isn't very satisfying to everyone, particularly Prof. Nimtz. He has long argued that something more subtle is at work here - that superluminal signalling is possible, but tradeoffs between bandwidth and message duration ensure that causality can't be violated. Well, according to his quotes in today's news, apparently related to this 2-page thing on the arxiv, he is making very strong statements now about violating special relativity.
The preprint is woefully brief and shows no actual data - for such an extraordinary claim in the popular press, this paper is completely inadequate. Anyway, it's a fun topic, and it really forces you to think about what causality and information transfer really mean.
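For the curious, here is the standard stationary-phase estimate of the tunneling group delay for a rectangular barrier, promised above (a textbook sketch, correct up to convention-dependent factors, and nothing to do with the Nimtz preprint):
\[
k=\frac{\sqrt{2mE}}{\hbar},\qquad \kappa=\frac{\sqrt{2m(U-E)}}{\hbar},\qquad
\tau_{g}=\hbar\,\frac{d\varphi_{t}}{dE}\;\xrightarrow{\;\kappa d\gg 1\;}\;\frac{2m}{\hbar k\kappa},
\]
where \(\varphi_{t}\) is the phase of the transmission amplitude. The key point (the Hartman effect) is that \(\tau_{g}\) saturates and becomes independent of the barrier thickness \(d\) for an opaque barrier, so the nominal velocity \(d/\tau_{g}\) grows without bound as the barrier gets thicker - which is exactly why how you define "the" tunneling time matters so much.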

Sunday, August 12, 2007

Kinds of papers

I've seen some recent writings about how theory papers come to be, and it got me thinking a bit about how experimental condensed matter papers come about, at least in my experience. Papers, or more accurately, scientific research projects and their results, seem to fall into three rough groupings for me:
  • The Specific Question. There's some particular piece of physics in an established area that isn't well understood, and after reading the literature and thinking hard, you've come up with an approach for getting the answer. Alternatively, you may think that previous approaches that others have tried are inadequate, or are chasing the wrong idea. Either way, you've got a very specific physics goal in mind, a well-defined (in advance) set of experiments that will elucidate the situation, and a plan in place for the data analysis and for how different types of data will allow you to distinguish between alternative physics explanations.
  • The New Capability. You've got an idea about a new experimental capability or technique, and you're out to develop and test it. If successful, you'll have a new tool in your kit for doing physics that you (and ideally everyone else) have never had before. While you can do cool science at this stage (and often you need to, if you want to publish in a good journal), pulling off this kind of project really sets the stage for a whole line of work along the lines of The Specific Question - applying your new skill to answer a variety of physics questions. The canonical examples of this would be the development of the scanning tunneling microscope or the atomic force microscope.
  • The (Well-Motivated) Surprise. You're trying to do either The Specific Question or The New Capability, and then all of a sudden you see something very intriguing, and that leads to a beautiful (to you, at least, and ideally to everyone else) piece of physics. This is the one that can get people hooked on doing research: you can know something about the universe that no one else knows. Luck naturally can play a role here, but "well-motivated" means that you make your own luck to some degree: you're much more likely to get this kind of surprise if you're looking at a system that is known to be physically interesting or rich, and/or using a new technique or tool.
Hopefully sometime in the future I'll give an anecdote or two about these. In the meantime, does anyone have suggestions for other categories that I've missed?

Behold the power of google

I am easily amused. They just put up google street-view maps of Houston, and while they didn't do every little road, they did index the driving routes through Rice University. In fact, you can clearly see my car here (it's the silver Saturn station wagon just to the right of the oak tree). Kind of cool, if a bit disturbing in terms of privacy.

Tuesday, August 07, 2007

This week in cond-mat

Another couple of papers that caught my eye recently....

arxiv:0707.2946 - Reilly et al., Fast single-charge sensing with an rf quantum point contact
arxiv:0708.0861 - Thalakulam et al., Shot-noise-limited operation of a fast quantum-point-contact charge sensor
It has become possible relatively recently to use the exquisite charge sensitivity of single-electron transistors (SETs) to detect the motion of single electrons at MHz rates. The tricky bit is that a SET usually has a characteristic impedance on the order of tens of kOhms, much higher than either free space (377 Ohms) or typical radio-frequency hardware (50 Ohms). The standard approach that has developed is to terminate a coax line with an rf-SET; as the charge environment of the rf-SET changes, so does its impedance, and therefore so does the rf power reflected back up the coax. One can improve the signal-to-noise ratio by making an LC resonant circuit down at the rf-SET with its resonance tuned to the carrier frequency used in the measurement. With some work, one can use a 1 GHz carrier wave and detect single charge motion near the rf-SET with MHz bandwidths.

Well, these two papers use a gate-defined quantum point contact in a 2d electron gas instead of an rf-SET. See, rf-SETs are tough to make, are fragile, and have stability problems, all because they rely on ultrathin (2-3 nm) aluminum oxide tunnel barriers for their properties. In contrast, quantum point contacts (formed when a 2d electron gas is laterally constricted down to a size scale comparable to the Fermi wavelength of the electrons) are tunable, and like rf-SETs they can be configured to have an impedance (typically around 13 kOhms) that depends strongly on the local charge configuration. Both the Harvard and Dartmouth groups have implemented these rf-QPCs, and the Dartmouth folks have demonstrated very nicely that theirs is about as optimized as possible - its performance is limited by the fact that the current flowing through the QPC is composed of discrete electrons.
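As a rough sketch of why the LC tank circuit does the trick (the standard impedance-matching argument, with illustrative numbers rather than values taken from these papers): a series inductor \(L\) resonating with the stray capacitance \(C\) at the device transforms the large device resistance \(R\) down toward the 50 Ohm line impedance at the carrier frequency,
\[
\omega_{0}=\frac{1}{\sqrt{LC}},\qquad
Z_{\mathrm{in}}(\omega_{0})\approx\frac{L}{CR}\quad(\text{valid when }\omega_{0}RC\gg 1),
\]
so with, say, \(R \approx 13\ \mathrm{k\Omega}\) (a QPC near the first conductance plateau, \(G \approx 2e^{2}/h\)), \(L \sim 100\ \mathrm{nH}\), and \(C \sim 0.3\ \mathrm{pF}\), one gets \(\omega_{0}/2\pi \approx 0.9\ \mathrm{GHz}\) and \(Z_{\mathrm{in}}\) within a factor of two of 50 Ohms. Small changes in \(R\) driven by nearby charge motion then show up as changes in the reflected rf power.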

arxiv:0708.0646 - Hirsch, Does the h-index have predictive power?
*sigh*. Like all attempts to quantify something inherently complex and multidimensional (in this case, scientific productivity and impact) with a single number, the h-index is of limited utility. Here, Hirsch argues that the h-index is a good predictor of future scientific performance, and takes the opportunity to rebut criticisms that other metrics (e.g. average citations per paper) are better. This paper is a bit depressing to me. First, I think things like the citation index are a blessing and a curse. It's great to be able to follow reference trails around and learn new things. It's of questionable sociological and psychological good to be able to check on the impact of your own work and of any competitor whose name you can spell. Second, Hirsch actually cites wikipedia as an authoritative source on how great the h-index is in academic fields beyond physics. I love wikipedia and use it all the time, but citing it in a serious context is silly. Ahh well. Back to trying to boost my own h-index by submitting papers.
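For reference, the quantity being argued over is simple to compute: h is the largest number such that at least h of your papers have at least h citations each. A minimal sketch in Python (the citation counts below are made up purely for illustration):

    def h_index(citations):
        """Return the largest h such that at least h papers have >= h citations."""
        ranked = sorted(citations, reverse=True)  # most-cited papers first
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank  # the top `rank` papers all have >= rank citations each
            else:
                break
        return h

    # Hypothetical example: five papers with these citation counts give h = 3.
    print(h_index([10, 8, 5, 2, 1]))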