Tag Archives: Niels Bohr

In Living Color

by Shane L. Larson

I don’t often talk to people for great lengths of time on airplanes. I’m kind of shy around people I don’t know.  But on a recent long flight home from the East Coast, I found myself sitting next to a wonderful woman, 75 years young, and we talked for the entire 5 hour flight.  Stefi was a gold- and silversmith from Vermont, and a German immigrant.  Our conversation ranged from the art of jewelry making, to winters in Vermont, to her grandchildren whom she was going to visit in Utah.

But at some point, the conversation strayed to her childhood in Berlin, where she lived as a young girl during the closing days of World War II.  There, sitting right next to me on the plane, was a living, breathing soul who had lived through the heart of World War II.  Not a soldier, not a support person who worked the fronts or factory lines, but a person caught in the middle of the war itself. From vivid memory, she recounted tales of what she saw, living huddled in a basement with her mother and sisters as the end of the War approached.  She was there as the Red Army advanced toward the Battle of Berlin, and saw the Russian occupation of the city.  In her mind’s eye, she could see it all in living color again.  But my vision was only pale black and white; the only images of the War that anyone in my generation has ever seen are stark black and white images.

In the days after the flight, long after Stefi and I had gone back to our respective lives, I was thinking about the differences between memory and historical record. In physics, we have similar historical records of our distant past, also preserved in stark, black and white images.  One of the most famous images in my field is of the Fifth Solvay Conference on Electrons and Photons, held in the city of Brussels in October of 1927.  The reason this particular conference is so well known is the iconic black and white photograph captured of the participants.

The participants in the 1927 Solvay Conference on Electrons and Photons, in Brussels.

Scanning through the photograph, or running your finger down a list of names, one encounters names that are completely synonymous with the development of modern physics. Of the 29 participants, 17 went on to win Nobel Prizes; included among them is Marie Curie, the only person who has ever won two Nobel Prizes in different scientific disciplines!  This single image captured almost all of the architects of modern physics.  These are the minds that seeded the genesis of our modern technological world.  The conference itself has passed into the folklore of our civilization, as this was the place where Einstein made his famous utterance, “God does not play dice!”  Niels Bohr famously replied, “Einstein, stop telling God what to do!”

For many of us in this game called physics, the people in this image are icons, idols, and inspirations.  We know their names, we know their stories, and we can pick them out of pictures as easily as we can pick friends out of modern pictures.  But always in the dull and muted grain of black and white photographs. I’ve never met a person (that I know of) who met Einstein, or Bohr, or Heisenberg, or Curie; no one to recount for me the vivid colors of these great minds in their living flesh.

It is an interesting fact that all of these historical images are black and white. Color photography was first demonstrated some 66 years earlier at the suggestion of another great mind in physics, James Clerk Maxwell.

The first color photograph ever taken, by Thomas Sutton in 1861 at the behest of James Clerk Maxwell. The image is of a tartan ribbon, captured by a three-color projection technique.

The process worked by taking three black and white pictures through colored filters — red, green and blue — then reprojecting the black and white pictures through the filters again to produce a colored image.  This is more or less the same principle that is used today to generate color on TV screens and computer monitors.  If you look very closely at the screen you are reading this on, you will see that the pixels are all combinations of red, green and blue.

An example of RGB image construction. The three black and white images on the left were taken through Red (top), Blue (middle) and Green (bottom) filters. When recombined through those colored filters, they produce a full color image (right).
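The recombination step is easy to sketch in code. Here is a toy example with NumPy (the tiny 2×2 pixel values are made up for illustration): each black and white exposure simply becomes one channel of the final color image, which is all Maxwell’s “reprojection through the filters” amounts to digitally.

```python
import numpy as np

# Hypothetical 2x2 grayscale exposures (0-255), stand-ins for the three
# black and white plates shot through red, green, and blue filters.
red   = np.array([[200,  10], [ 10, 200]], dtype=np.uint8)
green = np.array([[ 10, 200], [ 10, 200]], dtype=np.uint8)
blue  = np.array([[ 10,  10], [200, 200]], dtype=np.uint8)

# "Reprojecting through the filters" = stacking each exposure into its
# own channel of a single RGB image.
color = np.dstack([red, green, blue])

print(color.shape)   # (2, 2, 3)
print(color[0, 0])   # a mostly-red pixel: [200  10  10]
```

The same three-channel stacking is what your screen does in reverse, lighting red, green, and blue sub-pixels in proportion to each channel.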

The famous tartan picture was generated for a lecture on color that Maxwell was giving.  Maxwell’s interest in light and color derived from the reason he is famous  — Maxwell was the first person to understand that several different physical phenomena in electricity and magnetism are linked together.  His unification of the two is called electromagnetism.  One consequence of that unification was the discovery that the agent of electromagnetic phenomena is light.  Today, the four equations describing electromagnetism are called the Maxwell Equations.

As with all things in science, the elation of discovery is always accompanied by new mysteries and new questions. One of the central realizations of electromagnetism was that light is a wave, and that the properties of the wave (the wavelength, or the frequency) define what our eyes perceive as color.

The electromagnetic spectrum — light in all its varieties, illustrated in the many different ways that scientists describe the properties of a specific kind (or “color”) of light. What your eye can see is visible light, the small rainbow band in the middle.

The realization that the color of light could be defined by a measurable property was a tremendous leap forward in human understanding of the world around us, and it naturally led to the idea and discovery that there are “colors” of light that our eyes cannot see!  Those kinds of light often have familiar names — radio light, microwave light, infrared light, ultraviolet light, and x-ray light.  But knowing of the existence of a thing (“Look! Infrared light!”) and being able to measure its properties (“This radio wave has a wavelength of 21 centimeters.”) are not the same thing as knowing why something exists.  How and why Nature makes all the different kinds of light were mysteries that would not be solved until the early Twentieth Century, by many of the great minds who attended the Solvay Conference.
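Wavelength and frequency are two sides of the same measurement, tied together by the speed of light: c = λν. As a quick sanity check on the 21 centimeter radio wave mentioned above (the hydrogen line familiar to radio astronomers):

```python
# For any light wave, speed of light = wavelength * frequency.
c = 2.998e8            # speed of light in vacuum, m/s

wavelength = 0.21      # the 21 cm radio wave, in meters
frequency = c / wavelength

print(f"{frequency / 1e9:.2f} GHz")   # ~1.43 GHz
```

So a radio telescope tuned near 1.4 gigahertz is “seeing” that particular color of invisible light.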

Einstein famously discovered the idea that there is a Universal speed limit — nothing can travel faster than the speed of light in the vacuum of space.  Max Planck postulated that on microscopic levels, energy is delivered in discrete packets called quanta — in the case of light, those quanta are called photons.  Niels Bohr used the Planck hypothesis to explain how atoms generate discrete spectral lines — a chromatic fingerprint that uniquely identifies each of the individual atoms on the periodic table. Marie Curie investigated the nature of x-ray emission from uranium, and was the first to postulate that the x-rays came from the atoms themselves — this was a fundamental insight that went against the long-held assumption that atoms were indivisible, leading to the first modern understandings of radioactivity.  Louis de Broglie came to the realization that on the scales of fundamental particles, objects can behave as both waves and particles — this “duality” of character highlights the strangeness of the quantum world and is far outside our normal everyday experiences on the scales of waffles, Volkswagens and house sparrows.  Erwin Schroedinger pushed the quantum hypothesis on very general grounds, developing a mathematical equation (which now bears his name) that gives us predictive power about the outcome of experiments on the scales of the atomic world — his famous gedanken experiment with a cat in a box with a vial of cyanide captures the mysterious differences in “knowledge” between the macroscopic and microscopic worlds.  And so on.

The visible light fingerprints (“atomic spectra”) of all the known chemical elements. Each atom emits and absorbs these unique sets of colors, making it possible to identify them.

It is fashionable in today’s political climate to question the usefulness of scientific investigations, and to ask what benefit (economic or otherwise) basic research investment begets society. Looking at the picture of the Solvay participants and considering their contributions to the knowledge of civilization, one very rapidly comes to the realization that their investigations changed the world; in a way, their contemplations made the world we know today.  The discovery of radiation led directly to radiological medicine, radiation sterilization, nuclear power, and nuclear weapons.  The behaviour of atoms and their interactions with one another to generate light leads to lasers, LED flashlights, cool running lights under your car or skateboard, and the pixels in the computer screen you are reading from at this very moment. The quantum mechanical behaviour of the microscopic world, and our ability to understand that behaviour, leads directly to integrated circuits and every modern electronic device you have ever seen.  That more than anything else should knock your sense of timescales off kilter; at the time quantum mechanics was invented, computers were mechanical devices, and no one had ever imagined building a “chip.”  The first integrated circuit wasn’t invented until 1958, when Jack Kilby at Texas Instruments built the first working example, 31 years after the Solvay Conference; the first computer using integrated circuits that you could own didn’t appear until the 1970s, and smartphones showed up in the early 2000s.  The economic powerhouses of Apple, Microsoft, Hewlett-Packard, Dell, and all the rest, are founded on basic research that was done in the 1920s and 1930s.

Which brings me back to where we started — pictures from those bygone days.  After the first tri-color image of Maxwell’s tartan, the development of color photography progressed slowly. The 1908 Nobel Prize in Physics was awarded for an early color emulsion for photography, but the first successful color film did not emerge until Kodak created their famous Kodachrome brand in 1935.  Even so, color photography was much more expensive than black and white photography, and was not widely adopted until the late 1950s.  As a result, our history is dominated by grainy, black and white images.

So it was a great surprise last week when the Solvay Conference picture scrolled past in a friend’s Facebook stream, in color!  Quite unexpectedly, it knocked my socks off. I spent a good long time just staring at it.  Never before had I known of the flash of blue in Marie Curie’s scarf, Einstein’s psychedelic tie, or Schroedinger’s red bow tie (is Pauli looking at that tie with envy?).  But more importantly, the people were in color, as plain as if they were sitting across the table from me. It’s a weird twist of psychology that that burst of color, the soft skin tones of human flesh, suddenly made these icons all the more real to me.

Colorized version of the famous 1927 portrait of the Solvay Conference participants [colorized version by Sanna Dullaway].

No longer just names and grainy pictures from history books, but rather remarkable minds from our common scientific heritage, seen for the first time in living color by a generation of scientists long separated from them.


Quantum Mechanics, the Bangles, and Another Manic Monday

by Shane L. Larson

At the opening of Harry Potter and the Deathly Hallows, Part 1 we find Professor Snape stalking up to the imposing wrought-iron gate of Malfoy Manor.  With a casual flick of his wand, he passes through the gate, unimpeded.  If you look quickly around the theatre, you can spot every physicist in the crowd because they are all nodding sagely: magic is just science, and Snape just quantum tunneled through the Malfoy Manor Gate.

Quantum tunneling. The name evokes little trills of excitement, wonder, and possibly confusion because it uses the magic “q” word from the Twentieth Century: quantum.  No aspect of fundamental physics challenged our understanding of Nature more than the ideas of quantum mechanics.  At the start of the 1900s, we were for the first time using our wits and our technology to attempt to understand Nature on scales that had been inaccessible to us since the dawn of time –– the scales of atoms.  The consequence of those explorations and discoveries is all the wonder and convenience of our modern technological society.

Every one of us encounters quantum mechanics every day.  This morning at 6:21am my alarm clock went off, blaring the dulcet admonitions of the Bangles that this Monday, like last Monday, is just another Manic Monday.  That doleful message was enabled by quantum mechanics.   Deep down inside most devices of modern convenience is one of the great marvels of our time –– the integrated circuit. They are small and innocuous, little black squares of ceramic and rare earth metals with small metal legs splayed out, giving them the appearance of some strange robotic bug. But deep inside they are machines of wonder. They have no moving parts, but their job is to corral and gate billions of tiny electronic denizens that we have dispatched to do our bidding (in the case of your clock radio, to wake you up). The beasts of burden in the world of electronics are electrons, fundamental particles of Nature with a mass 0.000 000 000 000 000 000 000 000 000 006 times the mass of a regulation baseball. Electrons are very small!  It is only in the last 100 years that we have truly understood the laws of physics for the very small –– we call those laws “quantum mechanics.” If you want to herd electrons around a semiconductor and make them do your bidding, then you have to be a master of quantum mechanics.
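That long string of zeros is easy to check. A two-line sanity calculation (using the standard electron mass and a regulation 145-gram baseball) reproduces the ratio quoted above:

```python
m_electron = 9.109e-31   # electron mass, kg
m_baseball = 0.145       # regulation baseball, kg

ratio = m_electron / m_baseball
print(f"{ratio:.1e}")    # ~6.3e-30, i.e. 0.000...006 with 29 zeros
```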

So what are the laws of quantum mechanics? At its most rudimentary level it is just mechanics, that branch of physics that tells us about the motion of objects. The purpose of mechanics is to determine the position and motion of objects as a function of time.  If I throw a baseball, how long does it take to cross home plate? If I slam on my brakes in an attempt to prevent my 1979 Yugo from rear-ending a Lexus that is sitting at a green light, how far do I skid?  How fast does a rocket have to go to break the bonds of Earth, heading outbound through the solar system?  These questions are the purview of mechanics. Quantum mechanics is about these same kinds of questions, but applied to the sub-atomic world.

Many of the axioms of life that you learned as a child provide equally good advice for doing science. An important one is “there is more than one way to skin a cat.”  For every problem in physics, there is more than one way to solve it. In mechanics, there are often many ways to think about problems.  One of the most common ways to think about mechanics is in terms of speed and acceleration –– how fast an object is moving and how that motion is changing. Another common way to think about mechanics is in terms of energy.  Energy has an intuitive foundation: it characterizes how much effort you had to expend to get an object moving, or the potential an object has to do something.  For instance, a Mack truck rolling down the highway at 65 mph has a lot of energy; it takes a lot of effort to get such a large object moving so fast! In a similar way, an anvil perched precariously on the edge of a cliff must have some energy stored in it, because it has the potential to do a lot of damage to someone far below (as Wile E. Coyote knows well).

Fundamentally, quantum mechanics is an approach to mechanics that thinks about energy. The “quantum” part of the name comes from a discovery by the physicist Max Planck.  Planck stumbled on the fact that when you look at the energy of sub-atomic systems, the energy comes in discrete packets that he called quanta.  Planck argued that you could not have just any amount of energy; you had to have a specific energy that was a discrete number of quanta added together.  These quanta are tiny, like the objects Planck was trying to describe.  For typical atoms, a quantum of energy is about a hundred billion-billion times smaller than the energy of a Major League fastball.
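That “hundred billion-billion” figure can be checked on the back of an envelope. Planck’s hypothesis gives the energy of one quantum of light as E = hν = hc/λ; comparing a green photon to a fastball (the 95 mph pitch speed here is an assumed, illustrative number):

```python
h = 6.626e-34   # Planck's constant, J*s
c = 2.998e8     # speed of light, m/s

# One quantum (photon) of green light, wavelength ~500 nm: E = h*c/lambda
E_photon = h * c / 500e-9            # ~4e-19 joules

# Kinetic energy of a 0.145 kg fastball at 95 mph (42.5 m/s): E = (1/2)mv^2
E_fastball = 0.5 * 0.145 * 42.5**2   # ~130 joules

print(f"{E_fastball / E_photon:.1e}")   # ~3e+20: a few hundred billion-billion
```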

This simple prediction, borne out to high precision in laboratory experiments, has famously non-intuitive consequences for the world.  Perhaps the most renowned is the Heisenberg Uncertainty Principle, which tells us that precision knowledge of all the physical properties of a quantum mechanical system is not possible.  Suppose I wanted to look at the electrons in my alarm clock, scurrying around their integrated circuits in their quest to ensure the Bangles get me out of bed at the right time. If the electrons were little cars on highways (like in the Tron movies), I might be tempted to ask “where is each electron and how fast is it moving?”  But the Heisenberg Uncertainty Principle tells me this is not possible; I can measure either where the electron is or how fast it is moving, but I can’t know both accurately.  The more well known the position is, the more uncertain the speed is, and vice versa.
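The principle can be made quantitative: the uncertainties in position and momentum obey Δx·Δp ≥ ħ/2, so the smallest possible speed uncertainty is Δv = ħ/(2mΔx). A rough comparison (the confinement scales below are illustrative assumptions) shows why this matters for electrons and not for baseballs:

```python
hbar = 1.055e-34   # reduced Planck constant, J*s

def min_speed_uncertainty(mass_kg, position_uncertainty_m):
    """Smallest speed uncertainty Heisenberg allows: hbar / (2 * m * dx)."""
    return hbar / (2 * mass_kg * position_uncertainty_m)

# An electron pinned down to ~1 nanometer (roughly a transistor feature)
print(min_speed_uncertainty(9.109e-31, 1e-9))   # ~5.8e4 m/s -- enormous

# A baseball pinned down to ~1 millimeter
print(min_speed_uncertainty(0.145, 1e-3))       # ~3.6e-31 m/s -- unmeasurable
```

The electron’s minimum speed fuzz is tens of kilometers per second; the baseball’s is thirty orders of magnitude below anything a radar gun could notice.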

This is disturbing to say the least because it sounds completely counter-intuitive to our everyday experience.  But more importantly to scientists, it suggests a deadly conundrum for our most deeply held convictions about how the world should work: we believe that there should be a consistent set of physical laws that apply to everything, whether they are baseballs or electrons!  As hundreds of thousands of Major League Baseball replays have shown, we can know where a baseball is (over the outside corner of the plate) and its speed at the same time (it’s printed right there on the screen!).  But Heisenberg says this is not possible.  How is this conundrum resolved?

One way to ask the question is to apply quantum mechanics to a system that is not small.  One could ask “Is a cow a quantum mechanical system?”  When you are first exposed to quantum mechanics, this is the obvious question to ask.  Niels Bohr, one of the early architects of quantum mechanics, pondered the same question.  How could it be that quantum mechanics governs how electrons move around in a semiconductor, but not how a cow walks around a corral?  The resolution to this is that the classical world, on the scale of meatballs and Boeing passenger jets and wallabies, is a gigantic quantum system operating on scales much different than those of an individual atom.  The physical properties of a cow, say its energy or its angular momentum (how fast it is spinning –– imagine a cow on ice skates, if that helps), are some 10³⁰ times larger than those of a hydrogen atom.  When we look at the quantum world, what we observe are changes in quantum states.  We give each state a name to keep track of a quantity called the principal quantum number, n.  For a hydrogen atom, n typically has very small values: n = 1, n = 2.  A cow has enormous quantum numbers, on the order of 10³⁰.  The quantum nature of a cow is very hard to detect because the difference between two quantum states, say n = 10³⁰ and n = 10³⁰ + 2, is much harder to detect than the difference between the quantum states of a hydrogen atom, say n = 2 and n = 4.

“But wait!” says the Hermione Granger in the back of the room.  “Both of those are still just different by a factor of 2!” Yes, that’s true.  But identifying the quantum state is about counting the values and then noting the differences.  If I lay 5 Oreos on the table, and you sneak one when you think I’m not looking, I’ll probably notice.  If I lay even a moderately large number of Oreos on the table, say 50 (let alone 10³⁰), it is far less likely I’ll notice that you snarfed one.  The same is true of measuring the quantum nature of everyday objects –– it’s hard to notice the small, quantum mechanical changes.
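The Oreo-counting argument boils down to the fractional size of the change. A jump of two in the quantum number is the whole story for a hydrogen atom, but an utterly invisible sliver of a cow’s state:

```python
# Fractional change when the quantum number jumps by 2
for label, n in [("hydrogen atom", 2), ("cow-sized object", 10**30)]:
    fraction = 2 / n
    print(f"{label}: n = {n}, fractional change = {fraction}")
```

For the atom the state doubles; for the cow the change is one part in 5×10²⁹, hopelessly below any measurement.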

And so it seems physics is saved –– we could use quantum mechanics to describe a cow because in the macroscopic world, the more convenient physical laws we use to describe the mechanics of cows, passed down to us from Galileo and Newton, are derived from quantum mechanics itself; we just don’t notice the small, quantum mechanical changes in the state of the system.  We call this the classical or the continuum limit.  This is the great lesson of science –– our knowledge of the world is continuously changing, and when it does change, our goal is to understand how our old knowledge fits in the new framework.  Sometimes it requires us to discard long held passions and beliefs, and other times it requires us to bend and stretch our minds to encompass a larger world view than we had before.  Either way, the process is confusing, exhilarating, painful, but ultimately rewarding.  It’s what gets a scientist out of bed after a long weekend –– the promise that your job is going to be just another Manic Monday.