
Looking Back 108 Years…

by Shane L. Larson

Today would have been Carl Sagan’s 82nd birthday. It is an auspicious year, because after a 108-year drought, the Chicago Cubs have won a World Series title. The Cubs win reminded me of Sagan because his son, Nick, had once told a story of introducing his dad to computer baseball based on statistics, whereby you could pit famous teams in history against one another. Sagan apparently said to Nick, “Never show me this again; I like it too much.”

Today, Carl Sagan would have been 82 years old.

It is an instantly recognizable feeling to those of us who do science — a nearly uncontrollable urge to ask, “What if…” and then construct an experiment to answer that question. When faced with the prospect of being able to pit two great teams from baseball history against each other, the little science muse in the back of your mind begins to ask, who would win? What if I changed up the pitchers? Does the batting order matter? What if they played at home instead of away?

This incessant wondering is the genesis of all the knowledge that our species has accumulated and labeled “science.”  And so, to commemorate Sagan’s birthday, and the Cubs win this season, I’d like to look back at what we knew of the world the last time the Cubs won the World Series, 108 years ago, a time well within the possible span of a human life.  The year is 1908…

The title page of Einstein’s PhD Thesis.

In 1908, a young physicist named Albert Einstein, 3 years out from his college degree and after a multi-year stint working as a clerk in the Swiss Patent Office, got his first job as a professor at the University of Bern. This era was a time in the history of physics when scientists were trying to understand the fundamental structure of matter. Einstein’s PhD thesis was titled, “A New Determination of Molecular Dimensions.” Despite the fact that he could not find a job as a faculty member in the years after he graduated, Einstein worked dutifully at the Patent Office, and did physics “in his spare time.” During 1905, he wrote a handful of transformative papers that would change physics forever. Like his PhD work, some of those were about the invisible structure of matter on the tiniest scales. One explained an interaction between light and matter known as the “photoelectric effect,” which would be the work for which he would win the Nobel Prize in 1921. Physicists had for some time known that some materials, when you shone a light on them, generated electric current. Einstein was the first person to be able to explain the effect by treating light as if it were little baseballs (Go, Cubs! Go!) that were colliding with electrons and knocking them off of the material. Today we use that technology for devices like infrared remote controls to turn your TV on and off!  By the time Einstein became a professor, he was thinking about new and different things that had caught his attention, sorting out some new ideas about gravity that would, after an additional seven years of work, become known as General Relativity.

(Top) Marie Curie in her laboratory. (Bottom) Curie’s business card from the Sorbonne. [Image: Musee Curie]

Other physicists were hard at work exploring other aspects of the properties of matter. In 1908, already having earned her first Nobel Prize (in 1903), Marie Curie became the first female professor ever at the Sorbonne in Paris. Her 1903 Nobel Prize in physics was for her work in the characterization of radioactive materials. She and her collaborators were not only trying to understand the nature of radiation and the properties of radioactive materials, but were discovering many of them for the first time. Today, we look at a periodic table of the elements and there are no gaps, but in 1908 there were. Curie and her colleagues discovered radium and polonium. They also discovered that some previously known elements, like thorium, were radioactive and we hadn’t known it. Before this pioneering work, the world knew almost nothing of radioactivity. At this time, the dangers of radiation were unknown. Curie for years exposed herself to radiation from samples in her laboratory; today, many of her notebooks are still too radioactive to be handled safely without protective equipment. In 1934, Curie died of aplastic anemia, a blood disease brought on by radiation exposure, in which the body cannot make mature blood cells.

We think a great deal of Curie’s exposure to radiation came not just from carrying radioactive samples around in her pockets (something that today we know is a bad idea), but also exposure from a new technology that she was a proponent of: medical x-rays. During World War I she developed, built, and fielded mobile x-ray units to be used by medical professionals in field hospitals. These units became known as petites Curies (“Little Curies”).

Orville Wright (R) and Lt. Thomas Selfridge (L) in the Wright Flyer, just before take off at Fort Myer. [Image: Wright Brothers Aeroplane Co]

There were other technological advances being introduced to the world in 1908.  That year, the world was still becoming acquainted with the notion of flying machines. The Wright Brothers had successfully demonstrated a powered flying machine at Kitty Hawk in 1903, but in May of 1908, for the first time ever, a passenger was carried aloft when Charlie Furnas flew with Wilbur Wright over the Kill Devil Hills in North Carolina. Just as with Curie, the Wrights were in unexplored territory, learning about the art and science of flying for the first time. Dangers and unexpected events abounded — the Wright Flyer experienced a crash late in 1908 after a propeller broke during a demonstration for the military at Fort Myer. Orville Wright was seriously injured, but his passenger, Lieutenant Thomas Selfridge, sustained a serious skull injury and died 3 hours after the crash: the first person to perish in the crash of a self-powered aircraft.

(L) The 60-inch Telescope at Mount Wilson. (R) A young Harlow Shapley. [Images: Mt. Wilson Observatory]

On the opposite coast of the United States, also in 1908, the largest telescope in the world was completed on Mount Wilson, outside of Los Angeles: the 60-inch Reflector, built by George Ellery Hale. The 60-inch was built in an era when astronomers had discovered that building bigger and bigger telescopes enabled them to see deeper into the Cosmos in an effort to understand the size and shape of the Universe and our place within it. One of the biggest discoveries made with the 60-inch was still ten years away — astronomer Harlow Shapley would use the great machine to measure the distances to globular clusters near the Milky Way and discover that the Sun did not lie at the center of the galaxy (see Shapley’s paper here); today we know the Sun orbits the Milky Way some 25,000 light years away from the center.

The nature of the Milky Way was still, at that time, a matter of intense debate among astronomers. Some thought the Milky Way was the entire Universe. Others argued that some of the fuzzy nebulae that could be seen with telescopes were in fact “island universes” — distant galaxies not unlike the Milky Way itself.  The problem was that there was no good way to measure distances. But 1908 saw a breakthrough that would give astronomers the ability to measure vast distances across the Cosmos when astronomer Henrietta Swan Leavitt published her observation that there was a pattern in how some stars changed their brightness. These were the first Cepheid variables, and by 1912 Leavitt had shown how to measure the distance to them by simply observing how bright they appeared in a telescope. A decade and a half later, in 1924, Edwin Hubble would use Leavitt’s discovery to measure the distance to the Andromeda Nebula (M31), clearly demonstrating that the Universe was far larger than astronomers had ever imagined and that the Milky Way was not, in fact, the only galaxy in the Cosmos. By the end of the 1920s, Hubble and Milton Humason would use Leavitt’s discovery to demonstrate the expansion of the Universe, the first hint of what is today known as Big Bang Cosmology.

(L) Leavitt at her desk in the Harvard College Observatory. (R) The Magellanic Clouds, which Leavitt’s initial work was based on, framed between telescopes at the Paranal Observatory in Chile. [Images: Wikimedia Commons]

Today, it is 108 years later. When I reflect on these items of historical note, I am struck by two things. First, it is almost stupefying how quickly our understanding of the workings of the world has evolved. It really wasn’t that long ago — barely more than the common span of a human life — that we didn’t know how to fly and we didn’t know that the Cosmos was ginormous beyond imagining. The pace of discovery continues to this day, dizzying and almost impossible to keep up with. The second thing that is amazing to me is how quickly we disperse and integrate new discoveries into the collective memory of our society. Flying is no longer a novelty; it is almost as common as going out and getting in a car. Large reflecting telescopes capable of making scientific measurements are in the hands of ordinary citizens like you and me, gathering starlight every night in backyards around the world. Most people know that the Milky Way is not the only galaxy in the Cosmos, and that radioactive materials should not be carried around in your pocket.

It is a testament to our ability to collect and disperse knowledge to all the far-flung corners of our planet and civilization. In a world faced by daunting challenges, in a society in a tumultuous struggle to rise above its own darker tendencies, it is a great encouragement to me that the fruits of our knowledge and intellect are so readily shared and accessible. When the challenges facing the world seem to me too daunting to overcome, I often retreat to listen to Carl’s sonorous and poetic view of our history and destiny (perhaps most remarkably captured in his musings on the Pale Blue Dot). He was well aware of the problems we faced, but always seems to me to promote a never-ending optimism that we have the power to save ourselves, through the gentle and courageous application of intellect tempered with compassion.  It seems today to be a good message.

Happy Birthday, Carl.

And Go Cubs, Go.


Stories from a Race Called Humans

by Shane L. Larson

On a forgotten autumn day long ago, I sat amidst hundreds of strangers in the far-away ballroom of a convention center in Oregon. I was younger than I am now, younger than most of those who were sitting around me. Yet somehow, I had been chosen. I had been waiting. I had rolled that singular moment of time around in my head, over and over again. Out of all the hundreds of hands in the air, mine was chosen, and I stood to ask a question.

My palms were sweaty, my heart raced. I took the microphone, and thankfully didn’t drop it. In those days, I had yet to ever speak in front of more than a small group of my friends in class, but here I was, and damnit I was going to ask my question!  Five hundred pairs of eyes stared right at me, and from the stage, the cool gaze of our guest, encouraging and expectant. I’m sure I squeaked; I must have squeaked. But out came my question: “What is the responsibility of science fiction to bring plausible visions of the future to us?”

The person I had directed my inquiry to was William Shatner, who had been regaling the crowd of Trekkers with tales of life in the Big Chair, and answering questions about how to properly act out a Star Trek Fight Scene, whether he really thought Kirk should have let the Gorn survive, and whether Spock ever just burst out laughing on set.  And then I stood up to ask my question.

What is the responsibility of science fiction to bring plausible visions of the future to us?

For all the ribbing that Shatner takes for being Shatner, I think he responded in a way that might surprise many people. He smiled, he didn’t laugh. He looked me straight in the eye and told me this: “Science fiction is like all art — it is a medium for telling stories about our humanity. Visions of the future are just stories about us.”  It was a brilliant and thoughtful answer, and I’ve always remembered it.

Now, many years later, I practice science.  I still watch a lot of Star Trek, and I absorb a lot of science fiction, and every time I reach the end of a novel or movie, I know Bill was right — the best stories are the ones that use the future as a backdrop to tell human stories.


Larry Yando as the titular character in the Chicago Shakespeare Theatre’s 2014 production of King Lear.

That’s a very interesting thought when I drape it across the tapestry of art, literature, and theatre. All forms of art are explorations of what it means to be human, attempts to understand on a very deep level who we are. Just this past weekend, sitting in the darkened theatre of the Chicago Shakespeare Theatre, I was bludgeoned by that simple fact once again watching King Lear. Though the story is set long ago, and though the language is not altogether our own, we sat there enraptured. The tale is full of intrigue and betrayal, but at the core is the King. Watching the play reflected in the dark pools of shadowed eyes, you could see an audience tearfully and painfully aware that the tragedy unfolding from the King’s descent into madness was an all too relevant tale for those of us who have suffered the loss of elderly friends and relatives to the ravages of age.  Our humanity was laid out, naked and bare on the stage, in a tale written more than 400 years ago. The core message is as relevant and pertinent today as it was when the Bard penned it those long centuries ago.

The exploration of the nature of the human spirit has long been the purview of all forms of art, especially performance. The whole point in acting out stories is to tell stories about people.  Even when the characters aren’t people, they still talk and act like people, anthropomorphized by their actions, their thoughts, and their words in the tapestry of story on the stage or screen.  And so I suppose it should not have surprised me that Shatner did not think of tales from the far future any differently — the stories are still stories about us. They still are stories about our triumphs, our tragedies, our frailties, and our fallacies.

But that younger version of myself carried a particular conceit — I wasn’t clear about it then, but I still harbor it today: I fundamentally believe that art and science have exactly the same purpose — to discover the stories of who we are, and what our place in the Cosmos is. The truth of this is hidden in every science book and every textbook you have ever picked up and thumbed through.  How?  Very seldom is science explained without the context, the wrapper, of the human story around it.

Newton witnessed the falling of an apple when visiting his mother’s farm, inspiring him to think about gravity. It is almost certainly apocryphal that it hit him on the head! But art gives the story a certain reality!

When we are first taught about the Universal Law of Gravitation, very seldom are we simply told the equation that relates mass and distance to gravitational force. Instead, we cast our minds back to a late summer day in the 17th Century. On a warm evening after dinner, Isaac Newton was sitting in his mother’s garden, on her farm in Lincolnshire, and was witness to an ordinary event: an apple falling to the ground. A simple, ordinary event, part of a tree’s ever-repeating cycle of reproduction. But witnessing the event sparked a thought in Newton’s mind that ultimately blossomed into the first modern Law of Nature. The tale inspires a deep sense of awe in us. How many everyday events have we witnessed, but never taken the time to heed? How many secrets of Nature have passed us by, because we never connected the dots the Cosmos so patiently lays out before us?

Marie Curie in her laboratory.

When we first learn about the discovery of radioactivity, very seldom are we only told the mass of polonium and the half-life of uranium.  Instead, we relive the discovery of radioactive decay alongside Marie Curie, who, unaware of the dangers of radiation, handled samples with her bare hands and carried test tubes full of the stuff around in her pockets. We know that she developed the first mobile x-ray units, used in World War I, a brilliant realization of mobile medical technology at the dawn of our modern age. But we also know that Curie perished from aplastic anemia, brought on by radiation exposure. Today, her notebooks and her belongings are still radioactive and unsafe to be around for long periods of time. Curie’s death is a tragic tale of how the road to discovery is fraught with unknown dangers. While we mourn her loss, we also celebrate the wonder that our species has such brilliant minds as Marie Skłodowska-Curie, the only person ever to win TWO Nobel Prizes in different sciences (Chemistry and Physics).

Alexander Fleming in his lab.

When we learn about antibiotics, seldom do we begin in the lab with petri dishes full of agar. Instead, we are taught the value of serendipity through the tale of Alexander Fleming. In late September of 1928 he returned to the laboratory to find that he had accidentally left a bacterial culture plate uncovered and it had developed a mold growth. You can imagine a visceral emotional reaction — anger! Another days-long experiment ruined! By sheer carelessness! It happens to all of us every day when we burn a carefully prepared dinner, or break a favorite coffee mug, or accidentally drop a smartphone down an elevator shaft. But through the haze of aggravation, Fleming noticed something subtle and peculiar — there were no bacterial growths in the small halo around the mold. The mold, known as Penicillium rubens, could stop bacteria in their tracks. That single moment of clarity launched the development of antibiotics, so crucial in modern medical care. What world would we inhabit today, if Fleming had thrown that petri dish away in disgust, without a second glance? Surely a tragedy of world-girdling proportions.

All of these stories illustrate a subtle but singular truth about our species: we are different from all the other lifeforms on our planet.  Not in sciencey ways — we have the same biochemical machinery as sunflowers, opossums and earthworms — but in less tangible abstract ways.  What separates us from all the other plants and animals is the way we respond to the neurological signals from our brains. Our brains are wired to do two interesting things: they imagine and they create. The truth is we don’t fully understand how our brains do these things, or why there is an apparent biological imperative to do either. But the result of those combined traits is an insatiable curiosity to know and understand ourselves and the world around us, and an uncontrollable urge to express what we discover.

Sometimes those expressions burst out of us in moments of creation that lead to lightbulbs, intermittent windshield wipers, kidney dialysis machines, and iPads. Sometimes those same expressions burst out of us in moments of creation that lead to Jan van Eyck’s Arnolfini Portrait, or Auguste Rodin’s The Kiss, or Steve Martin’s “Picasso at the Lapin Agile,” or Ridley Scott’s desolate future in “Blade Runner.”

(Top L) Jan van Eyck’s Arnolfini Portrait; (Top R) Rodin’s The Kiss; (Bottom) The urban dystopia of the future in Ridley Scott’s Blade Runner.

Art is like science. Imagination expressed through long hours of practice, many instances of trial and error, and moments of elation that punctuate the long drudgery of trying to create something new.  Science is like art. Trying to understand the world by constantly bringing some new creative approach to the lab bench in an attempt to do something no one else has ever done before.

Both science and art are acts of creation with one express goal: to tell our stories. Both require deep reservoirs of creativity. Both require vast amounts of imagination. Both require great risks to be taken. But in the end, the scientist/artist creates something new that changes who we are and how we fit into the world. And wrapped all around them are all-together human tales of the struggles encountered along the road to discovery.

It is not entirely the way we are taught to think about scientists and artists. Isaac Asimov famously noted this in his 1983 book The Roving Mind: “How often people speak of art and science as though they were two entirely different things, with no interconnection. An artist is emotional, they think, and uses only his intuition; he sees all at once and has no need of reason. A scientist is cold, they think, and uses only his reason; he argues carefully step by step, and needs no imagination. That is all wrong. The true artist is quite rational as well as imaginative and knows what he is doing; if he does not, his art suffers. The true scientist is quite imaginative as well as rational, and sometimes leaps to solutions where reason can follow only slowly; if he does not, his science suffers.” An interesting thought to ruminate on the next time you are preparing DNA samples or soldering stained glass mosaics.

I have to go now. The crew of the Enterprise have some moments of humanity to show me. See you in an hour.

In Living Color

by Shane L. Larson

I don’t often talk to people for great lengths of time on airplanes. I’m kind of shy around people I don’t know.  But on a recent long flight home from the East Coast, I found myself sitting next to a wonderful woman, 75 years young, and we talked for the entire 5-hour flight.  Stefi was a gold- and silversmith from Vermont, and a German immigrant.  Our conversation ranged from the art of jewelry making, to winters in Vermont, to her grandchildren whom she was going to visit in Utah.

But at some point, the conversation strayed to her childhood in Berlin, where she lived as a young girl during the closing days of World War II.  There, sitting right next to me on the plane, was a living, breathing soul who had lived through the heart of World War II.  Not a soldier, not a support person who worked the fronts or factory lines, but a person caught in the middle of the war itself. From vivid memory, she recounted tales of what she saw, living huddled in a basement with her mother and sisters as the end of the War approached.  She was there as the Red Army advanced toward the Battle of Berlin, and saw the Russian occupation of the city.  In her mind’s eye, she could see it all in living color again.  But my vision was only pale black and white; the only images of the War that anyone in my generation has ever seen are stark black and white images.

In the days after the flight, long after Stefi and I had gone back to our respective lives, I was thinking about the differences between memory and historical record. In physics, we have similar historical records of our distant past, also preserved in stark, black and white images.  One of the most famous images in my field is of the Fifth Solvay Conference on Electrons and Photons, held in the city of Brussels in October of 1927.  The reason this particular conference is so well known is the iconic black and white photograph captured of the participants.

The participants in the 1927 Solvay Conference on Electrons and Photons, in Brussels.

Scanning through the photograph, or running your finger down a list of names, one encounters names that are completely synonymous with the development of modern physics. Of the 29 participants, 17 went on to win Nobel Prizes; included among them is Marie Curie, the only person who has ever won two Nobel Prizes in different scientific disciplines!  This single image captured almost all of the architects of modern physics.  These are the minds that seeded the genesis of our modern technological world.  The conference itself has passed into the folklore of our civilization, as this was the place that Einstein expressed his famous utterance, “God does not play dice!”  Niels Bohr famously replied, “Einstein, stop telling God what to do!”

For many of us in this game called physics, the people in this image are icons, idols, and inspirations.  We know their names, we know their stories, and we can pick them out of pictures as easily as we can pick friends out of modern pictures.  But always in the dull and muted grain of black and white photographs. I’ve never met a person (that I know of) that met Einstein, or Bohr, or Heisenberg, or Curie; no one to recount for me the vivid colors of these great minds in their living flesh.

It is an interesting fact that all of these historical images are black and white. Color photography was first demonstrated some 66 years earlier at the suggestion of another great mind in physics, James Clerk Maxwell.

The first color photograph ever taken, by Thomas Sutton in 1861 at the behest of James Clerk Maxwell. The image is of a tartan ribbon, captured by a three-color projection technique.

The process worked by taking three black and white pictures through colored filters — red, green and blue — then reprojecting the black and white pictures through the filters again to produce a colored image.  This is more or less the same principle that is used today to generate color on TV screens and computer monitors.  If you look very closely at the screen you are reading this on, you will see that the pixels are all combinations of red, green and blue.

An example of RGB image construction. The three black and white images on the left were taken through Red (top), Blue (middle) and Green (bottom) filters. When recombined through those colored filters, they produce a full color image (right).
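The recombination principle is simple enough to mimic in a few lines of code. Here is a toy sketch (the brightness grids and the helper function are invented for illustration, not drawn from any historical procedure): three monochrome "exposures," one per filter, merge into a single grid of color pixels, just as the red, green, and blue subpixels of a modern screen combine into one color.

```python
# Each "exposure" is a tiny 2x2 grid of brightness values,
# from 0.0 (black) to 1.0 (white), as seen through one filter.
red_exposure   = [[0.9, 0.8], [0.7, 0.9]]
green_exposure = [[0.2, 0.3], [0.2, 0.1]]
blue_exposure  = [[0.4, 0.4], [0.5, 0.3]]

def combine_rgb(r, g, b):
    """Merge three grayscale grids into one grid of (R, G, B) pixels."""
    return [
        [(r[i][j], g[i][j], b[i][j]) for j in range(len(r[0]))]
        for i in range(len(r))
    ]

color_image = combine_rgb(red_exposure, green_exposure, blue_exposure)
print(color_image[0][0])  # (0.9, 0.2, 0.4): a reddish top-left pixel
```

The same decomposition runs in reverse when a camera or scanner records a color image: each pixel is stored as exactly this kind of red, green, blue triple.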

The famous tartan picture was generated for a lecture on color that Maxwell was giving.  Maxwell’s interest in light and color derived from the reason he is famous  — Maxwell was the first person to understand that several different physical phenomena in electricity and magnetism are linked together.  His unification of the two is called electromagnetism.  One consequence of that unification was the discovery that the agent of electromagnetic phenomena is light.  Today, the four equations describing electromagnetism are called the Maxwell Equations.

As with all things in science, the elation of discovery is always accompanied by new mysteries and new questions. One of the central realizations of electromagnetism was that light is a wave, and that the properties of the wave (the wavelength, or the frequency) define what our eyes perceive as color.

The electromagnetic spectrum — light in all its varieties, illustrated in the many different ways that scientists describe the properties of a specific kind (or “color”) of light. What your eye can see is visible light, the small rainbow band in the middle.

The realization that the color of light could be defined by a measurable property was a tremendous leap forward in human understanding of the world around us, and it naturally led to the idea and discovery that there are “colors” of light that our eyes cannot see!  Those kinds of light often have familiar names — radio light, microwave light, infrared light, ultraviolet light, and x-ray light.  But knowing of the existence of a thing (“Look! Infrared light!”) and being able to measure its properties (“This radio wave has a wavelength of 21 centimeters.”) are not the same thing as knowing why something exists.  How and why Nature made all the different kinds of light were mysteries that would not be solved until the early Twentieth Century, by many of the great minds who attended the Solvay Conference.
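Measuring those properties is a one-line calculation, because wavelength and frequency are tied together by the speed of light: frequency equals c divided by wavelength. A quick sketch, using the 21-centimeter radio wave mentioned above and a green wavelength chosen for illustration:

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second, in vacuum

def frequency_of(wavelength_m):
    """Frequency (in hertz) of light with the given wavelength (in meters)."""
    return SPEED_OF_LIGHT / wavelength_m

radio_21cm = frequency_of(0.21)     # the famous 21 cm radio wavelength
green_550nm = frequency_of(550e-9)  # green light, mid-visible band

print(f"21 cm radio light:  {radio_21cm / 1e9:.2f} GHz")    # about 1.43 GHz
print(f"550 nm green light: {green_550nm / 1e12:.0f} THz")  # about 545 THz
```

The numbers span five orders of magnitude, yet both are the very same phenomenon: electromagnetic waves, differing only in wavelength.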

Einstein famously discovered the idea that there is a Universal speed limit — nothing can travel faster than the speed of light in the vacuum of space.  Max Planck postulated that on microscopic levels, energy is delivered in discrete packets called quanta — in the case of light, those quanta are called photons.  Niels Bohr used the Planck hypothesis to explain how atoms generate discrete spectral lines — a chromatic fingerprint that uniquely identifies each of the individual atoms on the periodic table. Marie Curie investigated the nature of x-ray emission from uranium, and was the first to postulate that the x-rays came from the atoms themselves — this was a fundamental insight that went against the long held assumption that atoms were indivisible, leading to the first modern understandings of radioactivity.  Louis de Broglie came to the realization that on the scales of fundamental particles, objects can behave as both waves and particles — this “duality” of character highlights the strangeness of the quantum world and is far outside our normal everyday experiences on the scales of waffles, Volkswagens and house sparrows.  Erwin Schroedinger pushed the quantum hypothesis on very general grounds, developing a mathematical equation (which now bears his name) that gives us predictive power about the outcome of experiments on the scales of the atomic world — his famous gedanken experiment with a cat in a box with a vial of cyanide captures the mysterious differences in “knowledge” between the macroscopic and microscopic worlds.  And so on.
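Planck's quantum hypothesis lends itself to a back-of-the-envelope check. The sketch below (the 550-nanometer wavelength is an illustrative choice, not a value from the text) combines Planck's relation, E = h times f, with the wave relation, f = c divided by wavelength, to find the energy carried by a single photon of visible light:

```python
PLANCK_CONSTANT = 6.62607015e-34  # joule-seconds (exact, by definition)
SPEED_OF_LIGHT = 299_792_458.0    # meters per second

def photon_energy(wavelength_m):
    """Energy in joules of one photon with the given wavelength (meters)."""
    frequency = SPEED_OF_LIGHT / wavelength_m  # f = c / wavelength
    return PLANCK_CONSTANT * frequency         # E = h * f

# A single quantum of green light carries a truly minuscule energy:
green_joules = photon_energy(550e-9)
green_ev = green_joules / 1.602176634e-19  # convert joules to electron-volts
print(f"{green_joules:.2e} J, or about {green_ev:.2f} eV per photon")
```

That the energy comes in such tiny, discrete packets is exactly why the granularity of light went unnoticed for so long: everyday sources emit unimaginably many photons at once.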

The visible light fingerprints (“atomic spectra”) of all the known chemical elements. Each atom emits and absorbs these unique sets of colors, making it possible to identify them.

It is fashionable in today’s political climate to question the usefulness of scientific investigations, and to ask what benefit (economic or otherwise) that basic research investment begets society. Looking at the picture of the Solvay participants and considering their contributions to the knowledge of civilization, one very rapidly comes to the realization that their investigations changed the world; in a way, their contemplations made the world we know today.  The discovery of radiation led directly to radiological medicine, radiation sterilization, nuclear power, and nuclear weapons.  The behaviour of atoms and their interactions with one another to generate light leads to lasers, LED flashlights, cool running lights under your car or skateboard, and the pixels in the computer screen you are reading from at this very moment. The quantum mechanical behaviour of the microscopic world, and our ability to understand that behaviour, leads directly to integrated circuits and every modern electronic device you have ever seen.  That more than anything else should knock your sense of timescales off kilter; at the time quantum mechanics was invented, computers were mechanical devices, and no one had ever imagined building a “chip.”  The first integrated circuit wasn’t invented until 1958, when Jack Kilby at Texas Instruments built the first working example, 31 years after the Solvay Conference; the first computer using integrated circuits that you could own didn’t appear until the 1970s, and smartphones showed up in the early 2000s.  The economic powerhouses of Apple, Microsoft, Hewlett-Packard, Dell, and all the rest, are founded on basic research that was done in the 1920s and 1930s.

Which brings me back to where we started — pictures from those bygone days.  After the first tri-color image of Maxwell’s tartan, the development of color photography progressed slowly. The 1908 Nobel Prize in Physics was awarded for an early color emulsion for photography, but the first successful color film did not emerge until Kodak created their famous Kodachrome brand in 1935.  Even so, color photography was much more expensive than black and white photography, and was not widely adopted until the late 1950s.  As a result, our history is dominated by grainy, black and white images.

So it was a great surprise last week when the Solvay Conference picture passed by in a friend’s Facebook stream, in color!  Quite unexpectedly, it knocked my socks off. I spent a good long time just staring at it.  Never before had I known of the flash of blue in Marie Curie’s scarf, Einstein’s psychedelic tie, or Schroedinger’s red bow tie (is Pauli looking at that tie with envy?).  But more importantly, the people were in color, as plain as if they were sitting across the table from me. It’s a weird twist of psychology that that burst of color, soft skin tones of human flesh, suddenly made these icons all the more real to me.

Colorized version of the famous 1927 portrait of the Solvay Conference participants [colorized version by Sanna Dullaway].

No longer just names and grainy pictures from history books, but rather remarkable minds from our common scientific heritage, seen for the first time in living color by a generation of scientists long separated from them.