In the 1960s, Roger Sperry discovered an experimental model that could be used to study brain activity. He surgically divided the corpus callosum of cats, anticipating that such radical surgery would create a dramatic, observable neurologic deficit. The corpus callosum, connecting the two cortical hemispheres, is the largest single structure in the cat’s brain. To his surprise, the cats’ behavior after the surgery appeared normal. How could that be? Sperry wondered. How could relatively normal behavior follow the disconnection of the great cable linking one hemisphere to the other? And what was the implication for human brain function?
Initially, he hypothesized that perhaps cats were too distant from humans on evolution’s extensively branching bush. So he performed the same surgery on monkeys. The postsurgical monkeys pretty much resembled the disconnected cats in their observable behavior. Now Sperry was more intrigued than ever. He remained convinced that something must have happened to brain function as a result of his drastic cleaving.
Around the same time, two neurosurgeons in Los Angeles, Joseph Bogen and Philip Vogel, were mulling over the feasibility of performing on humans a procedure similar to the one that Sperry had performed on cats and monkeys. They speculated that commissurotomy (the technical name for the surgical procedure that severs the corpus callosum) could be an effective surgical method of treating intractable epilepsy.
An epileptic fit can be compared to an electrical storm occurring inside the head. Typically, it begins as a squall at one specific focus in the brain where the local conduction of electrical current is abnormal. At the outset of the fit, the electrical signal, through a process poorly understood, becomes greatly amplified in this focal area and, rapidly gathering force, spreads to neighboring normal regions, exciting them to fire erratic electrical impulses as well. Soon the brain is flashing like a pinball machine gone haywire, resulting in one of the most frightening paroxysms in the catalog of pathological conditions that can afflict a human.
Depending on where in the brain the abnormal locus resides, the body part directly under its control is the first to begin what the ancients called the Devil’s Dance. As if possessed, the epileptic rapidly loses consciousness amid wild flailing of an arm or a leg. As the brain storm moves across the brain’s surface, eventually all four limbs join in the involuntary jerky motions. An inhuman sound issues from the vocal tract. The affected person’s eyes roll back in their sockets until the pupils are no longer visible. While the eye muscles jerk the eyeballs in rapid movements called nystagmus, the eyelids flutter violently.
At the time that Bogen and Vogel were investigating possible surgical treatments for this condition, a variety of efficacious drugs had entered the market that could ameliorate or inhibit most epileptic seizures. Unfortunately, there remained a small subset of patients for whom the medications were ineffective. It was to this group of intractable epileptics that Bogen and Vogel contemplated offering their radical approach. In theory, commissurotomy had an attractive premise. Bogen reasoned that if the electrical discharge initiating the paroxysm could not leap from one cerebral hemisphere to the other by way of a surgically disrupted corpus callosum, then the fit would remain confined to a single hemisphere, leaving the other aware and alert enough to call for help. Further, a seizure confined to one hemisphere should be less severe and less protracted in its outward manifestations.
Of course, no one could claim with the slightest degree of confidence what the consequences of such a radical approach would be for the patient’s behavior, language, coordination, balance, personality, rationality, sexuality, or consciousness. Cats, monkeys, and humans are all mammals, but a deep divide separates the last mammal in this list from the other two. The risks were high.
Nevertheless, a small group of patients gave their informed consent to go under the knife, even though they were forewarned that the procedure was untested and highly dangerous. Their lives were so chaotic that they were willing to have their brains permanently and drastically altered on the chance that they might gain some semblance of control over the singular factor causing the disruption.
Bogen and Vogel performed the first successful commissurotomy in 1963. The patient survived, and, remarkably, there was little in his observable behavior or in his answers to posed questions that seemed different from his preoperative state. Friends and family confirmed that he walked, talked, and went about his business seemingly unaffected by the fact that his right hand quite literally no longer knew what his left hand was up to, and vice versa.
Thus emboldened, Bogen and Vogel repeated the procedure. For reasons unknown, the surgery not only reduced the severity of the patients’ epileptic attacks, as had been anticipated, but also produced an unexplained decrease in their frequency. For a limited time, physicians recommended the split-brain operation as a reasonable approach to drug-resistant intractable epilepsy, and over a thousand patients had their brains split in this fashion. Commissurotomy fell out of favor in the 1970s, when a new generation of drugs ultimately proved more successful in controlling epilepsy.
Sperry followed the initial success of Bogen and Vogel with great interest, recognizing that their accumulating pool of patients represented a veritable treasure chest of information concerning how the human brain functioned. Along with Michael Gazzaniga, David Galin, and many other researchers, Sperry designed a series of exceedingly clever experiments that allowed them to examine each cortical hemisphere in relative isolation. Never before had neuroscientists possessed an experimental model that could assess which functions of the brain resided predominantly in which hemisphere.
Soon, various centers around the world began to accumulate their own groups of split-brain patients.*
* In 1981, Sperry was awarded the Nobel Prize in Physiology or Medicine for his groundbreaking work.
The results of all this activity produced a body of knowledge that was truly revolutionary, despite the serious objections some neuroscientists raised. A number of skeptics pointed out, correctly, that there were flaws in the experimental model. Epileptics have abnormal brains, so extrapolating the results obtained from the split-brain group to normal populations was like comparing apples and oranges. Additionally, they pointed out that the surgery itself radically reconfigured the brain, and that it was scientifically imprecise to superimpose research findings from these altered brains onto the brains of people whose hemispheres had never been split.
Even allowing for these objections, however, the sheer volume of information coming from split-brain studies provided a fascinating window into the workings of the human brain. Newer methods of imaging the brains of normal subjects have substantially corroborated the more important claims made by the earlier split-brain researchers. There is much still to be learned, but one fact remains beyond doubt: Natural Selection designed each cortical hemisphere in the human brain to process dramatically different functions.
The brain is an immensely complicated organ. Words such as never, always, and for sure have little place in a discussion concerning it. So the reader will forgive me when, for the sake of clarity, I characterize some functions that are still ambiguous in black-and-white terms.
Progressing in parallel with the understanding of brain function wrought by split-brain studies were the equally important advances made in the field of neurobiochemistry. Scientists first began to identify and then to understand the function of a wide variety of molecular messengers called neurotransmitters. Some were blood-borne hormones acting over long distances; others were local agents never straying far from home. For example, hormones such as testosterone, insulin, estrogen, and thyroxine (from the thyroid) can profoundly affect emotions, mood, and mental functioning, even though they are secreted by organs distant from the brain. Local agents such as dopamine, epinephrine, and serotonin can, in tiny doses, initiate seismic shifts in a person’s emotional tenor and intellectual clarity.
Neurotransmitters work by changing the state of neurons, the defining cells of the nervous system. Neurons differ from other cells in having two distinctive extensions leading away in opposite directions from the cell body. The shorter of the two resembles an extensively branching tree that ends in a tangle of twiglike projections called dendrites. These filaments are the receiving end of a nerve cell. Extending away from the cell body in the direction opposite the dendrites is a solitary, longer trunk called an axon.
In humans, some axons achieve lengths of over three feet. For example, the axons of the nerves that innervate the toes must travel from their origin in the lower spinal cord down the length of the thigh and leg to reach their final destination. One other notable nerve contains axons of extraordinary length. Bundled in the vagus nerve, which originates in the brain, axons journey down through the neck, chest, and abdomen, finally terminating deep in the pelvis at the level of the anus. All along the way, shorter vagal axons peel off on their paths to various internal organs, such as the heart, lungs, stomach, and colon. Anatomists named this nerve with the Latin word that means “wanderer.” Vagabond and vagrant are other English words derived from the same Latin root.
Dendrites, like sensitive weather vanes, react to changes in local conditions, such as stimuli arriving from the axons of other nerve cells. Neurotransmitters impinging directly along their length can also activate them. Once a signal travels from the end of a dendrite to the cell’s main body, a mighty assessment must be made. Like the ones and zeros winking on and off in a computer’s motherboard, the cell decides either to fire or not to fire its singular axon, depending upon the intensity of the impulses tickling the ends of its dendrites.
Should the determination favor activating the single axon, a chemical chain reaction begins at its root, resulting in an electrical current progressing along the axon’s length. Unlike the current in a copper wire, which moves at nearly the speed of light, an electrochemical nerve impulse travels in a manner more resembling digestive peristalsis; think of a snake digesting a mouse. A typical motor axon conducts impulses at about 100 yards per second; light, by comparison, travels 186,000 miles per second.
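To put the book’s two figures on a common scale, here is a quick, illustrative conversion (mine, not the author’s), using 1,760 yards to the mile:

\[
\frac{186{,}000\ \text{miles/s} \times 1{,}760\ \text{yards/mile}}{100\ \text{yards/s}} \approx 3.3 \times 10^{6}
\]

In other words, light outpaces a typical motor-nerve impulse by a factor of roughly three million.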
For many years, neuroscientists believed that a nerve fired in gradations. According to this early thinking, a powerful stimulus would produce a strong discharge, and a feeble stimulus a weak one. We have since learned that a neuron either fires or it doesn’t. It is an all-or-none phenomenon.
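To make the computer analogy above concrete, here is a minimal sketch, in Python, of a McCulloch-Pitts-style threshold unit. It is an illustration of the all-or-none principle, not anything drawn from the split-brain literature, and the names and numbers in it are hypothetical:

```python
# A minimal, illustrative model of all-or-none firing: a McCulloch-Pitts-
# style threshold unit. Weighted inputs stand in for excitatory (positive)
# and inhibitory (negative) synapses; the binary output mimics the axon.

def neuron_fires(impulses, weights, threshold=1.0):
    """Return True if the weighted sum of inputs reaches the threshold.

    The output is binary: the "axon" fires or it does not, no matter
    how far above threshold the summed input happens to be.
    """
    total = sum(i * w for i, w in zip(impulses, weights))
    return total >= threshold

# Three excitatory inputs together drive the cell past threshold ...
print(neuron_fires([1, 1, 1], [0.5, 0.4, 0.3]))           # True  (1.2 >= 1.0)

# ... but adding a single inhibitory input (negative weight) silences it.
print(neuron_fires([1, 1, 1, 1], [0.5, 0.4, 0.3, -0.6]))  # False (0.6 < 1.0)
```

Note that the unit’s gradated inputs and binary output capture exactly the distinction the early neuroscientists got wrong: the stimulus can vary in strength, but the discharge cannot.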
Despite the close proximity of nerve endings, axons and dendrites never actually touch. Between them lies a small gap called a synapse. It is here that neurotransmitters principally act. Some neurotransmitters behave like barricades, inhibiting the transmission of impulses across a synapse; others facilitate or amplify the signal’s passage, increasing the likelihood that the nerve will fire. Although synapses are empty spaces in the material of nervous tissue, they nevertheless play a critical role in nerve function. One of the most puzzling aspects of the hunt for neurotransmitters has been the realization that the majority of these active chemicals bear a molecular structure indistinguishable from substances found in plants.
Another significant development, and the one that really opened the field, was the discovery that brain function could be observed in an alert subject by tagging one of the brain’s various fuels with a radioactive isotope. After the tracer was injected into the subject’s arm, researchers could watch on a scanning device which areas of the brain “lit up” when the subject was asked to perform a task. Brain scans have provided an invaluable window into the black box. The number and types of brain scans have proliferated at an astounding rate, with each one monitoring a slightly different aspect of brain activity in real time.
The newest and most exciting advance in studying the brain has emerged from the field of genomic research. Since the DNA code was broken a little over half a century ago, the pace of research has accelerated exponentially. Recently, research scientists have begun to identify specific genes that control aspects of human behavior. The discovery that the gene FOXP2 plays a critical role in language not only provides scientists with a dramatic new tool for understanding this key human attribute, but also opens a window into the past. Knowing when the gene appeared, and in what species, gives researchers new insight into the age-old question of when human speech began. The field is in its infancy, but mapping the genes that determine different kinds of mental functioning has opened an entirely new branch of neuroresearch.
The development of the computer and then the Internet had a profound impact on human culture. An entirely new set of metaphors and conceptual models concerning how information is transmitted, interpreted, processed, and stored came into being. In one sense, both the brain and computers are information-processing devices, and the intense resources devoted to the progress of computer networks provided cognitive neuroscientists with new models and metaphors with which to think about consciousness. Neurons and wires are not very different from each other; neither are transistors and synapses. The intense effort to develop artificial intelligence has increased our understanding of neural networks because at its core, AI is but an attempt to improve artificially what the brain already does effortlessly.
All of these models provide us with insight into the workings of our subject: Leonardo’s brain.
Chapter 5
Leonardo/Renaissance Art
The dragonfly flies with four wings, and when the anterior are raised, the posterior are dropped. However, each pair of wings must be capable individually of supporting the entire weight of the animal.
—Leonardo da Vinci
This book is not concerned with Leonardo as an inventor, but his studies of flight have a bearing on his art because they prove the extraordinary quickness of his eye. There is no doubt that the nerves of his eye and brain, like those of certain famous athletes, were really supernormal, and in consequence he was able to draw and describe movements of a bird which were not seen again until the invention of the slow-motion cinema . . .
—Kenneth Clark
Ted Williams, the legendary baseball player renowned for his extraordinary ability to hit a ball, used to claim that he could see the seams on a pitched baseball.
—Bülent Atalay
Leonardo’s works were so breathtakingly beautiful and show-stoppingly original that during his life, many artists, art lovers, and the just plain curious journeyed considerable distances to gaze in wonder at his paintings and sculpture. A few were lucky enough to have Leonardo give them a glimpse of the drawings in his notebooks. His influence on Renaissance artists was profound, and his innovations subsequently appeared in many of his contemporaries’ works.
Leonardo-mania continued long after his death. Artists and art lovers from all over Europe traveled to Italy and France to study his masterpieces. From diaries, letters, and the very visible record these “pilgrims” left behind, both in their unabashed reproductions of Leonardo’s paintings and in their own art, art historians have been able to trace Leonardo’s impact on their work. To name but a few, Raphael, Peter Paul Rubens, and Albrecht Dürer enthusiastically adopted many of Leonardo’s novel techniques.
The term avant-garde, a French military expression, refers to an army’s most forward advancing unit. Exposed to the greatest risks, it leads the charge. Art historians have co-opted the idea to describe artists who break away from their era’s conventions and initiate a new style of art. Had such a term existed in the Renaissance, Leonardo’s fellow artists would surely have applied it to him. Nevertheless, few art historians have cataloged in its entirety the extent to which his innovations foreshadowed the signature styles of modern art that so discombobulated the art world of the late nineteenth century and the entire twentieth.
When looking for precedents for the revolutions wrought by Édouard Manet, Claude Monet, Paul Cézanne, Eadweard Muybridge, Pablo Picasso, Georges Braque, Marcel Duchamp, Giorgio de Chirico, Salvador Dalí, René Magritte, Max Ernst, Jackson Pollock, Robert Rauschenberg, Henry Moore, and many others, art historians rarely extend their purview beyond several earlier generations of painters. Moreover, with the exception of Max Ernst, none of these artists mentions in his written notes, interviews, or biographies that the breakthroughs for which he became famous were influenced by any Renaissance painter.
Yet, as the following parallels between Leonardo’s works and modern art will demonstrate, no other artist in history incorporated into his images so many concepts that would remain dormant for hundreds of years, only to reappear in the context of what we have come to characterize as Modernism. Like his scientific discoveries, Leonardo’s art exhibited an uncanny prescience that as yet lacks any compelling explanation. Before describing the aspects of Leonardo’s art that foretold the advent of modern art, it is important to consider the innovations that immediately affected artists in his own time.
At the age of twenty-one, Leonardo paused on a hill overlooking a valley near his hometown of Vinci. Moved by the sheer beauty of the scene, he used pen, ink, and some watercolor shading to sketch quickly, in that fleeting moment, all that his restless eye surveyed (Val d’Arno) [Fig. 2].