
Fields and waves in communication electronics


Maxwell's equations

Maxwell's equations are the theoretical foundation of communication electronics. Maxwell unified electricity and magnetism in a single framework: four coupled equations that describe all electromagnetic phenomena. They are simple yet elegant. In communication engineering, electromagnetic fields carry information from one place to another. Radio, TV, and even our cell phones work on this principle.

[Figure: Maxwell's equations]
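In differential form (SI units), the four equations read:

```latex
\begin{aligned}
\nabla \cdot \mathbf{E} &= \frac{\rho}{\varepsilon_0} && \text{(Gauss's law)} \\
\nabla \cdot \mathbf{B} &= 0 && \text{(no magnetic monopoles)} \\
\nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t} && \text{(Faraday's law)} \\
\nabla \times \mathbf{B} &= \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} && \text{(Ampère–Maxwell law)}
\end{aligned}
```

Here E and B are the electric and magnetic fields, ρ is the charge density, J the current density, and ε₀, μ₀ the permittivity and permeability of free space.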

Maxwell's greatest prediction was that light is a form of electromagnetic wave. Both static and dynamic electricity can be explained with Maxwell's equations. All that is involved is electrons in motion, and the theory of charges in motion, classical electrodynamics, is built on Maxwell's equations. It is the interpretation of Maxwell's equations that lets us say that an accelerated charge, such as an electron, emits electromagnetic waves. A fuller account of Maxwell's equations is given on this page.
But what is an electron? We know it is a tiny particle with properties such as charge. That was the classical view. To explain the behaviour of the electron inside the atom completely, we need to resort to quantum mechanics. Because of Heisenberg's uncertainty relation, we cannot precisely pin down the position of the electron, so Bohr's orbits do not exist in the usual sense: there is no physical orbit of the electron for us to find. At best we can predict the probability of finding the electron at a certain position at a certain time. There are only electron clouds around the nucleus, and these describe the structure of the atom at the smallest scales.

Introduction


Without calculus, we wouldn’t have cell phones, computers, or microwave ovens. We wouldn’t have radio. Or television. Or ultrasound for expectant mothers, or GPS for lost travelers. We wouldn’t have split the atom, unraveled the human genome, or put astronauts on the moon. We might not even have the Declaration of Independence.

It’s a curiosity of history that the world was changed forever by an arcane branch of mathematics. How could it be that a theory originally about shapes ultimately reshaped civilization? The essence of the answer lies in a quip that the physicist Richard Feynman made to the novelist Herman Wouk when they were discussing the Manhattan Project. Wouk was doing research for a big novel he hoped to write about World War II, and he went to Caltech to interview physicists who had worked on the bomb, one of whom was Feynman. After the interview, as they were parting, Feynman asked Wouk if he knew calculus. No, Wouk admitted, he didn’t. “You had better learn it,” said Feynman. “It’s the language God talks.”

For reasons nobody understands, the universe is deeply mathematical. Maybe God made it that way. Or maybe it’s the only way a universe with us in it could be, because nonmathematical universes can’t harbor life intelligent enough to ask the question. In any case, it’s a mysterious and marvelous fact that our universe obeys laws of nature that always turn out to be expressible in the language of calculus as sentences called differential equations. Such equations describe the difference between something right now and the same thing an instant later or between something right here and the same thing infinitesimally close by. The details differ depending on what part of nature we’re talking about, but the structure of the laws is always the same. To put this awesome assertion another way, there seems to be something like a code to the universe, an operating system that animates everything from moment to moment and place to place.
Calculus taps into this order and expresses it. Isaac Newton was the first to glimpse this secret of the universe. He found that the orbits of the planets, the rhythm of the tides, and the trajectories of cannonballs could all be described, explained, and predicted by a small set of differential equations. Today we call them Newton’s laws of motion and gravity. Ever since Newton, we have found that the same pattern holds whenever we uncover a new part of the universe. From the old elements of earth, air, fire, and water to the latest in electrons, quarks, black holes, and superstrings, every inanimate thing in the universe bends to the rule of differential equations. I bet this is what Feynman meant when he said that calculus is the language God talks. If anything deserves to be called the secret of the universe, calculus is it. By inadvertently discovering this strange language, first in a corner of geometry and later in the code of the universe, then by learning to speak it fluently and decipher its idioms and nuances, and finally by harnessing its forecasting powers, humans have used calculus to remake the world. That’s the central argument of this book. If it’s right, it means the answer to the ultimate question of life, the universe, and everything is not 42, with apologies to fans of Douglas Adams and The Hitchhiker’s Guide to the Galaxy. But Deep Thought was on the right track: the secret of the universe is indeed mathematical.

Calculus for Everyone

Feynman’s quip about God’s language raises many profound questions. What is calculus? How did humans figure out that God speaks it (or, if you prefer, that the universe runs on it)? What are differential equations and what have they done for the world, not just in Newton’s time but in our own? Finally, how can any of these stories and ideas be conveyed enjoyably and intelligibly to readers of goodwill like Herman Wouk, a very thoughtful, curious, knowledgeable person with little background in advanced math?
In a coda to the story of his encounter with Feynman, Wouk wrote that he didn’t get around to even trying to learn calculus for fourteen years. His big novel ballooned into two big novels—The Winds of War and War and Remembrance, each about a thousand pages. Once those were finally done, he tried to teach himself by reading books with titles like Calculus Made Easy—but no luck there. He poked around in a few textbooks, hoping, as he put it, “to come across one that might help a mathematical ignoramus like me, who had spent his college years in the humanities—i.e., literature and philosophy—in an adolescent quest for the meaning of existence, little knowing that calculus, which I had heard of as a difficult bore leading nowhere, was the language God talks.” After the textbooks proved impenetrable, he hired an Israeli math tutor, hoping to pick up a little calculus and improve his spoken Hebrew on the side, but both hopes ran aground. Finally, in desperation, he audited a high-school calculus class, but he fell too far behind and had to give up after a couple of months. The kids clapped for him on his way out. He said it was like sympathy applause for a pitiful showbiz act.

I’ve written Infinite Powers in an attempt to make the greatest ideas and stories of calculus accessible to everyone. It shouldn’t be necessary to endure what Herman Wouk did to learn about this landmark in human history. Calculus is one of humankind’s most inspiring collective achievements. It isn’t necessary to learn how to do calculus to appreciate it, just as it isn’t necessary to learn how to prepare fine cuisine to enjoy eating it. I’m going to try to explain everything we’ll need with the help of pictures, metaphors, and anecdotes. I’ll also walk us through some of the finest equations and proofs ever created, because how could we visit a gallery without seeing its masterpieces? As for Herman Wouk, he is 103 years old as of this writing.
I don’t know if he’s learned calculus yet, but if not, this one’s for you, Mr. Wouk.

The World According to Calculus

As should be obvious by now, I’ll be giving an applied mathematician’s take on the story and significance of calculus. A historian of mathematics would tell it differently. So would a pure mathematician. What fascinates me as an applied mathematician is the push and pull between the real world around us and the ideal world in our heads. Phenomena out there guide the mathematical questions we ask; conversely, the math we imagine sometimes foreshadows what actually happens out there in reality. When it does, the effect is uncanny. To be an applied mathematician is to be outward-looking and intellectually promiscuous. To those in my field, math is not a pristine, hermetically sealed world of theorems and proofs echoing back on themselves. We embrace all kinds of subjects: philosophy, politics, science, history, medicine, all of it. That’s the story I want to tell—the world according to calculus. This is a much broader view of calculus than usual. It encompasses the many cousins and spinoffs of calculus, both within mathematics and in the adjacent disciplines. Since this big-tent view is unconventional, I want to make sure it doesn’t cause any confusion. For example, when I said earlier that without calculus we wouldn’t have computers and cell phones and so on, I certainly didn’t mean to suggest that calculus produced all these wonders by itself. Far from it. Science and technology were essential partners—and arguably the stars of the show. My point is merely that calculus has also played a crucial role, albeit often a supporting one, in giving us the world we know today.

Take the story of wireless communication. It began with the discovery of the laws of electricity and magnetism by scientists like Michael Faraday and André-Marie Ampère.
Without their observations and tinkering, the crucial facts about magnets, electrical currents, and their invisible force fields would have remained unknown, and the possibility of wireless communication would never have been realized. So, obviously, experimental physics was indispensable here. But so was calculus. In the 1860s, a Scottish mathematical physicist named James Clerk Maxwell recast the experimental laws of electricity and magnetism into a symbolic form that could be fed into the maw of calculus. After some churning, the maw disgorged an equation that didn’t make sense. Apparently something was missing in the physics. Maxwell suspected that Ampère’s law was the culprit. He tried patching it up by including a new term in his equation—a hypothetical current that would resolve the contradiction—and then let calculus churn again. This time it spat out a sensible result, a simple, elegant wave equation much like the equation that describes the spread of ripples on a pond. Except Maxwell’s result was predicting a new kind of wave, with electric and magnetic fields dancing together in a pas de deux. A changing electric field would generate a changing magnetic field, which in turn would regenerate the electric field, and so on, each field bootstrapping the other forward, propagating together as a wave of traveling energy. And when Maxwell calculated the speed of this wave, he found—in what must have been one of the greatest Aha! moments in history—that it moved at the speed of light. So he used calculus not only to predict the existence of electromagnetic waves but also to solve an age-old mystery: What was the nature of light? Light, he realized, was an electromagnetic wave. Maxwell’s prediction of electromagnetic waves prompted an experiment by Heinrich Hertz in 1887 that proved their existence.
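In modern vector notation, that sensible result is the vacuum wave equation, whose wave speed is fixed entirely by the electric and magnetic constants:

```latex
\nabla^2 \mathbf{E} = \mu_0 \varepsilon_0 \,\frac{\partial^2 \mathbf{E}}{\partial t^2},
\qquad
v = \frac{1}{\sqrt{\mu_0 \varepsilon_0}} \approx 3 \times 10^{8}\ \mathrm{m/s} = c
```

The identical equation holds for the magnetic field B; the numerical agreement of v with the measured speed of light was the Aha! moment described above.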
A decade later, Nikola Tesla built the first radio communication system, and five years after that, Guglielmo Marconi transmitted the first wireless messages across the Atlantic. Soon came television, cell phones, and all the rest. Clearly, calculus could not have done this alone. But equally clearly, none of it would have happened without calculus. Or, perhaps more accurately, it might have happened, but only much later, if at all.

Calculus Is More than a Language

The story of Maxwell illustrates a theme we’ll be seeing again and again. It’s often said that mathematics is the language of science. There’s a great deal of truth to that. In the case of electromagnetic waves, it was a key first step for Maxwell to translate the laws that had been discovered experimentally into equations phrased in the language of calculus. But the language analogy is incomplete. Calculus, like other forms of mathematics, is much more than a language; it’s also an incredibly powerful system of reasoning. It lets us transform one equation into another by performing various symbolic operations on them, operations subject to certain rules. Those rules are deeply rooted in logic, so even though it might seem like we’re just shuffling symbols around, we’re actually constructing long chains of logical inference. The symbol shuffling is useful shorthand, a convenient way to build arguments too intricate to hold in our heads. If we’re lucky and skillful enough—if we transform the equations in just the right way—we can get them to reveal their hidden implications. To a mathematician, the process feels almost palpable. It’s as if we’re manipulating the equations, massaging them, trying to relax them enough so that they’ll spill their secrets. We want them to open up and talk to us. Creativity is required, because it often isn’t clear which manipulations to perform.
In Maxwell’s case, there were countless ways to transform his equations, all of which would have been logically acceptable but only some of which would have been scientifically revealing. Given that he didn’t even know what he was searching for, he might easily have gotten nothing out of his equations but incoherent mumblings (or the symbolic equivalent thereof). Fortunately, however, they did have a secret to reveal. With just the right prodding, they gave up the wave equation. At that point the linguistic function of calculus took over again. When Maxwell translated his abstract symbols back into reality, they predicted that electricity and magnetism could propagate together as a wave of invisible energy moving at the speed of light. In a matter of decades, this revelation would change the world.

Unreasonably Effective

It’s eerie that calculus can mimic nature so well, given how different the two domains are. Calculus is an imaginary realm of symbols and logic; nature is an actual realm of forces and phenomena. Yet somehow, if the translation from reality into symbols is done artfully enough, the logic of calculus can use one real-world truth to generate another. Truth in, truth out. Start with something that is empirically true and symbolically formulated (as Maxwell did with the laws of electricity and magnetism), apply the right logical manipulations, and out comes another empirical truth, possibly a new one, a fact about the universe that nobody knew before (like the existence of electromagnetic waves). In this way, calculus lets us peer into the future and predict the unknown. That’s what makes it such a powerful tool for science and technology. But why should the universe respect the workings of any kind of logic, let alone the kind of logic that we puny humans can muster? This is what Einstein marveled at when he wrote, “The eternal mystery of the world is its comprehensibility.” And it’s what Eugene Wigner meant in his essay “On the Unreasonable Effectiveness of Mathematics in the Natural Sciences” when he wrote, “The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift which we neither understand nor deserve.” This sense of awe goes way back in the history of mathematics. According to legend, Pythagoras felt it around 550 BCE when he and his disciples discovered that music was governed by the ratios of whole numbers. For instance, imagine plucking a guitar string. As the string vibrates, it emits a certain note. Now put your finger on a fret exactly halfway up the string and pluck it again. 
The vibrating part of the string is now half as long as it used to be—a ratio of 1 to 2—and it sounds precisely an octave higher than the original note (the musical distance from one do to the next in the do-re-mi-fa-sol-la-ti-do scale). If instead the vibrating string is ⅔ of its original length, the note it makes goes up by a fifth (the interval from do to sol; think of the first two notes of the Star Wars theme). And if the vibrating part is ¾ as long as it was before, the note goes up by a fourth (the interval between the first two notes of “Here Comes the Bride”). The ancient Greek musicians knew about the melodic concepts of octaves, fourths, and fifths and considered them beautiful. This unexpected link between music (the harmony of this world) and numbers (the harmony of an imagined world) led the Pythagoreans to the mystical belief that all is number. They are said to have believed that even the planets in their orbits made music, the music of the spheres. Ever since then, many of history’s greatest mathematicians and scientists have come down with cases of Pythagorean fever. The astronomer Johannes Kepler had it bad. So did the physicist Paul Dirac. As we’ll see, it drove them to seek, and to dream, and to long for the harmonies of the universe. In the end it pushed them to make their own discoveries that changed the world.
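Since a string's frequency is inversely proportional to its vibrating length (a standard fact of string physics, not spelled out above), those length fractions translate directly into the frequency ratios of the intervals. A minimal Python sketch, with helper names of my own invention:

```python
from fractions import Fraction

def interval_ratio(length_fraction):
    """Frequency ratio heard when a string is shortened to the given
    fraction of its length (frequency is proportional to 1 / length)."""
    return 1 / Fraction(length_fraction)

# The three Pythagorean consonances described above:
octave = interval_ratio(Fraction(1, 2))  # half the string -> ratio 2:1
fifth = interval_ratio(Fraction(2, 3))   # two-thirds -> ratio 3:2
fourth = interval_ratio(Fraction(3, 4))  # three-quarters -> ratio 4:3
```

Exact rational arithmetic (rather than floats) keeps the whole-number ratios that so impressed the Pythagoreans visible in the result.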
The Infinity Principle
To help you understand where we’re headed, let me say a few words about what calculus is, what it wants (metaphorically speaking), and what distinguishes it from the rest of mathematics. Fortunately, a single big, beautiful idea runs through the subject from beginning to end. Once we become aware of this idea, the structure of calculus falls into place as variations on a unifying theme. Alas, most calculus courses bury the theme under an avalanche of formulas, procedures, and computational tricks. Come to think of it, I’ve never seen it spelled out anywhere even though it’s part of calculus culture and every expert knows it implicitly. Let’s call it the Infinity Principle. It will guide us on our journey just as it guided the development of calculus itself, conceptually as well as historically. I’m tempted to state it right now, but at this point it would sound like mumbo jumbo. It will be easier to appreciate if we inch our way up to it by asking what calculus wants . . . and how it gets what it wants. In a nutshell, calculus wants to make hard problems simpler. It is utterly obsessed with simplicity. That might come as a surprise to you, given that calculus has a reputation for being complicated. And there’s no denying that some of its leading textbooks exceed a thousand pages and weigh as much as bricks. But let’s not be judgmental. Calculus can’t help how it looks. Its bulkiness is unavoidable. It looks complicated because it’s trying to tackle complicated problems. In fact, it has tackled and solved some of the most difficult and important problems our species has ever faced. Calculus succeeds by breaking complicated problems down into simpler parts. That strategy, of course, is not unique to calculus. All good problem-solvers know that hard problems become easier when they’re split into chunks. The truly radical and distinctive move of calculus is that it takes this divide-and-conquer strategy to its utmost extreme—all the way out to infinity. 
Instead of cutting a big problem into a handful of bite-size pieces, it keeps cutting and cutting relentlessly until the problem has been chopped and pulverized into its tiniest conceivable parts, leaving infinitely many of them. Once that’s done, it solves the original problem for all the tiny parts, which is usually a much easier task than solving the initial giant problem. The remaining challenge at that point is to put all the tiny answers back together again. That tends to be a much harder step, but at least it’s not as difficult as the original problem was. Thus, calculus proceeds in two phases: cutting and rebuilding. In mathematical terms, the cutting process always involves infinitely fine subtraction, which is used to quantify the differences between the parts. Accordingly, this half of the subject is called differential calculus. The reassembly process always involves infinite addition, which integrates the parts back into the original whole. This half of the subject is called integral calculus. This strategy can be used on anything that we can imagine slicing endlessly. Such infinitely divisible things are called continua and are said to be continuous, from the Latin roots con (together with) and tenere (hold), meaning uninterrupted or holding together. Think of the rim of a perfect circle, a steel girder in a suspension bridge, a bowl of soup cooling off on the kitchen table, the parabolic trajectory of a javelin in flight, or the length of time you have been alive. A shape, an object, a liquid, a motion, a time interval—all of them are grist for the calculus mill. They’re all continuous, or nearly so.
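The cut-and-rebuild strategy can be sketched in a few lines of Python (my illustration, not the book's): slice the region under a curve into many thin rectangles, solve each trivially (area = height × width), then add the pieces back together. This is the idea behind the integral.

```python
def riemann_area(f, a, b, n):
    """Approximate the area under f on [a, b]: cut the interval into n
    tiny slices, treat f as constant on each, then reassemble by summing."""
    dx = (b - a) / n
    # The midpoint of each slice gives a good "constant" height.
    return sum(f(a + (i + 0.5) * dx) * dx for i in range(n))

# Area under the parabola y = x**2 between 0 and 1 (exact value: 1/3).
approx = riemann_area(lambda x: x * x, 0.0, 1.0, 100_000)
```

With 100,000 slices the answer already agrees with the exact value 1/3 to many decimal places; calculus is what happens in the limit of infinitely many slices.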
Notice the act of creative fantasy here. Soup and steel are not really continuous. At the scale of everyday life, they appear to be, but at the scale of atoms or superstrings, they’re not. Calculus ignores the inconvenience posed by atoms and other uncuttable entities, not because they don’t exist but because it’s useful to pretend that they don’t. As we’ll see, calculus has a penchant for useful fictions. More generally, the kinds of entities modeled as continua by calculus include almost anything one can think of. Calculus has been used to describe how a ball rolls continuously down a ramp, how a sunbeam travels continuously through water, how the continuous flow of air around a wing keeps a hummingbird or an airplane aloft, and how the concentration of HIV virus particles in a patient’s bloodstream plummets continuously in the days after he or she starts combination-drug therapy. In every case the strategy remains the same: split a complicated but continuous problem into infinitely many simpler pieces, then solve them separately and put them back together. Now we’re finally ready to state the big idea.


The Infinity Principle


To shed light on any continuous shape, object, motion, process, or phenomenon—no matter how wild and complicated it may appear—reimagine it as an infinite series of simpler parts, analyze those, and then add the results back together to make sense of the original whole.

The Golem of Infinity

The rub in all of this is the need to cope with infinity. That’s easier said than done. Although the carefully controlled use of infinity is the secret to calculus and the source of its enormous predictive power, it is also calculus’s biggest headache. Like Frankenstein’s monster or the golem in Jewish folklore, infinity tends to slip out of its master’s control. As in any tale of hubris, the monster inevitably turns on its maker. The creators of calculus were aware of the danger but still found infinity irresistible. Sure, occasionally it ran amok, leaving paradox, confusion, and philosophical havoc in its wake. Yet after each of these episodes, mathematicians always managed to subdue the monster, rationalize its behavior, and put it back to work. In the end, everything always turned out fine. Calculus gave the right answers, even when its creators couldn’t explain why. The desire to harness infinity and exploit its power is a narrative thread that runs through the whole twenty-five-hundred-year story of calculus. All this talk of desire and confusion might seem out of place, given that mathematics is usually portrayed as exact and impeccably rational. It is rational, but not always initially. Creation is intuitive; reason comes later. In the story of calculus, more than in other parts of mathematics, logic has always lagged behind intuition. This makes the subject feel especially human and approachable, and its geniuses more like the rest of us.

Curves, Motion, and Change

The Infinity Principle organizes the story of calculus around a methodological theme. But calculus is as much about mysteries as it is about methodology.
Three mysteries above all have spurred its development: the mystery of curves, the mystery of motion, and the mystery of change. The fruitfulness of these mysteries has been a testament to the value of pure curiosity. Puzzles about curves, motion, and change might seem unimportant at first glance, maybe even hopelessly esoteric. But because they touch on such rich conceptual issues and because mathematics is so deeply woven into the fabric of the universe, the solution to these mysteries has had far-reaching impacts on the course of civilization and on our everyday lives. As we’ll see in the chapters ahead, we reap the benefits of these investigations whenever we listen to music on our phones, breeze through the line at the supermarket thanks to a laser checkout scanner, or find our way home with a GPS gadget. It all started with the mystery of curves. Here I’m using the term curves in a very loose sense to mean any sort of curved line, curved surface, or curved solid—think of a rubber band, a wedding ring, a floating bubble, the contours of a vase, or a solid tube of salami. To keep things as simple as possible, the early geometers typically concentrated on abstract, idealized versions of curved shapes and ignored thickness, roughness, and texture. The surface of a mathematical sphere, for instance, was imagined to be an infinitesimally thin, smooth, perfectly round membrane with none of the thickness, bumpiness, or hairiness of a coconut shell. Even under these idealized assumptions, curved shapes posed baffling conceptual difficulties because they weren’t made of straight pieces. Triangles and squares were easy. So were cubes. They were composed of straight lines and flat pieces of planes joined together at a small number of corners. It wasn’t hard to figure out their perimeters or surface areas or volumes. Geometers all over the world—in ancient Babylon and Egypt, China and India, Greece and Japan—knew how to solve problems like these.
But round things were brutal. No one could figure out how much surface area a sphere had or how much volume it could hold. Even finding the circumference and area of a circle was an insurmountable problem in the old days. There was no way to get started. There were no straight pieces to latch onto. Anything that was curved was inscrutable. So this is how calculus began. It grew out of geometers’ curiosity and frustration with roundness. Circles and spheres and other curved shapes were the Himalayas of their era. It wasn’t that they posed important practical issues, at least not at first. It was simply a matter of the human spirit’s thirst for adventure. Like explorers climbing Mount Everest, geometers wanted to solve curves because they were there. The breakthrough came from insisting that curves were actually made of straight pieces. It wasn’t true, but one could pretend that it was. The only hitch was that those pieces would then have to be infinitesimally small and infinitely numerous. Through this fantastic conception, integral calculus was born. This was the earliest use of the Infinity Principle. The story of how it developed will occupy us for several chapters, but its essence is already there, in embryonic form, in a simple, intuitive insight: If we zoom in closely enough on a circle (or anything else that is curved and smooth), the portion of it under the microscope begins to look straight and flat. So in principle, at least, it should be possible to calculate whatever we want about a curved shape by adding up all the straight little pieces. Figuring out exactly how to do this—no easy feat—took the efforts of the world’s greatest mathematicians over many centuries. Collectively, however, and sometimes through bitter rivalries, they eventually began to make headway on the riddle of curves. 
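The zoom-in insight can be tested numerically with a sketch of my own (Archimedes did something similar by hand): inscribe a regular polygon in a unit circle and watch its perimeter of straight chords close in on the circumference 2π as the number of sides grows.

```python
import math

def inscribed_perimeter(n, r=1.0):
    """Perimeter of a regular n-gon inscribed in a circle of radius r.
    Each of the n straight chords has length 2 * r * sin(pi / n)."""
    return n * 2.0 * r * math.sin(math.pi / n)

# More and tinier straight pieces hug the circle ever more closely:
hexagon = inscribed_perimeter(6)    # Archimedes' starting point, close to 6
fine = inscribed_perimeter(10_000)  # very close to 2 * pi
```

A 10,000-gon's perimeter agrees with 2π to about seven decimal places, which is the Infinity Principle in miniature: straight little pieces, added up, recover the curve.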
Spinoffs today, as we’ll see in chapter 2, include the math needed to draw realistic-looking hair, clothing, and faces of characters in computer-animated movies and the calculations required for doctors to perform facial surgery on a virtual patient before they operate on the real one. The quest to solve the mystery of curves reached a fever pitch when it became clear that curves were much more than geometric diversions. They were a key to unlocking the secrets of nature. They arose naturally in the parabolic arc of a ball in flight, in the elliptical orbit of Mars as it moved around the sun, and in the convex shape of a lens that could bend and focus light where it was needed, as was required for the burgeoning development of microscopes and telescopes in late Renaissance Europe.
And so began the second great obsession: a fascination with the mysteries of motion on Earth and in the solar system. Through observation and ingenious experiments, scientists discovered tantalizing numerical patterns in the simplest moving things. They measured the swinging of a pendulum, clocked the accelerating descent of a ball rolling down a ramp, and charted the stately procession of planets across the sky. The patterns they found enraptured them—indeed, Johannes Kepler fell into a state of self-described “sacred frenzy” when he found his laws of planetary motion—because those patterns seemed to be signs of God’s handiwork. From a more secular perspective, the patterns reinforced the claim that nature was deeply mathematical, just as the Pythagoreans had maintained. The only catch was that nobody could explain the marvelous new patterns, at least not with the existing forms of math. Arithmetic and geometry were not up to the task, even in the hands of the greatest mathematicians. The trouble was that the motions weren’t steady. A ball rolling down a ramp kept changing its speed, and a planet revolving around the sun kept changing its direction of travel. Worse yet, the planets moved faster when they got close to the sun and slowed down as they receded from it. There was no known way to deal with motion that kept changing in ever-changing ways. Earlier mathematicians had worked out the mathematics of the most trivial kind of motion, namely, motion at a constant speed where distance equals rate times time. But when speed changed and kept on changing continuously, all bets were off. Motion was proving to be as much of a conceptual Mount Everest as curves were. As we’ll see in the middle chapters of this book, the next great advances in calculus grew out of the quest to solve the mystery of motion. The Infinity Principle came to the rescue, just as it had for curves.
This time the act of wishful fantasy was to pretend that motion at a changing speed was made up of infinitely many, infinitesimally brief motions at a constant speed. To visualize what this would mean, imagine being in a car with a jerky driver at the wheel. As you anxiously watch the speedometer, it moves up and down with every jerk. But over a millisecond, even the jerkiest driver can’t make the speedometer needle move by much. And over an interval much shorter than that—an infinitesimal time interval—the needle won’t move at all. Nobody can tap the gas pedal that fast. These ideas coalesced in the younger half of calculus, differential calculus. It was precisely what was needed to work with the infinitesimally small changes of time and distance that arose in the study of ever-changing motion as well as with the infinitesimal straight pieces of curves that arose in analytic geometry, the newfangled study of curves defined by algebraic equations that was all the rage in the first half of the 1600s. Yes, at one time, algebra was a craze, as we’ll see. Its popularity was a boon for all fields of mathematics, including geometry, but it also created an unruly jungle of new curves to explore. Thus, the mysteries of curves and motion collided. They were now both at the center stage of calculus in the mid-1600s, banging into each other, creating mathematical mayhem and confusion. Out of the tumult, differential calculus began to flower, but not without controversy. Some mathematicians were criticized for playing fast and loose with infinity. Others derided algebra as a scab of symbols. With all the bickering, progress was fitful and slow. And then a child was born on Christmas Day. This young messiah of calculus was an unlikely hero. Born premature and fatherless and abandoned by his mother at age three, he was a lonesome boy with dark thoughts who grew into a secretive, suspicious young man. Yet Isaac Newton would make a mark on the world like no one before or since. 
First, he attained the holy grail of calculus: he discovered how to put the pieces of a curve back together again—and how to do it easily, quickly, and systematically. By combining the symbols of algebra with the power of infinity, he found a way to represent any curve as a sum of infinitely many simpler curves described by powers of a variable x, like x², x³, x⁴, and so on. With these ingredients alone, he could cook up any curve he wanted by putting in a pinch of x and a dash of x² and a heaping tablespoon of x³. It was like a master recipe and a universal spice rack, butcher shop, and vegetable garden, all rolled into one. With it he could solve any problem about shapes or motions that had ever been considered.

Then he cracked the code of the universe. Newton discovered that motion of any kind always unfolds one infinitesimal step at a time, steered from moment to moment by mathematical laws written in the language of calculus. With just a handful of differential equations (his laws of motion and gravity), he could explain everything from the arc of a cannonball to the orbits of the planets. His astonishing "system of the world" unified heaven and earth, launched the Enlightenment, and changed Western culture. Its impact on the philosophers and poets of Europe was immense. He even influenced Thomas Jefferson and the writing of the Declaration of Independence, as we'll see. In our own time, Newton's ideas underpinned the space program by providing the mathematics necessary for trajectory design, the work done at NASA by African-American mathematician Katherine Johnson and her colleagues (the heroines of the book and hit movie Hidden Figures).

With the mysteries of curves and motion now settled, calculus moved on to its third lifelong obsession: the mystery of change. It's a cliché, but it's true all the same—nothing is constant but change. It's rainy one day and sunny the next. The stock market rises and falls.
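The "master recipe" of powers of x can be made concrete with a small sketch. The snippet below (an illustration written for this page, not Newton's own derivation; the function name is assumed) cooks up the sine curve from pinches of x, x³, x⁵, and so on, using the coefficients of its power series:

```python
import math

# Sketch of the power-series "recipe": build a curve from powers of x.
# The sine curve is cooked up from x, x**3, x**5, ... with the right
# pinch (coefficient) of each ingredient.

def sine_from_powers(x, terms=10):
    """Sum the first few power-of-x ingredients of sin(x)."""
    total = 0.0
    for n in range(terms):
        k = 2 * n + 1
        total += (-1) ** n * x ** k / math.factorial(k)
    return total

print(sine_from_powers(1.0))  # ~0.84147, matching math.sin(1.0)
```

Ten ingredients already reproduce sin(1) to many decimal places; the full, infinite recipe reproduces the curve exactly.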
Emboldened by the Newtonian paradigm, the later practitioners of calculus asked: Are there laws of change similar to Newton's laws of motion? Are there laws for population growth, the spread of epidemics, and the flow of blood in an artery? Can calculus be used to describe how electrical signals propagate along nerves or to predict the flow of traffic on a highway?

By pursuing this ambitious agenda, always in cooperation with other parts of science and technology, calculus has helped make the world modern. Using observation and experiment, scientists worked out the laws of change and then used calculus to solve them and make predictions.

For example, in 1917 Albert Einstein applied calculus to a simple model of atomic transitions to predict a remarkable effect called stimulated emission (which is what the s and e stand for in laser, an acronym for light amplification by stimulated emission of radiation). He theorized that under certain circumstances, light passing through matter could stimulate the production of more light at the same wavelength and moving in the same direction, creating a cascade of light through a kind of chain reaction that would result in an intense, coherent beam. A few decades later, the prediction proved to be accurate. The first working lasers were built in the early 1960s. Since then, they have been used in everything from compact-disc players and laser-guided weaponry to supermarket bar-code scanners and medical lasers.

The laws of change in medicine are not as well understood as those in physics. Yet even when applied to rudimentary models, calculus has been able to make lifesaving contributions.

