Historically Speaking: The Tradition of Telling All

From ancient Greece to modern Washington, political memoirs have been an irresistible source of gossip about great leaders

The Wall Street Journal, November 30, 2018

ILLUSTRATION: THOMAS FUCHS

The tell-all memoir has been a feature of American politics ever since Raymond Moley, an ex-aide to Franklin Delano Roosevelt, published his excoriating book “After Seven Years” while FDR was still in office. What makes the Trump administration unusual is the speed at which such accounts are appearing—most recently, “Unhinged,” by Omarosa Manigault Newman, a former political aide to the president.

Spilling the beans on one’s boss may be disloyal, but it has a long pedigree. Alexander the Great is thought to have inspired the genre. His great run of military victories, beginning with the Battle of Chaeronea in 338 B.C., was so unprecedented that several of his generals felt the urge—unknown in Greek literature before then—to record their experiences for posterity.

Unfortunately, their accounts didn’t survive, save for the memoir of Ptolemy Soter, the founder of the Ptolemaic dynasty in Egypt, which exists in fragments. The great majority of Roman political memoirs have also disappeared—many by official suppression. Historians particularly regret the loss of the memoirs of Agrippina, the mother of Emperor Nero, who once boasted that she could bring down the entire imperial family with her revelations.

The Heian period (794-1185) in Japan produced four notable court memoirs, all by noblewomen. Dissatisfaction with their lot was a major factor behind these accounts—particularly for the anonymous author of “The Gossamer Years,” written around 974. The author was married to Fujiwara no Kane’ie, the regent for the Emperor Ichijo. Her exalted position at court masked a deeply unhappy private life; she was made miserable by her husband’s serial philandering, describing herself as “rich only in loneliness and sorrow.”

In Europe, the first modern political memoir was written by the Duc de Saint-Simon (1675-1755), a frustrated courtier at Versailles who took revenge on Louis XIV with his pen. Saint-Simon’s tales hilariously reveal the drama, gossip and intrigue that surrounded a king whose intellect, in his view, was “beneath mediocrity.”

But even Saint-Simon’s memoirs pale next to those of the Korean noblewoman Lady Hyegyeong (1735-1816), wife of Crown Prince Sado of the Joseon Dynasty. Her book, “Memoirs Written in Silence,” tells shocking tales of murder and madness at the heart of the Korean court. Sado, she writes, was a homicidal psychopath who went on a bloody killing spree that was only stopped by the intervention of his father King Yeongjo. Unwilling to see his son publicly executed, Yeongjo had the prince locked inside a rice chest and left to die. Understandably, Hyegyeong’s memoirs caused a huge sensation in Korea when they were first published in 1939, following the death of the last Emperor in 1926.

Fortunately, the Washington political memoir has been free of this kind of violence. Still, it isn’t just Roman emperors who have tried to silence uncomfortable voices. According to the historian Michael Beschloss, President John F. Kennedy had the White House household staff sign agreements to refrain from writing any memoirs. But eventually, of course, even Kennedy’s secrets came out. Perhaps every political leader should be given a plaque that reads: “Just remember, your underlings will have the last word.”

Historically Speaking: How Potatoes Conquered the World

It took centuries for the spud to travel from the New World to the Old and back again

The Wall Street Journal, November 15, 2018

At the first Thanksgiving dinner, eaten by the Wampanoag Indians and the Pilgrims in 1621, the menu was rather different from what’s served today. For one thing, the pumpkin was roasted, not made into a pie. And there definitely wasn’t a side dish of mashed potatoes.

In fact, the first hundred Thanksgivings were spud-free, since potatoes weren’t grown in North America until 1719, when Scotch-Irish settlers began planting them in New Hampshire. Mashed potatoes were an even later invention. The first recorded recipe for the dish appeared in 1747, in Hannah Glasse’s splendidly titled “The Art of Cookery Made Plain and Easy, Which Far Exceeds Any Thing of the Kind yet Published.”

By then, the potato had been known in Europe for a full two centuries. It was first introduced by the Spanish conquerors of Peru, where the Incas had revered the potato and even invented a natural way of freeze-drying it for storage. Yet despite its nutritional value and ease of cultivation, the potato didn’t catch on in Europe. It wasn’t merely foreign and ugly-looking; to wheat-growing farmers it seemed unnatural—possibly even un-Christian, since there is no mention of the potato in the Bible. Outside of Spain, it was generally grown for animal feed.

The change in the potato’s fortunes was largely due to the efforts of a Frenchman named Antoine-Augustin Parmentier (1737-1813). During the Seven Years’ War, he was taken prisoner by the Prussians and forced to live on a diet of potatoes. To his surprise, he stayed relatively healthy. Convinced he had found a solution to famine, Parmentier dedicated his life after the war to popularizing the potato’s nutritional benefits. He even persuaded Marie-Antoinette to wear potato flowers in her hair.

Among the converts to his message were the economist Adam Smith, who realized the potato’s economic potential as a staple food for workers, and Thomas Jefferson, then the U.S. Ambassador to France, who was keen for his new nation to eat well in all senses of the word. Jefferson is credited with introducing Americans to french fries at a White House dinner in 1802.

As Smith predicted, the potato became the fuel for the Industrial Revolution. A study published in 2011 by Nathan Nunn and Nancy Qian in the Quarterly Journal of Economics estimates that up to a quarter of the world’s population growth from 1700 to 1900 can be attributed solely to the introduction of the potato. As Louisa May Alcott observed in “Little Men,” in 1871, “Money is the root of all evil, and yet it is such a useful root that we cannot get on without it any more than we can without potatoes.”

In 1887, two Americans, Jacob Fitzgerald and William H. Silver, patented the first potato ricer, which forced a cooked potato through a cast-iron sieve, ending the scourge of lumpy mash. Still, the holy grail of “quick and easy” mashed potatoes remained elusive until the late 1950s. Using the flakes produced by the potato ricer and a new freeze-drying method, U.S. government scientists perfected instant mashed potatoes, which require only the addition of hot water or milk. Peeling, boiling and mashing were now optional, and for millions of cooks, Thanksgiving became a little easier. And that’s something to be thankful for.


Historically Speaking: Overrun by Alien Species

From Japanese knotweed to cane toads, humans have introduced invasive species to new environments with disastrous results

The Wall Street Journal, November 1, 2018

Ever since Neolithic people wandered the earth, inadvertently bringing the mouse along for the ride, humans have been responsible for introducing animal and plant species into new environments. But problems can arise when a non-native species encounters no barriers to population growth, allowing it to rampage unchecked through the new habitat, overwhelming the ecosystem. On more than one occasion, humans have transplanted a species for what seemed like good reasons, only to find out too late that the consequences were disastrous.

One of the most famous examples is celebrating its 150th anniversary this year: the introduction of Japanese knotweed to the U.S. A highly aggressive plant, it can grow 15 feet high and has roots that spread up to 45 feet. Knotweed had already been a hit in Europe because of its pretty little white flowers, and, yes, its miraculous indestructibility.

First mentioned in botanical articles in 1868, knotweed was brought to New York by the Hogg brothers, James and Thomas, eminent American horticulturalists and among the earliest collectors of Japanese plants. Thanks to their extensive contacts, knotweed found a home in arboretums, botanical gardens and even Central Park. Not content with importing one of the world’s most invasive shrubs, the Hoggs also introduced Americans to the wonders of kudzu, a dense vine that can grow a foot a day.

Impressed by the vigor of kudzu, agriculturalists recommended using these plants to provide animal fodder and prevent soil erosion. In the 1930s, the government was even paying Southern farmers $8 per acre to plant kudzu. Today it is known as the “vine that ate the South,” because of the way it covers huge tracts of land in a green blanket of death. And Japanese knotweed is still spreading, colonizing entire habitats from Mississippi to Alaska, where only the Arctic tundra holds it back from world domination.

Knotweed has also reached Australia, a country that has been ground zero for the worst excesses of invasive species. In the 19th century, the British imported non-native animals such as rabbits, cats, goats, donkeys, pigs, foxes and camels, causing mass extinctions of Australia’s native mammal species. Australians are still paying the price; there are more rabbits in the country today than wombats, more camels than kangaroos.

Yet the lesson wasn’t learned. In the 1930s, scientists in both Australia and the U.S. decided to import the South American cane toad as a form of biowarfare against beetles that eat sugar cane. The experiment failed, and it turned out that the cane toad was poisonous to any predator that ate it. There’s also the matter of the 30,000 eggs it can lay at a time. Today, the cane toad can be found all over northern Australia and south Florida.

So is there anything we can do once an invasive species has taken up residence? The answer is yes, but it requires more than just fences, traps and pesticides; it means changing human incentives. Today, for instance, the voracious Indo-Pacific lionfish is gobbling up local fish in the west Atlantic, while the Asian carp threatens the ecosystem of the Great Lakes. There is only one solution: We must eat them, dear reader. These invasive fish can be grilled, fried or consumed as sashimi, and they taste delicious. Likewise, kudzu makes great salsa, and Japanese knotweed can be treated like rhubarb. Eat for America and save the environment.

Historically Speaking: The Dark Lore of Black Cats

Ever since they were worshiped in ancient Egypt, cats have occupied an uncanny place in the world’s imagination

The Wall Street Journal, October 22, 2018

ILLUSTRATION: THOMAS FUCHS

As Halloween approaches, decorations featuring scary black cats are starting to make their seasonal appearance. But what did the black cat ever do to deserve its reputation as a symbol of evil? Why is it considered bad luck to have a black cat cross your path?

It wasn’t always this way. In fact, the first human-cat interactions were benign and based on mutual convenience. The invention of agriculture in the Neolithic era led to surpluses of grain, which attracted rodents, which in turn motivated wild cats to hang around humans in the hope of catching dinner. Domestication soon followed: The world’s oldest known pet cat was found in a 9,500-year-old grave in Cyprus, buried alongside its human owner.

According to the Roman writer Polyaenus, who lived in the second century A.D., the Egyptian veneration of cats led to disaster at the Battle of Pelusium in 525 B.C. The invading Persian army carried cats on the front lines, rightly calculating that the Egyptians would rather accept defeat than kill a cat.

The Egyptians were unique in their extreme veneration of cats, but they weren’t alone in regarding them as having a special connection to the spirit world. In Greek mythology the cat was a familiar of Hecate, goddess of magic, sorcery and witchcraft. Hecate’s pet had once been a serving maid named Galanthis, who was turned into a cat as punishment by the goddess Hera for being rude.

When Christianity became the official religion of Rome in 380, the association of cats with paganism and witchcraft made them suspect. Moreover, the cat’s independence suggested a willful rebellion against the teaching of the Bible, which said that Adam had dominion over all the animals. The cat’s reputation worsened during the medieval era, as the Catholic Church battled against heresies and dissent. Fed lurid tales by his inquisitors, Pope Gregory IX issued a papal bull in 1233, “Vox in Rama,” which accused heretics of using black cats in their nighttime sex orgies with Lucifer—who was described as half-cat in appearance.

In Europe, countless numbers of cats were killed in the belief that they could be witches in disguise. In 1484, Pope Innocent VIII fanned the flames of anti-cat prejudice with his papal bull on witchcraft, “Summis Desiderantes Affectibus,” which stated that the cat was “the devil’s favorite animal and idol of all witches.”

The Age of Reason ought to have rescued the black cat from its pariah status, but superstitions die hard. (How many modern apartment buildings lack a 13th floor?) Cats had plenty of ardent fans among 19th-century writers, including Charles Dickens and Mark Twain, who wrote, “I simply can’t resist a cat, particularly a purring one.” But Edgar Allan Poe, the master of the gothic tale, felt otherwise: In his 1843 story “The Black Cat,” the spirit of a dead cat drives its killer to madness and destruction.

So pity the poor black cat, which through no fault of its own has gone from being an instrument of the devil to the convenient tool of the horror writer—and a favorite Halloween cliché.


Historically Speaking: When Women Were Brewers

From ancient times until the Renaissance, beer-making was considered a female specialty

The Wall Street Journal, October 9, 2018

These days, every neighborhood bar celebrates Oktoberfest, but the original fall beer festival is the one in Munich, Germany—still the largest of its kind in the world. Oktoberfest was started in 1810 by the Bavarian royal family as a celebration of Crown Prince Ludwig’s marriage to Princess Therese von Sachsen-Hildburghausen. Nowadays, it lasts 16 days and attracts some 6 million tourists, who guzzle almost 2 million gallons of beer.

Yet these staggering numbers conceal the fact that, outside of the developing world, the beer industry is suffering. Beer sales in the U.S. last year accounted for 45.6% of the alcohol market, down from 48.2% in 2010. In Germany, per capita beer consumption has dropped by one-third since 1976. It is a sad decline for a drink that has played a central role in the history of civilization. Brewing beer, like baking bread, is considered by archaeologists to be one of the key markers in the development of agriculture and communal living.

In Sumer, the ancient civilization in modern-day Iraq where the world’s first cities emerged in the 4th millennium B.C., up to 40% of all grain production may have been devoted to beer. It was more than an intoxicating beverage; beer was nutritious and much safer to drink than ordinary water because it was boiled first. The oldest known beer recipe comes from a Sumerian hymn to Ninkasi, the goddess of beer, composed around 1800 B.C. The fact that a female deity oversaw this most precious commodity reflects the importance of women in its production. Beer was brewed in the kitchen and was considered as fundamental a skill for women as cooking and needlework.

The ancient Egyptians similarly regarded beer as essential for survival: Construction workers for the pyramids were usually paid in beer rations. The Greeks and Romans were unusual in preferring wine; blessed with climates that aided viticulture, they looked down on beer-drinking as foreign and unmanly. (There’s no mention of beer in Homer.)

Northern Europeans adopted wine-growing from the Romans, but beer was their first love. The Vikings imagined Valhalla as a place where beer perpetually flowed. Still, beer production remained primarily the work of women. With most occupations in the Middle Ages restricted to members of male-only guilds, widows and spinsters could rely on ale-making to support themselves. Among her many talents as a writer, composer, mystic and natural scientist, the renowned 12th-century Rhineland abbess Hildegard of Bingen was also an expert on the use of hops in beer.

The female domination of beer-making lasted in Europe until the 15th and 16th centuries, when the growth of the market economy helped to transform it into a profitable industry. As professional male brewers took over production and distribution, female brewers lost their respectability. By the 19th century, women were far more likely to be temperance campaigners than beer drinkers.

When Prohibition ended in the U.S. in 1933, brewers struggled to get beer into American homes. Their solution was an ad campaign selling beer to housewives—not to drink it but to cook with it. In recent years, beer ads have rarely bothered to address women at all, which may explain why only a quarter of U.S. beer drinkers are female.

As we’ve seen recently in the Kavanaugh hearings, a male-dominated beer-drinking culture can be unhealthy for everyone. Perhaps it’s time for brewers to forget “the king of beers”—Budweiser’s slogan—and seek their once and future queen.

Historically Speaking: At Age 50, a Time of Second Acts

Amanda Foreman finds comfort in countless examples of the power of reinvention after five decades.

ILLUSTRATION BY TONY RODRIGUEZ

I turned 50 this week, and like many people I experienced a full-blown midlife crisis in the lead-up to the Big Day. The famous F. Scott Fitzgerald quotation, “There are no second acts in American lives,” dominated my thoughts. Now that my first act was over, I wondered, would my life no longer be about opportunities and instead consist largely of consequences?

Fitzgerald, who left the line among his notes for “The Last Tycoon,” had ample reason for pessimism. He had hoped the novel would lead to his own second act after failing to make it in Hollywood, but he died at 44, broken and disappointed, leaving the book unfinished. Yet the truth about his grim line is more complicated. Several years earlier, Fitzgerald had used it to make an almost opposite point, in the essay “My Lost City”: “I once thought that there were no second acts in American lives, but there was certainly to be a second act to New York’s boom days.”

The one comfort we should take from countless examples in history is the power of reinvention. The Victorian poet William Ernest Henley was right when he wrote, “I am the master of my fate/ I am the captain of my soul.”

The point is to seize the moment. The disabled Roman Emperor Claudius (10 B.C.-A.D. 54) spent most of his life being victimized by his awful family. Claudius was 50 when his nephew, Caligula, met his end at the hands of some of his own household security, the Praetorian Guards. The historian Suetonius writes that a soldier discovered Claudius, who had tried to hide, trembling in the palace. The guards decided to make Claudius their puppet emperor. It was a grave miscalculation. Claudius grabbed his chance, shed his bumbling persona and became a forceful and innovative ruler of Rome.

In Russia many centuries later, the general Mikhail Kutuzov was in his 60s when his moment came. In 1805, Czar Alexander I had unfairly blamed Kutuzov for the army’s defeat at the Battle of Austerlitz and relegated him to desk duties. Russian society cruelly treated the general, who looked far from heroic—a character in Tolstoy’s “War and Peace” notes the corpulent Kutuzov’s war scars, especially his “bleached eyeball.” But when the country needed a savior in 1812, Kutuzov, the “has-been,” drove Napoleon and his Grande Armée out of Russia.

Winston Churchill had a similar apotheosis in World War II when he was in his 60s. Until then, his political career had been a catalog of failures, the most famous being the Gallipoli Campaign of 1915-16 that left Britain and its allies with more than 100,000 casualties.

As for writers and artists, they often find middle age extremely liberating. They cease being afraid to take risks in life. Another Fitzgerald—the Man Booker Prize-winning novelist Penelope—lived on the brink of homelessness, struggling as a tutor and teacher (she later recalled “the stuffy and inky boredom of the classroom”) until she published her first book at 58.

Anna Mary Robertson Moses, better known as Grandma Moses, may be the greatest example of self-reinvention. After many decades of farm life, around age 75 she began a new career, becoming one of America’s best-known folk painters.

Perhaps I’ll be inspired to master Greek when I am 80, as some say the Roman statesman Cato the Elder did. But what I’ve learned, while coming to terms with turning 50, is that time spent worrying about “what you might have been” is better passed with friends and family—celebrating the here and now.

WSJ Historically Speaking: When We Rally ‘Round the Flag: A History

Flag Day passes every year almost unnoticed. That’s a shame—it celebrates a symbol with ties to religious and totemic objects that have moved people for millennia

The Supreme Court declared in 1989 that desecrating the American flag is a protected form of free speech. That ended the legal debate but not the national one over how we should treat the flag. If anything, two years of controversies over athletes kneeling during “The Star-Spangled Banner,” which led last month to a National Football League ban on the practice, show that feelings are running higher than ever.

Yet, Flag Day—which honors the adoption of the Stars and Stripes by Congress on June 14, 1777—is passing by almost unnoticed this year, as it does almost every year. One reason is that Memorial Day and Independence Day—holidays of federally sanctioned free time, parades and spectacle—flank and overshadow it. That’s a shame, because we could use a day devoted to reflecting on our flag, a precious national symbol whose potency can be traced to the religious and totemic objects that have moved people for millennia.

The first flags were not pieces of cloth but metal or wooden standards affixed to poles. The Shahdad Standard, thought to be the oldest flag, hails from Persia and dates from around 2400 B.C. Because ancient societies considered standards to be conduits for the power and protection of the gods, an army always went into battle accompanied by priests bearing the kingdom’s religious emblems. Isaiah Chapter 49 includes the lines: “Thus saith the Lord God, Behold, I will lift up mine hand to the Gentiles, and set up my standard to the people.”

Ancient Rome added a practical use for standards—waving, dipping and otherwise manipulating them to show warring troops what to do next. But the symbols retained their aura as national totems, emblazoned with the letters SPQR, an abbreviation of Senatus Populusque Romanus, or Senate and People of Rome. It was a catastrophe for a legion to lose its standard in battle. In Germania in A.D. 9, a Roman army was ambushed while marching through Teutoburg Forest and lost three standards. The celebrated general Germanicus eventually recovered two of them after a massive and bloody campaign.

In succeeding centuries, the flag as we know it today began to take shape. Europeans and Arabs learned silk production, pioneered by China, which made it possible to create banners light enough to flutter in the wind. As in ancient days, they were most often designed with heraldic or religious motifs.

In the U.S., the design of the flag harked back to the Roman custom of an explicitly national symbol, but the Star-Spangled Banner was slow to attain its unique status, despite the popularity of Francis Scott Key’s 1814 anthem. It took the Civil War, with its dueling flags, to make the American flag an emblem of national consciousness. As the U.S. Navy moved to capture New Orleans from the Confederacy in 1862, Marines went ashore and raised the Stars and Stripes at the city’s mint. William Mumford, a local resident loyal to the Confederacy, tore the flag down and wore shreds of it in his buttonhole. U.S. General Benjamin Butler had Mumford arrested and executed.

After the war, the Stars and Stripes became a symbol of reconciliation. In 1867 Southerners welcomed Wisconsin war veteran Gilbert Bates as he carried the flag 1,400 miles across the South to show that the nation was healing.

As the country developed economically, a new peril lay in store for the Stars and Stripes: commercialization. The psychological and religious forces that had once made flags sacred began to fade, and the national banner was recruited for the new industry of mass advertising. Companies of the late 19th century used it to sell everything from beer to skin cream, leading to national debates over what the flag stood for and how it should be treated.

President Woodrow Wilson instituted Flag Day in 1916 in an effort to concentrate the minds of citizens on the values embodied in our most familiar national symbol. That’s as worthy a goal today as it was a century ago.

WSJ Historically Speaking: Undying Defeat: The Power of Failed Uprisings

From the Warsaw Ghetto to the Alamo, doomed rebels live on in culture

John Wayne said that he saw the Alamo as ‘a metaphor for America.’ PHOTO: ALAMY

Earlier this month, Israel commemorated the 75th anniversary of the Warsaw Ghetto Uprising of April 1943. The annual Remembrance Day of the Holocaust and Heroism, as it is called, reminds Israelis of the moral duty to fight to the last.

The Warsaw ghetto battle is one of many doomed uprisings across history that have cast their influence far beyond their failures, providing inspiration to a nation’s politics and culture.

Nearly 500,000 Polish Jews once lived in the ghetto. By January 1943, the Nazis had marked the surviving 55,000 for deportation. The Jewish Fighting Organization had just one machine gun and fewer than a hundred revolvers for a thousand or so sick and starving volunteer soldiers. The Jews started by blowing up some tanks and fought on until May 16. The Germans executed 7,000 survivors and deported the rest.

For many Jews, the rebellion offered a narrative of resistance, an alternative to the grim story of the fortress of Masada, where nearly 1,000 besieged fighters chose suicide over slavery during the First Jewish-Roman War (A.D. 66–73).

The story of the Warsaw ghetto uprising has also entered the wider culture. The title of Leon Uris’s 1961 novel “Mila 18” comes from the street address of the headquarters of the Jewish resistance in their hopeless fight. Four decades later, Roman Polanski made the uprising a crucial part of his 2002 Oscar-winning film, “The Pianist,” whose musician hero aids the effort.

Other doomed uprisings have also been preserved in art. The 48-hour Paris Uprising of 1832, fought by 3,000 insurrectionists against 30,000 regular troops, gained immortality through Victor Hugo, who made the revolt a major plot point in “Les Misérables” (1862). The novel was a hit on its debut and ever after—and gave its world-wide readership a set of martyrs to emulate.

Even a young country like the U.S. has its share of national myths, of desperate last stands serving as touchstones for American identity. One has been the Battle of the Alamo in 1836 during the War of Texas Independence. “Remember the Alamo” became the Texan war cry only weeks after roughly 200 ill-equipped rebels, among them the frontiersman Davy Crockett, were killed defending the Alamo mission in San Antonio against some 2,000 Mexican troops.

The Alamo’s imagery of patriotic sacrifice became popular in novels and paintings but really took off during the film era, beginning in 1915 with the D.W. Griffith production, “Martyrs of the Alamo.” Walt Disney got in on the act with his 1950s TV miniseries, “Davy Crockett: King of the Wild Frontier.” John Wayne’s 1960 “The Alamo,” starring Wayne as Crockett, immortalized the character for a generation.

Wayne said that he saw the Alamo as “a metaphor of America” and its will for freedom. Others did too, even in very different contexts. During the Vietnam War, President Lyndon Johnson, whose hometown wasn’t far from San Antonio, once told the National Security Council why he believed U.S. troops needed to be fighting in Southeast Asia: “Hell,” he said, “Vietnam is just like the Alamo.”

WSJ Historically Speaking: When Blossoms and Bullets Go Together: The Battles of Springtime

Generals have launched spring offensives from ancient times to the Taliban era

ILLUSTRATION: THOMAS FUCHS

“When birds do sing, hey ding a ding, ding; Sweet lovers love the spring,” wrote Shakespeare. But the season has a darker side as well. As we’re now reminded each year when the Taliban anticipate the warm weather by announcing their latest spring offensive in Afghanistan, military commanders and strategists have always loved the season, too.

The World War I poet Wilfred Owen highlighted the irony of this juxtaposition—the budding of new life alongside the massacre of those in life’s prime—in his famous “Spring Offensive”: “Marvelling they stood, and watched the long grass swirled / By the May breeze”—right before their deaths.

The pairing of rebirth with violent death has an ancient history. In the 19th century, the anthropologist James George Frazer identified the concept of the “dying and rising god” as one of the earliest cornerstones of religious belief. For new life to appear in springtime, there had to be a death or sacrifice in winter. Similar sacrifice-and-rejuvenation myths can be found among the Sumerians, Egyptians, Canaanites and Greeks.

Mediterranean and Near Eastern cultures saw spring in this dual perspective for practical reasons as well. The agricultural calendar revolved around wet winters, cool springs and very hot summers when almost nothing grew except olives and figs. Harvest time for essential cereal crops such as wheat and barley took place in the spring. The months of May and June, therefore, were perfect for armies to invade, because they could live off the land. The Bible says of King David, who lived around 1000 B.C., that he sent Joab and the Israelite army to fight the Ammonites “in the spring of the year, when kings normally go out to war.”

It was no coincidence that the Romans named the month of March after Mars, the god of war but also the guardian of agriculture. As the saying goes, “An army marches on its stomach.” For ancient Greek historians, the rhythm of war rarely changed: Discussion took place in the winter, action began in spring. When they referred to a population “waiting for spring,” it was usually literary shorthand for a people living in fear of the next attack. The military campaigns of Alexander the Great (356-323 B.C.) into the Balkans, Persia and India began with a spring offensive.

In succeeding centuries, the seasonal rhythms of Europe, which were very different from those of warmer climes, brought about a new calendar of warfare. Europe’s reliance on the autumn harvest ended the ancient marriage of spring and warfare. Conscripts were unwilling to abandon their farms and fight in the months between planting and harvesting.

This seasonal difficulty would not be addressed until Sweden’s King Gustavus Adolphus (1594-1632), a great military innovator, developed principles for the first modern army. According to the British historian Basil Liddell Hart, Gustavus made the crucial shift from short-term conscripts, drawn away from agricultural labor, to a standing force of professional, trained soldiers on duty all year round, regardless of the seasons.

Gustavus died before he could fully implement his ideas. This revolution in military affairs fell instead to Frederick the Great, king of Prussia (1712-1786), who turned military life into a respectable upper-class career choice and the Prussian army into a mobile, flexible and efficient machine.

Frederick believed that a successful army attacks first and hard, a lesson absorbed by Napoleon a half century later. This meant that the spring season, which had become the season for drilling and training in preparation for summer campaigning, became a fighting season again.

But the modern iteration of the spring offensive is different from its ancient forebear. Its purpose isn’t to feed an army but to incapacitate enemies before they have the chance to strike. The strategy is a risky gambler’s throw, relying on timing and psychology as much as on strength and numbers.

For Napoleon, the spring offensive played to his strength in being able to combine speed, troop concentration and offensive action in a single, decisive blow. Throughout his career he relied on the spring offensive, beginning with his first military campaign in Italy (1796-7), in which the French defeated the more-numerous and better-supplied Austrians. His final spring campaign was also his boldest. Despite severe shortages of money and troops, Napoleon came within a hair’s breadth of victory at the Battle of Waterloo on June 18, 1815.

The most famous spring campaign of the early 20th century—Germany’s 1918 offensive in World War I, originated by Gen. Erich Ludendorff—reveals its limitations as a strategy. If the knockout blow doesn’t happen, what next?

At the end of 1917, the German high command had decided that the army needed a spring offensive to revive morale. Ludendorff thought that only an attack in the Napoleonic mode would work: “The army pined for the offensive…It alone is decisive,” he wrote. He was convinced that all he had to do was “blow a hole in the middle” of the enemy’s front and “the rest will follow of its own accord.” When Ludendorff’s first spring offensive stalled after 15 days, he quickly launched four more. Lacking any other objective than the attack itself, all failed, leaving Germany bankrupt and crippled by July.

In this century, the Taliban have found their own brutal way to renew the ancient tradition—with the blossoms come the bombs and the bloodshed.

WSJ Historically Speaking: How Mermaid-Merman Tales Got to This Year’s Oscars

ILLUSTRATION: DANIEL ZALKUS

‘The Shape of Water,’ the best-picture winner, extends a tradition of ancient tales of these water creatures and their dealings with humans

Popular culture is enamored with mermaids. This year’s Best Picture Oscar winner, Guillermo del Toro’s “The Shape of Water,” about a lonely mute woman and a captured amphibious man, is a new take on an old theme. “The Little Mermaid,” Disney’s enormously successful 1989 animated film, was based on the Hans Christian Andersen story of the same name, and it was turned into a Broadway musical that is still being staged across the country.

The fascination with mer-mythology began with the ancient Greeks. In the beginning, mermen were few and far between. As for mermaids, they were simply members of a large chorus of female sea creatures that included the benign Nereids, the sea-nymph daughters of the sea god Nereus, and the Sirens, whose singing led sailors to their doom—a fate Odysseus barely escapes in Homer’s epic “The Odyssey.”

Over the centuries, the innocuous mermaid became interchangeable with the deadly sirens. They led Scottish sailors to their deaths in one of the variations of the anonymous poem “Sir Patrick Spens,” probably written in the 15th century: “Then up it raise the mermaiden, / Wi the comb an glass in her hand: / ‘Here’s a health to you, my merrie young men, / For you never will see dry land.’”

In pictures, mermaids endlessly combed their hair while sitting semi-naked on the rocks, lying in wait for seafarers. During the Elizabethan era, a “mermaid” was a euphemism for a prostitute. Poets and artists used them to link feminine sexuality with eternal damnation.

But in other tales, the original, more innocent idea of a mermaid persisted. Andersen’s 1837 story followed an old literary tradition of a “virtuous” mermaid hoping to redeem herself through human love.

Andersen purposely broke with the old tales. As he acknowledged to a friend, his fishy heroine would “follow a more natural, more divine path” that depended on her own actions rather than those of “an alien creature.” Egged on by her sisters to murder the prince whom she loves and return to her mermaid existence, she chooses death instead—a sacrifice that earns her the right to a soul, something that mermaids were said to lack.

Richard Wagner’s version of mermaids—the Rhine maidens who guard the treasure of “Das Rheingold”—also bucked the “temptress” cliché. While these maidens could be cruel, they gave valuable advice later in the “Ring” cycle.

The cultural rehabilitation of mermaids gained steam in the 20th century. In T.S. Eliot’s 1915 poem, “The Love Song of J. Alfred Prufrock,” their erotic power becomes a symbol of release from stifling respectability. The sad protagonist laments, “I have heard the mermaids singing, each to each. / I do not think that they will sing to me.” By 1984, when a gorgeous mermaid (Daryl Hannah) fell in love with a nerdy man (Tom Hanks) in the film comedy “Splash,” audiences were ready to accept that mermaids might offer a liberating alternative to society’s hang-ups, and that humans themselves are the obstacle to perfect happiness, not female sexuality.

What makes “The Shape of Water” unusual is that a scaly male, not a sexy mermaid, is the object of affection to be rescued. Andersen probably wouldn’t recognize his Little Mermaid in Mr. del Toro’s nameless, male amphibian, yet the two tales are mirror images of the same fantasy: Love conquers all.