Thursday, July 10, 2025

How Christianity Created and Propagated Antisemitism

The term antisemitism is thrown about regularly in public discourse these days.  Perhaps now would be a good time to review some historical perspective on the term.

Early Christians were Jewish followers of Jesus, while the majority of Jewish people were not.  The Christians wished to claim the legacy of the Biblical prophecies and welcome Gentiles into their religion.  The existence of the Jewish nonbelievers was at best an embarrassment.  Christians had to convince others that they, not the nonbelieving Jews, were the true beneficiaries of God’s covenant with Abraham.  Magda Teter writes about how this dogmatic conflict drifted into social conflict and led to the recognition of Jews as a unique class of humans in her book Christian Supremacy: Reckoning with the Roots of Antisemitism and Racism.  The nature of anti-Jewish sentiment would evolve over the centuries and develop into an extremely threatening form often referred to as modern antisemitism.  David I. Kertzer, in the book The Popes Against the Jews: The Vatican's Role in the Rise of Modern Anti-Semitism, provides perspective on the role of the Vatican and the Catholic Church in encouraging antisemitism in the years leading up to the Holocaust.

From Teter’s book we learn the earliest attempt to resolve the issue of how to deal with the non-Christian Jews is attributed to Paul. 

“In Galatians, seeking to dissuade Judaizing Christians from following Jewish rituals and ceremonies, he wrote: ‘For it is written that Abraham had two sons, one by a slave woman and the other by a free woman.  One, the child of the slave, was born according to the flesh; the other, the child of the free woman, was born through the promise.  Now this is an allegory: these women are two covenants.  One woman, in fact is Hagar, from Mount Sinai, bearing children for slavery…But the other woman…she is free and she is our mother.’  Christians were now, Paul continued, the children of promise, like Isaac.  Jews, who continued to follow God’s commandments, were in turn, Hagar’s children, ‘born into slavery’.”

The free woman was Abraham’s wife, and the son was promised by God.  The son by the slave woman was the product of a mere act of fornication.  Paul’s interpretation would always place Christians in a position superior to the “born into slavery” Jews, and thus be popular with Christians, both Jews and Gentiles.  God’s covenant with Abraham was thus transferred to the Christians, and practicing Jews were deemed inferior to Christians before God.  This self-serving interpretation would consign practicing Jews to servitude and ultimately to slavery to their Christian masters.

Through the early centuries, Christians aspired to an influential role in the Roman Empire.  How to live with the Jews was often discussed, but only from the perspective of Christian superiority.  This viewpoint probably had little effect on the Jews until Constantine the Great allowed a greater degree of freedom for Christians.  Ultimately, in 380 AD, Emperor Theodosius I would make Christianity the state religion and provide Christian leaders the power to insert their feelings towards Jews into public practice.  The subservience of Jews must be maintained.  An initial horror over the possibility that a Jew might have control over a Christian slave evolved over time into restrictions barring Jews from any occupation that would suggest even the slightest ability to control a Christian.

“In pre-Christian Roman Empire, Jews were able to hold public office and were granted accommodations for their religious observance.  There was no concern about Jewish power over non-Jews, and the only concern about Jews’ holding slaves was over the slaves’ conversion to Judaism.  But in Christian Roman imperial law, the anxiety over proselytism was increasingly mixed with anxiety over Jewish authority and power.  The first explicit prohibition against Jews holding public office came in 418, declaring that ‘the entrance to the civil service [militiae] shall be closed from now to those living in Jewish superstition’.”

These sentiments would then be officially entered into canon law where they could be implemented in any nation with Christian leanings.

“A similarly explicit language was used by the VI Council of Toledo in 638, which ruled that ‘Jews should not be allowed to have Christian slaves, or buy them, or acquire them through gift of any man whatever; for it is monstrous that members of Christ should serve the ministers of Antichrist.’…Such anxieties about Jewish authority and power, in relation to public offices, especially judges and tax collectors, entered canon law through compilations by Burchardt of Worms or Gratian’s Decretales, thus becoming part of the long legal tradition that would have an impact for centuries to come.  In these legal texts and legal corpus, Jews were marked as inferior members of society, whose presence was not questioned but whose status was to be controlled.”

“There were no comparable laws for any other specific group in antiquity, Greek or Roman, pagan or Christian.”

Declaring a whole people to be inferior and in need of control places that whole people at risk of all sorts of abuse.  If they need to be controlled, one will question what they might do if they were not controlled.  The more different and the more disliked they are, the more bizarre the speculation is likely to be.  A constant source of irritation to the Christian leaders was a lack of gratitude on the part of the Jews for the Christian permission to coexist.

The faithful did not always follow the commands of their religious leaders.  There were always places where Jews and Christians learned to get along.  This infuriated Pope Innocent III, who in 1205 issued Etsi Judaeos, reminding people that it was ungodly to treat Jews as equals.

“’While Christian piety accepts the Jews, who, by their own guilt, are consigned to perpetual servitude because they crucified the Lord…and while [Christian piety] permits them to dwell in the Christian midst…the Jews ought not to be ungrateful to us and not requite Christian favor with contumely and intimacy with contempt.’  The pope then recounted ‘how Jews have become insolent’ and ‘hurled unbridled insults at the Christian faith,’ relating alleged reports that Jews made Christian nurses who had taken communion ‘pour their milk into the latrine.’  To stem this ‘insolence’ and violation of canon law that prohibited Jews from having Christian servants, he urged the king of France ‘to restrain the excesses of the Jews that they shall not dare to raise their neck, bowed under the yoke of perpetual slavery,’ and ‘to forbid them to have any nurses nor other kinds of Christian servants in the future, lest the children of a free woman should be servants to the children of a slave; but rather as slaves rejected by God, in whose death they wickedly conspired, they shall by the effect of this very action, recognize themselves as the slaves of those whom Christ’s death set free at the same time that it enslaved them’.”

The pope was reminding Christians of other nations that they must treat Jews as slaves just as their church does.  This was an attempt to propagate the church’s anti-Jewish practices into national legal systems.  This would certainly qualify as propagating what we would now call antisemitism.

Another pope, Paul IV, would provide the ultimate example of how to treat Jews in 1555 by enclosing them in a ghetto in Rome.

“In Cum nimis absurdum, Paul IV did not just reiterate the concept of Christian freedom and Jewish servitude; he wanted to embody it in real life, to reify it.  One of the most dramatic and tangible of outcomes was the creation of a restricted Jewish quarter ‘with one entry alone, and so to one exit,’ controlled by Christian authorities.  The premise of the bull was then implemented across Papal States and then in other principalities in Italy, where Jews were forced to live in ghettoes.”

From Wikipedia:

“The bull restricted Jews in other ways as well. They were forbidden to have more than one synagogue per city—leading, in Rome alone, to the destruction of seven "excess" places of worship. All Jews were forced to wear distinctive yellow hats especially outside the ghetto, and they were forbidden to trade in everything but food and secondhand clothes.  Christians of all ages were encouraged to treat the Jews as second-class citizens; for a Jew to defy a Christian in any way was to invite severe punishment, often at the hands of a mob.”

The discussion so far might suggest that anti-Jewish sentiment was restricted to the Catholic version of Christianity.  However, Teter tells us that no one was more virulently anti-Jewish than Martin Luther.  He expressed his feelings in On the Jews and Their Lies.

“’They live among us, enjoy our security and shield, they use our land and our highways, our markets and streets.’  But the princes and lords do nothing, and ‘permit the Jews to take, steal, and rob from their open money bags and treasures whatever they want.  That is, they let the Jews, by means of their usury, skin and fleece them and their subjects and make them beggars with their own money.  For the Jews, who are exiles, should really have nothing, and whatever they have must surely be our property.  They do not work and they do not earn anything…and yet they are in possession of our money and goods and are our masters in our own land and in their exile.’  The social hierarchy, thus, was turned upside down, the Christian social order disrupted.  More ominously, any material possessions and wealth Jews had was considered ill-gotten and illegitimate.”

But Luther had recommendations to make that would influence generations of antisemites in the future.

“Luther ended his long venomous work on Jews with recommendations of expulsion to ‘their land’ and their elimination from Christendom.  But his ideas would not gain ground until the late nineteenth and twentieth century, when modern antisemites rediscovered this particular work and began to use it as a font of new antisemitic ideas or to justify them by grounding them in historical sources.”

David I. Kertzer is interested in whether the Vatican was guilty of contributing to the events that led to the Holocaust.  The Catholic leadership was forced to ask itself the same question.  It is not surprising that the commission it founded managed to find the Church innocent.  The argument was that although the Church was guilty of generating anti-Judaic feelings over religious stances, these had nothing to do with the Holocaust.  Rather, the forms of nationalism developing in the nineteenth and twentieth centuries changed the focus on Jews from their religion to their social and economic activities.  It would be from that perspective that the Holocaust would develop.  The Church claimed it had nothing to do with that evolution.

Kertzer will have none of this.

“This argument, sadly, is not the product of a Church that wants to confront its history.  If Jews acquired equal rights in Europe in the eighteenth and nineteenth centuries, it was only over the angry, loud, and indeed indignant protests of the Vatican and the Church.  And if Jews in the nineteenth century began to be accused of exerting a disproportionate and dangerous influence, and if a form of anti-Judaism ‘that was essentially more sociological and political than religious’ was taking shape, this was in no small part due to the efforts of the Roman Catholic Church itself.”

“What, after all, were the major tenets of this modern anti-Semitic movement if not such warnings as these: Jews are trying to take over the world; Jews have already spread their voracious tentacles around the nerve centers of Austria, Germany, France, Austria, Poland, and Italy; Jews are rapacious and merciless, seeking at all costs to get their hands on all the world’s gold, having no concern for the number of Christians they ruin in the process; Jews are unpatriotic, a foreign body ever threatening the well-being of the people with whom they live; special laws are needed to protect society, restricting the Jews’ rights and isolating them.  Every single one of these elements of modern anti-Semitism was not only embraced by the Church but actively promulgated by official and unofficial Church organs.”

Kertzer provides the following selection of quotes to prove his point.  They were taken from articles written for Civiltà cattolica, the unofficial medium by which the Vatican disseminated its views, and for L’Osservatore romano, the Vatican’s own publication.  They were published in the late nineteenth century.

“The Jews—eternal insolent children, obstinate, dirty, thieves, liars, ignoramuses, pests and the scourge of those near and far….They managed to lay their hands on …all public wealth…and virtually alone they took control not only of all the money…but of the law itself in those countries where they have been allowed to hold public offices.”

“The whole sinew of modern Judaism—that is of the antisocial, antihumanitarian, and above all anti-Christian law that the Jews now observe believing that they are obeying mosaic law—consists essentially in that fundamental dogma according to which the Jew cannot and should not ever recognize as his fellow human being anyone other than a Jew.  All others, whether Christian or non-Christian, must be considered, by every good Jew observing his law,…as hateful enemies, to be persecuted and, if possible, exterminated…from the face of the earth.”

“…brotherhood and peace were and are merely pretexts to enable them to prepare—with the destruction of Christianity, if possible, and with the undermining of the Christian nations—the messianic reign that they believe the Talmud promises them.”

“…if this foreign Jewish race is left too free, it immediately becomes the persecutor, oppressor, tyrant, thief, and devastator of the countries where it lives.”

“The whole Jewish race…is conspiring to achieve this reign over all the world’s peoples.”

“…the Jews truly do murder Christians to use their blood in their detestable Talmudic and rabbinical rites…”

“Content yourselves…with the Christians’ money, but stop shedding and sucking their blood.”

Even when war struck and the Holocaust was already underway, the Church was in no position to complain about the poor treatment of Jews because the ready rejoinder from the Nazis and Italians was always “we are merely treating the Jews the same way you treated them.”

Even more ominously, the Vatican made it clear to civil governments that they had the right to address the problem of Jews in any way they saw fit.  Kertzer provides an apt quote from a sermon given by the Bishop of Cremona in 1939 as Mussolini was implementing his racial laws against the Jews.

“Only a few weeks after the second wave of Italian racial laws was announced, the bishop told his congregation: ‘The Church has never denied the state’s right to limit or to impede the economic, social, and moral influence of the Jews, when this has been harmful to the nation’s tranquility and welfare.  The Church has never said or done anything to defend the Jews, the Judaics, or Judaism.’  The sermon received broad attention, and was quoted in the Vatican’s own Osservatore romano.”

This is a history that too few are familiar with, a history that needs to be told and understood for what it reveals.  We must realize that religion can be a powerful yet dangerous thing.  Perhaps the most frightening words ever spoken are “God is on our side.”

A new definition of antisemitism is being promoted: any criticism of Israel is antisemitic.  There are reasons why Israel should be criticized.  There are reasons why Israel should be detested.  Punishing anyone who possesses these views does nothing to protect Jews from actual antisemitism.  Most Jews live in the diaspora.  Do any of them feel safer because of what Netanyahu and Trump are doing?


Sunday, June 29, 2025

The Counterintuitive Nature of Cancer Screening

The first rule of cancer recovery is the need for early detection and treatment.  Researchers are continually seeking more accurate methods to deliver those early discoveries.  Yet, when physicians discuss cancer screening, they seem hesitant to recommend it, as if screening too often or too early might be bad for a patient.  Siddhartha Mukherjee provides an enlightening summary of the relevant issues in a New Yorker article: The Catch in Catching Cancer Early.

Mukherjee tells the reader that several things must be recognized to understand the issues.  The first is that not all cancers are equal.  In fact, not all cancers are even dangerous.  Detecting them is one thing, responding to them is another.

“We have become adept at locating cancer’s physical presence—its corporeal form—but remain largely blind to its character, its behavior, its future. We employ genomic assays and histopathological grading, but many early-stage tumors remain biologically ambiguous. They might be the kind of early cancers that surgery can cure. They might be slow-growing and unlikely to cause harm. Or, most concerning, they might already have metastasized, rendering local intervention moot. Three possibilities—yet we often cannot tell which we’re confronting.”

The second concern that must be recognized is that there can be numerous false positives, leading to unnecessary medical interventions that might risk the health of the patient.  Mukherjee uses the example of a group of 1,000 smokers being screened, with one case of cancer present among them.  The screening procedure seems quite good, with an accuracy of 99%.

“So what does it signify if someone in the group tests positive—what are the chances the person actually has cancer? Bayesian arithmetic gives a surprising answer: the test can be expected to identify the one person who actually has cancer, but it will also wrongly flag about ten people who don’t. That means there will be roughly eleven positive results, but only one of them is accurate. The chance that someone who tests positive has cancer, then, is just over nine per cent. In other words, eleven people would be sent for follow-up procedures, such as biopsies. And ten of them would go through a risky and invasive process—which can involve a punctured lung, bleeding, or other complications—with no benefit.”
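The arithmetic in that example is simple enough to verify directly.  Below is a minimal sketch in Python using the figures assumed in the quoted passage (one actual cancer among 1,000 people screened and a test that is 99% accurate); the variable names are purely illustrative.

```python
# Minimal sketch of the Bayes arithmetic in Mukherjee's example.
# Assumed inputs, taken from the quoted passage: one actual cancer
# among 1,000 smokers screened, and a 99%-accurate test, meaning it
# catches the true case but also flags about 1% of healthy people.
population = 1000
true_cases = 1
accuracy = 0.99
false_positive_rate = 1 - accuracy

true_positives = true_cases * accuracy                              # ~1 person
false_positives = (population - true_cases) * false_positive_rate   # ~10 people

total_positives = true_positives + false_positives
ppv = true_positives / total_positives  # chance a positive result is a real cancer

print(f"Expected positive results: {total_positives:.1f}")  # about 11
print(f"Chance a positive is real: {ppv:.1%}")               # about 9%
```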

It is critical that the statistical reality from this example be understood.  Apparently, it is either not understood, or it is being ignored.

“The consequences of ignoring these principles are staggering. In 2021, according to one estimate, the United States spent more than forty billion dollars on cancer screening. On average, a year’s worth of screenings yields nine million positive results—of which 8.8 million are false. Millions endure follow-up scans, biopsies, and anxiety so that just over two hundred thousand true positives can be found, of which an even smaller fraction can be cured by local treatment, like excision. The rest is noise mistaken for signal, harm mistaken for help.”

There are other statistical approaches that have been used to suggest that screening is always beneficial, but Mukherjee debunks them and concludes that the only way to show that screening works is to demonstrate that people who were screened actually live longer than those who were not.  This is a long and expensive process, but it has been done.

Colonoscopies have been proven beneficial as a screening process.  A colonoscopy is a relatively low-risk procedure that allows a suspicious structure to be removed during the exam for detailed study after the fact.  If the growth is cancerous, its nature can then be identified.

“The point isn’t that screening can’t pay off. The success stories are real. In 2022, The New England Journal of Medicine published the results of a landmark colonoscopy trial involving 84,585 participants in Poland, Norway, and Sweden. After more than a decade, the data showed an estimated fifty-per-cent reduction in deaths associated with colorectal cancer among those who received colonoscopies. Every four to five hundred colonoscopies prevented a case of colorectal cancer. The benefit was real—but demonstrating it required years of painstaking research.”

Not all endeavors have been so successful.

“The effectiveness of screening varies dramatically by cancer type. Consider ovarian cancer, a disease that often remains hidden until it has scattered itself across the abdomen. In 1993, researchers launched a major trial to test whether annual ultrasounds and blood tests could lower mortality. The scale was extraordinary: more than seventy-eight thousand women enrolled, half randomly assigned to screening. For four years, they endured transvaginal ultrasounds; for six years, routine blood draws. Then came more than a decade of monitoring.”

“And what did we learn? Among the screened, 3,285 received false positives. More than a thousand underwent unnecessary surgeries. A hundred and sixty-three suffered serious complications—bleeding, infection, bowel injury. But after eighteen years there was no difference in mortality. Even with three to six additional years of follow-up, the results held.”

There is an intriguing opportunity to replace the current screening procedures with a simple blood test, simple for the patient at least.  As part of the normal cellular life cycle, cells die and shed their DNA in bits and pieces into the bloodstream.  Researchers have determined that cancer cells go through the same process and that DNA from cancerous tumors can be detected in a blood sample.  In principle, one could detect the existence of cancer this way and also determine properties of the cancer that would inform a treatment plan.  Also in principle, one could screen not just for one form of cancer but for many at once.  There are research teams trying to accomplish that and demonstrating the potential of the approach.  It seems an incredibly difficult task, but an absolutely necessary one.  Mukherjee details the state of progress and ends with guarded optimism.

“Perhaps, in time, we’ll build tools that can not only detect cancer’s presence but predict its course—tests that listen not just for signals but for intent. Early work with cell-free DNA hints at this possibility: blood tests that may one day tell us not only where a cancer began but whether it’s likely to pose a threat to health. For now, we dwell in a liminal space between promise and proof. It’s a space where hope still outpaces certainty and the holy grail of perfect screening remains just out of reach.”

 

Saturday, June 21, 2025

Boys Seem to be Losing in the Parenting Game: Gender Preference

Evolution has tried its hardest to produce equal numbers of male and female babies.  However, humans are cultural animals, and they develop preferences that effectively alter the ratio through abortion, infanticide, or other means of disposing of offspring of an unwanted gender.  For most of recorded human history, male cultural dominance has produced a tendency to view a male offspring as more important than a female one.  A fascinating article in The Economist, More and more parents around the world prefer girls to boys, tells us that this attitude is changing.

In some developing cultures, male preference has created an overabundance of males, or a cohort of disappeared females.  This was a common situation in countries such as China and India, but recent data from developing nations indicates that fewer girls are disappearing and gender ratios are approaching the natural value.  That is definitely a significant and pleasing discovery.  However, recent data from wealthier countries seems to indicate that this shift may be part of a broader trend favoring girl children over boys.

 When women wanted to revolt against male domination they took to the streets to protest their plight and worked and studied hard to compete with men for jobs and other positions.  As girls/women became more prominent in society’s activities, many boys/men seem to be unwilling or unable to compete. 

“The growing desire for daughters may also reflect the social ills that afflict men in much of the rich world. Men still dominate business and politics and earn more for the same work almost everywhere—but they are also more likely to go off the rails. In many rich countries, teenage boys are more likely to be both perpetrators and victims of violent crimes. They also are more likely to commit suicide. Boys trail girls at all stages of education and are expelled from school at far higher rates. They are less likely than women to attend university.”

This has led to many activities aimed at helping boys become more functional members of society. 

“A telling sign of the general alarm about boys in the rich world is the interest politicians have begun taking in the subject. Last year Britain’s Parliament opened an investigation into male underachievement in schools. Norway has gone a step further, launching a Men’s Equality Commission in 2022. Its final report in 2024 concluded that tackling challenges for boys and men would be the ‘next step’ in gender equality.”

“Legislators across America’s political spectrum are making similar noises. Utah’s governor, Spencer Cox, a Republican, has created a task-force on male well-being; Maryland’s governor, Wes Moore, a Democrat, has committed to 'targeted solutions to uplift our men and boys'; Michigan’s governor, Gretchen Whitmer, a woman (and a Democrat), wants to get more young men into Michigan’s colleges and vocational courses.”

Prospective parents are aware of these issues, and they seem to take them into account when considering having a child.  The article provides plenty of evidence that preferences are strongly tilting towards girl babies.

“In the rich world…evidence is growing of an emerging preference for girls. Between 1985 and 2003, the share of South Korean women who felt it ‘necessary’ to have a son plunged from 48% to 6%, according to South Korea’s statistics agency. Nearly half now want daughters. Similarly in Japan, polls suggest a clear preference for girls. The Japanese National Fertility Survey, a poll conducted every five years, shows that in 1982, 48.5% of married couples wanting only one child said they would prefer a daughter. By 2002, 75% did.”

“In America parents with only daughters were once more likely than parents with only sons to keep having children, presumably to try for a boy. That was the thesis set out in a study published in 2008 by Gordon Dahl of the University of California, San Diego, and Enrico Moretti at the University of California, Berkeley. The report, which analysed census data from 1960 to 2000, concluded that parents in America favoured sons.”

“That preference has since reversed, however. A study in 2017 led by Francine Blau, an economist at Cornell University, found that having a girl first is now associated with lower fertility rates in America. The research, which uses data from 2008 to 2013, suggested a preference for girls among married couples.”

Adoption and IVF provide opportunities for prospective parents to express their gender preferences.

“Adoptive parents, too, tend to want girls. Those in America were willing to pay as much as $16,000 to secure a daughter, according to a study published in 2010. In 2009 Abbie Goldberg of Clark University asked more than 200 American couples hoping to adopt whether they wanted a boy or a girl. Although many of them said they did not mind, heterosexual men and women and lesbians all leaned on average towards girls; only gay men preferred boys.”

“IVF and other fertility treatments are also becoming cheaper, more effective and so more common. In America, where sex-selective IVF is legal, around a quarter of all IVF attempts now lead to live births, compared with 14% during the 1990s. Some 90% of couples who use a technique called sperm-sorting to select the sex of their child said they wanted a balance of sons and daughters. Even so, in practice 80% of them opted for girls. If that imbalance endures even as such methods spread, America’s sex ratios will soon start to skew.”

The article also provided this interesting perspective on the factors that enter into child gender preference.

“Competitive parents may see girls as more likely to reflect well on them than boys. After all, boys develop fine motor skills later than girls. They are also worse at sitting still. Those are drawbacks in a world of toddlers’ music lessons and art classes. ‘We no longer have trophy wives,’ says Richard Reeves, president of the American Institute for Boys and Men, which seeks to remedy male social problems. ‘We have trophy kids’.”

If you are someone who sees your child as an indicator of your status as a human being and as a parent, you are more likely to look good if you choose a girl baby.

 

Wednesday, June 11, 2025

Police, Protestors, Violence, and Tear Gas (Tear Gas Is Illegal in Warfare, but Legal on Protestors)

As this note is being written, organized protests have broken out in Los Angeles against Trump’s immigrant deportation activities.  These protests are far from being the biggest and the most contentious that the LAPD has ever had to deal with.  No request for assistance was made, yet Trump decided to inflate the affair by sending in federalized National Guard troops.  In addition, several hundred Marines were ordered to go to LA and assist the National Guard troops.  Whatever one might think about the legality or intention of what Trump is doing, one should be questioning the wisdom of sending in people trained to use lethal force against enemies of the state to do duty controlling fellow citizens staging a protest.  The LAPD knows it will face these situations and has plans for how to deal with them.  Marines and National Guard soldiers do not.

Kevin M. Kruse and Julian E. Zelizer have produced an interesting collection of essays by noted historians in the book Myth America: Historians Take On the Biggest Legends and Lies About Our Past.  One is titled Police Violence, by Elizabeth Hinton, a Yale professor of history.  According to her, the myth that police are merely doing their duty when using force against protestors produces a dangerous misunderstanding of our history.

“…officials and much of the American public have widely assumed that police violence primarily occurs as a reaction to community provocation.  The myth holds that the fires and the looting of the immediate post-civil rights period, and in our own time, began with disaffected groups themselves; the police were ‘merely doing their jobs’ in reacting to a dangerous situation with force.”

“Contrary to the fearmongering rhetoric of politicians, history reveals that police violence very often inflamed community violence, not the other way around.  Although protestors are often blamed for creating violent situations where the police are forced to respond in kind, from Miami in 1967 to the George Floyd uprisings in 2020, law enforcement officials were the instigators.  Indeed, protests have grown more peaceful since the fiery post-civil rights era, but the police response to them has escalated in its violence.  Dominant narratives have confused where the responsibility lies, in part because of the police’s increased reliance on tear gas and other chemical weapons that tend to be regarded as relatively benign riot-control tactics.”

Tear gas is not the only “benign” crowd control weapon available to the police; rubber bullets, bean bag rounds, and stun grenades are also available.

That premature violence was counterproductive in crowd control was well known back in the 1960s.

“The widely distributed 1967 FBI manual called Prevention and Control of Mobs and Riots recognized that the application of force was a delicate matter, for its premature use ‘would only contribute to the danger, aggravate the mob, and instill in the individual a deep-rooted hatred of the police.’  Likewise, in its 1969 report the National Commission on the Causes and Prevention of Violence concluded that ‘excessive force is an unwise tactic for handling disorder’ that ‘often has the effect of magnifying turmoil not diminishing it’.”

Tear gas was invented during World War I to render an enemy incapable of fighting back and force him to retreat without killing him.  In 1925, the international community banned its use in warfare, but for reasons of convenience nations decided it was okay to use it against their own misbehaving citizens.

“Tear gas had been intended for the battlefield, to harm enemy forces for a brief period of time without leaving a trail of blood.  The chemical weapon produced immediate effects, attacking the senses of its intended targets, leaving them incapacitated anywhere from ten seconds to ten minutes and causing, as the army manual Military Chemistry and Chemical Agents described it…’extreme burning of the eyes accompanied by copious flow of tears, coughing, difficulty in breathing, and chest tightness, involuntary closing of the eyes, stinging sensations of moist skin, running nose, and dizziness or swimming of the head’.”

“The 1993 Chemical Weapons Convention upheld the international prohibition of tear gas and other chemical weapons in warfare, but they are still regarded as humane alternatives to violently arresting or shooting into crowds of people, and law enforcement officials around the world are still permitted to apply the devices at their discretion.”

The summer of 2020 with its numerous protests following George Floyd’s murder gave Donald Trump opportunities to demonstrate his feelings for his fellow citizens.

“One of the most egregious incidents of that summer involved US Park Police and Secret Service agents who used tear gas, riot batons, and other weapons against nonviolent protestors in Lafayette Square near the White House to make way for a photo opportunity for Trump in front of the nearby St. John’s Church.”

Let us hope that the people in charge of dealing with the current protests, and the many future protests, against the immigrant deportation activities realize that using chemical weapons to torment people who attended peacefully, because a few troublemakers angered you, is not a good strategy.  The anger, as well as the number of troublemakers, will only increase.

Trump may or may not understand this dynamic.  He probably doesn’t care.  He is acting as though he sees a benefit from inducing violence.  Yes, things can get much worse.


Monday, June 2, 2025

Ukraine, Russia, and Fighting the Next War: Fighter Jets

Technological developments have made it difficult and expensive to prepare for the next military conflict.  The mindsets of the military leaders and defense contractors call for weapons of greater lethality and greater survivability.  These goals, of course, lead to greater expense, which results in the ability to pay for ever smaller numbers of weapons, which in turn leads to even greater emphasis on lethality and survivability, and even greater costs.  This military, technological, and political dance came face to face with an example of how the next war might be fought.  When Russia invaded Ukraine some three years ago, it was interesting to note that a vast superiority in manpower, tanks, and other mechanized vehicles was of little value to the Russians.  They were prepared for an easy victory, while the Ukrainians viewed the invasion as a more severe version of the war they had been fighting since 2014.  Russia was forced to surrender many of its initial gains but was able to retreat to lines it could hold, and the battle became essentially a stalemate.

Then things got really interesting.  Ukraine, at a material disadvantage in almost all areas, began to look into strategies that could be effective at little cost.  Thus began the battlefield use of surveillance and attack drones.  It was discovered that a few small weaponized attack drones could be used to destroy tanks and other armored vehicles, and that those expensive beasts had little opportunity to go unobserved while in motion.  Russia was forced to duplicate such tactics.  Both sides compete for the slightest advantage, with new systems being introduced in weeks or months.  The battle line now comes with a kill zone, extending about ten miles, in which no one can be safe.  If the Russians wish to attack, they must cross that space as quickly as possible without any vulnerable troop concentrations.  Traditional military vehicles are too dangerous.  Russia seems to have decided that individual motorcycles are the best bet for an attacking force.

Ukraine has also used relatively cheap marine drones to drive Russian warships away from its shores, another example of inexpensive technology putting at risk tremendously expensive assets.

One has to wonder what NATO forces think about these developments.  Are the weapons in their warehouses rapidly becoming obsolete?  Will NATO generals have to be taught how to wage war by the Ukrainians?  Are defense contractors trying to figure out how to make billions of dollars manufacturing motorcycles?

Another area in which tactics have moved in an unexpected direction is air warfare.  There are no tales of dogfights between jet pilots because neither side can afford to let its expensive jets cross into enemy territory because air defenses have become so effective.  Fighter pilots launch missiles at the enemy from afar or they help provide defense against missiles and drones.  Thus far there does not appear to be any cheap technology that could change this picture.  The enemy’s planes are most at risk when they are on the ground.  Both sides try to use missiles and drones, whatever is available, to take out planes they can reach.  The Russians have had to move their planes further back into Russia or keep them in armored bunkers to protect them.  As this article was being written, Ukraine demonstrated an audacious way to use inexpensive drones to destroy Russia’s advanced planes. More on that below.

The F-16s that took so long to migrate to Ukraine for use are considered fourth-generation planes.  The US has already developed a fifth-generation plane, the F-35.  That plane is a Lockheed Martin product whose development began in 1996.  The contract for the company to produce the F-35 was signed in 2001.  Full-rate production did not begin until 2021.  It took over twenty years to go from concept to mass production.  The world’s militaries seem poised to commit to producing sixth-generation fighters. 

The Economist summarizes what is known of plans in the article The race to build the fighter planes of the future.  The need for more advanced models is influenced by a number of factors, including fear of the performance of current air defense systems as experienced in the Ukraine-Russia conflict.

“One shift they all predict is more, and better, surface-to-air missile systems, a lesson reinforced by the strong performance of air defences in Ukraine. That requires more stealth to keep planes hidden from enemy radar. Stealth, in turn, requires smooth surfaces—bombs and missiles cannot hang off the wing, but must be tucked away inside a larger body.” 

There are a number of developments underway, all indicating much larger structures to hold interior weapons, more fuel for longer range, and more complex electronics.  The US entry is suspiciously named the F-47.  More troubling is the award of the development contract to Boeing, which, of late, seems to have problems doing anything.  Little is known of the F-47 except that it is expected to be much larger than either the F-35 or F-22.

“In December China showed off what was believed to be a prototype of the J-36, an imposing plane with stealthy features and a large flying-wing design. Britain, Italy and Japan are co-developing their own plane, in Britain provisionally called the Tempest, which is due to enter service in 2035. France, Germany and Spain hope that their Future Combat Air System (FCAS) will be ready by 2040. Together, these represent the future of aerial warfare.”

One reason why longer range is required is security for the planes when not in use. 

“Finally, planes are especially vulnerable to long-range missiles when they are on the ground. That means they need to fly from more distant airfields, requiring larger fuel tanks and less drag for more efficient flight. The huge wings seen on the Tempest and the J-36 allow for both those things…Range is a particular concern for America. Its airbases in Japan are within reach of vast numbers of Chinese ballistic missiles. It plans to disperse its planes more widely in wartime and to fly them from more distant runways, such as those in Australia and on Pacific islands.”

Any strategy for development of expensive future aircraft must consider Ukraine’s most recent use of cheap drones.  The Economist provides comments in the article An astonishing raid deep inside Russia rewrites the rules of war.  The following lede is provided.

“Ukraine’s high-risk strikes damage over 40 top-secret strategic bombers”

Ukraine managed to station numerous small weaponized drones on trucks near multiple Russian air bases.  A mechanism could be activated to release the drones at the appropriate time, and, as first-person-view drones, they could be controlled by operators in Ukraine.  They were too small and too close in to be defended against.  And it does not take much to destroy an expensive plane.

“Today’s operation is likely to be ranked among the most important raiding actions in modern warfare. According to sources, the mission was 18 months in the making. Russia had been expecting attacks by larger fixed-wing drones at night and closer to the border with Ukraine. The Ukrainians reversed all three variables, launching small drones during the day, and doing so far from the front lines.”

“Commentators close to the Ukrainian security services suggest that as many as 150 drones and 300 bombs had been smuggled into Russia for the operations. The quadcopters were apparently built into wooden cabins, loaded onto lorries and then released after the roofs of the cabins were remotely retracted. The drones used Russian mobile-telephone networks to relay their footage back to Ukraine, much of which was released by the gleeful Ukrainians.”

So Ukraine did figure out a way to take out troublesome enemy planes, and it did not cost much or require any exotic technologies.  There is a lesson in this event for all those who wish to build ever more expensive aircraft.

“Western armed forces are watching closely. For many years they have concentrated their own aircraft at an ever smaller number of air bases, to save money, and have failed to invest in hardened hangars or shelters that could protect against drones and missiles. America’s own strategic bombers are visible in public satellite imagery, sitting in the open. ‘Imagine, on game-day,’ writes Tom Shugart of CNAS, a think-tank in Washington, ‘containers at railyards, on Chinese-owned container ships in port or offshore, on trucks parked at random properties…spewing forth thousands of drones that sally forth and at least mission-kill the crown jewels of the [US Air Force].’ That, he warns, would be ‘entirely feasible’.”

Perhaps people should learn how to protect the assets they already have before spending a fortune and a decade or two on new assets.

 

Tuesday, April 8, 2025

Famine and Migration: The Luck of the Irish

Most people with a minimal education have heard of Ireland’s Great Famine or recognize it by the labels “Irish Potato Famine” or “The Great Hunger.”   It is known as a terrible event in which a large fraction of the population either died or left Ireland.  We in the United States recognize the surge of Irish immigrants as a significant event in our own history.  But do we ever stop and ask ourselves how it was possible in the nineteenth century for a country just a few miles from England and Europe to suffer mass starvation over a period of several years because potato crops failed?  Does this imply that the Irish only had one crop?  Does this suggest the Irish could raise only one crop?  Not at all.  The real cause of the tragedy was British denigration of the Irish as a people, and an absurd British faith in economic principles.

Fintan O’Toole provides a wide-ranging description of the famine era in a New Yorker article titled What Made the Irish Famine So Deadly.  O’Toole’s work was prompted by a book on the famine, “Rot,” by Padraic X. Scanlan.

“There have been, in absolute terms, many deadlier famines, but as Amartya Sen, the eminent Indian scholar of the subject, concluded, in “no other famine in the world [was] the proportion of people killed . . . as large as in the Irish famines in the 1840s.” The pathogen that caused it was a fungus-like water mold called Phytophthora infestans. Its effect on the potato gives ‘Rot,’ a vigorous and engaging new study of the Irish famine by the historian Padraic X. Scanlan, its title. The blight began to infect the crop across much of western and northern Europe in the summer of 1845. In the Netherlands, about sixty thousand people died in the consequent famine—a terrible loss, but a fraction of the mortality rate in Ireland.”

“Only about one in three people born in Ireland in the early eighteen-thirties would die at home of old age. The other two either were consumed by the famine or joined the exodus in which, between 1845 and 1855, almost 1.5 million sailed to North America and hundreds of thousands to Britain and Australia, making the Irish famine a central episode in the history of those countries, too.”

No one now has any memory of a famine in the Netherlands.  What was it about Ireland that produced such a terrible disaster?

The first thing to realize is Ireland produced a lot of food, but the British did not allow the Irish to possess much of it.

“The problem was not that the land was barren: Scanlan records that, “in 1846, 3.3 million acres were planted with grain, and Irish farms raised more than 2.5 million cattle, 2.2 million sheep and 600,000 pigs.” But almost none of this food was available for consumption by the people who produced it. It was intended primarily for export to the burgeoning industrial cities of England. Thus, even Irish farmers who held ten or more acres and who would therefore have been regarded as well off, ate meat only at Christmas.”

From 1801, Ireland was considered part of the United Kingdom, but for practical purposes it was a colony inhabited by people of a lower rank than the British.

“In the mid-nineteenth century, Scanlan notes, fewer than four thousand people owned nearly eighty per cent of Irish land. Most of them were Protestant descendants of the English and Scottish settlers who benefitted from the wholesale expropriation of land from Catholic owners in the seventeenth century. Many lived part or all of the year in England. They rented their lands to farmers, a large majority of whom were Catholics. Scanlan points out that, whereas in England a tenant farmer might pay between a sixth and a quarter of the value of his crops in rent, in Ireland ‘rent often equalled the entire value of a farm’s saleable produce’.”

The Irish had no particular affinity for potatoes, but the potato was an easy crop to grow, and it was nourishing.  It would be the colonial extraction of the fruit of their agricultural labor that would drive the dependence on the potato.

“Their historically varied diet, based on oats, milk, and butter, had been reduced by economic oppression to one tuber. Nor were they reluctant to work for wages. Many travelled long distances to earn money as seasonal migrant laborers on farms in England and Scotland, and Irish immigrants were integrating themselves into the capitalist money economy in the mills of Massachusetts and the factories of New York.”

Farmers unable to survive the large rents charged by the landowners had no choice but to sublet small parcels of land to others, who then had to grow enough food on those parcels to survive.  Ultimately, with a growing population, these parcels shrank to the point that the only crop that could provide hope of survival was the potato.

“Landlords could extract these high rents because their tenants, in turn, made money by subletting little parcels of land, often as small as a quarter of an acre, to laborers who had none of their own. The whole system was possible only because of the potato. Most years, those micro farms could produce enough of this wonder crop to keep a family alive. It provided enough calories to sustain hardworking people and also delivered the necessary minerals and vitamins. By the eighteen-forties, as many as 2.7 million people (more than a quarter of the entire population) were surviving on potatoes they grew in tiny fields that encroached on ever more marginal land, clinging to bogs and the sides of stony mountains.”

The potato blight began in 1845.  It spread further in 1846, making it clear that something must be done, but, in the British view, it must be done with the realization that providing unearned aid to people so backward that they were willing to live on something as primitive as a potato crop would obviously encourage even greater laziness and lack of responsibility.

“In a neatly circular argument, the conditions that had been forced on the laboring class became proof of its moral backwardness. It was relatively easy to plant and harvest potatoes—therefore, those who did so had clearly chosen the easy life. ‘Ireland, through this lens,’ Scanlan writes, ‘was a kind of living fossil within the United Kingdom, a country where the majority of the poor were inert and indolent, unwilling and unable to exert themselves for wages and content to rely on potatoes for subsistence’.”

One might have thought the solution would have been to allow the Irish to share in the agricultural rents they were paying the British, but that would have stolen the property of the deserving British landowners and given it to undeserving Irish workers.  The solution chosen was to import food from America and make it available to the starving Irish.  But the food could not be a grant.  To avoid the moral hazard of encouraging bad behavior by the poorly performing Irish, the food would only be available to those who were industrious enough to be able to pay for it.

“The idea of Irish indolence fused with a quasi-religious faith in the laws of the market to shape the British response to the famine. In its first full year, 1846, Robert Peel’s Conservative government imported huge quantities of corn, known in Europe as maize, from America to feed the starving. The government insisted that the corn be sold rather than given away (free food would merely reinforce Irish indolence), and those who received it had little idea at first how to cook it. Nonetheless, the plan was reasonably effective in keeping people alive.”

“At the end of July, 1846, it became crushingly obvious that the blight had spread even wider, wiping out more than ninety per cent of the new crop. By then, most of the poor tenants had sold whatever goods they had, leaving nothing with which to stave off starvation. Fishermen on the coasts had pawned their nets for money to buy maize. The terrible year that followed is still remembered in Ireland as Black ’47, though the famine would, in fact, last until 1852.”

A new British government would take control and provide a second absurd approach to dealing with the starving Irish: they would enjoy employment in public works projects, allowing them to learn the benefits of hard work.

“The Liberals, under Lord John Russell, were determined that what they saw as an illegitimate intervention in the free market should not be repeated. They moved away from importing corn and created instead an immense program of public works to employ starving people—for them, as for the Conservatives, it was axiomatic that the moral fibre of the Irish could not be improved by giving them something for nothing. Wages were designed to be lower than the already meagre earnings of manual workers so that the labor market would not be upset.”

“The result was the grotesque spectacle of people increasingly debilitated by starvation and disease doing hard physical labor for wages that were not sufficient to keep their families alive. Meanwhile, many of the same people were evicted from their houses as landowners used the crisis to clear off these human encumbrances and free their fields for more profitable pasturage. Exposure joined hunger and sickness to complete the task of mass killing.”

O’Toole believes the story of this tragic abuse of the Irish is important to those living in the current era.  The economic/religious beliefs that drove the British of that period are still alive today.

“Above all, ‘Rot’ reminds us that the Great Hunger was a very modern event, and one shaped by a mind-set that is now again in the ascendant. The poor are the authors of their own misery. The warning signs of impending environmental disaster can be ignored. Gross inequalities are natural, and God-given. The market must be obeyed at all costs.”

There is one aspect of this tale in which the poor Irish were allowed a bit of “luck.”  In that era, migration was possible.  Otherwise, many more would have died.

“There is only one thing about the Irish famine that now seems truly anachronistic—millions of refugees were saved because other countries took them in. That, at least, would not happen now.”

  

Saturday, March 29, 2025

Alzheimer’s Disease Research Goes Viral

When the brains of people who have been diagnosed with Alzheimer’s disease are examined, clusters and tangles of two proteins, amyloid and tau, are observed.  It has long been assumed by most experts that the disease is caused by these protein clusters interfering with neural function.  Research on this path has focused on decreasing the accumulation of these proteins in the brain.  However, progress has been small, and it has not yet been determined whether amyloid and tau are a cause or an effect of the disease.  An article in The Economist, Do viruses trigger Alzheimer’s?, summarizes some recent discoveries that support the possibility that the disease could be activated by viruses.

“In the summer of 2024 several groups of scientists published a curious finding: people vaccinated against shingles were less likely to develop dementia than their unvaccinated peers. Two of the papers came from the lab of Pascal Geldsetzer at Stanford University. Analysing medical records from Britain and Australia, the researchers concluded that around a fifth of dementia diagnoses could be averted through the original shingles vaccine, which contains live varicella-zoster virus. Two other studies, one by GSK, a pharmaceutical company, and another by a group of academics in Britain, also reported that a newer ‘recombinant’ vaccine, which is more effective at preventing shingles than the live version, appeared to confer even greater protection against dementia.”

Kudos go to Professor Ruth Itzhaki, who long ago detected a potential correlation between Alzheimer’s and a common virus that infects much of the population but rarely infects the brain.  She observed that when the virus does manage to infect the brain, it can cause severe inflammation in regions that are sites of Alzheimer’s disease damage.

“Ruth Itzhaki, formerly of Manchester University and now a visiting professor at the University of Oxford, has championed this idea for almost 40 years. The bulk of her work has focused on herpes simplex virus 1 (HSV1), best known for giving people cold sores, which infects around 70% of people, most without symptoms. The virus normally lives outside the brain, where it can lie dormant for years. It is flare-ups that can lead to cold sores.”

“In rare cases, the virus can also lead to massive inflammation in the same brain areas that are most affected by Alzheimer’s. In experiments conducted in the early 2000s, Professor Itzhaki found that if she infected lab-grown human brain cells with HSV1, amyloid levels inside the cells increased dramatically. That led her to suspect a causal connection.”

“What’s more, in 1997 Professor Itzhaki found that people with a genetic variant known to increase Alzheimer’s risk, ApoE4, were only more likely to get the disease if they also had HSV1 in their brain. In 2020 a group of French scientists showed that repeated activations of the virus, seemingly harmless in people without ApoE4, more than tripled the chance of developing Alzheimer’s in those with it.”

One would have hoped that Itzhaki’s research would have generated interest much earlier.  Medical researchers are now struggling to catch up.

“In a bid to push forward Professor Itzhaki’s theory, a group of 25 scientists and entrepreneurs from around the world have assembled themselves into the Alzheimer’s Pathobiome Initiative (AlzPI). Their mission is to provide formal proof that infection plays a central role in triggering the disease. In recent years their work detailing how viruses trigger the build up of proteins linked to Alzheimer’s has been published in top scientific journals.”

Evidence of a tie between the shingles virus and the HSV1 virus has now been observed.

“Researchers at Tufts University, working with Professor Itzhaki, have probed why such reactivation occurs. In 2022 they found that infection with a second pathogen, the shingles virus, could awaken the dormant HSV1 and trigger the accumulation of plaques and tangles. This may explain why shingles vaccination appears to be protective against dementia. In another study published in January, the Tufts researchers also showed that a traumatic brain injury—a known risk factor for Alzheimer’s—could also rouse HSV1 and start the aggregation of proteins in brain cells grown in a dish.”

If this viral hypothesis proves valid, it would be fantastic news both for those who fear a long decline into dementia and for those already experiencing that decline.  Viruses can be controlled by vaccinations and by antiviral drugs, potentially preventing future disease and limiting current disease.

“The viral theory has promising implications for treatment. Current therapies for Alzheimer’s, which attempt to reduce levels of amyloid in brain cells, merely work to slow the progression of the disease. If viruses are a trigger, though, then vaccination or antiviral drugs could prevent future cases. Such treatments could also slow or halt the progression of Alzheimer’s in those who already have the disease. None of this requires major breakthroughs. Antivirals for the cold-sore pathogen already exist and are off-patent. And the shingles vaccine is now routinely offered to elderly people in many countries.”

“Around 32m people around the world are living with Alzheimer’s disease. If antiviral treatments can indeed slow, delay or prevent even a small subset of these cases, the impact could be tremendous.”

  

Sunday, March 23, 2025

Plastic Particles in the Brain: The Amount Just Keeps Growing

We are living in the age of plastics.  They are fantastically useful.  Their properties can be tailored by tuning their chemical content to provide desired attributes.  They are generally inexpensive to produce.  Some products can be recycled, but the process is more expensive than producing fresh material, so recycling functions mostly as a public relations gesture.  Plastics have become ubiquitous.  Initially, we worried that they were such long-lasting products that they would be permanent eyesores as they collected on roadsides and beaches.  We now know that they are far from robust materials.  They contain dangerous chemicals that can leach from their surfaces.  Particles can cleave from their surfaces as they interact with the environment.  These particles can then fragment into even smaller particles, producing more surface area from which chemicals can leach.  If the particles become small enough, they can enter our bloodstream via our lungs or our intestines.  At that point, they can find a home for themselves and the chemicals they bear in all of our organs and body parts.

There is no place on earth that has been found safe from plastic pollution.  It has only been a few years since scientists began to realize these tiny plastic particles can be a threat to human health.  The article, Study Finds Hundreds of Thousands of Plastic Particles in Bottled Water, provides a perspective on current findings with a focus on the common plastic water bottle.

“Plastic water bottles, a long-known enemy of our Earth, are finding their way into human bodies in huge quantities—well, pieces of them are. A study published this week shows just how much plastic we drink with bottled water: Researchers from Columbia University and Rutgers have found at least 240,000 plastic particles in the average liter of bottled water, a major health concern.”

“Most of the plastic particles found by the researchers were extremely small nanoplastics, which have a diameter of less than one micrometer—making them invisible to the naked eye. Nanoplastics have been historically challenging to study due to their extremely small size, but as technology has improved, scientists are now finding them almost everywhere—including in the environment, plants, animals, beverages, foods, and our human bodies.”

“While the full range of health effects of nanoplastics and microplastics in our bodies is not yet fully understood, what experts do know is already very concerning. Like all plastics, microplastics and nanoplastics are known to contain any mix of additive chemicals.  More than 16,000 such chemicals have been counted in plastics, and none have been classified as ‘safe.’ At least 25% are already officially classified as hazardous. A few concerning plastic chemicals include hormone-disrupting and cancer-causing phthalates, PFAS, and bisphenols; asbestos and toxic heavy metals such as lead and arsenic; and much more. Additionally, microplastics can absorb and accumulate toxic chemicals in the environment, which leach into living bodies, waters, soils, and plants.”

“Most plastic water bottles are made of PET (polyethylene terephthalate) plastic. At least 150 chemicals are known to leach from PET plastic beverage bottles into the liquid inside, including heavy metals like antimony and lead, and hormone-disruptors like BPA. Plastic PET bottles are even more likely to leach toxic chemicals if they are recycled, or are kept in warm environments, are exposed to sunlight, or are reused. Single-use plastic bottles also contain PFAS, a class of chemicals that are particularly dangerous to human and environmental health.”

A recent article (March 2025), Human microplastic removal: what does the evidence tell us?, provides a glimpse at what might be happening when these tiny particles accumulate in our bodies.  The focus is on our plastic-laden brains, as determined by measurements taken from deceased subjects.

“A recent paper in Nature Medicine by Nihart et al. found that the human brain contains approximately a spoon's worth of microplastics and nanoplastics (MNPs), with levels 3–5 times higher in a cohort of decedent brains with a documented dementia diagnosis (with notable deposition in cerebrovascular walls and immune cells).  Particularly, brain tissues were found to have 7–30 times higher amounts of MNPs than other organs such as the liver or kidney. Also of note, the microplastics in the brain were of a smaller size (<200 nm) and most often polyethylene.”

“This aligns with the observed exponential increase in MNP environmental concentrations over the past half-century. Particularly, 10 to 40 million tonnes of emissions of microplastics to the environment are estimated per year, with this figure expected to double by 2040.”

Little is known about what all this means.

“The current evidence base (largely based upon animal and cell culture studies) suggests that MNP exposure can lead to adverse health impacts via oxidative stress, inflammation, immune dysfunction, altered biochemical/energy metabolism, impaired cell proliferation, abnormal organ development, disrupted metabolic pathways, and carcinogenicity.  These can lead to direct or indirect consequences to various organ systems, including respiratory, gastrointestinal, cardiovascular, hepatic, renal, nervous, reproductive, immune, endocrine, and muscular.”

And don’t forget, people diagnosed with dementia tend to have 3-5 times the plastic particle concentrations in their brains when they die.  That has to mean something, but it is not yet known whether plastic concentration is a cause or an effect.

The most interesting findings presented in the article are summarized here.

“Although MNP [microplastic and nanoplastic] concentration was not influenced by factors such as age, sex, race, or cause of death, there was a worrisome 50% increase in MNP concentration based on the time of death (2016 versus 2024).” 

If plastic particles were simply accumulating in our bodies, one would expect older persons to have accumulated higher concentrations.  The absence of any age effect suggests that the body has some mechanism by which it can excrete plastics, either as particles or by first breaking them down into their component chemicals.  There is little to no evidence yet for any such process in humans, but the authors do make the following observation.

“In fish models, it takes approximately 70 days to clear 75% of accumulated brain microplastics, suggesting that decreased inputs and increased outputs must both be maintained for long enough durations to see measurable changes.”
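Taking the fish figure at face value, a rough calculation of the implied clearance rate is possible.  This is my own back-of-envelope sketch, assuming simple first-order decay, which the article does not claim:

import math

days = 70.0
fraction_remaining = 0.25                     # 75% of the accumulated microplastics cleared
k = -math.log(fraction_remaining) / days      # implied decay rate per day
half_life = math.log(2) / k                   # roughly 35 days
print(f"clearance rate: {k:.4f} per day, half-life: {half_life:.0f} days")

Under that assumption, fish brains shed half their accumulated microplastic load in about 35 days, provided no new plastic keeps arriving.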

The authors also state that plastic particle concentrations increased by 50% between 2016 and 2024, an eight-year period.  They also note that the rate of plastic loading of the environment is expected to roughly double over the period 2024 to 2040, a sixteen-year period.  Does that mean we will likely have three spoonfuls of tiny plastic particles in our brains in 2040, or could it be much higher?
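As a purely speculative extrapolation (my assumption, not the authors’), suppose brain concentrations simply keep growing at the 2016–2024 rate of 50% per eight years:

growth_per_8yr = 1.5          # 50% increase observed from 2016 to 2024
spoonfuls_2024 = 1.0          # "approximately a spoon's worth" today
years = 16                    # 2024 to 2040
spoonfuls_2040 = spoonfuls_2024 * growth_per_8yr ** (years / 8)
print(f"projected 2040 burden: {spoonfuls_2040:.2f} spoonfuls")   # about 2.25

That yields roughly two and a quarter spoonfuls by 2040.  If brain accumulation instead tracks the environmental loading rate, which is expected to double, the figure could be considerably higher.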

The fact that plastic concentrations increased over that eight-year period means the plastic sources are exceeding any plastic sink mechanisms in the body.  In that case one would still expect some level of age effect.  Perhaps the sample of deceased subjects was too similar in age to detect a difference, or perhaps the ratio of plastic particle source to sink has only recently exceeded 1.0.

It is unsettling to know so little about what is happening to us now while also knowing that, whatever it is, it will get much worse in the near future. 

There are many ways in which small plastic particles enter the environment and proceed to enter our bodies.  Limiting the use of plastics seems to be the only solution.  It is probably too much right now to ask people to stop wearing and washing plastic clothing, but immediately stopping the use of those damn plastic bottles would be a good first step.

  

Saturday, March 15, 2025

South Korea: Can a Falling Population Enter a Death Spiral?

 It has been clear for many years that the earth’s population will soon peak and begin to decline.  The number of countries with populations that are already decreasing is growing.  The drop in birth rates seems to be universal, but not identical.  Each national culture provides its unique fertility history, with all seeming to suggest that the decision to produce fewer infants is a voluntary choice, one that does not require economic, political, or environmental conditions to drive it.  Gideon Lewis-Kraus provided a great survey of current thoughts on this phenomenon in a New Yorker article: The End of Children: Birth rates are crashing around the world. Should we be worried?

Lewis-Kraus provides perspective on current discussions and a quote from a well-regarded demographer. 

“Anyone who offers a confident explanation of the situation is probably wrong. Fertility connects perhaps the most significant decision any individual might make with unanswerable questions about our collective fate, so a theory of fertility is necessarily a theory of everything—gender, money, politics, culture, evolution. Eberstadt told me, ‘The person who explains it deserves to get a Nobel, not in economics but in literature’.” 

What was most interesting about the article was a description of the situation in South Korea, a country where the birth rate was not just decreasing, it was plummeting.  The fertility rate is defined as the average number of children born per woman.  To maintain a stable population, a fertility rate of about 2.1 is required.  Currently, South Korean fertility is so low that any group of three women is likely to produce two children rather than the roughly six required to maintain the population.  That is demographic collapse at an incredible rate.

“South Korea has a fertility rate of 0.7. This is the lowest rate of any nation in the world. It may be the lowest in recorded history. If that trajectory holds, each successive generation will be a third the size of its predecessor. Every hundred contemporary Koreans of childbearing age will produce, in total, about twelve grandchildren. The country is an outlier, but it may not be one for long. As the Korean political analyst John Lee told me, ‘We are the canary in the coal mine’.”
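The arithmetic behind that quote is easy to check.  The sketch below assumes, for simplicity, that half of any cohort is female and that the fertility rate stays fixed at 0.7:

fertility = 0.7
cohort = 100                                  # contemporary Koreans of childbearing age
children = (cohort / 2) * fertility           # 35 children
grandchildren = (children / 2) * fertility    # about 12 grandchildren
print(round(children), round(grandchildren))
print(f"each generation is {fertility / 2.1:.2f} the size of the one before")   # about one third

One hundred people produce about 35 children and about 12 grandchildren, and each generation is roughly a third the size of its predecessor, just as the article states.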

Does the experience of the Korean population represent something unique to that nation, or is it perhaps something that could be inevitable for populations that willingly decline?  Some South Korean history is necessary.

“A decade after the Korean War, the country’s per-capita G.D.P. was below a hundred dollars—less than that of Haiti. People ate tree bark or boiled grass, and children begged in the streets. After a military coup in 1961, the new authoritarian leadership tied its economic program to the cultivation of a citizenry that was smaller and better educated. It was an all-hands-on-deck approach to the labor force. Social workers fanned out to rural communities, where they encouraged women to have no more than three children. The government legalized contraceptives and pressed for the use of IUDs. These initiatives dovetailed with an emphasis on ethnic homogeneity and traditionalist values. Biracial children of American servicemen, along with the children of unwed mothers, were shipped abroad for adoption, and Korea became known as the world’s largest “exporter” of babies.”

“The program was regarded as a smashing success. In the span of twenty years, Korea’s fertility rate went from six to replacement, a feat described by Asian demographers as ‘one of the most spectacular and fastest declines ever recorded.’ A crucial part of this plan was the educational advancement of women, which the same demographers called ‘unprecedented in the recent history of the world.’ Far fewer Koreans came into existence, but those who did enjoyed a similarly improbable rise in their standard of living. Parents who remembered hunger produced children who could afford cosmetic surgery.”

The government took note of the “less is better” fertility results and seemed to conclude that even less would be even better.

“When Korea neared replacement, in 1983, its leadership might have reconsidered its policies. Instead, it doubled down with a new slogan: ‘Even two are too many.’ By 1986, the Korean fertility rate reached 1.6. This remained stable for about a decade, then fell off a cliff. The government has now devoted approximately two hundred and fifty billion dollars to various pro-natalist efforts, including cash transfers and parental-leave extensions, to no avail.”

What is it like to live in a country that knows its population is heading towards zero?

“Korea’s demographic collapse is mostly taken as a fait accompli. As John Lee, the political analyst, put it, ‘They say South Korea will be extinct in a hundred years. Who cares? We’ll all be dead by then.’ The causes routinely cited include the cost of housing and of child care—among the highest in the world. Very little in Korean society seems to give young people the impression that child rearing might be rewarding or delightful. I met a stylish twentysomething news reporter at an airy, silent café in Seoul’s lively Itaewon district. ‘People hate kids here,’ she told me. ‘They see kids and say, “Ugh”.’ This ambient resentment finds an outlet in disdain for mothers. She said, ‘People call moms “bugs” or “parasites.” If your kids make a little noise, someone will glare at you’.” 

“In the southern city of Gangjin, I stopped at a coffee shop and encountered a sign on the entrance that read ‘This is a no-kids zone. The child is not at fault. The problem is the parents who do not take care of the child.’ The doors of Korean establishments are frequently emblazoned with such prohibitions. The only children I saw on Seoul’s public transit were foreigners.” 

Lewis-Kraus tries a summary statement for the causes of population decline.

“For most of human history, having children was something the majority of people simply did without thinking too much about it. Now it is one competing alternative among many. The only overarching explanation for the global fertility decline is that once childbearing is no longer seen as something special—as an obligation to God, to one’s ancestors, or to the future—people will do less of it. It is misogynistic to equate reproductive autonomy with self-indulgence, and child-free people often devote themselves to loving, conscientious caretaking.”

But the development of disdain for children and mothers is startling.  One can’t help but feel that some fundamental change has occurred in South Korea, and we need to understand what it is.

Lewis-Kraus provided some intriguing references, with the first coming from the Norwegian demographer, Vegard Skirbekk.

“Two decades ago, Skirbekk helped contrive a thought experiment called “the low-fertility trap hypothesis,” which proposed the possibility of an unrecoverable downward spiral. Ultra-low fertility meant far fewer babies, which meant far fewer people to have babies, or even to know babies; this feedback loop could even shift cultural norms so far that childlessness would become the default option.”

“This eventuality had seemed remote. Then it more or less happened in Korea. When I asked Skirbekk if other countries might follow suit, he replied, ‘Quite a few, possibly’.”

A second insight was provided by a Finnish demographer.

“Rotkirch, the Finnish demographer, underscored the notion that reproductive cues are social. ‘In a forthcoming survey, I want to ask, “Have you ever had a baby in your arms”?’ she told me. ‘I think in Finland it’s a sizable portion that hasn’t’.”

Humans evolved over millions of years while living in groups.  Success of these groups would require that members be capable of collective action and be able to provide resources and an environment in which newborns can become adults.  Traits that would support group success can be expected to develop.  Two that seem relevant here are the hormonal responses generating affection for infants, and peer pressure that encourages members to follow behaviors of the majority.

Humans became wired to appreciate and be pleased by human infants.  Females tend to show an early and consuming interest in babies and mothering.  Men and women both experience hormonal surges in the presence of an infant.  Females secrete more of the bonding hormone oxytocin than males, while males also experience a drop in testosterone when near an infant.  Evolution has provided these effects to help ensure that enough infants survive to further the species.

We evolved living in groups where babies and children would be as plentiful as resources would allow.  It is easy to see how evolutionary physical responses could develop.  We now live in small family groups.  If, as in South Korea, the most probable outcome is one-child and no-child families, children could easily live most of their lives, or at least their formative years, without ever physically encountering an infant.  Could these hormonal responses to infants fade over time if they have never been activated?

We, of a certain age, grew up in societies where families with children were the norm.  A crying child on an airplane invites feelings of sympathy, initially at least.  But if the norm has flipped, and the majority has no experience with raising children, no understanding of why a child might be crying or how difficult it is to stop the crying, then disdain for the poor parent might be the first response.  It is never easy to be a disruptive minority in any society.  Disgust and intolerance can soon follow.

Have we created societies where the economic and social interests in having children are disappearing, and we are now relying on subtle hormonal surges that may also be disappearing to get us interested in parenting?

Have we stumbled upon yet another existential threat to human civilization?  If Koreans seem unconcerned by the fact that their population will essentially disappear in a few generations, is it surprising that people seem to prefer to live in the moment and not worry about climate change which is coming upon us in a few generations as well, or that more and more plastic particles are entering our bodies?

Here’s a final thought.  If we don’t have children, why worry about the future?

 

Thursday, March 6, 2025

Immune Amnesia: Measles Is Much Worse Than We Thought

It has long been recognized that the measles virus is about the most infectious agent that humans encounter.  It has also long been recognized that vaccination against the virus is very effective.  Consequently, the disease had been erased from our consciousness—until recently, when we learned that Robert F. Kennedy Jr. had impressed Trump with his knowledge of medical matters and had been placed in charge of the agencies that deal with infectious diseases.  For many years Kennedy has been encouraging people to avoid vaccinations, including the one for the measles virus.  Kennedy’s wacky ideas play well with the anti-vaccination sentiments that arose from the politics of the covid pandemic.  He will now have an opportunity to cause even more grief for our nation.

Consider this report: Measles Outbreak Continues to Spread in West Texas.  It provides details on a growing outbreak.

“As a measles outbreak expands in West Texas, Robert F. Kennedy Jr., the health and human services secretary, on Tuesday cheered several unconventional treatments, including cod liver oil, but again did not urge Americans to get vaccinated.”

Kennedy seems to think treatment of measles infection is more important than avoiding infection.

“Texas doctors had seen ‘very, very good results,’ Mr. Kennedy claimed, by treating measles cases with a steroid, budesonide; an antibiotic called clarithromycin; and cod liver oil, which he said had high levels of vitamin A and vitamin D.”

“While physicians sometimes administer doses of vitamin A to treat children with severe measles cases, cod liver oil is ‘by no means’ an evidence-based treatment, said Dr. Sean O’Leary, chair of the American Academy of Pediatrics Committee on Infectious Diseases.”

“Dr. O’Leary added that he had never heard of a physician using the supplement against measles.”

Experts say that measles is so infectious that a vaccination level of 95% is required to inhibit the spread of infections.  Covid-related politics has had an effect on vaccination rates.

“Just 93 percent of kindergarten students nationwide had received the vaccine for measles, mumps and rubella in the 2023-24 school year, down from 95 percent before the pandemic.”

The Texas measles outbreak has spread to nine counties, with the county in which the infection emerged having a school student vaccination rate of about 80%.  Politics and conspiracies encouraged by Kennedy have multiplied the regions with vaccination rates below 95%, allowing infections to spread.
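For context, the 95% figure follows from standard herd-immunity arithmetic rather than from the article itself.  The fraction of a population that must be immune to stop transmission is roughly 1 - 1/R0, and measles’ basic reproduction number R0 is commonly estimated at 12 to 18.  A rough sketch, assuming two MMR doses are about 97% effective:

def required_coverage(r0, vaccine_effectiveness=0.97):
    herd_threshold = 1 - 1 / r0                    # fraction of population that must be immune
    return herd_threshold / vaccine_effectiveness  # coverage needed given an imperfect vaccine

for r0 in (12, 18):
    print(f"R0 = {r0}: about {required_coverage(r0):.0%} coverage needed")
# R0 = 12 -> roughly 94%, R0 = 18 -> roughly 97%

So the commonly cited 95% is not an arbitrary target; a few percentage points of slippage is enough to let outbreaks take hold.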

Only one child has died from infection thus far.

“About one in five people who catch measles will be hospitalized, according to the C.D.C.”

“While most measles cases resolve in a few weeks, in rare cases the virus can cause pneumonia, making it difficult for patients, especially children, to get oxygen into their lungs, or brain swelling, which can lead to blindness, deafness and intellectual disabilities.”

Measles is far from a negligible disease for which vaccination might be considered optional.  And it is far worse than the immediate response to infection suggests.

“The virus also weakens the immune system in the long term, making its host more susceptible to future infections. A 2015 study found that before the M.M.R. vaccine was widely available, measles may have been responsible for up to half of all infectious disease deaths in children.”

The weakened immune system is explained in the article Measles and Immune Amnesia.

“The risk associated with measles infection is much greater than the sum of its observable symptoms. The immune memories that you have acquired are priceless, built over many years and from countless exposures to a menagerie of germs. Measles virus is especially dangerous because it has the ability to destroy what’s been earned: immune memory from previous infections. Meanwhile, the process of fighting measles infection leaves patients especially vulnerable to secondary infection. The worldwide increase in measles prevalence is cause for concern because morbidity and mortality from the disease extends far beyond acute measles infection.”

“One of the most unique—and most dangerous—features of measles pathogenesis is its ability to reset the immune systems of infected patients. During the acute phase of infection, measles induces immune suppression through a process called immune amnesia. Studies in non-human primates revealed that MV (measles virus) actually replaces the old memory cells of its host with new, MV-specific lymphocytes. As a result, the patient emerges with both a strong MV-specific immunity and an increased vulnerability to all other pathogens.”

The process of containing the measles virus destroys many of the memory T-cells and B-cells that recall prior infections and provide the ability to fight reinfection.

“The number of T cells and B cells significantly decreases during the acute stage of measles infection, but there is a rapid return to normal WBC (white blood cell) levels after the virus is cleared from the system. This observation masked what was really going on until researchers were able to evaluate the qualitative composition of recovered lymphocyte populations. We now know that the memory T-cells and B-cells that are produced immediately following infection are dramatically different from those that existed before the measles infection. Not only have pre-existing immune memory cells been erased, but there has been a massive production of new lymphocytes. And these have only one memory. Measles. Thus, the host is left totally immune to MV and significantly vulnerable to all other secondary infections.”

It was possible to analyze mortality rates before and after the measles vaccine became available.  The results were startling, indicating that measles infection increased mortality from other infections because of the damage it does to immunity.

“Examination of child mortality rates in the U.S., U.K., and Denmark in the decades before and after the introduction of the measles vaccine revealed that nearly half of all childhood deaths from infectious disease could be related to MV infection when the disease was prevalent. That means infections other than measles resulted in death, due to the MV effect on the immune system.”

Fortunately, as the new B-cells and T-cells encounter pathogens again, immune memory slowly rebuilds to levels that once more provide protection against previously encountered diseases.

“Furthermore, it was determined that it takes approximately 2-3 years post-measles infection for protective immune memory to be restored. The average duration of measles-induced immune amnesia was 27 months in all 3 countries. Corresponding evidence indicates that it may take up to 5 years for children to develop healthy immune systems even in the absence of the immune suppressing effects of MV infection. If MV infection essentially resets a child’s developing immunity to that of a newborn, re-vaccination or exposure to all previously encountered microbes will be required in order to rebuild proper immune function.”

If you have read this far, congratulations are in order.  You now know more about measles than Robert F. Kennedy Jr.

  
