Tuesday, November 25, 2014

Asia’s Children Pay the Price for Test-Driven Education Systems: Myopia

Amanda Little has produced an interesting article on the Chinese education system in Bloomberg Businessweek. It has the intriguing title Fixing the Best Schools in the World. It focuses on Qiu Zhonghai, the principal of Qibao, one of the highest-ranking high schools in Shanghai.

“Shanghai public schools placed first worldwide on the recent PISA (Programme for International Student Assessment) exams, which are administered every three years by the Paris-based Organisation for Economic Co-operation and Development. The average scores of Shanghai students in reading, science, and mathematics were more than 10 percent higher than the scores of students in the legendary Finnish school system, which had been top-ranked until 2009, when Shanghai was first included in the testing, and about 25 percent higher than those of the U.S., which ranked 36th.”

The notion that these schools are in need of fixing arises from the realization that preparing students to do well on a test is not the best way to build life skills in a nation’s children. China has a long tradition of using a single examination to determine those worthiest to succeed in national life. Think of it as meritocracy taken to the extreme.

“Suicide is the top cause of death among Chinese youth, and depression among students is widely attributed to stress around high-pressure examinations, especially the dreaded gaokao, which seniors take to determine what university, if any, they can attend. The gaokao is like the SAT on steroids: eight hours of testing in math, science, Chinese language, and a foreign language, that takes place over two days, usually in June of a student’s senior year. ‘It commands the high school student’s whole life….They spend 14 hours a day, six days a week, year after year, cramming their brains full of facts for this one test’.”

One Chinese observer, Jiang Xueqin, provided this assessment:

“Forcing students to study for gaokao is essentially a form of lobotomy—it radically narrows their focus….The effects of chronic fact-cramming are something akin to cutting off their frontal lobes.”

The result is that parents who can afford to are beginning to move their children to private schools that are free from the gaokao system.  The very wealthy have also been looking to send their children abroad, particularly to the United States, to get what they believe will be a more effective education.

“A trend is emerging in which more and more elite students are opting for private schooling outside the gaokao system, says Jiang. If China doesn’t want to lose its best and brightest to U.S. and European universities, eventually Chinese universities will also have to accept students who opt out of the gaokao.”

The daily schedule at Qibao indicates the intensity of this academic environment and suggests why a student might wish to escape to a place with more time for intellectual adventure.

“Their day begins at 6:20 a.m. The 2,000 students at Qibao, half of whom live on campus, gather on an Olympic-size soccer field to the sound of marching band music booming from speakers. Wearing blue-and-white polyester exercise suits with zip-front jackets that serve as their daily uniforms, these 10th- to 12th-graders perform a 20-minute synchronized routine that’s equal parts aerobics and tai chi. Group exercise is followed by a 20-minute breakfast and self-study period, and then, beginning at 7:40, a morning block of five 40-minute classes. Then there’s an hour for PE and lunch, followed by an afternoon block of four 40-minute classes that ends at 4:30 p.m. Evening study hall is 6:30 p.m.-9:30 p.m., then it’s lights out at 10. Of their 14 weekly courses, 12 cover the core national curriculum, which includes math, chemistry, physics, Chinese, English, Chinese literature, and geography. Two are elective courses—Qibao offers nearly 300, ranging from astronomy and paleontology to poetry, U.S. film and culture, visual arts, cooking, and a driving course with simulators.”

This is actually a rather moderate schedule compared to some schools.

“The Qibao schedule is relaxed compared with the infamous Maotanchang High School, for instance, which requires its 10,000 students, all of whom board in a remote town in central China, to wake at 5:30 a.m. to begin their daily schedule of 14 classes—every one designed to optimize their gaokao scores—ending at 10:50 at night.”

What was most troubling about Little’s article was the indication that this regimen is not only unproductive as a national policy but also physically harmful to the children.

“Twice a day at Qibao, at 2:50 every afternoon and 8:15 at night, classical flute music floats through the speakers of every classroom and study hall. It’s a signal to students to put down their pencils, close their eyes, and begin their guided seven-minute eye massage. ‘One, two, three, four, five, six, seven, eight. … One, two …’ the teacher chants as students use their thumbs, knuckles, and fingertips to rub circles into acupressure points under the eyebrows, at the bridge of the nose, the sides of the eyes, temples, cheeks, the nape of the neck, and then in sweeping motions across the brows and eyelids.”

“The eye exercises are a government requirement in all Chinese public schools, a response to an epidemic of myopia caused by too much studying. By the end of high school, as many as 90 percent of urban Chinese students are nearsighted—triple the percentage in the U.S. There’s debate about whether the massage exercises actually help, but the students look happy to take any breaks they can get.”

An article in The Economist, Myopia: Losing focus, provides more background on this trend in student myopia and points out that it is not limited to Chinese students.

“The incidence of myopia is high across East Asia, afflicting 80-90% of urban 18-year-olds in Singapore, South Korea and Taiwan. The problem is social rather than genetic. A 2012 study of 15,000 children in the Beijing area found that poor sight was significantly associated with more time spent studying, reading or using electronic devices—along with less time spent outdoors. These habits were more frequently found in higher-income families, says Guo Yin of Beijing Tongren Hospital, that is, those more likely to make their children study intensively. Across East Asia worsening eyesight has taken place alongside a rise in incomes and educational standards.”

Spending too much time studying indoors rather than having sufficient outdoor activity seems to be the problem.

“At the age of six, children in China and Australia have similar rates of myopia. Once they start school, Chinese children spend about an hour a day outside, compared with three or four hours for Australian ones. Schoolchildren in China are often made to take a nap after lunch rather than play outside; they then go home to do far more homework than anywhere outside East Asia. The older children in China are, the more they stay indoors—and not because of the country’s notorious pollution.”

“The biggest factor in short-sightedness is a lack of time spent outdoors. Exposure to daylight helps the retina to release a chemical that slows down an increase in the eye’s axial length, which is what most often causes myopia. A combination of not being outdoors and doing lots of work focusing up close (like writing characters or reading) worsens the problem. But if a child has enough time in the open, they can study all they like and their eyesight should not suffer, says Ian Morgan of Australian National University.”

In China, a short-sighted school system produces short-sighted children.

In the bizarre world of education theorizing, commentators in the United States bemoan the fact that we don’t have a system that matches the Asians in test performance, while the Asians wish they could improve the education they provide by having a system more like that in the United States.


Thursday, November 20, 2014

Capitalism’s Paradox: Too Much Profit, Too Little Demand

There are strange things going on in the economies of some of the most developed nations.  Corporations in these countries seem to be earning more money than they know what to do with.  An article in The Economist provides this chart:



The amount of funds that corporations are sitting on is enormous in these countries.  To put this in perspective, the US has only about 11% of GDP banked in corporate savings, about $1.9 trillion.  In the darkest hours of the Great Recession the federal government could only put up about $831 billion to save the economy from disaster.  Now, with low growth still a problem, companies are sitting on more than twice that with no apparent intention to put it to use.

The problem is actually worse because many companies use their excess profits to buy back their own shares. There are two ways to attain growth in stock price. One is to demonstrate a business model that projects growth and even greater profits in the future; the other is to purchase your own shares on the market and drive the price higher. This source indicates that US corporations spent over $500 billion on these buybacks in the past year, a near-record amount.
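
To see the arithmetic behind the second route, here is a minimal sketch with invented numbers (no particular company is implied): a buyback retires shares, so earnings per share rise even though total earnings are flat, and at an unchanged price-to-earnings multiple the share price climbs.

```python
# Hypothetical illustration of how a buyback lifts a stock price without any
# growth in underlying earnings. All figures are invented for illustration.
earnings = 10_000_000_000        # $10B annual profit, unchanged by the buyback
shares_before = 1_000_000_000    # 1B shares outstanding
price_before = 150.0             # market price per share
pe_ratio = price_before * shares_before / earnings   # implied P/E multiple (15x)

buyback_spend = 30_000_000_000   # $30B of excess profit used to repurchase shares
shares_after = shares_before - buyback_spend / price_before

eps_before = earnings / shares_before
eps_after = earnings / shares_after

# If the market keeps paying the same multiple of per-share earnings,
# the price rises purely because the share count shrank.
price_after = pe_ratio * eps_after

print(f"EPS:   ${eps_before:.2f} -> ${eps_after:.2f}")
print(f"Price: ${price_before:.2f} -> ${price_after:.2f} at a constant P/E of {pe_ratio:.0f}")
```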

What is peculiar about these findings is that high profits should be associated with a healthy economy that provides incentives for expansion. If business is so good, why aren’t more companies reinvesting their earnings and going for greater growth?

Marx thought that capitalism carried within it the seeds of its own destruction.  Could we be witnessing capitalist economies wandering into some sort of dead end from which they cannot extract themselves?

There are two obvious explanations for a situation in which large profits are earned yet there is no incentive to reinvest the profits in growth.  In one case demand for products is low and there is no expectation that an improvement is forthcoming.  In the second case the company has essentially a monopoly and does not anticipate any potential growth in customers.

It is an interesting exercise to consider whether or not we have true competition in our major industries, or whether we have allowed a few large corporations to dominate in each arena and pretend to compete while tacitly agreeing that a huge market can be shared two or three ways and all can be wealthy. Much is said about the high-tech competition between Apple and Samsung, and between Microsoft and Google, but is any one of these in danger of being seriously injured by competition from the others? They all have healthy market shares and enough money on hand to purchase any upstart that might choose to disturb the game. In a situation where no one is particularly interested in competing on price, how does profit get limited?

The situation in which no growth in demand is expected does have a distinct Marxian flavor. Most corporate executives will claim that low demand is the reason they are not investing more in their businesses. Lack of demand usually means there are too few consumers with money to spend. Coupled with healthy profits, this suggests that consumers lack money, not interest, and would purchase more if they could.

The article in The Economist focuses on the Asian countries where the cash holdings are incredibly large.

“Japanese firms hold ¥229 trillion ($2.1 trillion) in cash, a massive 44% of GDP. Their South Korean counterparts hold 459 trillion won ($440 billion) or 34% of GDP. That compares with cash holdings of 11% of GDP, or $1.9 trillion, in American firms. If East Asia’s firms spent even half of their huge cash hoards, they could boost global GDP by some 2%.”
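
A back-of-the-envelope check puts that last claim in perspective. The cash figures are the ones quoted above; the world GDP denominator (roughly $77 trillion in 2014) is my own assumption, and the article’s “some 2%” presumably counts more East Asian firms than the two totals quoted here.

```python
# Rough sanity check of the "spend half the hoard, boost global GDP ~2%" claim.
# Cash figures come from the quoted article; world GDP is an assumed 2014 value.
japan_cash = 2.1e12    # dollars held by Japanese firms
korea_cash = 0.44e12   # dollars held by South Korean firms
world_gdp = 77e12      # dollars, assumed

spend = 0.5 * (japan_cash + korea_cash)
share_of_gdp = spend / world_gdp
print(f"Half the hoard: ${spend / 1e12:.2f} trillion, "
      f"about {share_of_gdp:.1%} of a year's world GDP")
```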

Consider the correlation between corporate earnings and wages for workers.

“In South Korea, company earnings have grown faster than wages for more than a decade. In Japan wages fell 3.5% between 1990 and 2012, while prices rose by 5.5%.”

“And East Asia’s economies have also suffered. If they had been paid more, Japanese consumers might have spent more. Korean households, struggling with rapidly-growing debt-burdens, have also been squeezed.”

The situation in the US is similar.

The cash available for spending by most consumers comes from earned wages. However, wages have not been keeping up with the prices of the things consumers feel a need to purchase, and this has been going on for a very long time. Clearly this cannot continue indefinitely; consumption must eventually plummet. Would corporations be in a healthier state if they had recycled more of their profits back into the economy?

Are we approaching a Marxian moment when corporate greed will be its own undoing?

Stay tuned.


Monday, November 17, 2014

Our Creativity and Productivity as We Age

Ezekiel J. Emanuel produced a rather interesting article recently in The Atlantic: Why I Hope to Die at 75. His title is a bit misleading; he does not actually wish to die at 75. Rather, he believes that at that age he should stop taking measures to extend his life. From his vantage point at age 57, life at 75 looks sufficiently degraded that it would no longer be worth the effort to take measures that might extend it.

“But here is a simple truth that many of us seem to resist: living too long is also a loss. It renders many of us, if not disabled, then faltering and declining, a state that may not be worse than death but is nonetheless deprived. It robs us of our creativity and ability to contribute to work, society, the world. It transforms how people experience us, relate to us, and, most important, remember us. We are no longer remembered as vibrant and engaged but as feeble, ineffectual, even pathetic.”

This logic and his conclusion were rather surprising to those of us who are approaching or have already moved past that target age. In Aging: Why Would a 57-Year-Old Man Want to Die at 75? a counterargument was presented, pointing out that those whom he described as “feeble, ineffectual, and even pathetic” seem to believe that the years he doesn’t wish to live are actually the most contented of their lives.

What is of interest here is Emanuel’s statement that aging “robs us of our creativity and ability to contribute to work.”  He makes these claims:

“Even if we aren’t demented, our mental functioning deteriorates as we grow older. Age-associated declines in mental-processing speed, working and long-term memory, and problem-solving are well established. Conversely, distractibility increases. We cannot focus and stay with a project as well as we could when we were young. As we move slower with age, we also think slower.”

“It is not just mental slowing. We literally lose our creativity.”

“….the fact is that by 75, creativity, originality, and productivity are pretty much gone for the vast, vast majority of us.”

He then seems to insult his academic colleagues who spend more time mentoring students in the latter years of their careers instead of focusing on their individual efforts.  The implication is that this occurs because of an age-related decrease in capability, rather than as a logical career choice.

“Mentorship is hugely important. It lets us transmit our collective memory and draw on the wisdom of elders. It is too often undervalued, dismissed as a way to occupy seniors who refuse to retire and who keep repeating the same stories. But it also illuminates a key issue with aging: the constricting of our ambitions and expectations.”

“We accommodate our physical and mental limitations. Our expectations shrink. Aware of our diminishing capacities, we choose ever more restricted activities and projects, to ensure we can fulfill them.”

To support his contentions, Emanuel presents this chart, attributed to Dean Keith Simonton, a psychology professor at the University of California at Davis.



Simonton does seem to be the preeminent scholar when it comes to understanding the aging and productivity of people who have demonstrated a significant degree of creativity.  Let us see what he actually has to say on the subject.  Simonton produced a short summary of relevant conclusions in bullet form here.  A copy of one of his articles is provided in concise but slightly longer form here.  The latter source will be used in the present article.

Simonton tells us that we should be careful in interpreting charts such as the one utilized by Emanuel.  They consist of averages over many types of activities, some of which have quite different time histories.  He also suggests that using chronological age as the variable is misleading because it is the career itself that has a time dependence of its own.  People who choose to pursue a particular creative activity starting later in life will follow a similar curve, but it will be shifted along the age axis.

“….we introduce a central finding of the recent empirical literature: The generalized age curve is not a function of chronological age but rather it is determined by career age….People differ tremendously on when they manage to launch themselves in their creative activities.  Whereas those who get off to an exceptionally early start may….find themselves peaking out early in life, others who qualify as veritable ‘late bloomers’ will not get into full stride until they attain ages at which others are leaving the race.”

This introduces the notion of a career trajectory that is more a function of career duration than physical age.  The shape of this productivity dependence on career duration varies considerably from one creative activity to another.

“The occurrence of such interdisciplinary contrasts endorses the conjecture that the career course is decided more by the intrinsic needs of the creative process than by generic extrinsic forces, whether physical illness, family commitments, or administrative responsibilities.”

Simonton provides some examples of differing productivity histories for various creative disciplines.

“Especially noteworthy is the realization that the expected age decrement in creativity in some disciplines is so minuscule that we can hardly talk of a decline at all.  Although in certain creative activities, such as pure mathematics and lyric poetry, the peak may appear relatively early in life, sometimes even in the late 20s and early 30s, with a rapid drop afterwards, in other activities, such as geology and scholarship, the age optimum may occur appreciably later, in the 50s even, with a gentle, even undetectable decrease in productivity later.”

Presumably, Emanuel would categorize himself as an academic scholar.  If he had read Simonton carefully, he might have concluded that as such he had a right to expect a long and productive life rather than assume that the death of his creativity was imminent.

Simonton provides us with another insight into age and productivity: even though less is produced at later stages of a career, the “quality ratio” is undiminished.

“….if one calculates the ratio of creative products to the total number of offerings at each age interval, one finds that this ‘quality ratio’ exhibits no systematic change with age.  As a consequence, the success rate is the same for the senior colleague as it is for the young whippersnapper.  Older creators may indeed be producing fewer hits, but they are equally producing fewer misses as well.”

This allows Simonton to suggest this startling conclusion:

“This probabilistic connection between quantity and quality, which has been styled the ‘constant probability of success’ principle….strongly implies that an individual’s creative powers remain intact throughout the life span.”

In other words, the decrease in creative output as a career progresses can be caused by many factors other than age.  Perhaps a professor at a university will choose to spend more time with students later in his career.  That is, after all, what professors are supposed to do.  Others may find a new creative outlet and gradually transition to a new discipline.  Artists may try to improve their “quality ratio” by investing more time and effort into each piece.

Simonton finishes with this conclusion:

“….the career trajectory reflects not the inexorable progression of an aging process tied extrinsically to chronological age, but rather entails the intrinsic working out of a person’s creative potential by successive acts of self-actualization.”

Damn!  We might as well live as long as we can.



Ezekiel Emanuel is director of the Clinical Bioethics Department at the U.S. National Institutes of Health and heads the Department of Medical Ethics & Health Policy at the University of Pennsylvania.

Wednesday, November 12, 2014

Evolution and the “Sharing Gene”

Sociobiology is the name given to the attempt to associate observed social traits in humans with genetically acquired traits favored by natural selection.  Attempts to describe human social characteristics within this framework have been controversial, mainly because it is difficult to separate behaviors that might have been learned through social interaction from those that might be innate.  It is also difficult to explain how social characteristics might have become genetically embedded by natural selection.  While the behavior of an individual can certainly be affected by genetic dispositions, these can be characterized as a form of genetic noise leading to individual variation within the population, not a species-wide tendency.  Those who take positions on the issue seem to be driven as much by philosophy as by science.

One of the difficulties faced by sociobiologists is the need to explain the observed altruistic behavior in humans.  People do cooperate even when the benefits of cooperation are unevenly distributed and it might not be in their immediate self-interest.  People have been observed to sacrifice even their lives to protect the lives of others.  How does one explain this type of behavior using natural selection?

The simplest Darwinian approach to evolution is based on the presumed drive of individual organisms to ensure that their genes are propagated forward into the gene pool. This is the “survival of the fittest” prescription. The fittest is the one who produces the most offspring that make it into the next generation. The mechanism of natural selection should then deselect any genetically driven behavior in an individual organism that diminishes its opportunity to procreate and propagate its genes. Such a trait should then disappear. What is left is an arena in which individuals compete with each other to breed, to eat, and to control territory.

The situation becomes more complex when animals form kinship relationships and begin living in bands.  All sorts of social constraints and other behaviors become operative, and they vary considerably from species to species.  People seem to have little trouble believing that animal behavior is genetically based, but tend to resist the notion that human interactions are so constructed.

Humans live in groups that impose constraints on individual actions, and they have kin relationships that also impose constraints.  However, from tribe to tribe and society to society, these social rules can vary dramatically.  Consequently, the specific constraints cannot be genetic in nature; it must be the general willingness to live under a set of rules in order to enhance the survival of the band, tribe, or nation that might be genetic.  If this is correct, then how exactly did natural selection propagate this characteristic?

Edward O. Wilson is the scientist most closely associated with sociobiology.  In his book The Social Conquest of Earth, he postulates that human evolution was dominated by the need for bands of humans to compete with one another for resources.  This is an extension of the “survival of the fittest” theme.

“Our bloody nature, it can now be argued in the context of modern biology, is ingrained because group-versus-group was a principle driving force that made us what we are.  In prehistory, group selection lifted the hominids that became territorial carnivores to heights of solidarity, to genius, to enterprise.  And to fear.  Each tribe knew with justification that if it was not armed and ready, its very existence was imperiled.  Throughout history, the escalation of a large part of technology has had combat as its central purpose.”

“It should not be thought that war, often accompanied by genocide, is a cultural artifact of a few societies.  Nor has it been an aberration of history, a result of the growing pains of our species’ maturation.  Wars and genocide have been universal and eternal, respecting no particular time or culture.”

The groups that survive this competition are presumably those with the strongest social traits—those that allow their members to cooperate in battle even though some can expect to suffer more than others. Some advantage of group membership must compete with an individual’s innate tendency to act selfishly to enhance his or her own survival. Wilson sees this conflict between individual and group benefits as an inevitable characteristic of human societies.

“An unavoidable and perpetual war exists between honor, virtue, and duty, the products of group selection, on one side, and selfishness, cowardice, and hypocrisy, the products of individual selection on the other.”

However, genetic traits that support group collaboration within individuals must somehow get distributed to the group if group-beneficial behavior is to dominate.  Or, the genetic content of the group as a whole must be selected by superior procreative performance versus less effective groups.  Wilson refers to something called “multilevel selection.”  The process by which this occurs is a bit murky.

“Multilevel selection consists of the interaction between forces of selection that target traits of individual members and other forces of selection that target traits of the group as a whole.  The new theory is meant to replace the traditional theory based on pedigree kinship or some comparable measure of genetic relatedness.”

Wilson’s concept of human evolution being dominated by “universal and eternal” warfare was discussed in Are Humans Inherently Warlike? and found wanting. The supposition that humanity’s characteristics were honed in group competition in the context of “universal and eternal” warfare may find some support in the brief moment that is recorded history, but what about the previous million years or so? In order to seek a genetic basis for subsequent evolution, one should look earlier in time to ascertain the reasons why people felt compelled to form groups in the first place.

What characterizes humans, and differentiates them from the other apes, is the development of social skills. Chimpanzees are quite capable of conducting warfare with another band of chimps. In fact, when human soldiers want to conduct an operation silently, they use hand signals that would easily be understood by a chimp. So why would warfare select for the development of complex social skills?

Wilson provides his thoughts on the life of the early hunter-gatherer.

“Throughout their evolutionary past, during hundreds of thousands of years, they had been hunter-gatherers.  They lived in small bands, similar to present-day surviving bands composed of at least thirty and no more than a hundred or so individuals.  These groups were sparsely distributed.”

“Between 130,000 and 90,000 years ago, a period of aridity gripped tropical Africa far more extreme than any that had been experienced for tens of millennia previously.  The result was the forced retreat of early humanity to a much smaller range and its fall to a perilously low level in population….The size of the total Homo sapiens population on the African continent descended into the thousands and for a long while the future conqueror species risked complete extinction.”

Early humans endured periods when resources were so scarce that they faced near extinction. Is this a situation in which one might expect starving people to go looking for someone to fight with? Might they not have more wisely looked for someone who would be able to help them?

Early hunter-gatherers lived hand to mouth. They didn’t have stockpiles of food that someone would try to steal. They were also subject to extreme variations in the success of their hunting and gathering. If a band was, on average, finding just enough food to survive, then some individuals would have gathered less than necessary for survival and some would have gathered more. Had they not learned to share food, each individual would eventually have hit an extended stretch of lean foraging and starved, and the band would have disappeared.

Sarah Blaffer Hrdy describes a study of a present-day band of hunter-gatherers in her book Mothers and Others: The Evolutionary Origins of Mutual Understanding. It illustrates how, even in relatively benign times, sharing within a band was necessary.

“The sporadic success and frequent failures of big-game hunters is a chronic challenge for hungry families among traditional hunter-gatherers.  One particularly detailed case study of South American foragers suggests that roughly 27 percent of the time a family would fall short of the 1,000 calories of food per person per day needed to maintain body weight.  With sharing, however, a person can take advantage of someone else’s good fortune to tide him through lean times.  Without it, perpetually hungry people would fall below the minimum number of calories they needed.  The researchers calculated that once every 17 years, caloric deficits for nonsharers would fall below 50 percent of what was needed 21 days in a row, a recipe for starvation.  By pooling their risk, the proportion of days that people suffered from such caloric shortfalls fell from 27 percent to only 3 percent.”
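
The pooling effect in that passage is easy to reproduce with a toy simulation. In the sketch below, the 1,000-calorie threshold and the roughly 27 percent chance that a lone forager falls short on a given day follow the quoted study; the band size and the shape of the daily foraging returns are invented for illustration. The only difference between the two scenarios is whether each forager eats what he or she gathered or the band splits the pooled total evenly.

```python
import random

# Toy Monte Carlo sketch of risk pooling among foragers. The calorie threshold
# and the ~27% individual shortfall rate echo the study quoted above; the band
# size and the return distribution are assumptions made for illustration.
random.seed(1)

NEED = 1000        # calories per person per day
BAND_SIZE = 25     # foragers pooling their returns (assumed)
DAYS = 20_000

def daily_haul():
    """One forager's calories for one day: ~27% of days are failures."""
    if random.random() < 0.27:
        return random.uniform(0, 900)      # bad day, below the threshold
    return random.uniform(1000, 2000)      # adequate day

solo_short = 0    # forager-days below NEED when everyone eats only what they gathered
pooled_short = 0  # forager-days below NEED when the band shares equally
for _ in range(DAYS):
    hauls = [daily_haul() for _ in range(BAND_SIZE)]
    solo_short += sum(1 for h in hauls if h < NEED)
    if sum(hauls) / BAND_SIZE < NEED:
        pooled_short += BAND_SIZE

total = DAYS * BAND_SIZE
print(f"Shortfall days without sharing: {solo_short / total:.1%}")
print(f"Shortfall days with sharing:    {pooled_short / total:.1%}")
```

With these assumed numbers, the nonsharers fall short on roughly 27 percent of days while the sharers fall short on only a few percent, the same qualitative collapse the study reports.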

What is the purpose of living in a band if not to benefit from cooperation with other members?

The sharing of food is only one example of a benefit.  Hrdy believes that one of the great advances made by humans occurred when women learned to share the responsibility for raising children.  That allowed an individual mother more time to gather food and allowed her to give birth more frequently.  This and other cooperative activities could only exist if humans developed the ability to interpret, understand, and empathize with the feelings and intentions of others.

Hrdy believes this capability to share and cooperate has become hard-wired within us.

“From a tender age and without special training, modern humans identify with the plights of others and without being asked, volunteer to help and share, even with strangers.  In these respects, our line of apes is in a class by itself.”

“This ability to identify with others and vicariously experience their suffering is not simply learned: It is a part of us.”

Evolution and survival of the fittest need not be viewed as a competition to reward the strongest; it can also be considered a means of selecting those most effective at cooperating. Robert Trivers formulated an explanation for how repeated acts of altruism (or cooperation and sharing) could lead to natural selection of the tendency to perform those acts. The key is to recognize that even the earliest humans treated self-interest as a long-term matter, not one of immediate gratification.

“In a 1971 paper Robert Trivers demonstrated how reciprocal altruism can evolve between unrelated individuals, even between individuals of entirely different species…. As Trivers says, it ‘take[s] the altruism out of altruism.’ The Randian premise that self-interest is paramount is largely unchallenged, but turned on its head by recognition of a broader, more profound view of what constitutes self-interest.”

One can follow Wilson and view humans as having been driven to their current state by “universal and eternal” warfare.

Or, one can view humans as having arrived at their current state via “universal and eternal” cooperation and sharing.

I know which view I prefer.


Saturday, November 8, 2014

When a Woman’s Fetus Has More Civil Rights than She Does

We recently discussed, in The Death of Justice and the Tyranny of Law, the lack of protection that the Federal Constitution provides citizens from abusive practices put into law by federal, state, or local legislatures. The Bill of Rights is easily trumped by law.

In Pregnant, and No Civil Rights, Lynn M. Paltrow and Jeanne Flavin describe how the desire by many to criminalize abortion and promote fetal rights has led to numerous cases in which the rights of the pregnant woman herself have been trampled.

“But it is not just those who support abortion rights who have reason to worry. Anti-abortion measures pose a risk to all pregnant women, including those who want to be pregnant.”

“Such laws are increasingly being used as the basis for arresting women who have no intention of ending a pregnancy and for preventing women from making their own decisions about how they will give birth.”

These examples are provided:

“In Iowa, a pregnant woman who fell down a flight of stairs was reported to the police after seeking help at a hospital. She was arrested for ‘attempted fetal homicide’.”

“In Utah, a woman gave birth to twins; one was stillborn. Health care providers believed that the stillbirth was the result of the woman’s decision to delay having a cesarean. She was arrested on charges of fetal homicide.”

“In Louisiana, a woman who went to the hospital for unexplained vaginal bleeding was locked up for over a year on charges of second-degree murder before medical records revealed she had suffered a miscarriage at 11 to 15 weeks of pregnancy.”

“In [Florida]….a woman was held prisoner at a hospital to prevent her from going home while she appeared to be experiencing a miscarriage. She was forced to undergo a cesarean. Neither the detention nor the surgery prevented the pregnancy loss, but they did keep this mother from caring for her two small children at home.”

“Anti-abortion reasoning has also provided the justification for arresting pregnant women who experience depression and have attempted suicide. A 22-year-old in South Carolina who was eight months pregnant attempted suicide by jumping out a window. She survived despite suffering severe injuries. Because she lost the pregnancy, she was arrested and jailed for the crime of homicide by child abuse.”

This case from Florida indicates how the rights of a pregnant woman can be taken away by the state.

“….a woman who had been in labor at home was picked up by a sheriff, strapped down in the back of an ambulance, taken to a hospital, and forced to have a cesarean she did not want. When this mother later protested what had happened, a court concluded that the woman’s personal constitutional rights ‘clearly did not outweigh the interests of the State of Florida in preserving the life of the unborn child’.”

These are not isolated cases. They have been occurring for many years and are becoming more frequent.

“Last year, we published a peer-reviewed study documenting 413 arrests or equivalent actions depriving pregnant women of their physical liberty during the 32 years between 1973, when Roe v. Wade was decided, and 2005. In a majority of these cases, women who had no intention of ending a pregnancy went to term and gave birth to a healthy baby. This includes the many cases where the pregnant woman was alleged to have used some amount of alcohol or a criminalized drug.”

“Since 2005, we have identified an additional 380 cases, with more arrests occurring every week. This significant increase coincides with what the Guttmacher Institute describes as a “seismic shift” in the number of states with laws hostile to abortion rights.”

Those who would enshrine fetal rights into law can write that law in such a way that the fetus has a robust set of rights, while the poor pregnant woman has only an indifferent and archaic Constitution to protect her.

The authors conclude with this plea:

“We should be able to work across the spectrum of opinion about abortion to unite in the defense of one basic principle: that at no point in her pregnancy should a woman lose her civil and human rights.”


Lynn M. Paltrow is a lawyer and the executive director of National Advocates for Pregnant Women, where Jeanne Flavin, a sociology professor at Fordham University, is the president of the board of directors.


Friday, November 7, 2014

Are Humans Inherently Warlike?

Edward O. Wilson has written a book titled The Social Conquest of Earth.  In it he includes a chapter with the title War as Humanity’s Hereditary Curse.  He provides this summary:

“Our bloody nature, it can now be argued in the context of modern biology, is ingrained because group-versus-group was a principle driving force that made us what we are.  In prehistory, group selection lifted the hominids that became territorial carnivores to heights of solidarity, to genius, to enterprise.  And to fear.  Each tribe knew with justification that if it was not armed and ready, its very existence was imperiled.  Throughout history, the escalation of a large part of technology has had combat as its central purpose.”

One could argue that recent history (since about 14,000 years ago) was dominated by the founding of large settlements with permanent, accumulated assets that required protection. Defense against predation by other groups often required collective, military-like action—war, essentially. A similar organizational imperative arises when the desire is to acquire another group’s assets. The development of fixed and long-lasting sites could be an inflection point in human history at which warlike behavior first began to be selected as a desirable trait. Wilson will have none of that.

“It should not be thought that war, often accompanied by genocide, is a cultural artifact of a few societies.  Nor has it been an aberration of history, a result of the growing pains of our species’ maturation.  Wars and genocide have been universal and eternal, respecting no particular time or culture.”

Since the human species has been accumulating genetic characteristics for millions of years, it is a rather bold step to claim war as “universal and eternal.” To support this contention, he turns to chimpanzees and their current behavior patterns for evidence that humans have always been warlike. Did everyone follow that bit of logic? Anthropologists believe that chimpanzees and what eventually became Homo sapiens shared a common ancestor, with the two species diverging about 6 million years ago.

“There is no certain way to decide on the basis of existing knowledge whether chimpanzee and human inherited their pattern of territorial aggression from a common ancestor or whether they evolved it independently in response to parallel pressures of natural selection and opportunities encountered in the African homeland.  From the remarkable similarity in behavioral detail between the two species, however, and if we use the fewest assumptions required to explain it, a common ancestry seems the more likely choice.”

What Wilson describes as our nearest neighbor, genetically, is actually the common chimpanzee, one of two chimp species.  About 1 million years ago a group of chimpanzees became physically isolated from the main body of the species and has evolved independently since that time.  This species is referred to as the Bonobo.  In one million years of separate evolution this chimp has evolved to possess quite different characteristics.

Bonobos have a much more peaceful temperament than the common chimpanzee. They have also produced a society with a matriarchal tendency. The alpha male seems to have responsibility for executing tasks, but the alpha female can refuse to follow his orders and the rest of the community will follow her lead. In fact, the males seem to inherit their authority based on the rank of their mother. One researcher referred to the alpha male as the general, who could be trumped by the alpha female, the queen. Perhaps the most unusual and interesting aspect of Bonobo society is the role of sex. One observer remarked that if two groups of Bonobos were to meet unexpectedly, an orgy was more likely to break out than warfare. The Bonobos, both male and female, are bisexual, very inventive, and very active. They occasionally experiment with the missionary position and practice oral sex. The offer of sexual activity seems to be associated with social bonding and is extended in much the same way we might offer a guest a cup of coffee.

One could describe the social activity of the Bonobos as “human-like,” claim that they are the species humans most resemble, and make whatever inferences one might wish. It is easy to get drawn into the study of animal societies and try to draw conclusions about human nature. The fact that these two lines of chimpanzees could develop so differently over the course of a million years might lead one to conclude that humans could also go off in their own unique direction over a period of six million years. We are what we are, and we have no one to blame but ourselves.

In any event, Wilson presents a picture in which human society evolved by adapting to the need to continually fight to protect resources or steal them from other bands of humans.  The enhanced social skills we developed relative to the other apes were driven by the need to coordinate actions by males in this quasi-military activity.  He assumes that this need for aggressive behavior persisted throughout prehistory as well as recorded history.  He turns to investigations of primitive hunter-gatherer societies as the only way to guess at what life and society might have been like for our ancestors many millennia ago.  He concludes that the data support his view.

Sarah Blaffer Hrdy is an evolutionary anthropologist (Wilson is best known as an entomologist) who looks at data from hunter-gatherer societies and draws an entirely opposite conclusion. She describes her views on what has driven our social evolution in Mothers and Others: The Evolutionary Origins of Mutual Understanding. She might describe Wilson’s conclusions as those one would expect from a male anticipating a male-dominated history. She then proceeds to develop a theory that Wilson might claim is one to be expected from a female anticipating a history heavily dependent on female developments. Let us hope that Hrdy is the one who is correct.

If one must explain the human tendency toward warlike behavior, one must also explain why sharing evolved as such a peculiarly human attribute. Chimps fight, but they do not share.

“Why would sharing with others, even strangers, be so automatic?  And why, in culture after culture, do people everywhere devise elaborate customs for the public presentation, consumption, and exchange of goods.”

“Gift exchange cycles like the famous ‘kula ring’ of Melanesia, where participants travel hundreds of miles by canoe to circulate valuables, extend across the Pacific region and can be found in New Zealand, Samoa and the Trobriand Islands”

There were never large numbers of early humans during most of their evolutionary history in Africa. The fact that other ape-like species came and went likely means that humans themselves spent some time on the verge of extinction. A species struggling to survive probably consisted of small bands totally focused on the need to find food. Would animals in such a state be likely to expend their meager energy learning how to kill others, or be more likely to devise means of collaborating with them?

“The sporadic success and frequent failures of big-game hunters is a chronic challenge for hungry families among traditional hunter-gatherers.  One particularly detailed case study of South American foragers suggests that roughly 27 percent of the time a family would fall short of the 1,000 calories of food per person per day needed to maintain body weight.  With sharing, however, a person can take advantage of someone else’s good fortune to tide him through lean times.  Without it, perpetually hungry people would fall below the minimum number of calories they needed.  The researchers calculated that once every 17 years, caloric deficits for nonsharers would fall below 50 percent of what was needed 21 days in a row, a recipe for starvation.  By pooling their risk, the proportion of days that people suffered from such caloric shortfalls fell from 27 percent to only 3 percent.”

This example was meant to illustrate how obviously beneficial it would be to share rather than to attempt to steal by force.  Such aggression might have been a short-term solution, but it would have been a long-term disaster. 

Hrdy also provides a compelling example of how this sharing became ritualized. She describes Polly Wiessner’s research on an African Bushman tribe called the Ju/’hoansi and the exchange networks they formed, referred to as hxaro.

“Some 69 percent of the items every Bushman used—knives, arrows, and other utensils; beads and clothes—were transitory possessions, fleetingly treasured before being passed on in a chronically circulating traffic of objects.  A gift received one year was passed on the next.  In contrast to our own society where regifting is regarded as gauche, among the Ju/’hoansi it was not passing things on—valuing an object more than a relationship, or hoarding a treasure—that was socially unacceptable.”

“In her detailed study of nearly a thousand hxaro partnerships over thirty years, Wiessner learned that the typical adult had anywhere from 2 to 42 exchange relationships, with an average of 16.  Like any prudently diversified stock portfolio, partnerships were balanced so as to include individuals of both sexes and all ages, people skilled in different domains and distributed across space.  Approximately 18 percent resided in the partner’s own camp, 24 percent in nearby camps, 21 percent in a camp at least 16 kilometers away, and 33 percent in more distant camps, between 51 and 200 kilometers away.”

According to Hrdy, the critical step in human evolution—and human survival—was when human women began to share responsibility for taking care of young children. At one time females were much smaller than males. If they were to grow larger and accommodate the growing human brain size, they would have to be able to gather more food than they could while encumbered by a child. In a sense, they developed the capability to leave an infant at childcare while they went off to work, with the understanding that the favor would be returned when others needed the same service. The ability to socialize these agreements and accommodate others’ needs and feelings required a capacity to sense emotions and express empathy far beyond anything the other apes were capable of.

“From a tender age and without special training, modern humans identify with the plights of others and without being asked, volunteer to help and share, even with strangers.  In these respects, our line of apes is in a class by itself.”

“This ability to identify with others and vicariously experience their suffering is not simply learned: It is a part of us.”

In Hrdy’s scenario, we did not develop into creatures capable of great civilizations because our bigger brains allowed us to develop the necessary social skills; rather, it was the development—starting over a million years ago—of the social skills required for survival that made possible the evolution of the large-brained human.

While Hrdy would recognize that there are hunter-gatherer tribes that are aggressive and do seem to behave in a fashion similar to common chimpanzees, she would disagree with referring to such tendencies as “universal and eternal.”

Patricia S. Churchland provides yet another perspective in her book Touching a Nerve: The Self as Brain.  She devotes a section to discussing the question “Is genocide in our genes?”

“To claim that genocide is in our genes on the grounds that humans do commit genocide would be like saying that we have genes for reading and writing because humans do read and write.  This latter we know to be wrong.  Writing and reading were invented a mere 5,000 years ago and were made possible by other, more general capacities that we have, such as fine motor control in our hands and the capacity for detailed pattern recognition.”

Acting aggressively, even murderously, can be triggered by social and environmental signals. People go to war because they have been convinced that it is beneficial to their society. Acting in a way that benefits society is the attribute favored by natural selection, not violence itself.

“From all that we now know, human warfare was not as such selected for biological evolution.  It may have been, like reading, a cultural invention that exploited other capacities.”

It is difficult to accept the notion that violent, aggressive behavior is “universal and eternal” in humans. There is too much contradictory data. It seems more likely that the growth of agriculture and animal domestication about 14,000 years ago changed the pressures of natural selection and conceivably generated humans with a greater tendency to resort to violence to attain their desires.

Churchland provides us an example of how powerful natural selection can be in certain circumstances.  It seems that fruit flies exhibit aggressive behavior.

“Geneticists Herman Dierick and Ralph Greenspan selectively bred aggressive fruit flies for even more aggressive behavior.  After 21 generations, the male fruit flies were 30 times more aggressive than the wild-type flies.  Just like breeding dogs.”

If we take 21 human generations as being about 500 years, then during a long period when survival meant being skilled as a warrior, one might expect a more warlike human to emerge. On the other hand, 500 years during which being warlike was a disadvantage—and could get you killed, or at least removed from the gene pool—would lead one to expect a more cooperative and sympathetic human to emerge. Over a million years of evolution, how many attributes would be selected and then deselected—and how many times?

As we continue to change the social and physical environment in which we live, who knows what natural selection might be doing to us. 

We are not who we were and we are not who we will be—but at least we are not universally and eternally warlike.


Saturday, November 1, 2014

The Death of Justice and the Tyranny of Law

There was a recent article in the New York Times by Shaila Dewan with the provocative title Law Lets I.R.S. Seize Accounts on Suspicion, No Crime Required. Dewan detailed the plight of small business owners who learned the hard way about civil forfeiture laws. These people made numerous small deposits in a pattern that led an IRS computer to suspect they were money launderers, presumably involved in the drug trade. It is apparently easier to simply confiscate the funds in the bank accounts than to go to the trouble of actually investigating to determine whether any illegal activity is occurring. The most recent legislative rendering of this law is the Civil Asset Forfeiture Reform Act of 2000. Not only does this legislation define the legality of this and other forms of confiscation, it contains a clause stating that the government agency does not need evidence of wrongdoing when it confiscates property.

“Bars dismissal of a complaint on the ground that the Government did not have adequate evidence at the time the complaint was filed to establish the forfeitability of the property.”

In this case, the IRS can confiscate your money by simply filing a form and informing you of what it has done.  If you want your money back you have to file a lawsuit—essentially suing the government for the return of your funds.  There are precise timelines to be met and a long and expensive legal process to follow before you might get your money back.

What the law says is that we—the government agency—have decided that you are guilty, and you will remain guilty until you prove yourself innocent.

This isn’t the way it is supposed to work, is it? Doesn’t the Constitution protect us from things like this? Well, actually, the Constitution provides little in the way of constraints on what legislatures can do to us.

William J. Stuntz provides an interesting discussion of the failings of the Constitution when it comes to the rights provided to US citizens in his book The Collapse of American Criminal Justice.

Stuntz defines two types of criminal laws.  The first are substantive in nature: defining acts that are deemed criminal and the penalties that can be imposed for this behavior.  The second are procedural in that they define which types of actions are available to policing agencies as they try to assign guilt or prevent crimes.  It would seem that a constitution would have to address the restrictions on both types of laws.  He provides an interesting comparison of the US Bill of Rights (1789) and the French National Assembly’s Declaration of the Rights of Man and of the Citizen (1789).  Only eleven weeks separated the introduction of these two documents, yet they were startlingly different in their focus.

The French focused on protecting citizens from the imposition of unjust or unreasonable laws.  Their document included phrases that would thrill libertarians today.

“Liberty consists in the power to do anything that does not injure others....”

“The law has the right to forbid only such actions as are injurious to society....”

“Every man being presumed innocent until he has been pronounced guilty, if it is thought indispensible to arrest him, all severity that may not be necessary to secure his person ought to be strictly suppressed by law.”

The Bill of Rights, on the other hand, focused almost entirely on procedural matters.

“Procedure dominates these texts.  Save for the First Amendment’s protection of speech and religion, nothing in the Bill of Rights limits legislators’ ability to criminalize whatever they wish.  Save for the mild constraints of the Eighth Amendment [cruel and unusual punishments], nothing in the Bill limits the severity of criminal punishment.”

Not being able to comprehend the future, writers of constitutions base their efforts on the past.  The past in the eighteenth century US meant that law was expected to be based on English common law as described in Blackstone’s Commentaries.  This was law as defined by centuries of judicial decisions made in the context of trying to provide justice in balancing the arguments of the abused and the accused.  The founding fathers apparently never dreamed that legislators would supersede this judicial system of law.

Stuntz points out a critical difference between the manner in which justice was dispensed in those days and where our procedural focus has led us today.

“In the eighteenth century as today, criminal defendants enjoyed the right to trial by jury.  But that right meant something quite different two centuries ago: eighteenth century American juries had the power to decide questions of law as well as disputed facts...Jurors at the time of the Founding were not the mere lie detectors they have since become.  They were moral arbiters; their job was to decide both what the defendant did and whether his conduct merited punishment.  Criminal law meant whatever jurors said it meant: their will trumped even Blackstone’s text.”

In that era there were no police departments and no district attorneys.  Criminal law was mostly defined by judges.  If someone believed he had been wronged in any way, the recourse was to file the equivalent of a civil suit seeking damages.

“Early criminal law was more akin to tort law—the law that governs what we now call personal injury litigation—than to contemporary criminal codes enforced by full-time, government-paid prosecutors.  No one thought tort law required constitutional limits: constitutions were needed to restrain governments, not private litigants.”

In Stuntz’s view, the politicization of legislation and law enforcement has been disastrous. It has become common to make district attorneys and judges elected officials.  This tends to lead to situations in which it is almost inevitable that prosecutors, judges, and legislators are team members with similar goals.  Judges once were intended to see that justice is obtained. Today, at best, they are severely limited by legal statutes in what options are available for legal interpretation and sentencing; at worst, they are collaborators in a system designed to convict people as quickly and as cheaply as possible.

“….the notion of electing both the district attorneys who prosecute criminal cases and the judges who try those cases seems uncomfortably close to criminal conviction by local plebiscite on Court TV.”

The Bill of Rights may have been intended to protect those accused of a crime from abuse by the legal system, but the laws put on the books have made it easy for prosecutors to be coercive.  Laws became very specific about what constituted a crime, leaving a judge or a jury little leeway in determining guilt or innocence.  The number of laws also multiplied, allowing prosecutors to charge a defendant with several crimes for what amounts to a single offense.  The new set of politician-driven laws also relieved prosecutors of having to prove intent to commit a crime.

“The most important change may have come in what lawyers call the law of mens rea, the law of criminal intent.  Traditionally, that body of law required proof that the defendant acted with a state of mind that was worthy of moral blame.  Some vestiges of that earlier state of affairs still exist in American law; the law of homicide is the clearest example.  But for the most part, the concept of wrongful intent—the idea that the state must prove the defendant acted with a ‘guilty mind,’….has gone by the boards.  Criminal intent has become a modest requirement at best, meaningless at worst.”

Proving guilt is hard work and it is costly, so why do it?  The goal of our legal system is to force people to plead guilty.  Trial by jury has essentially disappeared from our nation.  Over 95 percent of defendants plead guilty, which means that over 95 percent of those convicted of a crime never had their guilt determined at trial.

How does the prosecutor-judge-legislator team accomplish this extraordinary feat?  There are numerous tools at its disposal.  The death penalty is an excellent motivator for a defendant to plead guilty and accept a life sentence rather than risk death in the uncertainty of a jury trial.  Harsh mandatory sentences serve the same purpose.  Who would risk a jury trial and a possible 20-year sentence when offered the option of pleading guilty to a lesser charge carrying a 2-year sentence?  Prosecutors have also been known to threaten family members and friends with prosecution if the defendant does not cooperate.  And then there is the time-honored method of encouraging other defendants to inform on someone in exchange for a reduced sentence in their own case.

“The lack of careful investigation that characterizes most felony prosecutions virtually guarantees that a significant number of innocent defendants are pressured to plead to crimes they did not commit.  And among the much larger universe of guilty defendants, those who are punished most severely are often those who made the worst deals, not those who committed the worst crimes.  Often, the best deals go to defendants who have the most information to sell—meaning those defendants with the most extensive histories of criminal conduct.”

An article in The Economist titled The Kings of the Courtroom addresses the overwhelming power possessed by prosecutors and the possibilities for abuse.  It provides this data:

“A study by Northwestern University Law School’s Centre on Wrongful Convictions found that 46% of documented wrongful capital convictions between 1973 and 2004 could be traced to false testimony by snitches—making them the leading cause of wrongful convictions in death-penalty cases. The Innocence Project keeps a database of Americans convicted of serious crimes but then exonerated by DNA evidence. Of the 318 it lists, 57 involved informants—and 30 of the convicted had entered a guilty plea.”

The arbitrariness of “justice” when prosecutors are free to do anything they please was illustrated by this example:

“In 1996 police found a safe in Stephanie George’s house containing 500g of cocaine. She said it belonged to her ex-boyfriend, who had the key and admitted that it was his. Prosecutors could have charged Ms George with a minor offence: she was obviously too broke to be a drug kingpin. Instead they charged her for everything in the safe, as well as everything her ex-boyfriend had recently sold—and for obstruction of justice because she denied all knowledge of his dealings. She received a mandatory sentence of life without the possibility of parole. Her ex-boyfriend received a lighter penalty because he testified that he had paid her to let him use her house to store drugs. Ms George was released in April, after 17 years, only because Barack Obama commuted her sentence.”

When Prohibition was put into law, enforcement was directed at producers and sellers of alcohol; individuals were still allowed to possess alcohol and drink it within their homes.  When the antidrug laws were put in place, mere possession became a crime.  Since roughly 10 percent of the population uses illegal drugs, about 30 million citizens are, by that standard, criminals.  This has evolved into a worthless exercise in mass incarceration and racial discrimination.  However, the subject here is the making of bad laws that should themselves be illegal because they produce astonishing violations of civil rights, and the War on Drugs produced what may be the worst example.  We opened this piece with a bit of outrage over people who had seen their money confiscated unfairly by the IRS.  That is only the tip of the civil-forfeiture iceberg.

Sarah Stillman wrote a shocking article for The New Yorker titled Taken.  She begins with this lede:

“Under civil forfeiture, Americans who haven’t been charged with wrongdoing can be stripped of their cash, cars, and even homes.”

Law enforcement officials had grown frustrated with their inability to bring to justice the principals of organized crime, particularly those involved in the drug trade.  The intent was to punish, through confiscation of property, those who could not be reached by criminal statute.  The definition of property associated with criminal activity was quite broad.  One should recognize that this moves law enforcement into murky territory.

“Forfeiture in its modern form began with federal statutes enacted in the nineteen-seventies and aimed not at waitresses and janitors but at organized-crime bosses and drug lords. Law-enforcement officers were empowered to seize money and goods tied to the production of illegal drugs. Later amendments allowed the seizure of anything thought to have been purchased with tainted funds, whether or not it was connected to the commission of a crime.”

Such forfeitures were comparatively rare until the passage of the Comprehensive Crime Control Act in 1984 and the establishment of a program called Equitable Sharing. 

“It established a special fund that turned over proceeds from forfeitures to the law-enforcement agencies responsible for them. Local police who provided federal assistance were rewarded with a large percentage of the proceeds, through a program called Equitable Sharing. Soon states were crafting their own forfeiture laws.”

“Revenue gains were staggering. At the Justice Department, proceeds from forfeiture soared from twenty-seven million dollars in 1985 to five hundred and fifty-six million in 1993. (Last year, the department took in nearly $4.2 billion in forfeitures, a record.)”

The goal seemed to be to expand law enforcement activities without having to pay for the expansion.  The federal government was giving local law enforcement agencies a way to earn money by stopping people and confiscating their property: the more people stopped, the more money earned.  In 2006, in United States v. $127,000, an appeals court overruled a lower court and held that a Latino driver stopped with that sum of money in his car presented circumstances suspicious enough for the police to confiscate the funds as drug-related money.  No evidence of a crime existed, and none was needed.

Property forfeitures soared as police officers, law enforcement agencies, and even communities began to depend on this source of income. 

“Many states, facing fiscal crises, have expanded the reach of their forfeiture statutes, and made it easier for law enforcement to use the revenue however they see fit. In some Texas counties, nearly forty per cent of police budgets comes from forfeiture. (Only one state, North Carolina, bans the practice, requiring a criminal conviction before a person’s property can be seized.)”

Not surprisingly, the privilege has been abused.  A poorly conceived and poorly executed law aimed at wealthy criminals has ended up as the bane of the helpless.

“Yet only a small portion of state and local forfeiture cases target powerful entities. ‘There’s this myth that they’re cracking down on drug cartels and kingpins,’ Lee McGrath, of the Institute for Justice, who recently co-wrote a paper on Georgia’s aggressive use of forfeiture, says. ‘In reality, it’s small amounts, where people aren’t entitled to a public defender, and can’t afford a lawyer, and the only rational response is to walk away from your property, because of the infeasibility of getting your money back.’ In 2011, he reports, fifty-eight local, county, and statewide police forces in Georgia brought in $2.76 million in forfeitures; more than half the items taken were worth less than six hundred and fifty dollars. With minimal oversight, police can then spend nearly all those proceeds, often without reporting where the money has gone.”

The wealthy can still expect considerate treatment from law enforcement in the US; for the rest of us, the nation is beginning to resemble a banana republic.