Wednesday, January 30, 2013

Evolution: Women, Men, and Reproductive Strategies

Natural selection passes on the traits of those who are capable of creating and nurturing offspring to the point at which they survive long enough to create their own offspring. To play in this selection game, females must be willing to be inseminated and possess the physical and mental attributes required to give birth and to provide nurturing to a child until it is self-sustaining. Men must be capable of inseminating a woman, but the requirements and duties of nurturing are highly contingent on circumstances. Why weren’t men genetically wired to provide reliable support for their mates and offspring?

Sarah Blaffer Hrdy has produced two books that provide insight into the ways in which human males and females have evolved reproductive strategies to ensure that their genes are passed on: Mothers and Others: The Evolutionary Origins of Mutual Understanding, and Mother Nature: Maternal Instincts and How They Shape the Human Species.

Compared to other mammals, primate males play a significant role in raising offspring, providing protection and nourishment for infants. In most species the females are the foragers, while the males are the hunters tasked with providing the valuable protein-laden meat. The degree to which males are diligent in performing these roles is species-specific, and usually limited to infants thought to be their own offspring. Infanticide is common among some species. While a female is suckling an infant she is not available for mating, and it can be several years before weaning occurs. Males, particularly unfamiliar males, often become impatient and kill the infant in order to render the female available again.

Human males are distinguished by their lack of predictable nurturing traits.

"Some primates exhibit very high levels of direct male care, others do so only in emergencies, while still others exhibit no care at all. But the extent of this between-species variation pales when compared with the tremendous variation found within the single species Homo sapiens."

The differences in male response seem to be associated with the differing reproduction strategies assumed by females. Females realize that they and their infants are better off if the assistance of a male is available. So there are two options: mate with a single male and hope that he will be sufficiently reliable in performing his duties, or mate with a number of males and let them all assume they might be fathers, so that perhaps collectively they will provide enough security—and at least not kill the infant.

While a mother can generally be sure that an infant is actually hers, a male has no guarantee of paternity. He also has two options: trust that the mother is nurturing his infant, or increase his chances of siring an infant by inseminating as many females as possible.

Different species have adopted different strategies, but in most primates females mate with multiple males over a period of time. Hrdy seems to hint that females must have been driven by male irresponsibility to develop such behaviors. It would seem equally likely that males might have been driven by concerns about female promiscuity.

"This evolutionary history can still be detected in the patterning of sexual behavior in women today, and in the psyches of men who are obsessed with the chastity of their mates. No matter that females did not evolve a flexible and assertive sexuality in a vacuum. (It was an essential tactic for ensuring well-being of their infants that would scarcely have been necessary if females could choose an acceptable partner and count on him.) Given the situation as we find it, females mate with more than one male. This leaves males little choice. They must mate with as many females as they can, or else find themselves at a relative disadvantage vis-a-vis their rivals’ efforts to transmit their own genes to the next generation. Like mothers, males make tradeoffs of their own. Males must choose between parenting offspring they may have sired, and seeking to mate with additional females and possibly siring more."

In any event, there are evolutionary consequences and certain traits will be selected and enhanced. For example:

"To remain competitive with other males in his vicinity, a male primate must grow large enough to dominate and control females, and to exclude rival males (the way dominant male gorillas do). Or else he must evolve large testicles and ejaculate plentiful, high quality sperm in order to compete in a different arena, inside the reproductive tract of a female he will never manage to monopolize."

Primate testes size seems consistent with these mating modes, with humans being intermediate in scale.

"Hence, a 170-kilogram male gorilla....has testes weighing just 27 grams, compared to the enormous 140-gram testes of a 45-kilo chimp.... the chimp does not have the luxury of excluding competing males....Humans have testes proportionately larger than those of the underendowed gorilla, but considerably smaller than those of the chimps."
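Hrdy's numbers are easier to compare as testis-to-body-mass ratios. A quick back-of-the-envelope sketch (the body and testes masses are taken from the quote above; Python is used here only as a calculator):

```python
# Relative testis mass, from the figures Hrdy quotes
# (body mass in kilograms, testes mass in grams).
species = {"gorilla": (170, 27), "chimpanzee": (45, 140)}

for name, (body_kg, testes_g) in species.items():
    ratio = testes_g / (body_kg * 1000)  # grams of testis per gram of body
    print(f"{name}: {ratio:.3%} of body mass")

# The chimp's relative investment works out to roughly twenty times
# the gorilla's, consistent with sperm competition in chimps and
# harem monopolization in gorillas.
```

Humans, with proportionately intermediate testes, fall between these two extremes, which is what motivates the "uncertain reproductive environment" argument below.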

Following this line of logic, it would seem that human males might have encountered an uncertain reproductive environment where there was variation in mating habits either over time or over space as different bands of humans utilized differing strategies. There is evidence from encounters with primitive hunter-gatherer societies that mating and nurturing arrangements were highly flexible.

More consistent nurturing responses toward mates and infants were presumably never strongly selected for, leaving men with a more variable response based on their individual attributes. For example, men undergo some of the same hormonal and chemical changes women do, including a lowering of testosterone in the presence of an infant, but the scale of these changes is much smaller in men and highly variable.

An understanding of long-standing evolutionary imperatives can shed light on the bad behaviors that seem endemic among men (and women) in our societal environment. Fortunately, evolution has also provided us with a consciousness that allows us to decide to override the more primitive instincts—some of the time.

Hrdy provides us with a related bit of data that addresses a fundamental question that has long puzzled humans. The human male has evolved a rather large penis as primates go. It would seem that thousands of generations of women have voted and the results are conclusive—size matters!

Tuesday, January 29, 2013

The Permanent Temp Paradigm: Making Workers Expendable?

Erin Hatton reminds us that there is a growing employment trend in our country that is often noted but insufficiently discussed: temporary workers. She provides the beginnings of such a discussion in an article in the New York Times: The Rise of the Permanent Temp Economy.

Hatton is concerned that the marketing by the agencies that provide temporary workers has been successful at convincing employers that their needs can be met by legions of temporary workers who can be "rented" and used for whatever period is needed and released as convenient.

"A quarter of jobs in America pay below the federal poverty line for a family of four ($23,050). Not only are many jobs low-wage, they are also temporary and insecure. Over the last three years, the temp industry added more jobs in the United States than any other, according to the American Staffing Association, the trade group representing temp recruitment agencies, outsourcing specialists and the like."

"Low-wage, temporary jobs have become so widespread that they threaten to become the norm. But for some reason this isn't causing a scandal. At least in the business press, we are more likely to hear plaudits for 'lean and mean' companies than angst about the changing nature of work for ordinary Americans."

Hatton provides an interesting description of the growth of the temporary worker industry. It began in the early postwar years with an emphasis on placing women in temporary positions. From Hatton’s perspective this was a clever means of placing employees without having to worry about union objections or meeting wage and benefits standards that were in place.

"The temp agencies' Kelly Girl strategy was clever (and successful) because it exploited the era's cultural ambivalence about white, middle-class women working outside the home. Instead of seeking to replace 'breadwinning' union jobs with low-wage temp work, temp agencies went the culturally safer route: selling temp work for housewives who were (allegedly) only working for pin money. As a Kelly executive told The New York Times in 1958, ‘The typical Kelly Girl... doesn't want full-time work, but she's bored with strictly keeping house. Or maybe she just wants to take a job until she pays for a davenport or a new fur coat’."

The temp agencies succeeded in imprinting the notion that temporary work was a valid component of the economy. In so doing, they created a subclass of workers who could be treated differently from other employees.

"Protected by the era's gender biases, early temp leaders thus established a new sector of low-wage, unreliable work right under the noses of powerful labor unions. While greater numbers of employers in the postwar era offered family-supporting wages and health insurance, the rapidly expanding temp agencies established a different precedent by explicitly refusing to do so. That precedent held for more than half a century: even today 'temp' jobs are beyond the reach of many workplace protections, not only health benefits but also unemployment insurance, anti-discrimination laws and union-organizing rights."

The natural next step for the temp agencies was to sell the idea that temporary workers could be more cost effective than regular employees.

"Now eyeing a bigger prize - expansion beyond pink-collar work - temp industry leaders dropped their 'Kelly Girl' image and began to argue that all employees, not just secretaries, should be replaced by temps. And rather than simply selling temps, they sold a bigger product: a lean and mean approach to business that considered workers to be burdensome costs that should be minimized."

"According to the temp industry, workers were just another capital investment; only the product of the labor had any value. The workers themselves were expendable."

The potential advantages to companies are obvious: minimal training, little or no overhead, and immediate hiring and termination as necessary. A number of employers have apparently found that this model meets their needs.

"....thousands of companies began to go the temping route, especially during the deep economic recessions of the 1970s. Temporary employment skyrocketed from 185,000 temps a day to over 400,000 in 1980 - the same number employed each year in 1963. Nor did the numbers slow when good times returned: even through the economic boom of the '90s, temporary employment grew rapidly, from less than 1 million workers a day to nearly 3 million by 2000."

Hatton claims that low-wage temporary workers "threaten to become the norm." Yet the 3 million temporary workers in 2000 would be less than 3% of the working population. Does that signify a significant threat to the norm? She suggests that the temp industry is growing faster than any other, but what exactly does that mean?
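Hatton's 3 million figure can be put in rough perspective. A minimal sketch (the 137 million total for U.S. employment in 2000 is my approximation, not a figure from her article):

```python
# Rough share of temp workers in the 2000 U.S. workforce.
temp_workers = 3_000_000        # from Hatton's figures
total_employed = 137_000_000    # approximate U.S. employment in 2000 (assumption)

share = temp_workers / total_employed
print(f"{share:.1%}")           # about 2.2%, i.e. "less than 3%"
```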

Danielle Kurtzleben provides a bit more perspective on the recent developments in the temp industry in an article in U.S. News Weekly.

"After hitting a peak of nearly 2.7 million workers in 2006, the temporary help industry lost more than one-third of its members during the downturn. Since then, it has regained a vast majority of the workers it lost—87 percent—and continues to add them steadily. Compare that to all private employers, which have only brought back just over half of their jobs."

The suggestion by Hatton that industry might be embracing the temporary worker paradigm is not supported by this data. Even in the worst of economic times temporary workers suffered more severe cutbacks than the general workforce, which is not what one would expect if businesses were embracing the model. The fact that temporary jobs come back faster as the economy recovers is a testament to the ease of hiring temps, but it does not yet indicate a wholesale conversion to this class of worker.

There are areas in the workforce and in the economy where temporary work is a desirable component. Some people do, in fact, wish to work part-time. Many industries have seasonal worker requirements and hiring temporary workers is unavoidable. Temporary workers can fill in for employees who are on vacation, or maternity leave, or in any number of other circumstances. On the other hand, there are certainly those who wish to work full time in a job with decent pay and benefits but are consigned to a netherworld of uncertain prospects and unmet expectations because employers choose to satisfy long-term needs with a cheap short-term solution.

Hatton’s warning would be more compelling if she had provided data that indicated how temporary workers are actually being used—or abused—by businesses. Perhaps the information is available in her book.

"Erin Hatton, an assistant professor of sociology at the State University of New York, Buffalo, is the author of ‘The Temp Economy: From Kelly Girls to Permatemps in Postwar America’."

Even if the number of workers who are unwillingly trapped in low-wage temporary jobs is a small fraction of the workforce, that is no reason to ignore them. Hatton’s article implies that businesses are free to create these job categories, but that is not strictly true. Society makes the rules that businesses must live by. If society decides that a class of workers is being abused, society can and should change the rules to prevent that.

If there is a fault here, it is not with our companies, but with our society.

Sunday, January 27, 2013

The Filibuster: Two Nations Negotiate, Not Two Political Parties

The filibuster is a mechanism by which a minority party that controls at least 41 votes in the 100-member Senate can block passage of any legislation it chooses, even if a majority is in favor. This tactic produces, in effect, minority control of the legislative process. This is clearly an inefficient and unhealthy means by which to govern. Ezra Klein has provided an excellent summary of the history of the filibuster and of current attempts to modify it in The New Yorker: Let’s Talk: The Move to Reform the Filibuster.

The Senate-clogging mechanism was clearly not intended by the Founders, but they did allow the two houses of Congress to determine the rules by which they would conduct their proceedings.

"The Constitution expressly specifies the occasions on which majority rule is to be considered insufficient: removing the President, expelling members, overriding a Presidential veto of a bill or a resolution, ratifying treaties, and amending the Constitution."

The path to the filibuster arose accidentally out of an attempt to improve the clarity of the Senate’s rules.

"In 1806, the Senate rewrote its rules to make them clearer. Along the way, it dropped a provision called ‘the previous question motion,’ which allowed a senator to hold a majority vote to end debate on a question, and, in so doing, dropped the only way to stop a group of senators from talking."

This left the Senate operating on the assumption that no member would abuse this gap in the rules to subvert the will of the majority on a major issue.

It wasn’t until 1917 that the procedure was carried to an extreme. A group of isolationist senators talked for 23 days, until the Senate’s session came to an end, in order to block Woodrow Wilson’s intention to arm merchant ships prior to formal entry into World War I. The response to this action was to establish a mechanism for terminating "debate."

"During a special session of Congress....the senate agreed to Rule XXII, which empowered a two-thirds majority of senators to end debate through the procedure known as ‘cloture’."

The filibuster again became an issue when southern senators used it as a means of obstructing attempts to dismantle their Jim Crow laws and policies. Minimal reforms were put in place, but it wasn’t until 1975 that significant changes were made.

"In the wake of Watergate and the resignation of Richard Nixon,....a huge Democratic majority swept into the Senate....the reformers forced the threshold for cloture down to three-fifths."

There is a valid argument to be made that the filibuster is important in preventing a majority from imposing onerous conditions on a minority. The problem with such an argument is that it cannot conceivably apply to all matters of legislation. Not all laws should require a supermajority.

"There’s no perfect measure of how frequently filibusters occur. The closest thing we have to a count is the number of cloture votes the majority mounts. From 1917 to 1970, the majority sought cloture fifty-eight times. Since the start of President Obama’s first term, it has sought cloture more than two hundred and fifty times. Even that is probably an undercount, as it misses all the moments when the majority just gave up on an issue before a vote was mounted."

The means for changing this procedure have always been available, but party leaders seem to have little motivation to dramatically weaken the right to filibuster. Younger majority senators are perpetually outraged at the obstructionism allowed; older senators remember what it was like to be in the minority, and how useful it was to have the filibuster available as a weapon at those times.

In truth, the controversy over use of the filibuster doesn’t relate to poor procedural rules, or malfeasance on the part of one particular party or the other; it relates to the fact that we have two components to our society and each views the other as an existential threat. When filibusters are invoked today, the two parties are not arguing over the most effective features of a proposed law, they are arguing that their way of life is being put at risk.

Recall that the Founders thought it important to require a supermajority for ratification of a treaty. Treaties are difficult to negotiate because they often involve two nations that have little trust in each other, and the issues can be momentous—even existential. An analogy between contending nations and contending political parties becomes compelling when we consider how diametrically opposed the Republican and Democratic parties are on just about every issue. Whether one wants to describe the situation as liberal versus conservative, North versus South, rural versus urban, religious versus secular, libertarian versus communitarian—aspects of all are involved—we are, in fact, two nations trying to live together as one.

One should recall that the United States was created as two nations: slave and non-slave. We have yet to fully overcome that initial division. Conflict flared up and led to the Civil War. It flared up again and resulted in the tumultuous and violent civil rights struggles in the middle of the twentieth century. It seems we have one more conflict to endure. One can draw a straight line across time from the economic and political philosophy espoused by the slave-state South to that of the Southern core of the Republican Party today.

We must decide whether we are a nation of individuals who function better when acting in concert, or a nation of individuals who need to collaborate only when absolutely necessary. Until we reach an overwhelming consensus on that issue we might as well keep the filibuster rule in place. Perhaps the continuing dysfunction will force our citizens to finally come to a definitive conclusion.

Friday, January 25, 2013

Evolution and Culture: Parent Preferences: Sons or Daughters?

There are a number of societies in which a parental preference for sons has, on the whole, led to a large discrepancy between the male and female populations. Infanticide, neglect, or, more recently, abortion have been the means of expressing this male preference. It was once easy to conclude that this was merely the result of peculiar cultural practices that had developed in a particular region. Sarah Blaffer Hrdy, in her book Mother Nature: Maternal Instincts and How They Shape the Human Species, tells us that the situation is actually much more complicated, and deeply associated with fundamental biological imperatives.

The fundamental principle of motherhood is to produce offspring that will survive long enough to themselves reproduce. This is an inevitable result of natural selection. The major determinants of survivability are resource availability and security. A mother can adjust both the number and sex of her offspring in response to her perceived expectations for survivability. Lack of resources will lead mothers to limit the number of offspring, either by producing smaller broods or by killing or abandoning the excess. Many animals have been observed to alter the sex ratio of the brood produced depending on the prospects of survival, and experiments have determined that for some species there exist innate mechanisms that provide this degree of control. Humans seem to behave similarly in response to the same concerns about survivability, but they await the birth of the child before decisions are made about the viability of the infant.

In species that live in organized societies, social and cultural effects come into play alongside resource availability.

In species such as baboons, where resources are limited and rank is inherited from the mother, high-ranking mothers will prefer to produce more females, because daughters will benefit most from their mother’s rank. Conversely, lower-ranking females will produce more males, because males are more likely than females to survive the disadvantages of low rank and successfully breed. If the same species exists in a resource-rich environment, the population grows faster, the breeding limitations imposed by rank are less restrictive, and the sexual preference for infants can switch. A high-ranking mother can decide that it is more advantageous to produce males, because males have a greater capacity for reproduction: they are capable of inseminating many females, and more females are now available.

In humans, the fact that males have evolved to be bigger and stronger than females, owing to physical competition over mate selection, has generally produced a survival bonus associated with their sex. This was not always necessarily the case; studies of the earliest hunter-gatherer societies available for observation indicate a wide variety of social arrangements. Nevertheless, as societies evolved toward more sedentary structures, and domesticated agriculture produced goods and properties that constituted wealth that had to be preserved, patriarchal structures became more dominant. Males were more capable of performing the critical role of protecting wealth from predators. Male dominance, however, does not necessarily lead to female infanticide. There was a need for a son to carry on the family name and family wealth, but there were many variations on how to deal with other infants.

Certain preconditions seem to be necessary for societies such as India’s to develop extreme gendercide practices. It is the fragility of the economic health of the society that creates such a social response.

"In a world fraught with economic peril, recurring droughts, famines, and warfare, the best hope for long-term persistence of a lineage was concentration of resources in a strong, well-situated male heir with several wives or concubines. If family circumstances make this tactic doubtful, a daughter or two provide insurance against total extinction of the family line. If a family is truly wretched, the best it can hope for is that daughters will be able to, as slaves, wives or concubines, move up the social scale into positions where their children might possibly survive."

This human behavior is predicted by a hypothesis produced by Robert Trivers and Dan Willard to describe animal behavior.

"Trivers and Willard proposed that parents in good condition should prefer sons, those that were disadvantaged, daughters. They even specified that this logic would be found in socially stratified human societies, where women marry up the social scale, whenever the ‘reproductive success of a male at the upper end of the scale exceeds his sister’s, while that of a female at the lower end of the scale exceeds her brother’s. A tendency for the female to marry a male whose socioeconomic status is higher than hers, will, other things being equal, tend to bring about such a correlation’."

Note that this social construct renders females born at the top of the social structure useless, and males born at the bottom relatively useless.

"Eliminating daughters at the top of the hierarchy produces a vacuum sucking up marriageable girls from below, and creating a shortage at the bottom. Families don’t pay dowries to place daughters in the same or lower status than their own. They demand payment for them instead. At the bottom of the heap, sons whose families cannot cough up the required brideprice remain celibate. Far from calamities, daughters are the most valuable commodities low-status families possess."

Hrdy emphasizes that such societies are not the result of some genetic imperative, but merely a response to a particular environment, one that can change as the environment changes.

"In nineteenth century Rajasthan, where periodic droughts and famines were a certainty, survival of family lines required extreme measures. Heartless? Definitely. And ruthless. But prevailing rules for deciding which sex offspring will contribute most to family ends were devised over generations. Outcomes of successive trial and error, observation of the trial of others, imitation of those who succeed—these became codified as preferences for particular family systems. Adaptive solutions were retained as custom because families that followed these rules survived and prospered."

India is a tremendously diverse country, and the incidence of direct or indirect infanticide varies considerably from one region to another. India was the focus here because it has been the subject of the most research; in terms of the ratio of male to female children produced, the country as a whole is far from the worst offender.

Modernization is changing the ruthless calculus that created the practice of gendercide, but changing a culture is a complex process. As India and China become wealthier, one would expect the practice to die away—gradually.  That has yet to occur on a broad scale.

There is one exceedingly positive data point provided by South Korea. Twenty years ago that country had one of the highest ratios of male children to female children in the world. With the dramatic economic and educational improvements that have occurred throughout the nation in the interim, South Korea now produces children at near the nominal male-to-female ratio of about 1.05. An article in The Economist provides this data.

Humans are animals, but they are animals capable of controlling their environment. Better environments produce better behaviors.

Tuesday, January 22, 2013

Avandia: Corruption, Bias, and Death

All drugs have side effects, which can range from imperceptible to lethal. There is always a calculation to be done weighing a drug’s good effects against its bad ones; if the good sufficiently outweighs the bad, the drug might still be considered worthy of use. One assumes such decisions are made by an agency assigned the task of deciding what is in the public good. The relevant agency in this case is the Food and Drug Administration (FDA). Unfortunately, the FDA does not itself produce the data needed to make decisions on safety. It has become common for a drug to be submitted for approval when the vast majority of the data available on its use comes directly from the drug companies themselves rather than from independent agencies. The opportunity for abuse in an environment pertaining to such a critical mission is outrageous.

Peter Whoriskey has produced an excellent series of articles for the Washington Post covering the less-well-known practices of the drug industry. Here we will discuss his article titled As drug industry’s influence over research grows, so does the potential for bias.

Avandia was a drug produced by GlaxoSmithKline (Glaxo) for the treatment of diabetes. The data provided to the FDA indicated that Avandia was safe to use and was more effective than competing drugs. Several years later, after numerous heart attacks and deaths, Avandia was determined to significantly increase the probability of cardiovascular events and was essentially withdrawn from circulation. How this came to pass is the focus of Whoriskey’s article.


"The outlines of the Avandia case — in which the drug’s dangers had been recognized within the company long before the FDA pulled it from retail shelves — are well known."

"But the way that company officials employed academics — and the prestige of the nation’s top journal — to promote the idea that the drug was safe has received little public scrutiny, and a full account offers a window into the corporate decisions underlying today’s drug research."

"Interviews, FDA documents and e-mails released by a Senate investigation indicate that GlaxoSmithKline withheld key information from the academic researchers it had selected to do the work; decided against conducting a proposed trial, because it might have shown unflattering side effects; and published the results of an unfinished trial even though they were inconclusive and served to do little but obscure the signs of danger that had arisen."

Glaxo knew at an early stage that there was reason to be suspicious about side effects from Avandia. They chose to ignore the possibility and decided to avoid gathering any potentially damaging data.

"From nearly the beginning, Glaxo scientists confronted signs of potential heart dangers in Avandia. In 2000, about a year after the drug’s approval, a small internal study suggested that Avandia might raise "bad" cholesterol levels more than a competitor."

"The company considered sponsoring a full-blown trial to weigh the issue, but before it did, scientists conducted a "risk/benefit" analysis — not to calculate the risks and benefits of the drug to patients but to see whether a full-blown trial could harm the drug’s reputation."

"When that analysis showed a sign of danger — Avandia raised bad cholesterol levels more than the competitor — the company decided to drop the subject."

"’Per Sr. Mgmt request, these data should not see the light of day to anyone outside of GSK,’ said an internal e-mail that was widely reported after it turned up in the Senate investigation."

In 2003, warnings came in from an international monitoring agency suggesting that drugs such as Avandia might be associated with heart trouble.

"In 2005 and 2006, Glaxo conducted an examination of records from more than 14,000 patients and concluded that Avandia raised the risk of coronary blood flow problems by about 30 percent, the Senate investigators said."

When asked by the FDA to conduct a safety trial and include cardiovascular effects, Glaxo devised a clinical test that excluded people who were considered at risk for heart trouble and failed to tell the investigators (so the investigators claim) that cardiovascular issues should be a point of focus.

"As is common practice, the company arranged for a group of experts — mostly academics — to form a steering committee to guide and publish the experiment. Four of the 11 committee members were Glaxo employees. The other seven reported serving as paid consultants or had other financial connections to the company."

"But as the FDA later noted, the.... trial was not really designed to assess heart risks. For one thing, it excluded people most at risk of heart trouble, making it harder to spot a problem. Moreover, investigators did not have a group of doctors validate reports of heart attacks, as is customary because they can be difficult to detect. Finally, about 40 percent of patients dropped out of the trial."

"Why would the academics have set up a trial like that? One reason is that Glaxo apparently did not tell its own academic researchers that the FDA had requested that the....trial look at possible heart troubles."

How did Glaxo finally get caught? Independent researchers were disturbed enough to force the release of data so that an unbiased evaluation could be performed.

"To see whether his suspicions were warranted, [Steven E.] Nissen, with colleague Kathy Wolski, set out to assemble the data from every trial of Avandia that they could find. The more data they had, the more likely they could accurately gauge the risks. The drugmaker refused Nissen’s requests for data, but because of litigation brought by Eliot Spitzer, then New York’s attorney general, the company had been forced to make some of it public. In all, he discovered the summaries of 42 trials — 35 of them unpublished. Most of them had been sponsored by Glaxo."

"After analysis, the results were stark: Avandia raised the risks of heart attack by 43 percent and of death from heart problems by 64 percent."

Glaxo continued to fight back and deny the validity of these results even though its own scientists agreed with the conclusions.

"....scientists and statisticians at Glaxo largely agreed with Nissen’s calculations, the company e-mails released by the Senate show."

It would be another three years before Glaxo’s defenses collapsed and the use of the drug was restricted in the US and banned in Europe. Meanwhile, people who were prescribed the drug continued to die.

When does corporate death-causing malfeasance become manslaughter, a punishable crime?


There are disturbing trends in the testing and marketing of drugs that lead one to suspect that the entire system needs to be rethought and put under an independent agency that better represents the welfare of consumers.

"Years ago, the government funded a larger share of such experiments. But since about the mid-1980s, research funding by pharmaceutical firms has exceeded what the National Institutes of Health spends. Last year, the industry spent $39 billion on research in the United States while NIH spent $31 billion."

"The billions that the drug companies invest in such experiments help fund the world’s quest for cures. But their aim is not just public health. That money is also part of a high-risk quest for profits, and over the past decade corporate interference has repeatedly muddled the nation’s drug science, sometimes with potentially lethal consequences."

The many billions the drug companies spend on research have resulted in tremendous growth in for-profit drug-testing entities that compete with academic sites. Business is business, so these outfits know that receiving continued funds from a given company depends on how happy that company is with the results delivered. It is yet another system ripe for abuse.

The drug companies have also used their funds to feed fees, research dollars, and investment opportunities to doctors and researchers who study and/or promote drug products. The result is that it has become difficult to assemble experts on any subject who do not have financial ties to one or more drug companies. One might argue that this money has no effect, but the evidence that bias, inadvertent or purposeful, is nearly inevitable in medical science is overwhelming. For example,

"’Unfortunately, the entire evidence base has been perverted,’ said Joseph Ross, a professor at Yale Medical School who has studied the issue."

"....Ross notes that corporate bias can be particularly strong. The odds of coming to a conclusion favorable to the industry are 3.6 times greater in research sponsored by the industry than in research sponsored by government and nonprofit groups, according to a published analysis by Justin Bekelman, a professor at the University of Pennsylvania, and colleagues."

Other studies have arrived at similar conclusions.

If there is a positive note in this tale it is the suggestion by Whoriskey that doctors are beginning to take note of the drug company activities.

"....medical science appears to have reached a crisis: Doctors have grown deeply skeptical of research funded by drug companies — which, as it happens, is most of the research regarding new drugs being published....."

"According to a survey published this fall in NEJM [New England Journal of Medicine], doctors are about half as willing to prescribe a drug described in an industry-funded trial. That’s unfortunate, doctors say, because a good portion of the industry-funded research is done well."


"A Food and Drug Administration scientist later estimated that the drug had been associated with 83,000 heart attacks and deaths."

Sunday, January 20, 2013

Stimulus, Austerity, and Economic Dogma

The more one learns of the discipline of economics, the more distrustful one becomes of any predictions or pronouncements that emerge. The field has been referred to as the "dismal science," but that appellation is only half correct. The label "science" is as yet unearned.

In the 1930s, during what was arguably the darkest hour for global economics, two contending views emerged. One, held by the British government’s financial officials (and thus referred to as the "Treasury view"), was that

"Any increase in government spending necessarily crowds out an equal amount of private spending or investment, and thus has no net impact on economic activity."

John Maynard Keynes produced counter arguments to support the efficacy of increased government spending in increasing the level of economic activity.

One might have thought that, given the severity of the consequences of not resolving this issue, there would inevitably have been some sort of confrontation where one side or the other was compelled to admit error. With all the wars, recessions, and policy fluctuations that have occurred in the many countries with modern economies, there must be sufficient evidence to support one side or the other.

This ultimate confrontation never occurred. Instead, the two sides retreated into enclaves and surrounded themselves with disciples trained (indoctrinated) in the appropriate dogma. These opposing schools of thought are even geographically separated. Those who adhere to variations on the Treasury view are often referred to as "freshwater schools" because they seem to be distributed in the center of the country, with the University of Chicago being ground zero. Those adhering to variations on the Keynesian approach have accumulated in schools on the east and west coasts and are appropriately referred to as "saltwater schools."

Since the big picture issues were determined by revelation (Keynes or Treasury approach), students could occupy themselves writing more and more papers about less and less. Big issues are difficult to address. And they are risky to address. If one is proved wrong, then a lifetime of effort and accumulated prestige are rendered worthless. That cannot be allowed to happen—so it didn’t happen.

Rather than some sort of accord being attempted, the economic polarization became conflated with political polarization. The big business oligarchs wish to control the economy of the nation themselves and fear any hint that the government might have a role in directing it. Consequently, they and their party, the Republicans, have an interest in supporting and propagating the activities of the freshwater economists. The notion of government spending being useful in fiscal policy—as well as related social policies—led those who contend with business oligarchs, supporters of the Democratic party, to grow quite comfortable with the saltwater economists.

Consequently, when the latest financial disaster struck, the Great Recession, the two flavors of economists were ready to provide totally conflicting advice. Arguments centered on the degree to which government spending could stimulate the economy and produce sufficient growth to bring us out of the recession. A decrease in government spending would have the same size effect, but in the opposite direction. A saltwater type would believe that government spending has a large positive effect on the economy, and decreases in spending have a large negative effect. Freshwater types believe spending has little effect; therefore cuts in spending also have little effect. Obviously, they both can’t be right, and pity the nation whose government makes the wrong selection.

At issue is what happens after the government spends funds. In particular, what is the multiplying factor on the spending as a contribution to economic activity—the effect on GDP. If the government pays a company to build a bridge, the money will buy equipment and supplies and pay wages. All those who receive income from the bridge builder will then spend some of that money on wages, supplies, and personal expenses. The initial government expenditure sends ripples out through the economy and, on the face of it, seems to increase economic activity beyond that of the initial input. If so, this would indicate a multiplier greater than one. The contrary view would say that this input of activity is, to some degree, cancelled by secondary effects, and the multiplier would be less than one, perhaps as low as zero.
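The ripple effect described above can be sketched as a geometric series: if each recipient re-spends a fraction of what they receive, the total activity generated by an initial outlay converges to a fixed multiple of it. The sketch below is illustrative only; the re-spending fraction (the "marginal propensity to consume") is an assumed parameter, not a number from the text.

```python
# A minimal sketch of the spending "ripple": each round, recipients
# re-spend a fraction mpc of what they received. The mpc values used
# here are illustrative assumptions, not estimates from the post.

def spending_multiplier(mpc: float, rounds: int = 1000) -> float:
    """Sum the geometric series 1 + mpc + mpc^2 + ... over `rounds` rounds."""
    total = 0.0
    term = 1.0
    for _ in range(rounds):
        total += term
        term *= mpc
    return total

# If recipients re-spend 60 cents of every dollar, the series converges
# to 1 / (1 - 0.6) = 2.5: each government dollar yields $2.50 of activity.
print(spending_multiplier(0.6))   # ~2.5
print(1 / (1 - 0.6))              # closed form, same answer
```

The freshwater position amounts to claiming that offsetting effects (crowding out, expectations) shrink this sum below one; the saltwater position is that, especially in a slack economy, the series plays out largely as written.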

Here are examples of what these opposing views arrive at. The first is data presented by Mark Zandi, chief economist of Moody’s, in a brief to a congressional committee in 2008.

These numbers indicate multipliers much greater than one for government spending.

Robert J. Barro provided an alternate view in an article in the Wall Street Journal.


"I estimate a spending multiplier of around 0.4 within the same year and about 0.6 over two years. Thus, if the government spends an extra $300 billion in each of 2009 and 2010, GDP would be higher than otherwise by $120 billion in 2009 and $180 billion in 2010. These results apply for given taxes and, therefore, when spending is deficit-financed, as in 2009 and 2010. Since the multipliers are less than one, the heightened government outlays reduce other parts of GDP such as personal consumer expenditure, private domestic investment and net exports."

"Thus, viewed over five years, the fiscal stimulus package is a way to get an extra $600 billion of public spending at the cost of $900 billion in private expenditure. This is a bad deal."

This is not the place for an extended argument about who might be correct in this battle. Paul Krugman provides an interesting discussion of these issues in his book End This Depression Now! Krugman is perhaps the most vocal member of the saltwater school. Interestingly, he admits that the issues are more complex than a simple analysis can resolve. Fiscal actions must be correlated with political policy decisions and the state of the economy at the time in order to even attempt to extract a cause-and-effect relationship. He suggests that a broad analysis of the many instances of fiscal actions that have occurred is required, and indicates that the International Monetary Fund (IMF) has recently performed an appropriate study.

This source provides us with information about the IMF’s conclusions.

"In October 2012 the International Monetary Fund released their Global Prospects and Policies document in which an admission was made that their assumptions about fiscal multipliers had been inaccurate."

‘IMF staff reports suggest that fiscal multipliers used in the forecasting process are about 0.5. Our results indicate that multipliers have actually been in the 0.9 to 1.7 range since the Great Recession. This finding is consistent with research suggesting that in today’s environment of substantial economic slack, monetary policy constrained by the zero lower bound, and synchronized fiscal adjustment across numerous economies, multipliers may be well above 1.’

"This admission has serious implications for economies such as the UK where the OBR used the IMF's assumptions in their economic forecasts about the consequences of the government's austerity policies."

The Eurozone countries have also been enamored with austerity policies and were similarly encouraged by the IMF. How has that worked out? An article in The Economist provides this chart with Australia thrown in as a bonus.

The US has maintained a slowly growing economy, whereas the European countries seem to have entered a death spiral.

Thankfully, the US had a Democratic president and Democratic economists in our time of need, and it will be at least four more years before the Republican economists can return and cause us grief again.

Thursday, January 17, 2013

Evolution in Action: Pro-Choice IS Pro-Life

The pro-choice and pro-life antagonists argue over whether a woman should have the right to abort a pregnancy when she decides that she is either unable or unwilling to invest a lifetime of resources in the fetus that she is carrying. The pro-life proponents claim such an act is equivalent to murder because the fetus must be considered a viable human being from the time of conception. Rather than battle this out in the context of current cultural standards, let’s address the historical record and see if some insight can be gained. We have had medically-safe abortion legally available for only about two generations. Mankind has existed prior to that for many thousands of generations. Abortion was not available, but the various forms of infanticide were options. Perhaps some insight can be derived from humanity’s needs and solutions as they developed over the eons. What one might deem "natural" today may have little correlation with what seemed natural to earlier generations of humans.

Our guide in this investigation is Sarah Blaffer Hrdy, an evolutionary anthropologist and author of an absolutely enthralling book: Mother Nature: Maternal Instincts and How They Shape the Human Species. What is clear from Hrdy’s research is that nature has evolved reproductive strategies that are designed to produce more offspring than a given species is generally capable of nurturing to maturity. This provides the species with extras if something should go wrong, but it requires the development of tactics for ensuring that the burden of too many offspring does not threaten the viability of the species itself. The excess births are dealt with by allowing some to die. That is nature’s way. That has also been humanity’s way.

Data on the behavior of humans at earlier stages of development can be inferred from the study of other primates, particularly the other apes, and from early studies of primitive hunter-gatherer societies, before their behavior patterns were altered by encounters with modern civilization. Historical records go back a few thousand years and provide additional information. The picture that emerges is one in which infanticide was a common practice up to the time when modern contraceptive methods became available. For example, the best available records indicate that allowing excess children to die was common among Christians from the origin of the religion in the Roman Empire through the nineteenth century.

Let us begin at the beginning. Human-like creatures, bipedal apes walking the ground, began to emerge several million years ago. Most evolutionary attempts failed and those species disappeared. Humans were lucky enough to have emerged from the line that began with Homo erectus. Evolution drove humans farther and farther away from the other great apes in characteristics. Humans lost their body hair and began to have ever larger infants that were less developed at birth and more vulnerable. Whereas a baby chimp was born strong enough to grasp its mother’s hair and ride along as she went about her business of gathering food, human mothers had only the option of carrying their infants everywhere. Carrying around a cumbersome and absolutely helpless infant was not an efficient way to gather food.

Hrdy tells us the next step was for a mother to learn to depend on allomothers, a grandmother, other kin, or just another woman in the group, to nurture her child at times when she couldn’t. Keep in mind that infants have little or no active immune system at birth. If they do not have access to safe fluids from a woman’s breast, they nearly all die. This act of shared nurturance meant that the child spent less time on its mother’s breast. The contraceptive effect of lactation is not absolute; it is a function of the amount of nipple stimulation the mother receives. Going significant periods without that stimulation increased the probability of conceiving again before the previous child could be weaned. Caring for multiple infants at the same time was extremely difficult and highly dependent upon circumstances.

This tendency to conceive over shorter intervals was probably beneficial in terms of survival of the species, but it had to be moderated in order that the demands of a rapidly growing population could be limited if food or water became scarce. Humans are unique in many ways, but this one characteristic became very important.

"Scrutinizing newborn group members is a primate universal. But consciously deciding whether or not to keep a baby is uniquely human."

When water, food, or allomothers were scarce, the mother had to decide if she had sufficient resources to support the infant. If she had older children she had to decide whether it was worth risking their lives by adding the burden of an additional infant. This type of decision process must have been quite common. Given that the human population must have grown and contracted many times over the ages, the need to eliminate infants had to have been common.

If the decision to allow a baby to die was to be made, it had to be made quickly. Mothers and infants seem to have evolved characteristics that recognized this process and provided a grace period, a few days in which the exceptionally fat human infant could survive without mother’s milk, and a few days before the mother began to lactate and her body began issuing all sorts of chemical and hormonal signals ordering her to bond with her child.

These behaviors were observed in hunter-gatherer societies when they were first encountered centuries ago, and they are observed today in the few such societies that remain. This behavior pattern did not disappear as humanity exited the pre-historical era. As societies became more complex, decisions of life or death for infants often became a group decision or came under more defined social rules. Many societies defined viability tests to insure that only the sturdiest infants were invested in.

"Among Germans, Scythians, and even some civilized Greeks....newborns were subjected to icy-cold baths to toughen them, and also to test them ‘in order to let die, as not worth rearing, one that cannot bear the chilling’."

The ancients had an answer to the question of when life begins. It begins when someone decides the infant is worthy of the investment of scarce resources. In particular, it begins when a woman, usually the mother, offers the infant her breast and lactation begins. No infant was considered viable until that occurred.

"In eighth century Holland, for example, among the....Frisians, infanticide was permissible, but only as long as the child had not yet tasted ‘earthly food.’ This was a common pattern. It is surely no accident that so many culturally recognized milestones about becoming human specify intake of nutrients. Once lactation is established, with all its attendant hormonal changes, the mother is physiologically and emotionally transformed in ways that make subsequent abandonment of her baby unthinkable."

Hrdy tells a fascinating tale of the investigations of the historian John Boswell.

"While researching early Christian sexual mores, Boswell came across some odd advice given by prominent early theologians. Men should be careful never to visit brothels or have recourse to prostitutes because in doing so they might unwittingly commit incest."

How should one interpret that strange piece of advice? Boswell ultimately proved what could be the only logical explanation.

"These early Christian parents, much like the ‘barbarians’ Darwin and various anthropologists described, abandoned rather than killed their unwanted infants. The deeper Boswell delved, the clearer it became that very nearly the majority of women living in Rome during the first three centuries of the Christian era who had reared more than one child had also abandoned at least one. He found himself looking at rates of abandonment of around 20 to 40 percent of children born."

Boswell seemed to maintain hope that these Christians abandoned their infants on the assumption that some kind soul would pass by and pick them up. And of course that kind soul would be lactating so the infant would have a safe supply of fluids and nourishment. Some obviously did survive and probably became slaves or prostitutes, but, as subsequent history will show, abandonment was more likely a death sentence.

The Christian nations of Europe did not eliminate the practice of abandonment, but they did begin to keep better records. The practice was so widespread and encountering abandoned infants so common that attempts were made to gather them up and nurture them in group homes—an experiment destined to fail.

" Europe groups of citizens and governments were similarly disturbed by the large numbers of unwanted infants left along roads and in gutters. In city after city the same painful experiment was repeated."

The experiment referenced was the creation of foundling homes. Gathering together a large number of infants with little or no immunity was not a good idea.

"The foundling homes became focal points for contagion for small pox, syphilis, and dysentery. But the key problem was always how to feed infants without introducing lethal diarrhea-causing pathogens."

Once again, infants without access to a lactating woman usually died, and there were nowhere near enough lactating women to go around.

What the foundling homes did do was keep records.

" became clear that foundling homes were magnets for a much wider population than simply unwed mothers and poor domestics seduced by employers. Parents—often married couples—from a broad catchment area saw the orphanages as a way to delegate to others parental effort for offspring they could scarce afford to rear. Mothers poured in from rural areas to deposit babies in the cities. What has generally been studied as a patchwork of various, discrete, local crisis is really a wide-scale, demographic catastrophe of unprecedented dimensions."

Without contraception, a couple could produce 10-15 infants in a lifetime. Few could provide the necessary level of support. They had little choice. The existence of foundling homes made abandonment an easier decision to make, but the death rates for infants had to be known.

"The scale of mortality was so appalling, and so openly acknowledged, that residents of Brescia proposed that a motto be carved over the gate of the foundling home: ‘Here children are killed at public expense’."

The practice of abandoning unwanted infants went on for centuries.

"Italy provides some of the most complete records on infant abandonment....By 1640, 22 percent of all children baptized in Florence were babies that had been abandoned. Between 1500 and 1700, this proportion never fell below 12 percent. In the worst years on record, during the 1840s, 43 percent of all infants baptized in Florence were abandoned."

As modern contraception became available, abandonment and infanticide essentially disappeared from most societies. But the perceived need to avoid giving birth to an unwanted child or a child that a mother is unable to support remains. Consider this tally of reasons why people choose to have an abortion.

These motivations are similar to those that have driven women for uncountable numbers of generations. How can the desire of a woman to have an abortion be considered evil or unnatural given the history of humanity that has been described? Abortion is, today, a tiny residual of a once-common practice that is as old as humanity.

Rather, can one argue that the decisions women have been forced to make over the ages should be described as pro-life?

One can make a valid argument that humanity would not have survived if women had not limited the population when necessary. That should qualify as pro-life at some very high level.

The decision to let an infant die in order to improve the life prospects of others can be considered as pro-life.

Modern medicine has complicated the issue of viability, but there is no justification for assuming it begins at conception. That is a fantasy dreamed up by old men who wish to exert control over the bodies of women and their reproductive habits. That topic involves another tale of evolutionary history—and the subject is not life, but power.

I would argue that the potential harm produced in creating an unwanted or uncared-for child far exceeds any harm that comes from eliminating a nonviable fetus.

And that is a pro-life point of view.

Monday, January 14, 2013

Britain, Austerity, and the Multiplier: The IMF Says "Oops"

John Lanchester provides an acerbic appraisal of the British government’s last three years of economic "austerity" in an article in the London Review of Books. The title used in the article is Let’s call it failure. A more colorful title, and one more representative of the tone of the article, was used on the magazine cover: The Shit We’re In.

Lanchester begins with this barb:

"As [Chancellor of the Exchequer] George Osborne’s autumn statement made clear, the scale and speed and completeness with which things are going wrong are numbing."

The current government came into power with a very specific plan and goal.

‘We will cut government spending to bring the deficit down and restore stability.’

When the austerity policies were initiated, the government claimed a budget deficit of 4.8% of GDP. After three years of spending cuts and tax increases, Lanchester concludes that the deficit has moved not down but up and resides now at 4.9% of GDP.

He finds the imposed government policies to be bewildering. Severe public spending cuts have been authorized, but the biggest components of public spending—education, healthcare, and pensions—have effectively been excluded, producing draconian cutbacks in other areas.

"We can all agree that there have been savage cuts to public spending. Examples are not far to seek: the police have lost more than 24,000 jobs since the coalition came to power; more than two hundred libraries closed in 2011 alone; local councils are on a four-year track which will see their budgets cut by more than a third."

"Is this achievable? Could any government do that? No government in British history has, which should give us a clue. The cuts to unprotected, unringfenced departments, in real terms (that means adjusted for inflation), would amount to more than 30 per cent. The Institute for Fiscal Studies thinks it is ‘inconceivable’ for the implied levels of cuts to be achieved."

One of the areas required to cut back is the military.

"In....March 2010, when the cuts to unprotected departments were set to be 18 per cent, I quoted an IFS economist as saying that ‘for the Ministry of Defence an 18 per cent cut means something on the scale of no longer employing the army.’ Upgrade the level of cuts to 30 per cent and the cuts are, I suggest, politically and practically unachievable."

The only explanation for this type of planning is that the government hoped to be bailed out by a resurgence of economic growth that would have precluded the need for such drastic measures. Not only has the growth been lacking, but the economy has experienced a double dip recession, and could be headed for a third dip.

"In June 2010, the OBR [Office of Budget Responsibility] predicted that the UK economy would grow by 2.8 per cent in 2012. By this year’s budget in March, it had revised that estimate downwards to 0.8 per cent. By the autumn, the new guesstimate was that the economy would shrink by 0.1 per cent this year. The OBR predicts that the economy will shrink again in the last quarter of the year, before slowly picking up in the first quarter of 2013. If they are right about the first part of that guess but wrong about the second – which looks far from improbable – then we will have entered a historically unprecedented triple-dip recession."

How could government projections be so far off?

Lanchester suggests a misunderstanding of the concept of the "multiplier" is the cause. If the government pays someone a dollar, or a pound, to do something, that dollar has increased economic activity by that amount. If that recipient then goes and purchases something with that money, then another increase in economic activity has occurred. Subsequent transactions could ensue and also contribute to the economy. The value of the initial expenditure has been "multiplied." Note that the concept of a multiplier holds for cuts in spending as well, becoming a negative factor in tallying economic activity. It should also be clear that different types of spending or spending cuts are likely to have different multipliers. It is difficult to predict the effect of government spending changes if one cannot quantify these multipliers.

How could such fundamental economic parameters not be available for accurate budget planning? It seems the concept of a multiplier was first introduced long ago by a student of Keynes. Unfortunately, much of the economic profession has devoted decades to proving that Keynes was wrong.

"About thirty years ago, when Keynes was in the depths of economic unfashionability, going up to a group of macroeconomists and trying to start a conversation about the multiplier would have been roughly like going up to a group of astrophysicists and trying to start a conversation about your star sign. The multiplier was so far off the agenda that it was no longer considered a serious economic principle."

Given the need to prove Keynes wrong, it was necessary to argue that multipliers are small, if not zero. Otherwise, government spending in a recession could be deemed a good thing. Not surprisingly, political and economic conservatism go hand in hand. If an economist wants to believe a multiplier is small, he or she can find conclusions to support that bias. Keynesians believe multipliers can be large; conservatives believe they are small.

What if the conservatives are wrong?

"What if the effect of public spending cuts is bigger than they thought, bigger than they have allowed for in their models? What if, quite simply, they’re using the wrong multiplier? If that were the case, then austerity policies would be doing more damage than good, and the countries that were pursuing them would be digging themselves further and further into a recessionary hole. The amount they’re saving in spending cuts would be more than accounted for by the extra damage they were doing to themselves."

Lanchester was startled to discover that one of the most influential and conservative of the economic voices has just admitted that the assumptions about the multiplier being small have been wrong.

"In the October edition of its regular World Economic Outlook, the IMF studied the question and announced that governments had been basing their calculations on the effects of austerity using a multiplier of 0.5. So for every £1 billion removed from government spending, GDP would contract by £500 million. The IMF looked at the relevant historical data, and concluded that the real multiplier for austerity-related cuts was higher, in the range of 0.9 to 1.7. So that same package of £1 billion in fact removes as much as £1.7 billion of output. This was a jaw-dropping thing to discover, not just because it was surprising in itself, and because it explained the surprising-to-governments economic damage being done by austerity packages, but also because the people saying so were the IMF. The very same IMF whose off-the-shelf policy recommendations for indebted governments and struggling economies always, but always, involves swingeing packages of spending cuts."

Such an admission of economic incompetence is stunning. Lanchester provides this analogy.

"In terms of the surprise, and its source, the IMF announcing that the multiplier effects of spending cuts had been underestimated was like the BMA [British Medical Association] announcing that they had studied all the relevant evidence and come to the conclusion that exercise is bad for you."

Yes, the British are deep in it—and so is much of Europe.

Friday, January 11, 2013

Limiting Carbon Emissions: Getting It Done in Ireland and California

Essentially all of the world’s governments recognize the need to reduce emissions of greenhouse gases in order to contain the effects of global warming. In spite of that fact, it does not seem possible for them to agree on an overall plan to address the issue. This is not an unusual circumstance in the history of mankind. The sticking point seems to be settling on precise levels of emission cuts to be imposed on each country.

Given the determination of many countries not to be held to specific targets, is this a reason to despair? Not necessarily. Many countries are going about their business and initiating energy conservation and renewable energy policies that are effective and contribute to the common good. It makes good economic sense for them. Even developing countries are recognizing that building efficient factories and planning cities with minimal energy consumption in mind are good policies. The sum of these individual actions may prove more effective than the constraints imposed by a highly compromised plan arising from global negotiations. Let us be optimistic and applaud good news where we find it.

We recently reported on Germany and its plans in Germany and Energiewende: Going for Broke with Renewable Energy. Here we will discuss developments in Ireland and the state of California.

Ireland decided to limit its carbon emissions by imposing a carbon tax. The net result is beneficial to humanity, but the initial motivation seems less lofty: they needed the money. Elisabeth Rosenthal reports on the situation in an article in the New York Times.

"Over the last three years, with its economy in tatters, Ireland embraced a novel strategy to help reduce its staggering deficit: charging households and businesses for the environmental damage they cause."

"The government imposed taxes on most of the fossil fuels used by homes, offices, vehicles and farms, based on each fuel’s carbon dioxide emissions, a move that immediately drove up prices for oil, natural gas and kerosene. Household trash is weighed at the curb, and residents are billed for anything that is not being recycled."

"The Irish now pay purchase taxes on new cars and yearly registration fees that rise steeply in proportion to the vehicle’s emissions."

The Irish admit to not having been the best citizens of the planet, given that their per capita carbon emissions were almost as high as those of the United States. But, when put in a position where it made sense to cooperate, they delivered admirably and easily met the targeted emission reductions.

"....when the Irish were faced with new environmental taxes, they quickly shifted to greener fuels and cars and began recycling with fervor. Automakers like Mercedes found ways to make powerful cars with an emissions rating as low as tinier Nissans. With less trash, landfills closed. And as fossil fuels became more costly, renewable energy sources became more competitive, allowing Ireland’s wind power industry to thrive."

"Even more significantly, revenue from environmental taxes has played a crucial role in helping Ireland reduce a daunting deficit by several billion euros each year."

"The three-year-old carbon tax has raised nearly one billion euros ($1.3 billion) over all, including 400 million euros in 2012. That provided the Irish government with 25 percent of the 1.6 billion euros in new tax revenue it needed to narrow its budget gap this year and avert a rise in income tax rates."
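As a quick back-of-the-envelope check of those figures (this sketch is mine, not the article's):

```python
# Checking the budget arithmetic in the quote above (billions of euros).
carbon_tax_2012 = 0.4      # carbon tax revenue raised in 2012
new_revenue_needed = 1.6   # new tax revenue needed to narrow the budget gap

share = carbon_tax_2012 / new_revenue_needed
print(f"{share:.0%}")  # 25%
```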

Has the imposed tax been particularly painful?

"The prices of basic commodities like gasoline and heating oil have risen 5 to 10 percent."

While the price rise is surely noticed by poorer citizens, it should be recognized that those prices can jump by that much or more whenever a run of bad news (or good news) dominates the headlines.

California has recently held its first auction of carbon emission permits under its cap-and-trade program. Rather than impose a direct and immediate penalty on carbon-emitting behavior via a tax, a cap-and-trade system imposes a gradual and indirect penalty on consumption. A permit allows a company to emit a specified amount of carbon. The number of permits issued is determined by the state’s emission target. To lower emissions in the future, the amount allowed by the permits is decreased in accordance with the state’s master plan.

This plan and a direct carbon tax both result in costs being passed on to consumers. However, the cap-and-trade approach should give industries more flexibility in meeting goals. The permits will be traded on a market, so companies that invest heavily in reducing emissions, or find it easy to do so, can make money by selling unneeded permits. Those that have difficulty reducing emissions, or choose not to, can buy time by purchasing excess permits. The emission level allowed by the permits held is said to be "enforceable." Presumably that involves a fine or, in extreme cases, a shutdown of operations.
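The trading logic described above can be sketched as follows. The function and the figures are hypothetical, used only to illustrate the abate-or-buy decision a permit market creates for each firm.

```python
# Hypothetical sketch of the permit-trading logic described above.
# A firm compares its marginal abatement cost with the market permit price:
# if abating is cheaper, it cuts emissions and sells its spare permits;
# if abating is dearer, it buys permits instead. All figures are invented.

def trade_decision(abatement_cost, permit_price):
    """Return the cheaper way for a firm to cover one tonne of emissions."""
    if abatement_cost < permit_price:
        return "abate and sell spare permits"
    return "buy permits"

print(trade_decision(abatement_cost=8.0, permit_price=12.0))   # abate and sell spare permits
print(trade_decision(abatement_cost=20.0, permit_price=12.0))  # buy permits
```

The market thus steers the cheapest available emission cuts to whoever can make them, which is the flexibility advantage claimed for cap-and-trade over a flat tax.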

California passed a law requiring it to reduce emissions to 1990 levels by 2020. This program will be combined with conservation and other efforts in order to meet that requirement.

This is no small effort. As of 2011, California had the eighth largest economy in the world, and comprised about 13% of the United States economy. The only bigger cap-and-trade system in the world is that of the European Union.

Progress is being made. Hopefully, successful implementation of either approach will encourage others to follow.

Wednesday, January 9, 2013

Hedge Funds Are Unable to Beat the Market: Why?

Traditional advice to conservative investors has been to put 60% of resources into equities and 40% into bonds. Institutional investors such as pension plans and college endowments have tended to drift into riskier forms of investment in recent years as the outlook for equities has failed to meet their need for high returns. As a result, they have invested heavily in riskier arenas such as hedge funds, private equity, and real estate. This trend should trouble those who depend on the health of these institutional investments. An article in The Economist, Hedge funds: Going nowhere fast, provides some insight into the efficacy of using hedge funds to attain those large returns.

This comparison of the performance of an index of hedge funds versus a 60%/40% split on indices representing equities and bonds illustrates that hard times have befallen a once glamorous investment vehicle.

"The past year has been another mediocre one for hedge funds. The HFRX, a widely used measure of industry returns, is up by just 3%, compared with an 18% rise in the S&P 500 share index. Although it might be possible to shrug off one year’s underperformance, the hedgies’ problems run much deeper."

"The S&P 500 has now outperformed its hedge-fund rival for ten straight years, with the exception of 2008 when both fell sharply. A simple-minded investment portfolio—60% of it in shares and the rest in sovereign bonds—has delivered returns of more than 90% over the past decade, compared with a meagre 17% after fees for hedge funds...."

Investors pay a heavy price for relatively poor performance.

"....fees of 2% of assets and 20% of profits (above a certain level) typically charged by hedge funds...."

The knife is then turned with this observation.

"As a group, the supposed sorcerers of the financial world have returned less than inflation. Gallingly, the profits passed on to their investors are almost certainly lower than the fees creamed off by the managers themselves."
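A minimal sketch of how a "2 and 20" structure eats into returns, assuming for simplicity a zero hurdle rate and annual fee crystallization; the function and figures are illustrative, not the article's.

```python
# Illustrative "2 and 20" fee calculation: 2% of assets plus 20% of
# profits, here simplified to a zero hurdle and annual crystallization.

def net_return(gross_return, mgmt_fee=0.02, perf_fee=0.20):
    """Investor's annual return after management and performance fees."""
    after_mgmt = gross_return - mgmt_fee          # management fee off the top
    if after_mgmt > 0:
        after_mgmt -= perf_fee * after_mgmt       # 20% of remaining profit
    return after_mgmt

# A 10% gross year leaves the investor roughly 6.4%
print(f"{net_return(0.10):.3f}")
```

Note the asymmetry: the management fee is charged in losing years too, which is how managers' fees can exceed investors' profits over a flat decade.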

There continue to be high-performing funds, but they are not always the same ones. Big winners of a few years ago have become big losers today. The author suggests that a number of changes have altered the environment in which hedge funds now operate.

The funds may have been damaged by their earlier success: it was easier to be nimble and quick when small than after becoming big and bloated. The nature of their investors has also changed. Whereas the funds were once the province of risk-seeking wealthy individuals, about two-thirds of their money now comes from institutional investors whose goals are much less adventuresome. In the past, funds were able to multiply their returns by leveraging their assets with borrowed money. That option has mostly disappeared.

The author then provides what might be the most compelling, and the most intriguing explanation—one that should be relevant to all investors.

"[ Hedge funds] attribute their woes to choppy markets that are moved more by politicians than by underlying economic forces. ‘Markets are watching governments, which are watching the markets,’ says Jim Vos of Aksia, a consultancy. Even a talented stockpicker will struggle to make money if the entire market is sent into convulsions by central-bank announcements. Many hedgies admit to having no ‘edge’ in this environment. A few have slimmed or shut up shop."

The funds now seem to be propagating a message of diminished expectations.

"For those that remain, the message to investors has changed dramatically. Whereas hedge funds used to sell themselves as the spicy, market-beating wedge of an investment portfolio, they now stress the long-term stability of their returns."

One of the products of our poisoned political environment is a poisoned investment environment. However, this uncertain state has tilted the focus toward more stable long-term investment options. Less gambling and a greater focus on financial fundamentals—can that be a bad thing?

But where are those institutional investors to go now? Will they seek even riskier investments?

Tuesday, January 8, 2013

Culture, Language, Gender, and Power: Sweden’s Experiment

It appears that Sweden is considering taking gender equality to levels not seen before. An article in the New York Times by John Tagliabue provides a description of an experiment in preschool teaching.

"Sweden is perhaps as renowned for an egalitarian mind-set as it is for meatballs or Ikea furnishings. But this taxpayer-financed preschool, known as the Nicolaigarden for a saint whose chapel was once in the 300-year-old building that houses it, is perhaps one of the more compelling examples of the country’s efforts to blur gender lines and, theoretically, cement opportunities for both women and men."

"....the teachers avoid the pronouns "him" and "her," instead calling their 115 toddlers simply "friends." Masculine and feminine references are taboo, often replaced by the pronoun "hen," an artificial and genderless word that most Swedes avoid but is popular in some gay and feminist circles."

To some, this may appear at first as a silly exercise—much ado about nothing. But those who run this school took the national decree to provide gender equality, even in day-care centers, quite seriously. They took the unusual step of recording their actions as they cared for the children and evaluated the degree to which they treated boys and girls differently. What they discovered was that their actions, inadvertently, were propagating sexual stereotypes.

"’We could see lots of differences, for example, in the handling of boys and girls,’ said Lotta Rajalin, who directs the center and three others, which she visits by bicycle. ‘If a boy was crying because he hurt himself, he was consoled, but for a shorter time, while girls were held and soothed much longer,’ she said. ‘With a boy it was, ‘Go on, it’s not so bad!’"

"The filming, she said, also showed that staff members tended to talk more with girls than with boys, perhaps explaining girls’ later superior language skills. If boys were boisterous, that was accepted, Ms. Rajalin said; a girl trying to climb a tree on an outing in the country was stopped."

"The result, after much discussion, was a seven-point program to alter such behavior. ‘We avoid using words like boy or girl, not because it’s bad, but because they represent stereotypes,’ said Ms. Rajalin, 53. ‘We just use the name — Peter, Sally — or ‘Come on, friends!’ Men were added to the all-female staff."

Is it possible that these common differences in the treatment of boys and girls could have long-term effects on how the two sexes ultimately view themselves?

Daniel T. Rodgers has written a fascinating book titled Age of Fracture. Rodgers attempts to explain the social evolution that occurred at the end of the twentieth century as a movement from a society in which people considered themselves as individuals within a defined social sphere to one in which these aggregations of individuals began to fracture into numerous cultural entities.

"....the terms that had dominated post-World War II intellectual life began to fracture. One heard less about society, history, and power and more about individuals, contingency, and choice."

If individuals broke out of the social structures that might have defined them in an earlier time, did that mean that they now possessed the power to determine the course of their future? Or were there more subtle constraints that remained? Rodgers devotes a significant fraction of his work to examining the possible sources of power that might provide those constraints.

If women in the United States gradually freed themselves from the formal constraints of laws and regulations that generated unequal treatment, did that mean that they were now unconstrained? Not necessarily so seemed to be the answer.

Rodgers discusses the ideas of the Italian Antonio Gramsci in identifying and emphasizing the role culture could play in enforcing rules of conformity.

"How do the rulers rule? By domination through violence and the coercive powers of the state, surely. But also and still more pervasively, Gramsci reflected, by the less choate power of ‘hegemony’."

Rodgers quotes the definition of hegemony provided by the British historian Gwynn Williams. Hegemony is:

"....an order in which a certain way of life and thought is dominant, in which one concept of reality is infused throughout society in all its institutional and private manifestations, informing with its spirit all taste, morality, customs, religious and political principles, and all social relations, particularly in their political and moral connotations."

Rodgers then adds:

"Hegemony was the power of the dominant class not only to impose its social categories on others but also, and still more, to make its systems of meaning come to seem the natural order of things, so that by insensibly absorbing that order the many consented to the domination of the few."

While Gramsci’s thoughts were penned in the environment of World War II Europe, it is not difficult to see signs of cultural domination in current society. Would a young woman growing up in multicultural Manhattan be more unfettered than a young woman growing up in a rural area dominated by evangelical Christians? The Swedish school was correct in worrying about the little differences in behavior imposed on boys and girls.

Language is part of culture—an important part. Consider the terms "woman" and "female." Both are defined as a relationship to man, explicitly suggesting a subordinate state. Many languages enforce the rule that if gender is not specified it will be assumed to be masculine—presumably in the belief that if it is worth writing about then it probably involves a male. How many of these linguistic slights are women exposed to in the course of a lifetime, and what might be the effect?

Rodgers suggests that language can be a subtle, but powerful tool of suppression.

"....a power lodged in the hierarchical oppositions that formed the very stuff of language. Nature/culture, mind/body, inner/outer, male/female. In these overtly neutral binaries, one of the terms of difference was inevitably promoted at the expense of the other, made normal or natural in the very act of excluding, marginalizing, or making supplemental the other. Dominance and erasure ran all through what Jacques Derrida called the ‘violent hierarchies’ of language, naturalizing what was arbitrary...."

Sweden’s experiment in gender equality may or may not lead to definitive conclusions, but the very act of defining the issue as it has should cause others to reconsider the ways in which gender inequality is being addressed.

Saturday, January 5, 2013

Europe: Green Energy Policies Contributing to Increased Coal Consumption?

It is ironic—if not tragic—that at a time when natural gas prices and regulatory policies in the United States are causing large declines in coal consumption, the same factors are generating increased consumption of coal in Europe. 

An article in The Economist explains the factors at work.

"As American utilities shifted into gas, American coal miners had to look for new markets. They were doing so at a time when slowing Chinese demand was pushing down world coal prices, which fell by a third between August 2011 and August 2012 and is below $100 a tonne. These prices make European utilities willing buyers. European purchases of American coal rose by a third in the first six months of 2012."

Europe has not benefited from the increased supply of gas that fracking has provided in the United States. It still receives most of its natural gas via pipeline at prices negotiated long ago. But that is not the only factor in play. Imminent environmental regulations are causing a surge in coal usage to take advantage of the cost difference while still possible.

"In April 2012 coal took over from gas as Britain’s dominant fuel for electricity for the first time since early 2007. The amount of the country’s electricity provided by coal in the third quarter of last year was 50% greater than the year before."

This is expected to be temporary and Britain will eventually lower coal usage.

"Under a European Union directive which comes into force in 2016, utilities must either close coal-fired plants that do not meet new EU environmental standards or else install lots of expensive pollution-control devices. The deadline for companies to decide which course to take is this month. If a company closes a plant, it will be given a maximum number of hours to run before it must be shut down (depending on how much pollution it produces). This is a big incentive to burn a lot of coal quickly."

The cost of building coal-fired plants that meet the EU standards is not so great as to make them uneconomical. A number of new plants are being planned.

"....if you count the number of applications for permits to build coal-fired power stations—as the World Resources Institute, a think-tank in Washington, DC, does—the number of planned new coal plants in Europe is much higher: 69, with a proposed capacity of over 60 gigawatts, roughly equivalent to the capacity of the 58 nuclear reactors that provide France with most of its electricity."

In Germany, where the push for renewable energy (the Energiewende) is strongest, policy has had unintended consequences. Germany accompanied its push for renewable sources with an aggressive plan to phase out nuclear power. This leaves the country needing new capacity, and cost effectiveness points toward coal.

The price differential between gas and coal is being exacerbated by the preference given to renewable sources. Power utilities would make most of their profit by selling power at high prices during the peak usage period. Unfortunately for them, that is also the peak in availability of renewable sources.

"At the beginning of November 2012, according to Bloomberg New Energy Finance, a research firm, power utilities in Germany were set, on average, to lose €11.70 when they burned gas to make a megawatt of electricity, but to earn €14.22 per MW when they burned coal."
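The gas-versus-coal comparison can be sketched roughly as follows. All input numbers here are invented for illustration (the article does not give them); only the structure of the calculation, power price minus fuel and carbon costs, mirrors the comparison Bloomberg is making.

```python
# Hypothetical per-MWh generation margin, gas vs. coal.
# All input figures below are invented; only the structure of the
# calculation mirrors the comparison quoted above.

def margin(power_price, fuel_cost_thermal, efficiency, co2_per_mwh, co2_price):
    """Euro margin per MWh of electricity for a thermal power plant."""
    fuel_cost = fuel_cost_thermal / efficiency  # fuel burned per MWh of power
    carbon_cost = co2_per_mwh * co2_price       # cost of emission permits
    return power_price - fuel_cost - carbon_cost

# Expensive gas burned efficiently vs. cheap coal burned less efficiently,
# with a carbon price too low to offset coal's higher emissions.
gas = margin(power_price=50, fuel_cost_thermal=30, efficiency=0.50,
             co2_per_mwh=0.4, co2_price=7)
coal = margin(power_price=50, fuel_cost_thermal=10, efficiency=0.38,
              co2_per_mwh=0.9, co2_price=7)
print(f"gas: {gas:.2f} EUR/MWh, coal: {coal:.2f} EUR/MWh")
```

With cheap coal and a weak carbon price, gas generation loses money while coal generation earns it, which is the perverse incentive the article describes.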

In order to remain profitable, energy utilities are switching from natural gas to coal. The net effect is that policies favoring renewable energy are causing gas to be replaced by coal—not what had been intended.

This increase in coal consumption has reversed the EU’s downward trend in carbon emissions.

"The EU aims to reduce carbon emissions to 80% of their 1990 levels by 2020. Thanks in part to the recession, by 2009 it was most of the way there—a bit more than 17% down on the 1990 level. In 2010, though, emissions began rising. Bloomberg calculates that carbon emissions from power plants rose around 3% in 2012, pushing total emissions 1% higher than they were in 2011."

One might have expected the EU’s vaunted cap-and-trade system for carbon emissions to have precluded such an increase in carbon output. The following illustration indicates that while coal consumption has been rising, the carbon price has been flat or falling.

Having a carbon permit trading scheme is an excellent idea, but it is necessary that it function properly.

"The problem is that when the system was set up, regulators allowed companies overly generous permits to pollute, in part because of lobbying and in part because the effects of the recession were not foreseen. This oversupply has swamped the impact of emissions from coal-fired power plants."

Nevertheless, the long-term outlook for a greener Europe is positive, in spite of the current perversity. However, the author is unable to leave without a final dose of sarcasm.

"....At the moment, EU energy policy is boosting usage of the most polluting fuel, increasing carbon emissions, damaging the creditworthiness of utilities and diverting investment into energy projects elsewhere. The EU’s climate commissioner, Connie Hedegaard, likes to claim that in energy and emissions Europe is ‘leading by example’. Uh-oh."

Friday, January 4, 2013

Evolution in Action: Humans and Visualization

Humans are incredibly complex organisms. Some observe the complexity and conclude that such an entity could not possibly have been arrived at by the essentially random process of mutation coupled with natural selection. Others study the complexity and conclude that such a structure could have only been arrived at by random mutation and natural selection. No competent engineer would have designed such a complex mess.

V. S. Ramachandran devotes a chapter in his book The Tell-Tale Brain to visualization. His interest in how humans interpret visual data is driven by the relative availability of data on the relevant brain function and by the hope that understanding this function will shed light on less-well-understood brain functions. Our interest derives from the insight it provides into evolution and complexity.

Ramachandran tells us that we must first disabuse ourselves of the notion that the image projected on the sensors in our eyes is viewed and studied by the brain directly.

"....the brain creates symbolic descriptions. It does not recreate the original image, but represents the various features and aspects of the image in totally new terms, in its own alphabet of nerve impulses. These symbolic encodings are created partly in your retina itself but mostly in your brain. Once there, they are parceled and transformed and combined in the extensive network of visual brain areas that eventually let you recognize objects. Of course, the vast majority of this processing goes on behind the scenes without entering your conscious awareness, which is why it feels effortless and obvious...."

There are actually two pathways by which visual material is processed and presented as actionable information.

"The so-called old pathway starts in the retinas, relays through an ancient midbrain structure called the superior colliculus, and then projects—via the pulvinar—to the parietal lobes....The old pathway enables us to orient toward objects and track them with our eyes and heads."

"The new pathway, which is highly developed in humans and in primates generally, allows sophisticated analysis and recognition of complex visual scenes and objects."

Within the new pathway there is a shunt that short-circuits some of the higher order functions in order to gain speed in response.

"....bypasses high-level object perception—and the whole rich penumbra of associations....and shunts quickly to the amygdala, the gateway to the emotional core of the brain, the limbic system. This shortcut probably evolved to promote fast reaction to high-value situations, whether innate or learned."

The human visualization scheme has many functions to perform. How complex is this structure? Ramachandran tells us that there are at least thirty areas of the brain that participate. He includes this wiring chart that has been developed via the study of monkeys. Humans are presumably more complex.

Note that there is a great amount of feedback that is provided at each stage of processing.

"What these back projections are doing is anybody’s guess, but my hunch is that at each stage in processing, whenever the brain achieves a partial solution to a perceptual ‘problem’—such as determining an object’s identity, location, or movement—this partial solution is immediately fed back to earlier stages. Repeated cycles of such an iterative process help eliminate dead ends and false solutions when you look at ‘noisy’ visual images....In other words, these back projections allow you to play a sort of ‘twenty questions’ game with the image, enabling you to rapidly home in on the correct answer. It’s as if each of us is hallucinating all the time and what we call perception involves merely selecting the one hallucination that best matches the current input."

It is a rather small leap to the conclusion that the old pathway is a primitive form of awareness that was maintained and carried along while more complex functions evolved. Ramachandran tells an interesting story about a patient whose brain damage eliminated the function of the new pathway. The patient was blind in his right visual field as we would define blindness. However, his eyes and old pathway still functioned. Nevertheless, when asked to, the patient was able to place his finger on a spot of light projected to his right. He did this without any conscious awareness of the light or of how he was able to do it. This phenomenon was labeled "blindsight." How suggestive this is of the type of response to a stimulus one might expect from a worm or some other low-level creature.

Lower-order animals have simpler visualization systems.

"Carnivores and herbivores probably have fewer than a dozen visual areas and no color vision. The same holds for our own ancestors, tiny nocturnal insectivores scurrying up tree branches..."

The ability to develop complex systems of vision is apparently not that difficult in nature.

"....the ability to see is so useful that eyes have evolved many separate times in the history of life. The eyes of the octopus are eerily similar to our own, despite the fact that our last common ancestor was a blind aquatic slug- or snail-like creature that lived well over half a billion years ago."

The data presented by scientists is consistent with the picture of evolution as a process of adding structures to an existing framework. There is no opportunity to go back and reengineer the underlying structures for efficiency. This leads to exotic solutions to simple problems, and to redundancies and features that no longer have a discernible function.

The power of evolution becomes easier to grasp when we consider that the vast times involved allow for millions and millions of generations to occur.

One of the aspects of our evolved brains that Ramachandran dwells on in his book is that our brains abhor uncertainty. It is easy to explain this as a survival mechanism. In time of danger, quick decisions and actions are required. Those who equivocate tend to get eaten by larger animals. A side effect of this attribute is the difficulty involved in changing an established behavior pattern or belief by mere rational logic. Such a change involves emotional distress, and emotion usually trumps reason. Our political process is characterized by this fact.

While my studies continue to provide evidence of evolution and inspire wonder at the beauty and power of the process, they also lead me to be more tolerant of those who, coming from a different direction, have trouble recognizing reality.