Monday, July 30, 2012

Corporations, Shareholder Primacy, and the Decline of Long-Term Investment

In her book The Shareholder Value Myth, Lynn Stout discusses the origins of the misguided belief that corporations are owned by their shareholders. She argues that corporations are entities with a legal existence and identity separate from those of their shareholders. The idea that shareholders "own" the corporation is a myth, but one that many wished to believe was true; and over time it effectively became true. The notion of shareholder primacy imposed the constraint that the corporation’s goal was to increase the wealth of its shareholders. The idea took root because it was popular with conservative economists, the public, the press, corporate reformers, shareholders, and even corporate executives.

The objective of shareholder primacy was to provide mechanisms whereby corporate leadership could be constrained to act in the best interests of the corporation and its shareholders. The metric by which executives would be judged was the value of the shareholders’ shares.

It was assumed that the corporation and its shareholders would have common purpose in ensuring the economic health of the enterprise. This assumption has turned out to be false. Therein resides a dilemma that can only be resolved by shattering this myth of shareholder supremacy.

Most people who actually invest in company shares do so through mutual funds or via pension plans. These people tend to be long-term investors who envisage their wealth increasing via the overall growth in the economy, rather than worrying about the specific activities of a particular company.

"The shareholder who plans to hold her stock for many years wants the company to invest in its employees’ skills, develop new products, build good working relationships with suppliers, and take care of customers to build consumer trust and brand loyalty—even if the value of these investments in the future is not immediately reflected in share price."

This is the type of shareholder people had in mind when they swallowed the shareholder primacy concept.

What type of investment environment have shareholders provided for corporations today?

"....in 1960 annual share turnover for firms listed on the New York Stock Exchange (NYSE) was only 12 percent, implying an average holding period of about eight years. By 1987, this figure had risen to 73 percent. By 2010, the average annual turnover for equities listed on the U.S. exchanges reached an astonishing 300 percent annually, implying an average holding period of only four months."

How can four months and long-term be in any way consistent?

The major players in the markets are mutual funds, pension plans, and hedge funds. One might think that pension plans and mutual funds that generally represent investors with long-term goals would themselves be focused on long-term performance of companies, but that would be a mistake. Pension plans have a desperate need for a rate of return on investment that is difficult to attain in current markets. This leads them to indulge in more extreme investment tactics, such as investing in hedge funds, or investing like hedge funds themselves. Mutual funds and hedge funds have investors with long-term goals, but short-term memories.

"Unfortunately, these individual clients tend to judge the fund managers to whom they have outsourced their investing decisions based on their most recent investing records. This explains why many actively-managed mutual funds turn over 100 percent or more of their equity portfolios annually and why ‘activist’ hedge funds that purport to make long-term investments in improving corporate performance typically hold shares for less than two years."

What is a short-term investor looking for when he/she purchases shares?

"The short-term investor who expects to hold for only a few months or days wants to raise shares today and favors strategies like cutting expenses, using cash reserves to repurchase shares, and selling assets or even the entire company."

A hedge fund can exert a powerful influence on a company if it decides to invest enough to control a significant fraction of the shares. Stout refers to a study of activist hedge funds by Bill Bratton of the University of Pennsylvania. In his words:

"Activist hedge funds look for four things in their targets—potential sale of the whole, potential sale of a part, free cash, and cuttable costs."

Nowhere in that list is there anything to do with the health of the company.

In a volatile market such as exists today, failure to meet set goals can generate large market responses. This puts considerable pressure on company executives.

"One recent survey of 400 corporate finance officers found that a full 80 percent reported that they would cut expenses like marketing or product development to make their quarterly earnings targets, even if they knew the likely result was to hurt long-term corporate performance."

The concept of shareholder supremacy, coupled with the current nature of shareholders, has introduced a number of perverse incentives into the system. The extreme focus on share price, and the coupling with executive compensation, has contributed to risky and unwise decisions being made. Rather than holding executives accountable it has tempted them to do foolish things.

Where does all this lead? It leads to a dysfunctional economy. We discussed some of the issues in Corporate Profits and Wages: Karl Marx Would Be Smiling. Included there was reference to a study issued by the Federal Reserve Bank of New York that suggested that

"....executive incentives may even be driving the business cycle by their effects on investment."

Stock markets seem to have become disconnected from their original goal of raising equity for investment purposes. Instead they seem more like giant casinos where many high stakes and low stakes games are available for play. Money churns and the players have an exciting time, but there is little benefit to the economy or to society.

Businesses seem to recognize that becoming a public corporation is something to be avoided if at all possible. This issue was discussed in Are Public Corporations Becoming Obsolete? Companies tend to go private if they can. Private partnerships were once the typical business form; they may regain that status.

Has the US form of capitalism come to a dead end? Something has to change. Let’s hope it is a change for the better.

Sunday, July 29, 2012

Mental Illness: Moving Past Medication

Tanya Marie Luhrmann is described as a "psychological anthropologist." She has participated in a remarkably diverse set of projects, several of which involve mental illness and its treatments. She bears glad tidings in a new article that appeared in The Wilson Quarterly: Beyond the Brain. The title does little to elucidate the topic. This will help.

"....the National Institute of Mental Health designated the 1990s as the "decade of the brain." Psychoanalysis and even psychotherapy were said to be on their way out. Psychiatry would focus on real disease, and psychiatric researchers would pinpoint the biochemical causes of illness and neatly design drugs to target them."

The title is meant to convey that the medication-driven approach to addressing mental illness was a dreadful failure and it is now time to move on to the next attempt.

"It is now clear that the simple biomedical approach to serious psychiatric illnesses has failed....At least, the bold dream that these maladies would be understood as brain disorders with clearly identifiable genetic causes and clear, targeted pharmacological interventions (what some researchers call the bio-bio-bio model, for brain lesion, genetic cause, and pharmacological cure) has faded into the mist."

Luhrmann’s discussion focuses mostly on schizophrenia, but she claims the same general conclusions apply to depression and other mental illnesses.

"To be sure, it would be too strong to say that we should no longer think of schizophrenia as a brain disease. One often has a profound sense, when confronted with a person diagnosed with schizophrenia, that something has gone badly wrong with the brain."

"Yet the outcome of two decades of serious psychiatric science is that schizophrenia now appears to be a complex outcome of many unrelated causes—the genes you inherit, but also whether your mother fell ill during her pregnancy, whether you got beaten up as a child or were stressed as an adolescent, even how much sun your skin has seen. It’s not just about the brain. It’s not just about genes."

The tendency to develop schizophrenia is now attributed to a complex assembly of risk factors. This view is supported by advances in knowledge derived from epigenetics, the study of the means by which genetic function can be altered by environmental factors or experiential events. The world appears a whole lot more complicated than it did in the 1990s.

"In recent years, epidemiologists have been able to demonstrate that while schizophrenia is rare everywhere, it is much more common in some settings than in others, and in some societies the disorder seems more severe and unyielding. Moreover, when you look at the differences, it is hard not to draw the conclusion that there is something deeply social at work behind them."

"Schizophrenia has a more benign course and outcome in the developing world. The best data come from India. In the study that established the difference, researchers looking at people two years after they first showed up at a hospital for care found that they scored significantly better on most outcome measures than a comparable group in the West. They had fewer symptoms, took less medication, and were more likely to be employed and married."

The better results obtained in India seem to derive from the approach of focusing on the patient’s recovery rather than the seriousness of the illness. Patients were maintained in their normal social environment to the degree possible.

"As a result, none of the patients thought of themselves as having a career-ending illness, and every one of them expected to get better. And at least compared to patients in the West, they generally did."

Luhrmann tells us that the notion of social factors as causative has taken root within the psychiatric profession. This has led to acceptance that social factors should be part of the treatment process. It was interesting to note that the administration of the much-maligned George W. Bush demonstrated an enlightened understanding of the issues.

"One of the most influential patient-driven initiatives in decades, the Recovery Movement received a federal imprimatur of sorts in 2003, when the Bush administration issued a mandate promoting "recovery-oriented services." Treatment providers paid by Medicare and Medicaid were told that schizophrenia would no longer be understood as an illness with a chronic and debilitating course, a death sentence for the mind. Instead, patients and mental health professionals were instructed to believe that people with schizophrenia could live as effective members of a community, able to work and to be valued. The expectation of permanent impairment was to be replaced with hope."

The Recovery Movement is described as a patient-driven initiative. One of the downsides of depending on medication to treat mental illness is that the medication seems to convey to patients the notion that they are in fact mentally damaged, a label they would resist accepting. The Recovery Movement instead focuses on letting the misbehaving brain be brought under control by the brain itself.

"....the Recovery Movement, which explicitly embraces the idea that the very way you imagine an illness will affect the way you experience it—an idea that seems, well, almost psychoanalytic. As the movement’s manifesto defined it, ‘recovery is a process, a way of life, an attitude, and a way of approaching the day’s challenges’."

Luhrmann also describes the intriguing "Hearing Voices" movement.

"In Europe, the Hearing Voices network teaches people who hear distressing voices to negotiate with them. They are taught to treat the voices as if they were people—to talk with them, and make deals with them, as if the voices had the ability to act and decide on their own. This runs completely counter to the simple biomedical model of psychiatric illness, which presumes that voices are meaningless symptoms, ephemeral sequelae of lesions in the brain. Standard psychiatric practice has been to discount the voices, or to ignore them, on the grounds that doing so reminds patients that they are not real and that their commands should not be followed. One might think of the standard approach as calling a spade a spade. When voices are imagined as agents, however, they are imagined as having the ability to choose to stop talking. Members of the Hearing Voices movement report that this is what they do. In 2009, at a gathering in the Dutch city of Maastricht, person after person diagnosed with schizophrenia stood up to tell the story of learning to talk with the voices—and how the voices had then agreed to stop."

Absolutely fascinating!

Luhrmann tells an interesting story that indicates that the mentally ill may, perhaps, face a future that need not include a lifetime shortened by powerful mind-altering and brain-damaging drugs. Her description of current methods under development is definitely progress.

A burst of enthusiasm was soon replaced by the more depressing realization that nothing she posited as a new and exciting development was really new. Robert Whitaker wrote a book, published in 2010, that reviewed what he was able to learn about the history of mental illness and the results of medication on patients: Anatomy of an Epidemic. Whitaker’s curiosity was aroused by the fact that as each new miracle drug to combat mental illness was announced, the number of the mentally ill grew larger. That is not what one would expect from a miracle drug.

There was a brief period of time when schizophrenics were neither thrown in asylums, nor medicated with antipsychotic drugs. During that period most of those diagnosed stayed within a family setting and the majority learned to deal with their issues and managed to lead a more or less normal life, or at least one that did not require medication or incarceration. Is that terribly different from the "new" results presented by Luhrmann?

The World Health Organization (WHO) tracked schizophrenia outcomes, comparing results in developing countries with results from advanced countries. In 1969 the WHO concluded that schizophrenics were more likely to recover in the developing countries than in the advanced countries. Whitaker tells us that the psychiatric community in the US refused to believe the results. The WHO performed another study in 1978 with more exacting controls; the result was the same.

Luhrmann would look at these results and conclude that the socialization in the developing countries was more conducive to healing than the treatment received in the US. Whitaker concluded that the medications commonly prescribed were not only not helping the patients very much, but that they would also eventually make many patients even sicker; hence the term "Epidemic" in his title. They both seem to be correct.

Luhrmann states that the community is beginning to recognize that medication is not the cure-all it was sold as. The data to demonstrate that statement were always available, but people chose to ignore them. Whitaker dug up this result from 1978.


The best thing that could happen to a schizophrenic in the 1970s was to never receive an antipsychotic drug. The next best thing was to escape from a psychiatrist’s care and stop taking any medications that had been prescribed.

The psychiatric community knew in the 70s that psychoactive drugs had been a failure, but they thought that they owed their patients the right to participate in the best experiments they could perform. They listened to the siren call of the drug companies, who promised a new generation of miracle drugs. Luhrmann tells us how that turned out.

"....the older antipsychotics—such as Thorazine, mocked in the novel One Flew Over the Cuckoo’s Nest for the fixed, glassy stares it produced in those who took it—worked as well as the new generation, and at a fraction of the cost."

In other words the new generation of drugs couldn’t cut it either. Studies were available in the 1990s that demonstrated that depression was best treated by avoiding psychoactive drugs. And then there were the multiple studies that concluded that antidepressants worked barely better, if at all, than a sugar pill. The drug companies weren’t even embarrassed because they knew it all along. The power of suggestion can be an awesome medication.

So now we are said to be moving from a stage where drug companies and pad scribblers are in charge to one in which anthropologists have concluded that anthropologists should be in charge. Let us cheer on the anthropologists and wish them well.

It is hard to see how they can do much damage.

Seriously, the Recovery Movement and the Hearing Voices approaches indicate that there is hope for a better future for the mentally ill.

Friday, July 27, 2012

Corporate Profits and Wages: Karl Marx Would Be Smiling

An article in The Economist provides us with this astonishing chart.



Corporate profits in the US are hovering at 15% of GDP! This is at a time when corporations resist hiring and refuse to raise wages. What the article focuses on is the fact that these corporations are rolling in cash and they seem intent on sitting on it. Not only do they resist hiring and raising wages, they also resist investing their money in their own corporations.

Why are they hoarding their funds?

"Andrew Smithers of Smithers & Co, a consultancy, suggests incentives may be to blame. Managers are motivated by share options and share prices are driven by changes in earnings per share. Spending cash on share buy-backs boosts earnings per share immediately, whereas a capital-investment programme may actually reduce earnings in the short term."

"Capital expenditure may have a pay-off in the long run but, given the ever-shortening career span of the average chief executive, few may be willing to take a chance that they will be around for the long term. A recent paper from the Federal Reserve Bank of New York suggests that executive incentives may even be driving the business cycle by their effects on investment."

Can it be that we have become so focused on short-term stock performance that there is little motivation left for long-term considerations? As was discussed in The Legal Basis for Corporate Irresponsibility, radical, free-market economists have successfully promulgated the notion that corporate executives have only one motivation: to increase the wealth of the shareholders. Share price is foolishly chosen as the measure of shareholder wealth, and, more pertinently, the measure of executive performance. Could this perversion of the concept of a corporation become a destabilizing factor for the economy and for society?

The author suggests another reason why corporations hesitate to invest: no one else has any money to spend.

"....firms are reluctant to invest in the face of weak demand. Domestic consumers have been under pressure from austerity and higher commodity prices; the euro-zone crisis and a slowdown in developing economies is weighing on export prospects. Companies may have milked all they can from productivity improvements. The irony here is that a high share of GDP for profits automatically results in a low share for wages and thus may eventually be self-limiting—a positively Marxist outcome."

Let’s see if we have this straight: if no one has money to spend then they can’t justify capital investments; and they can’t distribute money to people by increasing wages or making capital investments because that would lower share price. Is this one of those internal inconsistencies that is supposed to eventually bring capitalism down?

One could easily conjure up an image of Karl Marx with a smug smile upon his face.

Law, Economics, and Corporate Crime

There was an interesting discussion of corporate crime in The Economist with the clever title: Fine and Punishment. The author was examining the notion that the fines being levied on corporations caught in criminal activities were too small to have a deterrent effect. It was suggested that the US Department of Justice was beginning to recognize this and had been ratcheting up the level of its fines for a number of years. The following chart was provided to indicate that for antitrust violations the penalties had increased significantly. The question is, has any of this had a deterrent effect?



The author relies on the "law and economics" approach to crime and punishment. This point of view holds that criminals are the same "rational, self-interested actors" that are assumed in some economic theories. Let us refer to such individuals as homo economicus.

"The economics of crime prevention starts with a depressing assumption: executives simply weigh up all their options, including the illegal ones. Given a risk-free opportunity to mis-sell a product, or form a cartel, they will grab it. Most businesspeople are not this calculating, of course, but the assumption of harsh rationality is a useful way to work out how to deter rule-breakers."

Given this assumption, an economist can come up with a scheme for disincentivizing a rational criminal.

"In an influential 1968 paper on the economics of crime, Gary Becker of the University of Chicago set out a framework in which criminals weigh up the expected costs and benefits of breaking the law. The expected cost of lawless behaviour is the product of two things: the chance of being caught and the severity of the punishment if caught. This framework can be used to examine the appropriate level of fines, and to see if there are ever reasons to exempt companies from fines."

Using this approach, the author concludes that fines will have to be much higher if they are going to have the necessary deterrent effect.
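
Becker’s framework reduces to an expected-value comparison: a perfectly calculating firm breaks the rules whenever the expected gain exceeds the probability of being caught multiplied by the fine. A minimal sketch, using hypothetical numbers of my own choosing, shows why even a headline-grabbing fine may fall short of the deterrent level:

```python
# Becker-style deterrence calculation with made-up numbers.
def is_deterred(gain, p_caught, fine):
    """The 'rational' offender is deterred only if the expected penalty covers the gain."""
    return p_caught * fine >= gain

def minimum_deterrent_fine(gain, p_caught):
    return gain / p_caught

gain = 500_000_000     # hypothetical illicit profit from, say, a cartel
p_caught = 0.15        # hypothetical probability of detection and punishment
fine = 1_000_000_000   # a fine that sounds enormous in isolation

print(is_deterred(gain, p_caught, fine))       # False: 0.15 * 1bn = 150m, less than 500m
print(minimum_deterrent_fine(gain, p_caught))  # roughly 3.33 billion needed to deter
```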

The author’s conclusion is rather obvious given that fines are levied as punishment and crime continues. Of more interest is the implied notion that corporations can be considered as having the same motives and ethical responses as individuals.

Lynn Stout addresses some of these issues from a different perspective in her book Cultivating Conscience: How Good Laws Make Good People. She is not a fan of the homo economicus view of humanity. She believes it is an absurd assumption in economics, and particularly so as applied to law.

"If this is true, Economic Man is a being without conscience. He will happily lie, cheat, steal, renege on his promises, even murder, whenever doing so advances his material interests....The resulting portrait of human nature is anything but flattering....it implies we are psychopaths."

Psychiatrists have a collection of characteristics they use to identify psychopaths. There is a nice overlap with those of homo economicus.

"Antisocial Personality Disorder (APD) is the formal psychiatric label for psychopathy. The hallmarks of APD are extreme selfishness and lack of consideration for others, along with tendencies ‘to lie, cheat, take advantage, [and] exploit’."

"Luckily, very few people act like this. The American Psychiatric Association estimates that only 1-3 percent of the U.S. population suffers from APD, and many of these individuals are safely locked away in prison."

While Stout does not hold with the homo economicus view of individuals, she does consider the possibility that corporations, being other than human, may be capable of generating such aberrant behavior. The issue is social context. The thesis of her book is that humans have evolved as "prosocial" animals. They have innate behaviors molded by the requirement that society must have individuals who are cooperative, law-abiding, and considerate of others if society is to be successful. Individuals, except for the few psychopaths, are predisposed to have those characteristics. However, humans can change their behavior if inserted into different social contexts. For example, a person who willingly pays his taxes will be tempted to cheat if he discovers that he is surrounded by neighbors who cheat on their returns.

Stout believes that people imbedded in a corporate environment are unlikely to become homo economicus, but might become what she refers to as a cousin: "corporation man." An environment that fosters a "them versus us" attitude and focuses only on financial gains as a metric of success is one that can alter behavioral patterns. As discussed in The Legal Basis for Corporate Irresponsibility, lawyers, economists, and legislators are telling corporation man that his only responsibility is to make money—not a message likely to generate ethical concerns.

Is there such a thing as corporation man? It is definitely possible. Antitrust violations are too easily explained as commercial exuberance and it is hard to envisage them as pathological behavior. However, consider some of the actions of the pharmaceutical corporations. The $3 billion fine levied on GlaxoSmithKline involved admitting criminal behaviors including failure to tell regulators about safety data for one of its drugs. Can it be that corporation man, outside of his business environment, would not think of harming another human, but within the corporate context is capable of reckless behavior that could endanger the health of many individuals? One can look at the frequency with which these antisocial actions are taken by corporations and conclude that there is a corporate culture that encourages such behavior.

If one then believes in corporation man, how does one disincentivize his behavior? Corporate fines don’t seem to have the desired effect. As Stout indicates, most true psychopaths end up locked away in prison. Since corporation man is not a true psychopath, but merely a cousin, perhaps a form of "three strikes" would be appropriate: first offense yields a corporate fine; second offense adds a fine for the CEO; third offense adds fines for all the board members. If the criminal actions are life-threatening, the second and third offense penalties can be switched to prison terms.

Thursday, July 26, 2012

The Legal Basis for Corporate Irresponsibility

Whenever a corporation pursues a path that causes harm to a community, or anguish to some group, or participates in some scheme to avoid paying taxes, or decides to pollute the environment and kill a bunch of people, or commits some other crime against humanity, the explanation that comes back is always the same: "Our responsibility is to maximize the wealth of our shareholders; it would be illegal for us to do otherwise." Not true! Not true at all! So claims Lynn Stout in her new book: The Shareholder Value Myth: How Putting Shareholders First Harms Investors, Corporations, and the Public. The book cover tells us that Stout is the "Distinguished Professor of Corporate and Business Law, Clarke Business Law Institute, at Cornell Law School. Her work on corporate theory was cited by Supreme Court Justice John Paul Stevens in his dissent in Citizens United." That’s good enough for me!

Let us begin by remembering that corporations have stakeholders, and they have shareholders, a specific type of stakeholder. Corporations exist because states allow them to exist. This is an implied partnership between the corporation and the state in which the corporation receives the legal and physical protection of the state provided it follows the state’s rules. One can argue the state, or the community, or society—however one might wish to phrase it—is a stakeholder in the corporation.

In the beginning a corporation consists of a corporate charter and a board of directors. When employees are added they become stakeholders. When money is borrowed, the holders of the debt become stakeholders also. When funds are collected by selling shares, then the shareholders become stakeholders also. The relationship between the corporation and its various stakeholders might best be described as contractual.

"None of the three sources of state corporate law requires shareholder primacy."

Stout indicates the sources of law or legal interpretation with respect to corporate governance as: the corporation’s charter and by-laws, state codes and statutes, and state case law.

"The overwhelming majority of corporate charters simply state that the corporations purpose is to do anything ‘lawful’."

There is no mandate for shareholder primacy to be found in state statutes.

"To start with the most important example, Delaware’s corporate code does not say anything about corporate purpose other than to reaffirm that corporations ‘can be formed to conduct or promote any lawful business or purposes.’ A majority of the remaining state corporate statutes contain provisions that reject shareholder primacy by providing that directors may serve the interests not only of shareholders but of other constituencies as well, such as employees, customers, creditors, and the local community."

Interestingly, the federal government has little role to play in this area.

"Federal securities laws require public corporations to disclose information to investors, but the feds mostly take a ‘hands-off’ approach to internal corporate governance, leaving the rules to be set by the states; this division of labor has been reinforced by court decisions slapping down Securities Exchange Commission (SEC) rules that interfere too directly with state corporate law."

It is to judicial decisions that one must turn to find what might be used as justification for exalting shareholder wealth. Stout reminds us that judicial opinions include "holdings" that form legal precedent, and "in dicta" comments that express personal opinions, but carry no legal weight. Judges have ruminated over corporate and executive responsibility at length, but have not chosen to deliver any holding that would require the primacy of shareholder wealth.

"....they uniformly refuse to actually impose legal sanctions on directors or executives for failing to pursue one purpose over another. In particular, courts refuse to hold directors of public corporations legally accountable for failing to maximize shareholder wealth."

Courts seem to adhere to what is known as the "business judgment rule."

"....the business judgment rule holds that, so long as a board of directors is not tainted by personal conflicts of interest and makes a reasonable effort to become informed, courts will not second-guess the board’s decisions about what is best for the company—even when those decisions seem to harm shareholder value."

Stout indicates two judicial decisions that are often pointed to as the basis for shareholder primacy. The first is Dodge v. Ford. Henry Ford was the majority stockholder in the Ford Company and the Dodge brothers were minority shareholders. The Dodge brothers wanted to form a rival company. To stop them, Ford cut off their cash flow by eliminating company dividends. Since the Ford Company was not a modern public corporation, the Dodge brothers had a right to sue. They won their case in the Michigan Supreme Court. Associated with the ruling was a nonbinding comment:

"There should be no confusion....a business is organized and carried on primarily for the profit of the stockholders. The powers of the directors are to be employed for that end.’

Stout says that this comment is what lawyers refer to as "mere dicta," a comment that is made in passing and is unrelated to the court’s holding. The insertion of the word "primarily" renders the comment uninterpretable in any event.

The second case often quoted as legal basis is Revlon, Inc. v. MacAndrews & Forbes Holdings. Revlon was a public corporation that was to be sold to a private company and cease to exist as a public entity. The Delaware Supreme Court ruled in this case that Revlon’s directors had the duty to get the best share price for the shareholders. There is no problem with that ruling, but what does it have to do with the activities of corporations that are alive and intend to stay alive?

If there is no legal basis for shareholder supremacy, where did the concept come from? And how did it become fixed in the public consciousness?

Stout tells us that with the emergence of public corporations, those who debated the rules of corporate behavior immediately split into two camps: one favored shareholder primacy, while the other argued for responsibility to the broader class of stakeholders. It was the latter camp that won the initial debate. It was in the 1970s that opinions began to shift.

"The process began in the 1970s with the rise of the so-called Chicago School of free market economists. Prominent members of the School began to argue that economic analysis could reveal the proper goal of corporate governance quite clearly, and that goal was to make shareholders as wealthy as possible."

The next step was to mischaracterize the legal relationships between boards of directors, executives and shareholders, and to imbed that mischaracterization in the public consciousness.

"Milton Friedman published in 1970....Friedman argued that because shareholders ‘own’ the corporation, the only ‘social responsibility of business is to increase its profits’."

"Six years later, economist Michael Jenson and business school dean William Meckling published an even more influential paper in which they described shareholders in corporations as ‘principals’ who hire corporate directors and executives to act as the shareholders’ ‘agents’.

Note that these conclusions are inconsistent with corporate charters and with the legally validated principle of the "business judgment rule." They represent what conservative economists believe should be the law of corporate governance.

How did this notion attain supremacy in the public consciousness? Stout said it began with the infection of the legal profession by economists who thought they could inject rigor and physical principles into fuzzy-thinking law schools.

"Thus shareholder value thinking quickly became central to the so-called Law and Economics School of legal jurisprudence, which has been described as ‘the most successful intellectual movement in the law in the past thirty years’."

The popular press picked up the idea because it was simple to understand and it seemed to make sense. Others, including corporate executives, jumped on board because there was profit to be made.

"Lawmakers, consultants, and would-be reformers also were attracted to the gospel of shareholder value, because it allowed them to suggest obvious solutions to just about every business problem imaginable. The prescription for good corporate governance had three simple ingredients: (1) give boards of directors less power, (2) give shareholders more power, and (3) ‘incentivize’ executives and directors by tying their pay to share price."

The movement to tie executive pay to share price has resulted in incentivizing executives to maximize their income by focusing on short term issues that might boost share price—if only temporarily—not what one necessarily would desire.

So what we have is a misconception that has taken hold as gospel. Stout details the harm shareholder primacy can cause. That will perhaps be a topic for another day.

Let us finish with a quote Stout provides from Jack Welch, the respected former CEO of GE.

"Welch observed in a Financial Times interview about the 2008 financial crisis that ‘strictly speaking, shareholder value is the dumbest idea in the world’."

Tuesday, July 24, 2012

Should Sugar Be Considered a Poison?

In a previous article, If We Are What We Eat, We Aren’t What We Used To Be: Corn, Nuts, and Sugar, we discussed a number of substances that are being ingested in ways inconsistent with human experience throughout evolution. Sugar is one of the most common substances encountered in our diet. It is known to be a source of calories, and excess calories are known to lead to obesity, and obesity is known to lead to a host of other ailments. It is generally recognized that limiting sugar consumption is a good thing. But it turns out that sugar is more than a source of calories. The particulars of how our bodies respond to sugar are different from how they respond to starches, for example. Not all calories are equal. There is accumulating evidence that excessive amounts of sugar consumption over an extended period can initiate a cascade of chemical changes in our body and lead to everything from diabetes to cancer. Unfortunately, we don’t know how to define precisely what excessive consumption is and what an extended period might be. 

Gary Taubes discusses these issues in a fascinating article in the New York Times: Is Sugar Toxic? Taubes begins by introducing Robert Lustig as the latest, and perhaps most effective, proponent of the hypothesis that sugar is poisonous at high consumption levels.

Given that this is medical science and a lot of money is at stake, there will be multiple studies coming to multiple conclusions. Taubes claims to have been studying this and associated issues for over a decade, and he has come to agree with Lustig’s assessment. He also concedes that the data supporting Lustig are not yet sufficient to convince everyone of the correctness of this hypothesis. We will present a short summary of the content of Taubes’s article. Those who find it thought provoking should follow the link to the complete article. Whatever one might conclude, it will be difficult to ever again engage in cavalier engorgement of sugar.

"In Lustig’s view, sugar should be thought of, like cigarettes and alcohol, as something that’s killing us."

Lustig brings impressive credentials to the argument.

"Lustig is a specialist on pediatric hormone disorders and the leading expert in childhood obesity at the University of California, San Francisco, School of Medicine, which is one of the best medical schools in the country. He published his first paper on childhood obesity a dozen years ago, and he has been treating patients and doing research on the disorder ever since."

As background, one needs to realize that sugar is a substance consisting of the chemical combination of a glucose molecule and a fructose molecule. It is the fructose that provides the sweetness. Whereas a starch provides glucose and calories, sugar’s calories come with the need for the fructose to be processed by the liver. So, all calories are not created equal. It is this path through the liver that is seen as the source of problems.

A basis for the suspicion that sugar might be bad for us resides in the observation that the human body evolved without ever encountering the high sugar levels that are common in modern diets. The most common natural source of sugar in the diet would be fruits. Ingesting sugar by chewing on a piece of fruit dilutes the sugar and spreads out the time over which it enters the blood stream and reaches the liver.

In our modern diet we encounter all sorts of highly concentrated forms of sugar that can enter our system rapidly.

"In animals, or at least in laboratory rats and mice, it’s clear that if the fructose hits the liver in sufficient quantity and with sufficient speed, the liver will convert much of it to fat. This apparently induces a condition known as insulin resistance, which is now considered the fundamental problem in obesity, and the underlying defect in heart disease and in the type of diabetes, type 2, that is common to obese and overweight individuals. It might also be the underlying defect in many cancers."

"If what happens in laboratory rodents also happens in humans, and if we are eating enough sugar to make it happen, then we are in trouble."

The liver is a complex organ with hundreds of functions within the body. The manner in which a fatty liver leads to insulin resistance is not completely clear. However, the correlation between the two conditions has been observed in humans.

"What causes the initial insulin resistance? There are several hypotheses, but researchers who study the mechanisms of insulin resistance now think that a likely cause is the accumulation of fat in the liver. When studies have been done trying to answer this question in humans, says Varman Samuel, who studies insulin resistance at Yale School of Medicine, the correlation between liver fat and insulin resistance in patients, lean or obese, is ‘remarkably strong.’ What it looks like, Samuel says, is that ‘when you deposit fat in the liver, that’s when you become insulin-resistant’."

Other studies have indicated a correlation between high doses of fructose and insulin resistance in humans.

"Similar effects can be shown in humans, although the researchers doing this work typically did the studies with only fructose — as Luc Tappy did in Switzerland or Peter Havel and Kimber Stanhope did at the University of California, Davis — and pure fructose is not the same thing as sugar or high-fructose corn syrup. When Tappy fed his human subjects the equivalent of the fructose in 8 to 10 cans of Coke or Pepsi a day — a "pretty high dose," he says —– their livers would start to become insulin-resistant, and their triglycerides would go up in just a few days. With lower doses, Tappy says, just as in the animal research, the same effects would appear, but it would take longer, a month or more."

Why is insulin resistance such a critical phenomenon?

"You secrete insulin in response to the foods you eat — particularly the carbohydrates — to keep blood sugar in control after a meal. When your cells are resistant to insulin, your body (your pancreas, to be precise) responds to rising blood sugar by pumping out more and more insulin. Eventually the pancreas can no longer keep up with the demand or it gives in to what diabetologists call "pancreatic exhaustion." Now your blood sugar will rise out of control, and you’ve got diabetes."

"Not everyone with insulin resistance becomes diabetic; some continue to secrete enough insulin to overcome their cells’ resistance to the hormone. But having chronically elevated insulin levels has harmful effects of its own — heart disease, for one. A result is higher triglyceride levels and blood pressure, lower levels of HDL cholesterol (the "good cholesterol"), further worsening the insulin resistance — this is metabolic syndrome."

Taubes asks whether sugar may be even more dangerous than Lustig claims.

"One of the diseases that increases in incidence with obesity, diabetes and metabolic syndrome is cancer. This is why I said earlier that insulin resistance may be a fundamental underlying defect in many cancers, as it is in type 2 diabetes and heart disease. The connection between obesity, diabetes and cancer was first reported in 2004 in large population studies by researchers from the World Health Organization’s International Agency for Research on Cancer. It is not controversial. What it means is that you are more likely to get cancer if you’re obese or diabetic than if you’re not, and you’re more likely to get cancer if you have metabolic syndrome than if you don’t."

The connection seems to be related to high levels of insulin.

"As it was explained to me by Craig Thompson, who has done much of this research and is now president of Memorial Sloan-Kettering Cancer Center in New York, the cells of many human cancers come to depend on insulin to provide the fuel (blood sugar) and materials they need to grow and multiply. Insulin and insulin-like growth factor (and related growth factors) also provide the signal, in effect, to do it. The more insulin, the better they do. Some cancers develop mutations that serve the purpose of increasing the influence of insulin on the cell; others take advantage of the elevated insulin levels that are common to metabolic syndrome, obesity and type 2 diabetes. Some do both. Thompson believes that many pre-cancerous cells would never acquire the mutations that turn them into malignant tumors if they weren’t being driven by insulin to take up more and more blood sugar and metabolize it."

"What these researchers call elevated insulin (or insulin-like growth factor) signaling appears to be a necessary step in many human cancers, particularly cancers like breast and colon cancer. Lewis Cantley, director of the Cancer Center at Beth Israel Deaconess Medical Center at Harvard Medical School, says that up to 80 percent of all human cancers are driven by either mutations or environmental factors that work to enhance or mimic the effect of insulin on the incipient tumor cells."

If sugar is a factor in causing insulin resistance, it will be a factor in causing cancers.

Taubes believes that more studies will have to be performed in order to convince doubters of the role of sugar in disease. Since the effects to be measured are subtle and occur over long periods of time, the studies will be long, costly, and difficult. He is doubtful that major breakthroughs are on the horizon. There is an array of special interests that make their living selling sugar-laden products, and no specific agency to turn to for large-scale funding.

There seems to be sufficient data to conclude that sugar is more than just a source of calories. Since its supposed effects can work on thin people as well as the obese, it seems wise for all to consider moderation in its intake. But it is difficult to see it being considered a "poison" if one is unable to suggest what a harmful dosage might be.

Perhaps, the comparison with alcohol is the appropriate thought with which to finish. Alcohol can damage the liver if consumed in excess. But everyone’s liver is different and no fixed rule for consumption is accurate even for inebriation, let alone long-term liver damage. All we know is that moderate consumption is better than immoderate drinking.

Let a word to the wise be sufficient. Less sugar is in all ways a good thing.

Monday, July 23, 2012

If We Are What We Eat, We Aren’t What We Used To Be: Corn, Nuts, and Sugar

Anthropologists tell us that humans spent about one million years evolving as hunter-gatherers, eating targets of opportunity such as nuts, fruits, greens, starchy roots, and the occasional treat of meat or fish. Humans have spent about the last 10,000 years transitioning to a diet of domesticated plants and animals. Humanity has spent the last two or three generations converting its domesticated species into more efficient food producers. This has been accomplished by selective breeding, genetic modification, new types of nourishment, and altered agricultural methods. The bottom line is that we are now eating things that we never ate while we were evolving over those countless generations. We are messing with Mother Nature in ways we cannot possibly understand.

Michael Pollan raises this issue in The Omnivore’s Dilemma.

"Folly in the getting of our food is nothing new. And yet the new follies we are perpetrating in our industrial food chain today are of a different order. By replacing solar energy with fossil fuel, by raising millions of food animals in close confinement, by feeding those animals foods they never evolved to eat, and by feeding ourselves foods far more novel than we even realize, we are taking risks with our health and the health of the natural world that are unprecedented."

Of particular interest to Pollan is the degree to which corn has come to dominate our "industrialized" food chain.

"The great edifice of choice that is an American supermarket turns out to rest on....the giant tropical grass most Americans know as corn."

The ability to grow great quantities of corn cheaply and efficiently has created the need to make use of that excess.

"Corn is what feeds the steer that becomes the steak. Corn feeds the chicken and the pig, the turkey and the lamb, the catfish and the tilapia and, increasingly, even the salmon, a carnivore by nature that the fish farmers are reengineering to tolerate corn. The eggs are made of corn. The milk and cheese and yogurt, which once came from dairy cows that grazed on grass, now typically come from Holsteins that spend their working lives indoors tethered to machines, eating corn."

"Head over to the processed foods and you find ever more intricate manifestations of corn. A chicken nugget, for example, piles corn upon corn: what chicken it contains consists of corn, of course, but so do most of the nugget’s other constituents, including the modified corn starch that glues the thing together, the corn flour in the batter that coats it, and the corn oil in which it gets fried. Much less obviously, the leavenings and lecithin, the mono-, di-, and triglycerides, the attractive golden coloring, and even the citric acid that keeps the nugget ‘fresh’ can all be derived from corn."

Are there indications that we may be inflicting harm upon ourselves by our changing eating habits? We seem to be beset by a number of conditions whose incidence has recently grown to the point that the term epidemic is used. The list would include diabetes, food allergies, asthma, cardiovascular disease, obesity, cancer, autism, and even mental health.

Food allergies provide an interesting category. It is somewhat of a mystery that they even exist. One would have expected that the genetic variants that cause such allergic reactions to common foods would have been selected out by evolution. Perhaps their current rate of incidence can be interpreted as a result of our changed eating habits.

The allergy to nuts is one of the most common. An allergic response to nuts seems also to be correlated with an increased likelihood of being asthmatic. A fine article in The New Yorker by Jerome Groopman, The Peanut Puzzle, examines what is known about this condition. His article turns on the past belief that one could protect children from allergies by preventing encounters with the potentially dangerous substance, at least until their immune systems had matured. This led mothers to follow the advice of doctors and avoid things like nuts during pregnancy in the belief that they were protecting their children.

This approach seems rather strange if one considers all those generations in which pregnant women ate whatever was available and then fed the same food to their infants—usually pre-chewed by the mother and ejected into the mouth of the child. There didn’t appear to be an attempt, or a need, to shield children from anything that was part of the food chain.

Groopman’s article describes the change of attitude that eventually crept into the medical profession as researchers came to conclude that gradual introduction of the sensitive material to a child was the best way to overcome, or outgrow, an allergy. A complete about-face in the recommendations to pregnant women took place in the US in 2008 and in the UK in 2010.

A recent study reported on by Kerry Grens for Reuters provides an apt conclusion to the controversy.

"In a study based on 62,000 Danish mothers, the children of those who ate peanuts and tree nuts while pregnant were less likely to develop asthma or allergies than the kids whose mothers shunned nuts."

It would seem that the genetic basis for allergic reactions to foods was not so much selected out by evolution as suppressed by eating habits. This example does suggest that changing what we ingest can be a serious matter.

Among examples of changed eating habits, the curious case of sugar is perhaps the most compelling and interesting. Gary Taubes provided an absolutely fascinating article for the New York Times about a year ago: Is Sugar Toxic? Taubes describes the viewpoint of the medical researcher Robert Lustig, who has been describing sugar with the terms "toxin" and "poison." His belief is that sugar should be considered in the same class as alcohol, which can cause severe medical problems if taken in excess. Taubes indicates that he has been following the literature on the effects of sugar consumption for the past decade and he concludes that Lustig is making a valid point.

Humans evolved getting their sugar in relatively small quantities via eating fruit. This type of ingestion means that the sugar was delivered slowly into the blood stream. Consider today when we consume much more sugar than our evolutionary ancestors, and we consume it in a form that provides rapid delivery. It should be recognized that sugars consist of a chemical combination of glucose and fructose.

"The fructose component of sugar....is metabolized primarily by the liver, while the glucose from sugar and starches is metabolized by every cell in the body. Consuming sugar (fructose and glucose) means more work for the liver than if you consumed the same number of calories of starch (glucose). And if you take that sugar in liquid form — soda or fruit juices — the fructose and glucose will hit the liver more quickly than if you consume them, say, in an apple (or several apples, to get what researchers would call the equivalent dose of sugar). The speed with which the liver has to do its work will also affect how it metabolizes the fructose and glucose."

"In animals, or at least in laboratory rats and mice, it’s clear that if the fructose hits the liver in sufficient quantity and with sufficient speed, the liver will convert much of it to fat. This apparently induces a condition known as insulin resistance, which is now considered the fundamental problem in obesity, and the underlying defect in heart disease and in the type of diabetes, type 2, that is common to obese and overweight individuals. It might also be the underlying defect in many cancers."

"If what happens in laboratory rodents also happens in humans, and if we are eating enough sugar to make it happen, then we are in trouble."

Taubes believes there is sufficient evidence, albeit mostly circumstantial, to support grave concern for the effects of sugar consumption on our health. It should also be noted that around the turn of the current century the Department of Agriculture concluded that we were consuming about 90 pounds of sugar per capita each year.
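
For a sense of scale, that per-capita figure converts to a daily amount as follows; this is nothing more than a unit conversion of the number quoted above:

```python
# Back-of-the-envelope conversion of ~90 pounds of sugar per person per year.
pounds_per_year = 90
grams_per_pound = 453.6
grams_per_teaspoon = 4.0   # a teaspoon of granulated sugar is roughly 4 g

grams_per_day = pounds_per_year * grams_per_pound / 365
teaspoons_per_day = grams_per_day / grams_per_teaspoon

print(f"{grams_per_day:.0f} g per day, roughly {teaspoons_per_day:.0f} teaspoons")
# about 112 g per day, roughly 28 teaspoons
```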

Taubes’ article on sugar and its consequences demands a more detailed discussion. That will, hopefully, be presented soon. Let us merely conclude here that sugar, as we now consume it, was never encountered by our ancestors, and it has the potential to do great bodily harm if over consumed.

We are stuffing our bodies with constituents that have not stood the test of time. A powerful medication may require years of observation before we learn that there are undesirable consequences. The more subtle changes in our intake of nourishment may require generations to discover that we may have made a terrible mistake. We are participating in a massive experiment and don’t even realize it.

 

Sunday, July 22, 2012

The Effect of Raising the Minimum Wage: Is There a Market for Labor?

One often hears the claim that raising the minimum wage will cause an increase in unemployment that will hurt those in the lowest income group rather than help them. This claim seems to be based on the textbook assumption that in the area of labor there is a market that is driven by supply and demand. This tidy generalization of markets to the interactions of employers and employees in the workplace makes for nice charts in a textbook, but it need not have anything to do with reality.

James K. Galbraith addressed this issue in his book The Predator State.

"....the ‘labor market’—conceived as an interaction between forces of supply and those of demand—really does not exist, or to put it as Keynes did, in 1936, there is no supply curve for labor. The total demand for labor determines employment and that is essentially the entire story."

Let’s consider an example to elucidate what Galbraith means by that claim. Take a fast food outlet where a manager has organized a team of workers as efficiently as possible to produce the product at the rate necessary to meet demand. There are no excess workers and everyone is kept busy. If raising the minimum wage causes the manager to have to pay the employees more, it will eat into his/her profits. Economics textbooks imply that the manager will choose to eliminate an employee to compensate for the increase in cost of that worker. But does that make sense? A certain number of workers are necessary to produce the required amount of product. The operation will lose money through inefficiency if product cannot be delivered in a timely manner. It makes much more sense to simply raise prices slightly to cover the increased expense.
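
The price-versus-payroll arithmetic behind that argument is easy to make concrete. Here is a minimal sketch with hypothetical numbers (crew size, hours, and sales volume are my own assumptions, not data from the studies cited below):

```python
# Hypothetical fast-food outlet: price increase needed to absorb a
# minimum-wage raise with no change in staffing.
crew_size = 8                  # workers on the clock at any one time
hours_open_per_day = 14
wage_increase_per_hour = 1.00  # dollars per hour, per worker
items_sold_per_day = 1500

extra_daily_labor_cost = crew_size * hours_open_per_day * wage_increase_per_hour
price_rise_per_item = extra_daily_labor_cost / items_sold_per_day

print(f"Extra daily labor cost: ${extra_daily_labor_cost:.2f}")    # $112.00
print(f"Required price rise per item: ${price_rise_per_item:.3f}") # about $0.075
```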

What actually happens in practice? A brief article by Ross Eisenbrey provides a summary of recent economic studies of employment data as affected by increases in the minimum wage.

"As a 1995 paper in the Journal of Economics Literature put it, ‘There is a long history of empirical studies attempting to pin down the effects of minimum wages, with limited success.’ No one found significant employment losses when President Truman raised the minimum wage by 87% in 1950. When Congress raised the minimum wage by 28% in two steps in 1967, businesses predicted large employment losses and price increases. As the Wall Street Journal reported six months later, ‘Employment and prices show little effect from $1.40-an-hour guarantee’."

The gold standard in these studies appears to be the one performed by Card and Krueger, which determined that a 19% increase in New Jersey’s minimum wage did not lead to a loss of jobs.

"Nobel laureate Paul Krugman says the study ‘has stood up very well to repeated challenges, and new cases confirming its results keep coming in.’ And even the most ardent conservative critics could not claim that the New Jersey increase caused statistically significant job loss."

Even teen employment seems to be unaffected.

"University of California, Berkeley....economist Sylvia Allegretto wants policy advocates to know about recent economics research about the minimum wage because it is so clear and convincing. Allegretto and colleagues Michael Reich and Arindrajit Dube carefully studied data on teen employment from 1990 to 2009 and found ‘that minimum wage increases—in the range that have been implemented in the United States—do not reduce employment among teens. Previous studies to the contrary used flawed statistical controls and ‘do not provide a credible guide for public policy’."

Eisenbrey cites credible studies supporting the conclusion that raising the minimum wage has no significant effect on employment. Given the state of economics and the personal biases inherent in analyses of complex data, it is all but certain that someone could find counter-studies indicating an effect.

Perhaps the conclusion that should be drawn is that if economists can argue over this for decades without convincing each other that an effect does or does not exist, then any effect is too small to worry about and the value of the minimum wage should be determined by other factors.

Friday, July 20, 2012

Drug Companies: Even More Corrupt Than Financial Institutions?

In a recent post, HSBC: Too Big to Fail, Too Big to Prosecute, we discussed some of the illegal activities that have become common within our financial institutions. Besides causing recessions and manipulating markets, they also find time to provide financial assistance to drug cartels, terrorist organizations, and designated foes of the United States. With that kind of track record, one might assume that banks were the baddest of the bad in terms of corruption. Perhaps, but it is not necessarily so.

Let us see if a case can be made that the pharmaceutical industry—drug companies—might deserve that honor.

A recent article in The Economist discussed the newly announced health-fraud settlement of $3 billion imposed on GlaxoSmithKline. This is the largest such settlement—thus far—in US history.

"Glaxo agreed to plead guilty to three criminal counts: the misbranding of two antidepressants and the failure to tell regulators about safety data for a diabetes drug. Those plea deals included $1 billion in payments. Glaxo also agreed to an additional $2 billion to settle civil charges under the False Claims Act."

"Glaxo allegedly used spa treatments, trips to Hawaii and hunting excursions to coax doctors to write prescriptions for unapproved uses of certain drugs. In the case of Paxil, an antidepressant, Glaxo was said to have promoted a journal article that overstated the drug’s benefits for children."

GlaxoSmithKline was required to pay a fine that amounted to a small fraction of its profits. It joins a host of other companies that have made settlements for illegal behavior in recent years; criminal sanctions seem to have become part of the industry’s cost of doing business. The Economist article tallies a number of recent settlements of this kind.



The article is overly kind to the drug company in referring to a trip to Hawaii as anything other than a blatant bribe. To get a better picture of how these companies operate, let’s turn to a description by Daniel J. Carlat, a psychiatrist who has written a book titled Unhinged: The Trouble with Psychiatry—A Doctor’s Revelations about a Profession in Crisis. The crisis he refers to is the dangerous symbiotic relationship between psychiatry and the drug companies. Carlat includes a chapter titled How Companies Sell Psychiatrists on Their Drugs.

Carlat claims that the drug companies’ success in developing new products capable of sustaining their profit growth began to fade about fifteen years ago.

"....in response, companies changed their business strategies. Marketing and sales divisions were beefed up. And marketing gradually morphed into something different, larger, all-pervasive, and, frankly, uglier than anything that had come before it."

Marketing has come to dominate the pharmaceutical industry, dwarfing even R&D. And the target of most of the marketing: physicians.

"It is true that companies spend plenty on R&D—$30 billion in 2007 alone. But they spend twice as much on marketing—close to $60 billion in that same year. Some 90 percent of this marketing money is spent on sales activities directed towards physicians."

The path to profits passes through physicians. Only doctors can prescribe drugs. The marketing arms of the drug companies have become part of the process of clinical testing, publishing of scientific journal articles, getting FDA approval, and convincing doctors to use their specific product.

"In order to provide prescribers with evidence to use their products, companies have brought their marketing department front and center into the research business. When they have trouble finding legitimate independent researchers to do test treatments, they have designed and funded those tests themselves. When they haven’t been able to find academics to write up the results, they have hired ghostwriters, and then paid academics to simply put their names on the articles. And when, heaven forbid, they have conducted studies that haven’t shown their drugs to be effective, they have slipped the studies into file drawers, hoping that nobody would ever find out that they were conducted."

And how does this process work out in practice?

"Unsurprisingly, research sponsored by companies almost always produces positive results for the sponsor’s drug. This has been documented in research on a vast array of treatments, including antidepressants, antipsychotics, birth control pills, arthritis medications, and drugs for Alzheimer’s dementia. A meta-analysis of all such studies found that drug company-sponsored studies were four times more likely to produce a favorable outcome for the sponsor’s drug than studies with other funders."

The medical literature itself is compromised by the tactics of the drug companies. Carlat describes the case of Pfizer’s antidepressant Zoloft. Between 1998 and 2000, Pfizer contracted to have 55 articles on the drug ghostwritten and paid psychiatrists to put their names on them.

"Most astoundingly, these articles outnumbered those written in the old fashioned way, comprising over half—57 percent—of all articles published about Zoloft in the entire medical literature from 1998 to 2000. Thus for at least one antidepressant, the bulk of the medical literature was literally written by the drug company that manufactured the drug, which is about as glaring a manipulation of science as one can imagine."

A physician trying to decide what drugs to prescribe faces biased clinical test results, biased scientific literature, and a drug sales force ready to supply gifts and money. The financial equivalent would be combining garbage mortgages with a few quality mortgages and selling the batch as if the whole package were of high quality.

All of this is dangerous for consumers, and it is unethical, but it is not illegal. The illegality arises elsewhere. Doctors are not limited in what they can prescribe; they can choose to use drugs for purposes that are not FDA approved. Drug companies cannot legally market their drugs for such purposes, but they do so until they get caught. By the time that happens they will have pocketed more in profits than they will ever have to pay in penalties.

Carlat describes one marketing push by Warner-Lambert (subsequently taken over by Pfizer) on a drug called Neurontin. Neurontin was approved for the treatment of epilepsy, but it performed so poorly that it could only be prescribed as a secondary drug for use when primary drugs failed to work. It was not destined to be a large money maker for the company. The information is available because of a whistleblower who eventually brought Pfizer to justice—such as it is.

Warner-Lambert thought that Neurontin might be effective for a number of conditions such as bipolar disorder, migraine headaches, ADHD.... The problem was that they had no evidence of effectiveness that the FDA considered worthy of its approval.

"....but the executives decided to try to convince doctors to prescribe it for these disorders anyway....This is called ‘off-label’ prescribing."

The whistleblower provided this account of a meeting with an executive of the company:

"....John Ford, a senior executive, exhorted reps to pitch Neurontin to doctors for a long list of disorders, none of them adequately researched. ‘That’s where we need to be, holding their hand and whispering in their ear,’ Ford said, referring to the doctors, ‘Neurontin for pain, Neurontin for monotherapy, Neurontin for bipolar, Neutrontin for everything.’ He went further, encouraging reps to get doctors to ramp the dose up higher than FDA’s recommended maximum of 1,800 mg/day: ‘I don’t want to hear that safety crap either,’ he said."

It is easy to imagine a similar meeting taking place at a mortgage company in, say, 2004: "Go out there and get me some mortgages signed. I don’t care if they have any money. As long as they are breathing and can put an ‘X’ on a sheet of paper, I want you to nail them!"

As with the marketing of subprime mortgages, the marketing of Neurontin to doctors was successful. The requisite articles were ghostwritten, and doctors were provided, where necessary, with the equivalent of bribes in the form of cash, dinners, and trips.

"These sleazy techniques worked beautifully. The drug became a blockbuster, earning $2.7 billion in 2003 alone. Almost all of that income was for off-label uses.....Eventually....Pfizer (which had since bought Warner-Lambert) pleaded guilty to criminal charges and agreed to pay $430 million in fines—a pittance in comparison with the billions the drug was earning per year. For Pfizer it was simply another business expense, and a fairly minor one at that."

Carlat tells us that in spite of the fine and admissions of guilt, Neurontin is still being prescribed for off-label uses and is still earning money for Pfizer.

The drug companies, like the large banks, seem to be shielded by a form of "too big to fail" logic. We need drug companies and their products; therefore we hesitate to treat criminal activity as criminal activity. Instead, we issue ineffective monetary fines and are happy to have them promise not to do it again.

In answer to the question embedded in the title of this piece: financial institutions play fast and loose with our money; drug companies play fast and loose with our lives.

Thursday, July 19, 2012

HSBC: Too Big to Fail, Too Big to Prosecute?

In 2010, Michael Smith wrote a fascinating article for Bloomberg titled Banks Financing Mexico Gangs Admitted in Wells Fargo Deal. The subject of the article was the money laundering Wachovia Corporation provided for the Mexican cartels. Wells Fargo was involved because it purchased Wachovia in 2008.

"Wachovia admitted it didn’t do enough to spot illicit funds in handling $378.4 billion for Mexican-currency-exchange houses from 2004 to 2007. That’s the largest violation of the Bank Secrecy Act, an anti-money-laundering law, in U.S. history -- a sum equal to one-third of Mexico’s current gross domestic product."

"’Wachovia’s blatant disregard for our banking laws gave international cocaine cartels a virtual carte blanche to finance their operations,’ says Jeffrey Sloman, the federal prosecutor who handled the case."

This was no case of a lack of oversight. Wachovia executives were well aware that these transactions involved drug money, and they chose to continue in order to pocket the profits. Such an egregious crime should carry serious penalties—right?

"....Wells Fargo promised in a Miami federal courtroom to revamp its detection systems. Wachovia’s new owner paid $160 million in fines and penalties, less than 2 percent of its $12.3 billion profit in 2009."

"If Wells Fargo keeps its pledge, the U.S. government will, according to the agreement, drop all charges against the bank in March 2011."

Smith explains this slap on the wrist.

"No big U.S. bank -- Wells Fargo included -- has ever been indicted for violating the Bank Secrecy Act or any other federal law. Instead, the Justice Department settles criminal charges by using deferred-prosecution agreements, in which a bank pays a fine and promises not to break the law again."

HSBC, a global bank headquartered in London, was recently caught breaking the rules. An article by Michael Mathes, HSBC apologizes for anti-money-laundering failures, explains what was involved.

"HSBC apologized Tuesday for failing to apply anti-laundering rules and one senior executive resigned, as lawmakers accused the global bank of giving Iran, terrorists and drug dealers access to the US financial system."

As in the Wachovia case, this involved willful criminal behavior.

"Among the findings was the revelation that HSBC and its US affiliate concealed more than $16 billion (13 billion euros) in sensitive transactions to Iran, violating US transparency rules over a six-year period."

"HSBC executives were aware of the "concealed Iranian transactions" -- which stripped all identifying Iranian information from documentation -- as early as 2001 but allowed thousands of transactions to continue until 2007."

While the article focused on the illegal Iranian transactions, HSBC was willing to break the law in other areas.

"The report said HSBC's Mexican affiliate ‘transported $7 billion in physical US dollars to HBUS [HSBC’s US affiliate] from 2007 to 2008... raising red flags that the volume of dollars included proceeds from illegal drug sales in the United States’."

"And it said HBUS ‘provided US dollars and banking services to some banks in Saudi Arabia and Bangladesh despite links to terrorist financing’."

The article quotes Senator Carl Levin as suggesting HSBC would face penalties for its behavior, presumably another fine and a deferred-prosecution agreement as in the Wachovia case. Why is no one ever prosecuted and sent to jail? Michael Smith provided this insight in his article.

"Large banks are protected from indictments by a variant of the too-big-to-fail theory. "

"Indicting a big bank could trigger a mad dash by investors to dump shares and cause panic in financial markets, says Jack Blum, a U.S. Senate investigator for 14 years and a consultant to international banks and brokerage firms on money laundering."

"The theory is like a get-out-of-jail-free card for big banks, Blum says."

"’There’s no capacity to regulate or punish them because they’re too big to be threatened with failure,’ Blum says. ‘They seem to be willing to do anything that improves their bottom line, until they’re caught’."

Hardly a week goes by without some new revelation of illegal behavior on the part of banks. It is hard to see how a culture that has been so ethically compromised can be reformed without throwing some people in jail in order to create appropriate precedents.

HSBC was caught aiding and abetting a designated foe of the United States. Didn’t they use to hang people for things like that?

Wednesday, July 18, 2012

The Housing Crisis and Eminent Domain: Society’s Powers

An article in the New York Times by Jennifer Medina reports on a novel way to address the housing crisis faced by many cities where home values have plummeted, leading to numerous foreclosures.

The approach was suggested, and is being touted, by Steven M. Gluckstern, chairman of an entity called Mortgage Resolution Partners (MRP). The plan being promoted is for MRP to survey the mortgages in an area and select those on homes worth much less than the outstanding loan balance but whose owners have kept up to date on their payments. Such deeply underwater homeowners, even if current, are at risk of eventually defaulting or simply walking away from a bad investment, and so represent the most likely foreclosure threat. The local government would then use eminent domain to take these underwater mortgages and compensate the investors who hold them at fair market value. The loans could then be refinanced at current low interest rates, leaving the homeowner with a much lower payment and a mortgage consistent with the current value of the property. The goal is to limit foreclosures and stabilize housing prices. MRP would receive a fixed fee for the selection process and for facilitating the refinancing.
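A rough numerical sketch may make the arithmetic of the plan clearer. The loan balance, home value, interest rates, and term below are purely illustrative assumptions, not figures from MRP or from the Times article.

    # Illustrative sketch of the refinancing arithmetic (assumed numbers).
    def monthly_payment(principal, annual_rate, years=30):
        # standard fixed-rate mortgage payment formula
        r = annual_rate / 12
        n = years * 12
        return principal * r / (1 - (1 + r) ** -n)

    old_balance, old_rate = 300_000, 0.065   # underwater mortgage (assumed)
    market_value, new_rate = 150_000, 0.040  # current value and rate (assumed)

    print(f"Old payment: ${monthly_payment(old_balance, old_rate):,.0f}/month")
    print(f"New payment: ${monthly_payment(market_value, new_rate):,.0f}/month")
    # -> roughly $1,900 versus $720 per month

Under these assumed numbers the homeowner's payment falls sharply and the new loan matches what the house is actually worth, which is why the approach is expected to reduce the incentive to walk away.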

This path, as suggested by MRP, is receiving active consideration in San Bernardino County, east of Los Angeles, where foreclosure rates are among the highest in the nation and about half the homes carry mortgages that exceed their value. The city of San Bernardino recently made the news when it voted to declare bankruptcy. The cities of Fontana and Ontario would be the focus of any initial effort. It is estimated that 20,000 homes could be included in the project.

This path seems like a winning solution for the homeowners, who get to keep their homes and are returned to a firmer financial footing. Banking and mortgage firms are outright hostile to the notion. What is interesting is that they don’t seem to think the idea is illegal; their argument is that cities that take this action will poison the confidence of investors and end up hurting their housing markets in the long run. How many times can financial types cry "confidence" before people stop listening?

Surprisingly, the use of eminent domain in this way is well-founded in precedent, and its constitutionality has been upheld many times. Wikipedia provides a nice summary of the concept and its long history.

"The power of governments to take private real or personal property has always existed in the United States, being an inherent attribute of sovereignty. This power reposes in the legislative branch of the government and may not be exercised unless the legislature has authorized its use by statutes that specify who may use it and for what purposes. The legislature may delegate the power to private entities like public utilities or railroads, and even to individuals...."

"The Fifth Amendment imposes limitations on the exercise of eminent domain: the taking must be for public use and just compensation must be paid."

The "public use" requirement is easily satisfied if cities feel this action is necessary to stave off financial collapse. The owners of the original mortgages would be compensated at the current market value of the property. A famous case decided by the Supreme Court in 2005, Kelo v. City of New London, illustrates the extent to which eminent domain can be pushed.

"The Supreme Court's decision in Kelo v. City of New London, 545 U.S. 469 (2005) affirmed the authority of New London, Connecticut, to take non-blighted private property by eminent domain, and then transfer it for a dollar a year to a private developer solely for the purpose of increasing municipal revenues. This 5-4 decision received heavy press coverage and inspired a public outcry that eminent domain powers were too broad."

The action proposed here is much more easily justified than that taken by the city of New London, and just compensation would be paid. All that is needed is to ensure that the proper legislative authority is in place.

This idea seems almost too good to be true, but given that nothing previously tried has been of much help, why not give it a go? The article suggests that cities in Florida and Nevada are also considering the concept. One suspects an avalanche of such efforts would follow once a precedent is set.

Go for it! I am sure the financial types will find a way to regain their "confidence."

Sunday, July 15, 2012

The Assault on Public Education Continues: Universities

The establishment of universal public education was one of the greatest accomplishments of the United States. It led the way in establishing the principle that everyone has a right to an education. A system of universal education works best for a society if it is uniformly available. Ideally, every citizen pays into the system in proportion to their ability to pay, and everyone benefits equally from it. Some European countries have come closer to this ideal by extending universal education to their university systems, while limiting access to those who demonstrate the greatest ability to perform at a high academic level. Alternate educational paths are provided for the remainder.

The US has never attained that ideal, nor actively sought it. It has always had a powerful minority happy to retain quality higher education as a privilege for the wealthy. Expensive private universities for the elite have always been an alternative to the lower-cost public universities. Such a mixed system is inherently unstable given that society will always possess some degree of income inequality. The wealthy will always be able to bid up the cost of their elite private education to the point that only they can afford it. At the other extreme, the poor must depend on the wealthier being willing to pay enough into the public system to provide an equivalent, or nearly equivalent, level of education.

This dual system worked well for many years, as support for public schools was sufficient to make excellent universities available at low cost. More recently, an ascendant conservative faction has pushed the country even further away from universal public education by favoring lower public spending, various forms of private enterprise, and market-influenced approaches. Translation: the quality of your education should be based on how much you are able to pay.

Rising costs for education coupled with lower public support have left students who are not wealthy enough to cover the costs with only one option: take on debt—over a trillion dollars of it as of 2011. An article in The Economist charts the rapid growth of this debt.




The situation is nicely addressed in a New York Times article by Andrew Martin and Andrew W. Lehren: A Generation Hobbled by the Soaring Cost of College.

"About two-thirds of bachelor’s degree recipients borrow money to attend college, either from the government or private lenders, according to a Department of Education survey of 2007-8 graduates; the total number of borrowers is most likely higher since the survey does not track borrowing from family members." "By contrast, 45 percent of 1992-93 graduates borrowed money; that survey included family borrowing as well as government and private loans."

The degree of borrowing varies widely.

"For all borrowers, the average debt in 2011 was $23,300, with 10 percent owing more than $54,000 and 3 percent more than $100,000, the Federal Reserve Bank of New York reports."

Economic difficulties coupled with anti-tax sentiments have resulted in higher costs for an education at a state college and lower public support.

"Nationally, state and local spending per college student, adjusted for inflation, reached a 25-year low this year, jeopardizing the long-held conviction that state-subsidized higher education is an affordable steppingstone for the lower and middle classes. All the while, the cost of tuition and fees has continued to increase faster than the rate of inflation, faster even than medical spending."

"From 2001 to 2011, state and local financing per student declined by 24 percent nationally. Over the same period, tuition and fees at state schools increased 72 percent, compared with 29 percent for nonprofit private institutions, according to the College Board."

To maintain parity, spending on college education should have been increasing, merely to match increases in population. These cuts were not motivated by economic concerns alone. Conservative Republicans have been hostile to the notion that access to higher education is a right.

"....the sharp drop in per-student spending also reflects a change: an increasing number of lawmakers voted to transfer more of the financial burden of college from taxpayers to students and their families."

It has been argued that $50,000 is a small price to pay for an education that should yield greater earnings over a lifetime. Such arguments miss the point. It is to society’s benefit to make higher education as widely available as possible. We hear over and over that positions go unfilled, or that they are offshored, because we suffer from a dearth of acceptable candidates. How is that issue addressed by making entry into education more difficult?

The students who negotiate the system by accumulating debt can be economically limited for a decade or more as they try to pay off school debts.

"....economists say, growing student debt hangs over the economic recovery like a dark cloud for a generation of college graduates and indebted dropouts. A study of recent college graduates conducted by researchers at Rutgers University and released last week found that 40 percent of the participants had delayed making a major purchase, like a home or car, because of college debt, while slightly more than a quarter had put off continuing their education or had moved in with relatives to save money. Roughly half of the surveyed graduates had a full-time job."

Our schools could do a better job of preparing students for the environment they will encounter after they graduate. And they could do a better job of counseling students on the true costs they are incurring when they take out loans.

"Much like the mortgage brokers who promised pain-free borrowing to homeowners just a few years back, many colleges don’t offer warnings about student debt in the glossy brochures and pitch letters mailed to prospective students. Instead, reading from the same handbook as for-profit colleges, they urge students not to worry about the costs."

The most troubling aspect of this issue is that we seem to be breaking faith with the nature of society itself. Societies form because people realize that they are much safer as members of a group than as individuals. It is an implicit contract in societies that the strong will contribute to the protection of the weak and that the wealthy will contribute to the welfare of the poor. The notion that everyone should pay for their own education is another instance in which conservatives have forced a retreat from the concept of societal assistance and pushed us toward a pay-to-play posture in which everyone is expected to personally fund whatever service they receive. Friendships cannot endure such an arrangement; families cannot operate under that rule; tribes and clans would dissolve if such a requirement were operative. Why would anyone suppose that a modern, complex society can thrive under it?

Friday, July 13, 2012

Adultescence: Why Do We Raise Spoiled Children?

We are all familiar with disruptive, unruly children who refuse to obey their parents. Tales of children who never quite manage to become self-sufficient are common; this latter condition has been referred to as "adultescence." Many families seem to be organized around serving the needs of the children rather than ensuring that children become functional members of the household. Elizabeth Kolbert produced a wonderful article in The New Yorker: Spoiled Rotten: Why do kids rule the roost? She discusses the reasons why we might have fallen into such an unacceptable mode of raising children, illustrates why such a result need not be inevitable, and suggests that the problem may be even more serious than we think.

Kolbert touches on a number of books and theories that have emerged in order to explain our inability to raise children properly. We are said to have granted so much entitlement to our children that they assume the role of the privileged rather than that of the responsible. We are told that we worry too much about what effect our actions might have on our children. Madeline Levine, a psychologist, provides this insight:

"....she argues that we do too much for our kids because we overestimate our influence. ‘Never before have parents been so (mistakenly) convinced that their every move has a ripple effect into their child’s future success,’ she writes. Paradoxically, Levine maintains, by working so hard to help our kids we end up holding them back."

"'Most parents today were brought up in a culture that put a strong emphasis on being special,’ she observes. ‘Being special takes hard work and can’t be trusted to children. Hence the exhausting cycle of constantly monitoring their work and performance, which in turn makes children feel less competent and confident, so that they need even more oversight'."

This notion that parents err in worrying about whether their children will be "special" is discussed in the context of the welfare of the children. One has to wonder, though, how much of this over-indulgence comes from parents who are more concerned with their own image. A person who wishes to be viewed as successful in the community is under great pressure to produce successful children.


There was not a lot that was new or especially revealing in these accounts of childrearing gone wrong. More interesting were the counterexamples Kolbert presented, and the manner in which she invoked evolution in the discussion.

Kolbert begins her article by relating an event experienced in 2004 by an anthropologist, Carolina Izquierdo, while studying a Peruvian tribe of hunters and subsistence farmers named the Matsigenka. Izquierdo accompanied a family on a leaf gathering expedition down a local river.

"A member of another family, Yanira, asked if she could come along. Izquierdo and the others spent five days on the river. Although Yanira had no clear role in the group, she quickly found ways to make herself useful. Twice a day, she swept the sand off the sleeping mats, and she helped stack the kapashi leaves for transport back to the village. In the evening, she fished for crustaceans, which she cleaned, boiled, and served to the others. Calm and self-possessed, Yanira ‘asked for nothing,’ Izquierdo later recalled. The girl’s behavior made a strong impression on the anthropologist because at the time of the trip Yanira was just six years old."

It would be impossible for us to read this passage and not assume that Yanira was an adult. To make sure we do not falsely assume that Yanira was an exception, Kolbert provides additional insight from Izquierdo.

"....Izquierdo noted....how early the Matsigenka begin encouraging their children to be useful. Toddlers routinely heat their own food over an open fire, they observed, while ‘three-year-olds frequently practice cutting wood and grass with machetes and knives.’ Boys, when they are six or seven, start to accompany their fathers on fishing and hunting trips, and girls learn to help their mothers with the cooking. As a consequence, by the time they reach puberty Matsigenka kids have mastered most of the skills necessary for survival. Their competence encourages autonomy, which fosters further competence—a virtuous cycle that continues to adulthood."

Matsigenka children are entrusted with a machete years before we even begin pleading with our children to learn how to tie a shoelace. Clearly evolution has endowed our children with the ability to develop skills and responsibility at a much earlier age than is currently assumed in our societies.

Evolution is often invoked to explain a long adolescence that seems to grow ever longer. Of the great apes, humans mature most slowly. It is argued that this extended period of immaturity is necessary in order to attain the more complex skills required by human society.

"Evolutionarily speaking, this added delay makes a certain amount of sense. In an increasingly complex and unstable world, it may be adaptive to put off maturity as long as possible."

Clearly it takes much longer to become a skilled engineer or mathematician than to learn how to wield a machete. However, evolution and the complexity of society cannot explain why the Matsigenka can entrust a three-year-old with a machete and we cannot. We have chosen, inadvertently or not, to demand less responsible behavior from our children. Any reliance on evolution to justify our children’s behavior is disproved by the Matsigenka example.

Kolbert provides another perspective on delayed maturity.

"....adultesence might be just the opposite: not evidence of progress but another sign of a generalized regression. Letting things slide is always the easiest thing to do, in parenting no less than in banking, public education, and environmental protection. A lack of discipline is apparent these days in just about every aspect of American society."

Teaching responsibility and teaching skills are two different things. Instilling a sense of responsibility can—and should—begin immediately. It seems absurd to assume that a child who is given license to act irresponsibly will grow up to be a responsible adult and citizen.

Kolbert’s suggestion that we are seeing the fruits of our childrearing in our dysfunctional politics and in our greed-driven society is worthy of additional consideration.