Friday, March 27, 2015

Alcohol and Addiction: The False Premise of Alcoholics Anonymous

Alcohol can generate cravings that lead to high levels of consumption and unhealthy consequences.  The conventional wisdom about how to address alcohol abuse is informed more by folklore than by evidence-based science.

Gabrielle Glaser provides an interesting summary of what is known about the nature and treatment of alcohol abuse in an article in The Atlantic: The False Gospel of Alcoholics Anonymous.  For some reason the online version of the article was instead titled The Irrationality of Alcoholics Anonymous.

Glaser provides some history related to alcohol abuse and the founding of Alcoholics Anonymous (AA).  In 1935, Bill Wilson, an extremely heavy drinker (“known to drink two quarts of whiskey a day”), managed to stop drinking for good.  He viewed his experience as the model for others to follow, combined it with a healthy dose of man-as-sinner religiosity, and formulated the “12 steps” to recovery.

“Alcoholics Anonymous was established in 1935, when knowledge of the brain was in its infancy. It offers a single path to recovery: lifelong abstinence from alcohol. The program instructs members to surrender their ego, accept that they are ‘powerless’ over booze, make amends to those they’ve wronged, and pray.”

The approach provided by AA quickly became embedded in the national consciousness as the one and only path to follow in dealing with alcohol abuse.

“A public-relations specialist and early AA member named Marty Mann worked to disseminate the group’s main tenet: that alcoholics had an illness that rendered them powerless over booze. Their drinking was a disease, in other words, not a moral failing. Paradoxically, the prescription for this medical condition was a set of spiritual steps that required accepting a higher power, taking a ‘fearless moral inventory,’ admitting ‘the exact nature of our wrongs,’ and asking God to remove all character defects.”

“Mann helped ensure that these ideas made their way to Hollywood. In 1945’s The Lost Weekend, a struggling novelist tries to loosen his writer’s block with booze, to devastating effect. In Days of Wine and Roses, released in 1962, Jack Lemmon slides into alcoholism along with his wife, played by Lee Remick. He finds help through AA, but she rejects the group and loses her family.”

In 1970 the federal government recognized alcoholism as a disease and passed an act (referred to as the Hughes Act) that set up funding for the study and treatment of alcoholism.  The AA system was viewed as the model for treatment, and alternative approaches were discriminated against.  The result was that people were now in a position to make money treating those who had trouble controlling their drinking.

“After the Hughes Act was passed, insurers began to recognize alcoholism as a disease and pay for treatment. For-profit rehab facilities sprouted across the country, the beginnings of what would become a multibillion-dollar industry. (Hughes became a treatment entrepreneur himself, after retiring from the Senate.) If Betty Ford and Elizabeth Taylor could declare that they were alcoholics and seek help, so too could ordinary people who struggled with drinking. Today there are more than 13,000 rehab facilities in the United States, and 70 to 80 percent of them hew to the 12 steps, according to Anne M. Fletcher, the author of Inside Rehab, a 2013 book investigating the treatment industry.”

The problem is that there is no independent data to support AA’s claim that:

“….AA has worked for 75 percent of people who have gone to meetings and ‘really tried.’ It says that 50 percent got sober right away, and another 25 percent struggled for a while but eventually recovered. According to AA, these figures are based on members’ experiences.”

Others have investigated this claim and have found no supporting evidence.

“In 2006, the Cochrane Collaboration, a health-care research group, reviewed studies going back to the 1960s and found that ‘no experimental studies unequivocally demonstrated the effectiveness of AA or [12-step] approaches for reducing alcohol dependence or problems’.”

“In his recent book, The Sober Truth: Debunking the Bad Science Behind 12-Step Programs and the Rehab Industry, Lance Dodes, a retired psychiatry professor from Harvard Medical School, looked at Alcoholics Anonymous’s retention rates along with studies on sobriety and rates of active involvement (attending meetings regularly and working the program) among AA members. Based on these data, he put AA’s actual success rate somewhere between 5 and 8 percent. That is just a rough estimate, but it’s the most precise one I’ve been able to find.”

The sad part about this state of affairs is that there have long been other options available that allow people to control their drinking habits without the extremely difficult step of total abstinence for life.

“The 12 steps are so deeply ingrained in the United States that many people, including doctors and therapists, believe attending meetings, earning one’s sobriety chips, and never taking another sip of alcohol is the only way to get better. Hospitals, outpatient clinics, and rehab centers use the 12 steps as the basis for treatment. But although few people seem to realize it, there are alternatives, including prescription drugs and therapies that aim to help patients learn to drink in moderation. Unlike Alcoholics Anonymous, these methods are based on modern science and have been proved, in randomized, controlled studies, to work.”

To show how other countries have used science-based methods to obtain better results, Glaser went to Finland, a heavy-drinking country, where the approach used is based on the work of the American neuroscientist John David Sinclair.  Sinclair moved to Finland in the 1970s to take advantage of a unique opportunity:

“….the chance to work in what he considered the best alcohol-research lab in the world, complete with special rats that had been bred to prefer alcohol to water.”

Perhaps Sinclair’s most interesting finding was that people who developed an uncontrollable craving for alcohol were being driven by specific chemical processes in the brain.

“Sinclair came to believe that people develop drinking problems through a chemical process: each time they drink, the endorphins released in the brain strengthen certain synapses. The stronger these synapses grow, the more likely the person is to think about, and eventually crave, alcohol—until almost anything can trigger a thirst for booze, and drinking becomes compulsive.”

He administered a class of chemicals called opioid antagonists to his “alcoholic” rats and discovered that these drugs, which block opiate receptors, limited the rats’ alcohol consumption.  The same effect was found in humans.

“Subsequent studies found that an opioid antagonist called naltrexone was safe and effective for humans, and Sinclair began working with clinicians in Finland. He suggested prescribing naltrexone for patients to take an hour before drinking. As their cravings subsided, they could then learn to control their consumption. Numerous clinical trials have confirmed that the method is effective, and in 2001 Sinclair published a paper in the journal Alcohol and Alcoholism reporting a 78 percent success rate in helping patients reduce their drinking to about 10 drinks a week. Some stopped drinking entirely.”

Three clinics have been established in Finland, and one in Spain, based on Sinclair’s methods.

“In the past 18 years, more than 5,000 Finns have gone to the Contral Clinics for help with a drinking problem. Seventy-five percent of them have had success reducing their consumption to a safe level.”

“The most common course of treatment involves six months of cognitive behavioral therapy, a goal-oriented form of therapy, with a clinical psychologist. Treatment typically also includes a physical exam, blood work, and a prescription for naltrexone or nalmefene, a newer opioid antagonist approved in more than two dozen countries.”

When Glaser asked what such treatment cost, she was given a figure of about $2,500,

“….a fraction of the cost of inpatient rehab in the United States, which routinely runs in the tens of thousands of dollars for a 28-day stay.”

So tightly fixed in the American mind is the efficacy of the AA approach that results of medical studies are largely ignored.

“….naltrexone has been found to reduce drinking in more than a dozen clinical trials, including a large-scale one funded by the National Institute on Alcohol Abuse and Alcoholism that was published in JAMA in 2006. The results have been largely overlooked. Less than 1 percent of people treated for alcohol problems in the United States are prescribed naltrexone or any other drug shown to help control drinking.”

Even the drug companies seemed sufficiently intimidated by the conventional wisdom to forgo marketing drugs that should have been good moneymakers for them.

“Stephanie O’Malley, a clinical researcher in psychiatry at Yale who has studied the use of naltrexone and other drugs for alcohol-use disorder for more than two decades, says naltrexone’s limited use is ‘baffling’.”

“’There was never any campaign for this medication that said, “Ask your doctor,”’ she says. ‘There was never any attempt to reach consumers.’ Few doctors accepted that it was possible to treat alcohol-use disorder with a pill. And now that naltrexone is available in an inexpensive generic form, pharmaceutical companies have little incentive to promote it.”

Glaser hopes that the implementation of the Affordable Care Act (Obamacare) will force a reconsideration of treatments that can be supported in an environment where results and cost-effectiveness are critical.

“The debate over the efficacy of 12-step programs has been quietly bubbling for decades among addiction specialists. But it has taken on new urgency with the passage of the Affordable Care Act, which requires all insurers and state Medicaid programs to pay for alcohol- and substance-abuse treatment, extending coverage to 32 million Americans who did not previously have it and providing a higher level of coverage for an additional 30 million.”

“Nowhere in the field of medicine is treatment less grounded in modern science. A 2012 report by the National Center on Addiction and Substance Abuse at Columbia University compared the current state of addiction medicine to general medicine in the early 1900s, when quacks worked alongside graduates of leading medical schools.”

Alcoholics Anonymous has helped many people, but its method is inappropriate as a first option when science has provided us with evidence that other approaches are more effective.


Sunday, March 22, 2015

Big Energy, the Abolition of Slavery, and Global Warming

The association of slavery with big energy companies and global warming might not seem to make sense, but there is a context in which it does.  Christopher Hayes discussed the relationship in his article in The Nation: The New Abolitionism.

Hayes was particularly intrigued by a calculation reported by Bill McKibben, the well-known author and environmental activist.

“….a fairly straightforward bit of arithmetic that goes as follows. The scientific consensus is that human civilization cannot survive in any recognizable form a temperature increase this century of more than 2 degrees Celsius (3.6 degrees Fahrenheit). Given that we’ve already warmed the earth about 0.8 degrees Celsius, that means we have 1.2 degrees left—and some of that warming is already in motion. Given the relationship between carbon emissions and global average temperatures, that means we can release about 565 gigatons of carbon into the atmosphere by mid-century. Total. That’s all we get to emit if we hope to keep inhabiting the planet in a manner that resembles current conditions.”

The problem becomes obvious when we learn that proven worldwide fuel reserves are five times the amount that can be consumed within that temperature constraint.  Hayes concludes that somehow the fossil fuel companies are going to have to be persuaded or coerced into leaving at least 80 percent of their wealth buried underground.  Placing a value on this fuel is difficult, but he suggests $20 trillion as an estimate.
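
McKibben’s arithmetic is simple enough to check directly.  Here is a minimal sketch in Python using the figures above; the five-to-one reserves ratio is the one cited in Hayes’s article.

# Figures from the article: a 565-gigaton carbon budget through mid-century,
# and proven reserves roughly five times that amount.
budget_gt = 565.0             # gigatons of carbon that can still be emitted
reserves_gt = 5 * budget_gt   # proven reserves, per the article's ratio

burnable = budget_gt / reserves_gt
stranded = 1 - burnable

print("Burnable share of reserves: {:.0%}".format(burnable))   # 20%
print("Share that must stay buried: {:.0%}".format(stranded))  # 80%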

Hayes then suggests that the only time such an enormous economic change was required was when the slave economy in the Americas was dismantled.  He estimates that the economic value of the slaves relinquished in the United States alone would be equivalent to about $10 trillion in today’s economy.  Consequently he concludes that the economic consequences were similar in scope to eliminating access to fossil fuel reserves.

Slavery did not end easily in the United States.  It took years of a civil war that was easily the bloodiest conflict the nation has ever participated in.  While limiting the access of energy companies to their claimed wealth in fossil fuels may not lead to violence, Hayes claims it is no less radical an idea in our time and will likely not be accomplished without some level of strife.

“….the parallel I want to highlight is between the opponents of slavery and the opponents of fossil fuels. Because the abolitionists were ultimately successful, it’s all too easy to lose sight of just how radical their demand was at the time: that some of the wealthiest people in the country would have to give up their wealth. That liquidation of private wealth is the only precedent for what today’s climate justice movement is rightly demanding: that trillions of dollars of fossil fuel stay in the ground. It is an audacious demand, and those making it should be clear-eyed about just what they’re asking. They should also recognize that, like the abolitionists of yore, their task may be as much instigation and disruption as it is persuasion. There is no way around conflict with this much money on the line, no available solution that makes everyone happy. No use trying to persuade people otherwise.”

Hayes then goes on to question the business model that the energy companies seem to be following and suggests a path for climate activists to follow in contending with them.

“Fossil fuel extraction is one of the most capital-intensive industries in the world. While it is immensely, unfathomably profitable, it requires ungodly amounts of money to dig and drill the earth, money to pump and refine and transport the fuel so that it can go from the fossilized plant matter thousands of feet beneath the earth’s surface into your Honda. And that constant need for billions of new dollars in investment capital is the industry’s Achilles’ heel.”

The energy companies must keep their shareholders happy; they must act to maintain the value of company shares and retain access to new capital.  Hayes’s plan is for activists to convince shareholders and other investors that the energy companies are pursuing a business plan that is doomed to failure.

“Investors, even those unmotivated by stewardship of the planet, have reason to be suspicious of the fossil fuel companies. Right now, they are seeing their investment dollars diverted from paying dividends to doing something downright insane: searching for new reserves. Globally, the industry spends $1.8 billion a day on exploration. As one longtime energy industry insider pointed out to me, fossil fuel companies are spending much more on exploring for new reserves than they are posting in profits.”

“Think about that for a second: to stay below a 2 degree Celsius rise, we can burn only one-fifth of the total fossil fuel that companies have in their reserves right now. And yet, fossil fuel companies are spending hundreds of billions of dollars looking for new reserves—reserves that would be sold and emitted only in some distant postapocalyptic future in which we’ve already burned enough fossil fuel to warm the planet past even the most horrific projections.”

“This means that fossil fuel companies are taking their investors’ money and spending it on this extremely expensive suicide mission. Every single day. If investors say, ‘Stop it—we want that money back as dividends rather than being spent on exploration,’ then, according to this industry insider, ‘what that means is, literally, the oil and gas companies don’t have a viable business model. If all your investors say that, and all the analysts start saying that, they can no longer grow as businesses’.”

When Britain outlawed slavery in most of its possessions in 1833—and in all of them soon thereafter—it did so peaceably by offering slave owners financial compensation for their lost slaves.  The former slaves did not necessarily go away.  Most remained available as laborers, but laborers who had to be paid.  The businesses affected could switch business models and resume activities in many cases.  There is no similar backup plan for the energy companies at this point.

There may be more to the energy companies’ business model than Hayes suspects.  It could be that part of the justification for continuing to search for energy reserves is the hope for eventual compensation for their inevitable loss.

It could also mean that big energy believes that human nature is on its side and that it will be allowed to sell polluting fuels indefinitely.  There is little indication of mass movements to rein in the energy companies.  In fact, support appears to be flowing to those who would experiment with geoengineering solutions in an attempt to counter rising temperatures while eliminating the need to constrain fossil fuel consumption.

The energy corporations seem to believe that preserving their stock market valuation must be their primary objective.  The shareholders would probably agree with that strategy: they would rather hold shares they can sell when they wish, paying taxes at the capital gains rate, than receive dividends taxed at the higher rate applied to regular income.  The wealth of the company is judged in terms of current income and possessed assets.  If a company were to even hint that it was getting out of the energy exploration business, the stock price would likely plummet as shareholders jumped off a sinking ship.

It would seem that the best way to end fossil fuel extraction is the traditional way—by making the demand for fossil fuels drop so low that extraction is no longer a viable business.  The recent fall in the price of oil is often interpreted as being harmful to alternative energy solutions.  It is not clear that this is a correct assumption.  People and businesses will not necessarily consume more energy just because it is cheaper; consuming energy one does not need rarely makes sense.  There is the possibility that the switch to renewable energy sources may slow down a bit, but one lesson that should be learned from the recent price excursions is that fuel prices are very volatile and that what goes way down can also go way up.  Long-term plans are unlikely to change much.

It is interesting to note that while the price of fuel dropped dramatically, the level of demand was little changed.  Price changes were based on projections into the future.  The way to limit the activities of energy companies is to create a future projection of demand that will drive prices lower permanently—so low that future energy extraction becomes uneconomical. 

The technologies are already available to dramatically lower demand for fossil fuels.  Just striving for greater efficiency in transportation and in heating and cooling homes and businesses can provide enormous savings.  There are no wondrous breakthroughs required.  We must merely speed up processes that are already underway.  One such path to the near elimination of fossil fuel demand is described here.

Activists’ time would be better spent attacking demand rather than supply.


Thursday, March 19, 2015

The War on Poverty Succeeded?

President Lyndon Johnson declared “unconditional” war on poverty in the United States in 1964.  What has come to be known as the War on Poverty was a protracted conflict that, like the Vietnam War, extended over many years and ended up being considered by most to be of dubious value.  Christopher Jencks produced an article for the New York Review of Books that provides a fresh view of what might have been accomplished by Johnson’s initiative: The War on Poverty: Was It Lost?

Jencks includes Ronald Reagan’s famous 1988 quote on the subject:

“Some years ago, the federal government declared war on poverty, and poverty won.”

He also describes polling that indicates Reagan’s assessment has been accepted by the majority of the population.

“Asked about their impression of the War on Poverty, Americans are now twice as likely to say ‘unfavorable’ as ‘favorable’.”

Jencks’s article was occasioned by the publication of a collection of papers evaluating the long-term consequences of the programs initiated or augmented as part of this attack on poverty.  The collection suggests that the “War” was more successful than most people believe.

Legacies of the War on Poverty is a set of nine studies, edited by Martha Bailey and Sheldon Danziger, that assess the successes and failures of the diverse strategies that Johnson and his successors adopted to reduce poverty. The chapters are packed with evidence, make judicious judgments, and suggest a higher ratio of success to failure than opinion polls do.”

To put the collection of legislative efforts in perspective, it must be recognized that Johnson did not put into action a complete slate of antipoverty measures at once.  What emerged over time were efforts that extended into the 1980s, when Reagan took over the White House and the Democrats lost control of the Senate.

“As a result, some of today’s most important antipoverty programs, such as food stamps, Supplemental Security Income (a guaranteed minimum income for the elderly and disabled), and Section 8 rent subsidies for poor tenants in private housing, were either launched or dramatically expanded between 1969 and 1980. Had Johnson not put poverty reduction at the heart of the Democrats’ political agenda in 1964, it is hard to imagine that congressional Democrats would have made antipoverty programs a political priority even after Republicans regained control of the White House.”

Jencks promises that a follow-up article describing the assessments of the various antipoverty programs is coming.  Here he is mainly concerned with the most commonly applied metric: has the rate of poverty actually decreased after all this effort?  This is a quite complicated question.  Poverty can be defined in many ways.  Some people believe poverty exists only in the biblical sense, where assistance is needed to avoid starvation.  Others conclude that anyone in possession of a smart phone cannot be considered poor; therefore the number of poor is quite small.  The federal government publishes a number estimating the percentage of people living below “the poverty line” using surveys conducted by the Census Bureau.

This poverty line is an income level in dollars that depends on whether an individual is an independent unit or a member of a family.  Family incomes are compared to a threshold based on the size of the family and the age of its members.  This was the way poverty was assessed in the 1960s, and the government uses the same metric today, adjusting the thresholds over time according to the Consumer Price Index for All Urban Consumers (CPI-U).

The government figures indicate a poverty rate of about 19% in 1964 that dropped to around 11% in the early 1970s.  Since that time the rate has fluctuated with changing economic conditions and stood at 14.5% in 2013.  This apparent stagnation makes it easy to conclude that the various government efforts at addressing poverty have been failures.

The government’s approach has the benefit of being consistent year-to-year, but consistency and accuracy are two quite different things. 

“Given that the War on Poverty was a commitment to eliminating it, the most obvious measure of the war’s success or failure is how the poverty rate has changed since 1964. Bailey and Danziger argue that just looking at changes in the poverty rate is a ‘simplistic’ approach to assessing the War on Poverty, and in one sense they are right. If you want to know how well programs like Head Start or food stamps worked, or how many full-time jobs they created, the reduction in poverty over the past half-century is not a sensible measure.”

“But Bailey and Danziger’s argument is more fundamental. They object to using trends in poverty as a measure of the war’s success because the prevalence of poverty depends not just on the success or failure of policies aimed at reducing it but also on other independent economic and demographic forces, like the decline in unskilled men’s real wages and the rising number of single-parent families.”

Jencks illustrates just how inaccurate the government’s approach can be by indicating some obvious changes that have taken place over the years that are not considered in the official poverty rate.

He first addresses the increase in unmarried but cohabiting couples, who are considered “unrelated individuals” by the Census Bureau.  The poverty line for an unrelated individual was $12,119 in 2013.  An unmarried couple could have had a joint income of $24,238 and both still be counted as living in poverty.  For a married couple the poverty line is a mere $15,600.  This leads Jencks to conclude that the poverty rate is being overestimated because of a change in social customs that the government’s methodology refuses to recognize.
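
The effect is easy to see with the 2013 thresholds Jencks cites.  A minimal sketch in Python (the two $12,000 incomes are illustrative, not from the article):

# 2013 poverty thresholds cited by Jencks
INDIVIDUAL_LINE = 12119   # "unrelated individual"
COUPLE_LINE = 15600       # married couple

def poor_if_cohabiting(income_a, income_b):
    # The Census Bureau counts cohabiting partners as separate
    # "unrelated individuals," each measured against the individual line.
    return income_a < INDIVIDUAL_LINE and income_b < INDIVIDUAL_LINE

def poor_if_married(income_a, income_b):
    # A married couple is measured jointly against the couple line.
    return income_a + income_b < COUPLE_LINE

# Two partners each earning $12,000 (joint income $24,000):
print(poor_if_cohabiting(12000, 12000))  # True: both counted as poor
print(poor_if_married(12000, 12000))     # False: well above the couple line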

There are other failings that are more easily quantified.  Jencks uses the term noncash benefits for goods and services provided to people that have reduced their need for cash over the years.

“Noncash benefits now provide many low-income families with some or all of their food, housing, and medical care. Such programs were either tiny or nonexistent in 1964, and their growth has significantly reduced low-income families’ need for cash. However, the Office of Management and Budget (OMB) does not allow the Census Bureau to incorporate the value of these benefits into the recipients’ poverty thresholds. The President’s Council of Economic Advisers estimates that even if we ignore Medicare and Medicaid, food and housing benefits lowered the poverty rate by 3.0 percentage points in 2012.”

An even more egregious error comes from ignoring refundable tax credits as a component of income.

“As part of its effort to reform welfare by ‘making work pay,’ the Clinton administration persuaded Congress to expand the Earned Income Tax Credit (EITC) between 1993 and 1996. By 2013 the EITC provided a refundable tax credit of $3,250 a year for workers with two or more children and earnings between $10,000 and $23,000. Because the official poverty count is based on pre-tax rather than post-tax income, these tax “refunds” are not counted as income…. According to the Council of Economic Advisers, treating refundable tax credits like other income would have reduced the poverty rate by another 3.0 percentage points in 2012.”

Estimating changes in the purchasing power of the dollar has always been difficult.  Besides the CPI-U, there is something called “the chain-price index for Personal Consumption Expenditure,” which Jencks refers to as the PCE index.  The PCE index is thought to be more accurate in assessing changes in purchasing power.

“The Commerce Department’s Bureau of Economic Analysis constructs this measure to calculate changes in the total value of all the consumer goods and services produced in the United States each year. The PCE index is therefore the largest single influence on government estimates of economic growth. If the poverty thresholds had risen in tandem with the PCE index rather than the CPI-U since 1964, the 2013 poverty line would have been 20 percent lower than it was, and the 2013 poverty rate would have been about 3.7 percentage points lower than it was.”

Jencks does not try to estimate the cohabitation effect, but applying the three corrections for which estimates exist, he draws the conclusion described below.

Reasonable improvements in projecting a poverty level consistent with the one determined in 1964 lead to a predicted poverty rate in 2013 that is not the official figure of 14.5% but a mere 4.8%.  Jencks admits that there are uncertainties in his numbers as well, but this still allows him to argue that the suite of antipoverty efforts implemented since 1964 has worked much better than has been generally assumed.
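
The arithmetic behind the 4.8% figure follows directly from the corrections quoted above; a quick check in Python:

official_2013 = 14.5     # official Census poverty rate, percent

# Estimated reductions in percentage points: the first two from the
# Council of Economic Advisers, the third from the PCE-index comparison.
noncash_benefits = 3.0   # food and housing benefits (Medicare/Medicaid excluded)
refundable_credits = 3.0 # EITC and other refundable tax credits
pce_vs_cpiu = 3.7        # PCE index instead of the CPI-U since 1964

corrected = official_2013 - noncash_benefits - refundable_credits - pce_vs_cpiu
print("Corrected 2013 poverty rate: {:.1f}%".format(corrected))  # 4.8%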

Given this analysis, it is safe to conclude that many people classified as living in poverty today are probably living a better life than they would have in the same situation in 1964.  However, that conclusion depends on a definition of poverty formulated in a different era.  It is in no way clear that the same definition is relevant today—or that it was appropriate even in 1964.

Poverty is, in some ways of reckoning, a mental as well as a physical state.  There is the notion that poverty is always relative.  A family that has subsistence levels of food, clothing, and shelter may be locked into a declining economic lower class with no apparent means of escape for any of its members.  Is that a state of poverty?  Many would think it is. 

Jencks does not claim that his number represents the number of people who should be considered to be living in poverty.  He merely wishes to claim that Johnson’s war and the actions of the government in fighting poverty were effective.

“…even if the true poverty rate was 6 or 7 percent in 2013, it would have fallen by about two thirds since 1964, putting it considerably closer to what Lyndon Johnson had promised in 1964 than to what Ronald Reagan had claimed in 1988.”

This is an important point because it illustrates that the government is not necessarily “the problem” as claimed by Reagan.  It can be—and often has been—the solution.


Thursday, March 12, 2015

Israel, the Holocaust, American Jews, and Fundamentalist Christians

The decision by the Republican leadership to bypass President Obama and invite Israel’s leader, Benjamin Netanyahu, to give a talk to Congress was rather startling.  It was certainly indicative of the dysfunctional political state of affairs, but it also highlighted the special status Israel possesses in our national psyche.  Pundits claimed that the invitation was issued partly to irritate the president, but mainly to gain applause from the Republican base.  American Jewish organizations supported Netanyahu’s visit.  Jewish voting patterns suggest a distinctly liberal political tilt, yet many American Jews seem to support an Israeli government that appears determined to abuse and discriminate against a Palestinian minority in much the same way Jews were abused and discriminated against for centuries in Christian countries.  This curious state of affairs deserves some consideration.

Tony Judt discusses Israel and what he refers to as America’s obsession with that nation in the book he produced in collaboration with Timothy Snyder: Thinking the Twentieth Century.  Ultimately, he concludes that our nation’s Jewish population has bought into the notion that Israel deserves special consideration because of the Holocaust, and deserves to be protected by us because it is under constant existential threat.  No other country has received the kind of unreserved support that the United States extends to Israel.  Judt provides us with some necessary perspective.

Judt is Jewish, born in England in 1948 to parents of Eastern European origin.  He was aware of the Holocaust from an early age, but it played little role in his life or in the lives of his Jewish friends and relatives as he was growing up.  In fact, the term was not even used at that time.  Today we think of the Holocaust as the defining event in Jewish history and a major justification for Israel’s existence.

“It can be very difficult, particularly when teaching here in the United States, to convey how far the Holocaust was from the center of people’s concerns or decisions during World War II….we cannot, if we wish to give a fair account of the recent past, read back into it our own ethical and communitarian priorities.  The harsh reality is that Jews, Jewish suffering and Jewish extermination were not matters of overwhelming concern to most Europeans (Jews and Nazis aside) of that time.  The centrality that we now assign to the Holocaust, both as Jews and as humanitarians, is something that only emerged decades later.”

It is necessary to recall that the United States and the nations of Europe were all considerably more anti-Semitic in the middle of the last century than they are now.  None of the Allies thought it wise to make the systematic elimination of Jews a reason for fighting the war.  Most chose to avoid the issue.

“….the history of American Jews is in many respects the story of a belated response—often delayed by a generation or more—to events in Europe or the Middle East.  Consciousness of the Jewish Catastrophe—and its aftermath in the creation of the State of Israel—came well after the fact.  The generation of the 1950s would much rather have continued to look the other way—something I can confirm from the different but comparable British experience.  Israel in those years was like a distant relative: someone of whom one spoke fondly and to whom one sent a birthday card regularly, but were he to visit you and overstay his welcome, it would be embarrassing and ultimately an irritant.”

The Holocaust was not something around which a people could build national or ethnic pride.  The prevalent image of Jews being led “like lambs to the slaughter” would have to be overcome.

“Americans, rather like Israelis in this respect, valued success, achievement, promotion, individualism, the overcoming of impediments to self-advancement and a dismissive unconcern with the past.  The Holocaust, accordingly, was not an altogether untroubling story, particularly with the widespread view that Jews had gone ‘like lambs to the slaughter’.”

“….I don’t think the Holocaust fitted at all comfortably into American Jewish sensibilities—much less American public life as a whole—until the national narrative itself had learned to accommodate and even idealize stories of suffering and victimhood.”

It would be the Six-Day War in 1967 that would allow Jews to re-imagine themselves and view Israel as something they should take pride in and as something precious that must be preserved at all cost.  The Holocaust could then be elevated to its appropriate place as the central event in recent Jewish history, and it could become a means of rallying Jews and others to the aid of Israel with the cry “never again.”

Judt had spent summers living and working as a young and idealistic Zionist on the farms of Israel.  He also volunteered to return and assist as the threat of war increased.  The Six-Day War would be a period of great disenchantment as he discovered that the Israel of the rural farms where he had toiled had little to do with the Israel that actually existed.  He was assigned as a translator to make use of his proficiency in English, French, and Hebrew.  In this position he was able to become more familiar with Israeli soldiers and the actual nature of the State of Israel.

“….I had been indoctrinated into an anachronism, had lived an anachronism, and now I saw the depths of my delusion.  For the first time I met Israelis who were chauvinistic in every meaning of the word: anti-Arab in a sense bordering upon racism; quite undisturbed at the prospect of killing Arabs wherever possible; frequently regretting that they had not been allowed to fight their way through to Damascus and beat down the Arabs for good and all; full of scorn for what they called the ‘heirs of the Holocaust,’ Jews who lived outside of Israel and who did not understand or appreciate the new Jews, the native-born Israelis.”

“This was not the fantasy world of socialist Israel that so many Europeans loved (and love) to imagine—a wishful projection of all the positive qualities of Jewish Central Europe with none of the drawbacks.  This was a Middle Eastern country that despised its neighbors and was about to open a catastrophic, generation-long rift with them by seizing and occupying their land.”

The Israelis recognized that they could use the Holocaust to justify just about anything they chose to do.  Peace was of little value because they gained more by being the little country permanently surrounded by threatening neighbors.  The victimhood of the Holocaust provided them with an argument that the world owed them something, and continued turmoil kept alive the notion that a repeat of the Holocaust could occur if Israel was not properly supported.

“To my knowledge, no one in the Israeli political class—and certainly no one in the Israeli military or policy-making elite—has ever expressed any private doubt as to Israel’s survival: certainly not since 1967 and, in most cases, not before then either.  The fear that Israel could be ‘destroyed,’ ‘wiped off the face of the earth,’ ‘driven into the sea’ or in any other way exposed to something remotely resembling a re-run of the past, is not a genuine fear.  It is a politically calculated rhetorical strategy.”

Judt believes that this practice of continually “crying wolf” has already become counterproductive. 

“Ever since Ben-Gurion, Israeli policy has quite explicitly insisted upon the assertion that Israel—and with it the whole of world Jewry—remains vulnerable to a re-run of the Holocaust.  The irony, of course, is that Israel itself constitutes one very strong piece of evidence to the contrary….As a state, Israel—in my view irresponsibly—exploits the fears of its own citizens.  At the same time it exploits the fears, memories and responsibilities of other states.  But in so doing, it risks over the course of time consuming the very moral capital that enabled it to exercise such exploitation in the first instance.”

Judt refers to the Holocaust as a Get Out of Jail Free card for what Israel has become—a rogue state.  The United States, like most other countries, seeks peace and stability in the Middle East.  Israel’s current leaders have no interest in either until their goals are attained.  The United States has thus been put in the position of having to provide unconditional support for a state whose actions are harmful to American interests.

Given this bizarre state of affairs, how does Israel maintain its chokehold on the United States government?  Consider the results of polling on attitudes towards Israel. Max Fisher provides a discussion in the Washington Post: 8 fascinating trends in how American Jews think about Israel.  The data indicates that religion is an important factor in determining attitudes and allegiances.

“On one side are the reform and secular Jews who make up 65 percent of the U.S. Jewish population, sometimes joined by the 17 percent who identify with conservative Judaism. This group is more likely to worry about or criticize Israeli policies toward Palestinians. It's less likely to claim an emotional attachment to Israel and less likely still to argue that the country was promised to Jews by God.”

“On the other side are the small minority of Orthodox Jews – about 10 percent of American Jews – joined by America's much larger community of white Evangelical Christians. This bloc is more likely to oppose an independent Palestinian state, to support settlements and to argue that Israel was promised to Jews by God. That last point is particularly important for how you view Israel-Palestine, as it could suggest that Israel should control all Palestinian land outright and permanently – and that seeing this through is a divinely mandated mission.”

Orthodox Jews and white Evangelical Christians tend to support policies that are not in the best interests of their country.  Orthodox Jews are becoming more numerous because they have higher birthrates than more secular Jews, both in this country and in Israel.  It is not surprising that they hold the views that they do, but what about Evangelical Christians?  Where are they coming from?

Joe Bageant provides a startling introduction to fundamentalist Christians in our country in his book Deer Hunting with Jesus: Dispatches from America's Class War.

“But taken as a whole, fundamentalists have three things in common: they are whiter than Aunt Nelly’s napkin, and, for the most part, they are working class and have only high school educations.”

“Yet some evangelicals stand apart from the mainstream in one important way: They would scrap the Constitution and institute ‘Biblical Law,’ the rules of the Old Testament, and they take the long view toward the establishment of a theocratic state.  Others believe we are rapidly entering the End Times and the fulfillment of the darkest biblical prophesies….they see a theocracy of one sort or another as a necessary part of the End Times, and, though few publicly say so, some are not averse to nuclear war in the Middle East, ideally with the help of Israel.”

They see the founding of the modern state of Israel as the initiating event of End Times.

“....the Messiah can return to earth only after an apocalypse in Israel called Armageddon, which a minority of influential fundamentalists are promoting with all their power so that The End can take place.  The first requirement was the establishment of the state of Israel.  Done.  The next is Israel’s occupation of the Middle East as a return of its ‘Biblical lands.’  Which means more wars.  Radical Christian conservatives believe that peace cannot ever lead to Christ’s return, and indeed impedes the thousand-year Reign of Christ, and that anyone promoting peace is a tool of Satan.  Fundamentalists support any and all wars Middle Eastern....”

Needless to say, such beliefs lead to political conclusions that are not helpful, including the following:

“Israel is to be defended at all costs and even encouraged to expand, because the Bible declares that Israel must rule all the land from the Nile to the Euphrates in order for End Times prophecy to be fulfilled.”

Aren’t we fortunate that one of our two major political parties, the Republican, feels a need to pledge fealty to such people?

Was it really appropriate to describe Israel as a rogue state?  Peter Beinart discusses the state of Israeli politics and the state of American Jewry in an article that appeared in the New York Review of Books in 2010: The Failure of the American Jewish Establishment.

Beinart describes the American Jewish establishment as one that fell in love with Israel when its existence was thought to be threatened in the Six-Day War.  It viewed Israel then as a liberal state that would set an example for how to deal with minority populations within its confines.  This image of Israel was rapidly proved false, but the leaders of politically powerful Jewish organizations have refused to acknowledge any shortcomings on the part of Israel and have continued to give it unqualified support.  Meanwhile, their children, except for the Orthodox, have become more secular and less interested in Israel as a Jewish responsibility.

Beinart describes Israel as a country as polarized as ours.  The secular Israelis are more flexible in their attitudes towards dealing with the Palestinians.  Orthodox Israelis and other conservatives are not flexible at all.  Consider this description by Beinart of Netanyahu’s beliefs and his administration:

“In his 1993 book, A Place among the Nations, Netanyahu not only rejects the idea of a Palestinian state, he denies that there is such a thing as a Palestinian. In fact, he repeatedly equates the Palestinian bid for statehood with Nazism. An Israel that withdraws from the West Bank, he has declared, would be a “ghetto-state” with “Auschwitz borders.” And the effort “to gouge Judea and Samaria [the West Bank] out of Israel” resembles Hitler’s bid to wrench the German-speaking “Sudeten district” from Czechoslovakia in 1938. It is unfair, Netanyahu insists, to ask Israel to concede more territory since it has already made vast, gut-wrenching concessions. What kind of concessions? It has abandoned its claim to Jordan, which by rights should be part of the Jewish state.”

Netanyahu has incorporated into his government individuals with similar views.

“Israeli governments come and go, but the Netanyahu coalition is the product of frightening, long-term trends in Israeli society: an ultra-Orthodox population that is increasing dramatically, a settler movement that is growing more radical and more entrenched in the Israeli bureaucracy and army, and a Russian immigrant community that is particularly prone to anti-Arab racism.”

A political solution to the Israeli-Palestinian issues seems a faded, long-ago dream.  The United States should at least learn to disassociate itself from the goals of what has indeed become a rogue state.


Wednesday, March 4, 2015

Facing the Facts about Mental Illness

T. M. Luhrmann is an anthropologist at Stanford University.  One of her interests has been mental illness, with a focus on schizophrenia.  She has been writing about the need to consider what we label as mental illness a psychological as well as a biological condition.  She has been interested in reminding the world that there are means of treating—if treatment is necessary—behaviors that do not fit within society’s definition of normal without resorting to dangerous medications.

Just as people are born with a range of intelligence, they are born with a range of responses to sorrow-inducing or threatening circumstances.  If the magnitude of the response is deemed excessive, we have decided to describe those people as suffering from depression or anxiety.  The range of such responses is continuous across individuals, so the point at which one becomes “mentally ill” is essentially arbitrary.  In addition, people have a considerable degree of control over their responses: they can learn to deal with sorrow and threats more effectively over time.  That is why Luhrmann and many others suggest drug therapy as more of a last resort than the treatment of choice.

As an example of learning to cope, consider a fascinating article Luhrmann produced for The American Scholar: Living with Voices.  She describes the discovery that people who are troubled by hearing voices, a common path to a schizophrenia diagnosis, often find relief in confronting the voices and negotiating with them rather than fearing them or trying to make them go away.  Forming a relationship with a voice seems to change its character, transforming it from a threat into something that can occasionally even provide encouragement.  A movement promoting this approach has formed.  Many participants had previously been subjected to intolerable brain-altering medications aimed at making the voices disappear.  They like to refer to themselves as “survivors of psychiatric care.”  More on the topic can be found here.

Luhrmann produced an article for the New York Times titled Redefining Mental Illness.  She was encouraged by recent developments that seemed to indicate a greater acceptance of more flexible treatment of mental issues—and of not treating all issues as illnesses.

“TWO months ago, the British Psychological Society released a remarkable document entitled ‘Understanding Psychosis and Schizophrenia.’ Its authors say that hearing voices and feeling paranoid are common experiences, and are often a reaction to trauma, abuse or deprivation: ‘Calling them symptoms of mental illness, psychosis or schizophrenia is only one way of thinking about them, with advantages and disadvantages’.”

“The report says that there is no strict dividing line between psychosis and normal experience: ‘Some people find it useful to think of themselves as having an illness. Others prefer to think of their problems as, for example, an aspect of their personality which sometimes gets them into trouble but which they would not want to be without’.”

It was particularly encouraging to find the report recommending that psychological approaches be made available alongside medicinal ones.

“The report adds that antipsychotic medications are sometimes helpful, but that “there is no evidence that it corrects an underlying biological abnormality.” It then warns about the risk of taking these drugs for years.”


Sunday, March 1, 2015

Increased Risks Associated with ADHD Diagnosis

Benedict Carey raised some interesting issues in a New York Times article titled A.D.H.D. Diagnosis Linked to Increased Risk of Dying Young.  The acronym ADHD refers to Attention Deficit Hyperactivity Disorder.  He reported on a study conducted in Denmark in which medical records were examined for all children born there between 1981 and 2011.

“The team found that of 32,061 who had been given a diagnosis of A.D.H.D., 107 had died before age 33. That was roughly twice the rate of premature death among those without the disorder, after factors like age, psychiatric history and employment were taken into account.”

This rate is about 3 per 1,000.  Automobile and other types of accidents were described as the most common causes of death.  It was also noted that many of those who died had behavioral problems, such as “drug abuse or antisocial behavior,” that would have made them more accident-prone.  Curiously, the study also concluded that the risk of early death was higher for those diagnosed with ADHD at age 18 or older.
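
The rate quoted above comes straight from the study’s counts; a quick check:

diagnosed = 32061   # Danish children given an ADHD diagnosis
deaths = 107        # deaths before age 33 in that group

per_thousand = 1000.0 * deaths / diagnosed
print("{:.1f} deaths per 1,000 diagnosed".format(per_thousand))  # about 3.3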

A psychiatrist was quoted concluding that it is ADHD itself that is dangerous, and the study results led him to claim that it is now more important than ever to seek out children suffering from ADHD and begin treating them.  However, that is inconsistent with what the study actually showed.  Carey had it right in his title: the Danish data suggest that it is the diagnosis of ADHD that is linked to dying young.  Any further conclusion is speculation.

Another study was performed by the Danes examining other outcomes that follow from an ADHD diagnosis.  This work covered the same timeframe and was reported in The Copenhagen Post by Lawrence Shanahan under the title Danish ADHD study makes grim reading for trigger-happy diagnosis countries like the US.

“Children who take ADHD medication have twice the risk of developing heart problems, according to a study at the National Centre for Register-based Research at Aarhus University, reports Science Nordic.”

“The study, which took data from 714,000 children born between 1990 and 1999, showed that the risk of developing heart problems rose from around 0.5 percent to nearly one percent among children who took ADHD medication.”

Why should this be of special concern to the people of the United States?

“While only two percent of Danish children are prescribed ADHD medication, the rate is over three times as high in the States.”

“Nearly nine percent of American children aged 4-17 are diagnosed with ADHD, of which 69 percent are then medicated, according to the American National Resource Center on ADHD.”

“That amounts to a total of 3.5 million children taking medication that doubles the risk of developing heart problems.”
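
Those figures hang together arithmetically.  Working backward from them gives the implied number of American children aged 4 to 17—a rough consistency check; the implied total is derived here, not given in the article:

medicated = 3500000     # children on ADHD medication, per the article
medicated_share = 0.69  # share of diagnosed children who are medicated
diagnosed_share = 0.09  # "nearly nine percent" of children 4-17 diagnosed

diagnosed = medicated / medicated_share    # about 5.1 million diagnosed
implied_pop = diagnosed / diagnosed_share  # about 56 million children

print("Diagnosed: {:.1f} million".format(diagnosed / 1e6))
print("Implied 4-17 population: {:.0f} million".format(implied_pop / 1e6))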

Please take note of the following when considering the conclusion that an ADHD diagnosis at age 18 or older is particularly risky.

“ADHD medications such as Ritalin and Adderall are also becoming increasingly popular among university students and young adults who use the drugs for binge-studying purposes.”

This second report suggests that it is, in fact, the diagnosis and subsequent treatment that might be producing high risk among young adults.  In particular, it points to the medications used in treating the condition as a potential contributor to the increased risk.

Should we be more concerned about the frequency with which these drugs are prescribed to young children?  Alan Schwarz has produced a troubling article in the New York Times: Risky Rise of the Good-Grade Pill.  Schwarz investigates the growing practice among teenagers who feel threatened by academic expectations of turning to amphetamines and methylphenidates as study aids.  He provides this background on common ADHD medications, which are classified by the Drug Enforcement Administration (DEA) as Class 2 controlled substances.

“The D.E.A. lists prescription stimulants like Adderall and Vyvanse (amphetamines) and Ritalin and Focalin (methylphenidates) as Class 2 controlled substances — the same as cocaine and morphine — because they rank among the most addictive substances that have a medical use. (By comparison, the long-abused anti-anxiety drug Valium is in the lower Class 4.) So they carry high legal risks, too, as few teenagers appreciate that merely giving a friend an Adderall or Vyvanse pill is the same as selling it and can be prosecuted as a felony.”

The physical response to these drugs is similar to that of cocaine, but providing them in pill form makes them less addictive because the chemicals reach the brain slowly (and after passing through the liver), producing a minimal high.  Cocaine users prefer to snort, smoke, or inject the drug in order to give the brain a quick and strong jolt.

Schwarz focuses on use by older children as a study aid.  A diagnosis of ADHD is trivial to acquire.  Anyone with prescription authority can conclude that an individual would be helped by the drugs; no special training is required.  Many students acquire the drugs by feigning ADHD symptoms to obtain a prescription.  They can then use the pills for their own purposes or sell the unneeded ones to others.  Not surprisingly, many of the students taking the drugs in an uncontrolled environment develop a dependency, or move on to other, more dangerous drugs.

For young children, even in a controlled environment, it would not be surprising to discover that a few develop a dependency after years of consumption.  One of the difficulties with drugs such as cocaine and the amphetamines (methamphetamine is also in this class) is that growth and familiarity require ever greater doses to obtain the same effect.  Even the best-intentioned administration of the drugs can go awry.

A list of issues associated with the common ADHD drugs can be found at this site provided by the National Institutes of Health (NIH).   Plenty of warnings are provided about risks involved in taking the drugs, particularly if any cardiovascular irregularity exists in the patient or there is a family history of cardiovascular problems.  A long list of “serious” potential side effects is also provided.  Recall that the first study discussed referred to drug abuse and antisocial behaviors as risk factors for early death.  Here are a few serious observed side effects (in addition to addiction) provided by the NIH that fall in the antisocial behavior category:

“motor tics or verbal tics

believing things that are not true

feeling unusually suspicious of others

hallucinating (seeing things or hearing voices that do not exist)

mania (frenzied or abnormally excited mood)

aggressive or hostile behavior”

Isn’t it interesting that some behaviors that might lead to a diagnosis of ADHD can also be caused by the drugs used to control ADHD?

The drugs do provide behavior modification in most cases, pleasing parents and teachers, but dosage must be increased over time.  The drugs also cause withdrawal symptoms if usage stops too rapidly, convincing parents that the drugs were beneficial and should be resumed.  This must be a terribly dangerous situation for untrained physicians who are merely trying to help distressed parents.  The tendency is to prescribe ever larger dosages until “beneficial” behavior modification occurs.  If side effects appear, there are other pills to control them.  For modern psychiatry to work there must be a pill for each side effect.  The drug companies have been quite happy to provide them.

But there is an even more serious issue with ADHD medications than side effects.  There are data suggesting they have no long-term effect at all on behavior.  L. Alan Sroufe has raised this issue in a New York Times article, Ritalin Gone Wrong.

“Attention-deficit drugs increase concentration in the short term, which is why they work so well for college students cramming for exams. But when given to children over long periods of time, they neither improve school achievement nor reduce behavior problems. The drugs can also have serious side effects, including stunting growth.”

“Sadly, few physicians and parents seem to be aware of what we have been learning about the lack of effectiveness of these drugs.”

Sroufe provides data to support the contention that they are ultimately ineffective.

“To date, no study has found any long-term benefit of attention-deficit medication on academic performance, peer relationships or behavior problems, the very things we would most want to improve. Until recently, most studies of these drugs had not been properly randomized, and some of them had other methodological flaws.”

“But in 2009, findings were published from a well-controlled study that had been going on for more than a decade, and the results were very clear. The study randomly assigned almost 600 children with attention problems to four treatment conditions. Some received medication alone, some cognitive-behavior therapy alone, some medication plus therapy, and some were in a community-care control group that received no systematic treatment. At first this study suggested that medication, or medication plus therapy, produced the best results. However, after three years, these effects had faded, and by eight years there was no evidence that medication produced any academic or behavioral benefits.”

This discussion began with data indicating that a diagnosis of ADHD is associated with a higher probability of early death.  Psychiatrists will argue that this proves one must be even more vigilant in rooting out behavioral problems in children (and adults) and treating them—predominantly with medications.  One can also draw the opposite conclusion and demand that teachers, physicians, and psychiatrists be extremely careful about prescribing dangerous medications.  The data clearly support the latter stance.

