Psychological Obstacles to Grief and the Grieving Process

We tend to talk about grief and the grieving process as if it were a separate category of emotional experience altogether, different somehow from all the others.  Because it means confronting death, mortality and ultimate loss, the grieving process does have a uniquely large and pervasive impact on our psyches; from another point of view, however, grief is but one of the emotions, and when it becomes unbearable, we will ward it off in our characteristic ways.  In other words, when people go through the grieving process, you will often see them resort to their habitual defenses.  As discussed in my post on the tenacity of defenses, our modes of warding off pain become entrenched as we grow up; even when we've evolved and developed new ways of coping on a day-to-day basis, a feeling as difficult to bear as grief may send us back into the familiar rut of our oldest defenses.

We had to put our dog Maddy to sleep yesterday.  While it's not quite the same as losing a human member of our family, she had been a beloved part of our lives for the last ten years.  Her death has made me notice how each of us is responding to our grief in ways that reflect our particular defenses -- and in ways that aren't so unusual, I believe.  It has also stirred a lot of memories from 20 years ago when, within the space of a few months, my dear friend Tom Grant died of kidney cancer at the age of 45 and my mother-in-law Eva, then in her late 50s, succumbed to metastatic breast cancer.  These untimely deaths -- Tom and his wife had two small children, and my mother-in-law was fit, dynamic and vitally alive -- have been among the major losses in my life, and on occasions such as Maddy's death, the feelings I had back then are still very much present to me.

Splitting and Projection

For the last year or so, Maddy has had a laryngeal problem common in older Labrador Retrievers; she was scheduled for corrective surgery on Monday.  In the four or five days leading up to the surgery, her condition had deteriorated badly and she basically stopped eating.  We thought it might have to do with her medications, but when we took her to the surgeon Monday morning, he immediately said, "This has nothing to do with her larynx problem."  Her lungs were so full of fluid he couldn't even read her X-ray.  He believed she had some fatal condition and presented euthanasia as an option, although he told us that congestive heart disease, a treatable condition, might also be to blame.

Maddy's loss of appetite had filled me with dread.  Both my friend Tom and my mother-in-law lost their appetites as their conditions worsened; I felt sure Maddy had some form of cancer and I wanted to have her put to sleep that day -- to prevent further needless suffering, I told myself.  The rest of the family felt otherwise and wanted to make sure of her condition before taking such a step.  I felt very rational and level-headed but kept my opinions to myself.  This was my defense:  in order to evade the pain of loss, I split it off and projected it into the rest of my family for them to carry; I became a bit detached and efficient, as I am wont to do at such moments.  I'm good in crisis situations; my defenses help me put emotion aside and do what needs to be done, though in this case, they also stopped me from feeling my own grief.

Continue "Psychological Obstacles to Grief and the Grieving Process"

On Everyday Narcissism

In several earlier posts, I've talked about different aspects of narcissism.  Using the film The Social Network as a case study, I discussed characteristics of narcissistic personality disorder displayed by the fictional Mark Zuckerberg; I've described narcissism as the primary defense against shame and used public rants by Charlie Sheen as a way to illustrate it; I've talked about the difference between narcissism and authentic self-esteem; and finally, I've complained about narcissistic behavior and the lost art of conversation -- the way people at social gatherings so often seem interested in talking only about themselves.  There's yet another aspect of narcissism I'd like to discuss, one most of us wouldn't view as pathological.  Let's call it everyday narcissism.

First, a little bit of history.  The term narcissism was coined by Paul Nacke in 1899 to describe someone who treated his or her own body as if it were a sexual object, in lieu of having sexual desires for other people.  Freud took up the term and eventually made a distinction between primary (normal) and secondary (pathological) narcissism.  Primary narcissism is the universal desire to protect ourselves from danger and to preserve our own lives; it has a sexual component that doesn't preclude desire for others.  People who suffer from secondary narcissism, on the other hand, "display two fundamental characteristics:  megalomania and diversion of their interest from the external world -- from people and things" (Freud, On Narcissism, p. 74).

Since then, the concept of narcissism has expanded beyond Freud's original view, enlarging on the element of megalomania and giving only secondary emphasis to the element of sexual desire.  Merriam-Webster's primary definition for narcissism is "egoism, ego-centrism," relegating "love of or sexual desire for one's own body" to the secondary meaning.  When most people use the word today to describe someone else, they usually mean he or she has megalomaniacal tendencies:  "feelings of personal omnipotence or grandeur" (Merriam-Webster again).  Our use of the word often implies personal vanity, which suggests a sexual desire for one's own body, but that's not the primary meaning for most of us.  In general, what is written today about narcissism focuses on having a grandiose self-image and an excessive need for admiration to sustain it.

Continue "On Everyday Narcissism"

The Rise of Bipolar Disorder Symptoms and Treatment

If you've been around as long as I have, you may remember a time when the diagnostic label "Bipolar Disorder" was relatively unknown.  Although that term has been around since the 1950s, it came into common usage only in 1980, when the APA released the third edition of its Diagnostic and Statistical Manual of Mental Disorders (DSM-III); before then, mental health professionals discussed and wrote about Melancholia or Manic-Depressive Illness, a condition considered quite rare.  As you may know, that revision to the prior version of the DSM sought to eliminate its psychoanalytic/psychodynamic bias and replace it with a supposedly more "scientific" approach, thereby embedding psychiatry within the medical model of treatment.

According to the 1969 book Manic Depressive Illness by George Winokur of Washington University, Bipolar Disorder used to be fairly rare.  In 1955, only one person in every 13,000 was hospitalized for it.  Today, by contrast, according to the National Institute of Mental Health, Bipolar Disorder symptoms affect an astounding one in every forty adults in our country!  It's also worth noting that, before psychiatric medications were introduced, the long-term outcome for those patients was fairly good.  Only 50% of the people hospitalized for a first attack of mania ever suffered a second one.  Studies have found that, in the pre-drug period, 75-80% of hospitalized patients recovered within a year and only half of them had even one more attack within the next 20 years.  Today, Bipolar Disorder is a chronic illness, with patients spending years and years on psychiatric medications.  In other words, Bipolar Disorder was comparatively rare before 1980 and the prognosis for hospitalized patients was fairly good; today it's 325 times more common than it used to be and has become a lifelong illness.
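For anyone wondering where the 325 figure comes from, it's simply the ratio of the two rates cited above, taken at face value (one person in every 13,000 hospitalized in 1955 versus one adult in every forty affected today):

\[ \frac{1/40}{1/13{,}000} = \frac{13{,}000}{40} = 325 \]

Put differently, a condition that touched roughly 0.008 percent of the population in 1955 is now said to affect 2.5 percent of it.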

How are we to account for this change, from a rare and acute illness to one that is pervasive and chronic?

Continue "The Rise of Bipolar Disorder Symptoms and Treatment"

Do You Want to Be a ‘Good’ Person?

Many years ago, I was discussing religious beliefs with my friend Phil, a thoughtful man who believes in the God of his faith (Judaism).  When I told him that I was agnostic, that I didn't really know what to believe about the existence of a supreme being, he asked how I could be a moral person.  I insisted that my lack of belief in the Judeo-Christian God didn't mean I had no moral values, but he continued to wonder what force those morals could have without religion to back them up.

It's an interesting question.  Phil's position implies that morality has to come from the outside, from a greater authority or system of values to which we submit; without such a source of authority, he believes we would behave in an amoral fashion.  And yet I don't behave that way.  By most people's standards, I am a "good" person:  I'm a law-abiding citizen, an involved father, a considerate friend and a psychotherapist who has helped many people in his career; I care about the welfare of my friends and family and do what I can to help them; I remember birthdays and write thank-you notes.  In my financial dealings, I never take advantage of people.  If no God or religion is urging me to behave in these ways, then why do I do so?

You could argue that I'm nonetheless subject to authority in the form of values internalized from my parents and society at large.  This has to be true to a large degree.  It's part of what Freud meant when he developed his theory of the superego.  That internal agency embodies attitudes and values we absorb from our parents, teachers and the people we've chosen as role models.  The superego is a kind of internal God, enforcing standards and punishing us with guilt when we fail to meet expectations.  Fear of internal punishment and guilt may, in part, keep me in line.

Beyond that, I believe two other factors lead to "moral" behavior:  empathy and enlightened self-interest.  First of all, I believe that the capacity to feel what others are feeling, to put yourself in their shoes and emotionally identify with them, is the basis of much behavior sanctioned by moral codes.  For me, and I suspect for a great many people, it's more than a capacity; it's an inclination, something that happens automatically, whether or not I intend to empathize.   Since humans are a social species and function best in groups rather than in isolation, it makes sense that we can empathize:  it improves communication and promotes social cohesion.  To be "moral" in this light is to behave in ways that benefit the family/group/tribe/species as a whole, rather than simply gratifying individual desires without regard to the feelings or needs of anyone else.

I confess that I feel a great deal of empathy only for those who are close to me and the strength of my empathic response diminishes with distance.  When I'm listening to a client in my office, sobbing over a major loss, my body will literally ache in sympathy.   When I see videos of the current suffering in Japan, I feel something, but it's faint compared to what I feel for the suffering of my loved ones.  In other words, empathy (for me) has its limits for promoting moral behavior.   That's where enlightened self-interest comes in.

Continue "Do You Want to Be a ‘Good’ Person?"

“Psychiatric Meds Are Like Insulin for Diabetes” (Big Lie #3)

In Part One of my discussion of Robert Whitaker's Anatomy of an Epidemic, we learned that there is no  scientific basis for the theory that mental illness results from an imbalance in brain chemistry; Part Two showed how, in the main, patients who were never given psychiatric meds have far better outcomes than people exposed early on to such drugs.  In this third and final part, I'll discuss what these medications actually do to your brain chemistry and why they lead to a worse prognosis in the long run.

In order to understand these processes, we need a bit of basic neurology.  I'll try to keep it simple.  As you probably know, the brain is made up of billions of neurons; each one of these neurons is connected to many other neurons.  Messages travel along the neurons, to and from the brain, moving from one neuron to another across a tiny gap called a neural synapse or the synaptic cleft.  One neuron releases a chemical messenger  called a neurotransmitter into the synapse; the molecule then travels across that tiny gap and bonds to the next neuron on the other side, thereby delivering its message.  The message subsequently continues along this second neuron until the next synapse, and so on.  Here's a diagram of a typical neural synapse; you can ignore most of the labels:

[Figure:  diagram of a typical neural synapse, showing neurotransmitter molecules crossing the synaptic cleft from the sending (yellow) neuron to receptors (red ovals) on the receiving (green) neuron]

So the message travels down the yellow neuron, releasing neurotransmitters into the synaptic cleft.  On the other side, the green neuron has receptors (the red ovals) where the neurotransmitter bonds, thereby sending  a message which then travels down the green neuron to the next synapse, and so on.  After the message has been sent, the neurotransmitter is released from the receptor back into the synapse where one of two things occurs:  either another chemical agent, an enzyme, goes to work on the neurotransmitter and dissolves it, or the (yellow) neuron re-absorbs it for later use.
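If it helps to see that cycle laid out step by step, here is a minimal toy sketch in Python -- purely illustrative, with invented numbers, not an actual neurological model -- of the sequence just described:  release into the synapse, bonding to a receptor on the receiving neuron, and then either enzymatic breakdown or reuptake by the sending neuron.

import random

# Toy illustration of the cycle described above (all numbers invented):
# 1. the sending neuron releases neurotransmitter molecules into the synapse
# 2. each molecule bonds to a receptor on the receiving neuron, delivering its message
# 3. the molecule then detaches back into the synapse, where it is either
#    dissolved by an enzyme or reabsorbed by the sending neuron for later use

REUPTAKE_PROBABILITY = 0.8  # assumed value, chosen only for this example

def transmit(molecules_released: int) -> tuple[int, int]:
    """Follow each released molecule through one signaling cycle."""
    reabsorbed = 0
    dissolved = 0
    for _ in range(molecules_released):
        # message delivered: the molecule crosses the cleft, bonds to a
        # receptor, then detaches back into the synaptic cleft
        if random.random() < REUPTAKE_PROBABILITY:
            reabsorbed += 1   # taken back up by the sending neuron
        else:
            dissolved += 1    # broken down by an enzyme
    return reabsorbed, dissolved

reabsorbed, dissolved = transmit(1000)
print(f"reabsorbed for later use: {reabsorbed}, dissolved by an enzyme: {dissolved}")

The 0.8 reuptake figure is arbitrary; the only point is that once its message has been delivered, each neurotransmitter molecule meets one of those two fates.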

Continue "“Psychiatric Meds Are Like Insulin for Diabetes” (Big Lie #3)"