Following are some of the abstracts from Dr. Martin E.P. Seligman’s scientific articles. The numbering of these abstracts corresponds to the numbering on Dr. Seligman’s Curriculum Vitae.
To investigate the effect of punishment during extinction of an avoidance response, 16 groups of rats were trained to avoid shock in a runway to a criterion of 8 consecutive avoidances. During subsequent extinction, 5 intensities and 3 durations of punishment (electric shock) were administered as Ss entered the goal box. Resistance to extinction and running speed decreased systematically as a function of duration and intensity of punishment. No evidence of facilitation was found at any intensity or duration of punishment.
Rats received conditioning involving 2 CSs and electric shock. Two groups received the sequence: S1, S2, shock on all trials. For these groups, S1 was an informative predictor of shock, while S2 was redundant. Two other groups received, in addition, S1 alone on some trials. For these groups, S1 was unreliable and S2 was reliable. A control group received the CSs and shock unpaired. When Ss subsequently received these CSs contingent upon bar-pressing for food, all experimental CSs were stronger suppressors than the control CSs. The informative CS was stronger than the redundant CS and the unreliable CS. Implications of these findings for the generalizability of the Egger and Miller information hypothesis are discussed.
Exposure of dogs to inescapable shocks under a variety of conditions reliably interfered with subsequent instrumental escape/avoidance responding in a new situation. The use of a higher level of shock during instrumental avoidance training did not attenuate interference; this was taken as evidence against an explanation based upon adaptation to shock. Ss curarized during their exposure to inescapable shocks also showed proactive interference with escape/avoidance responding, indicating that interference is not due to acquisition, during the period of exposure to inescapable shocks, of inappropriate, competing instrumental responses. Magnitude of interference was found to dissipate rapidly in time, leaving an apparently normal S after 48 hours.
Dogs which had 1st learned to panel-press in a harness in order to escape shock subsequently showed normal acquisition of escape/avoidance behavior in a shuttlebox. In contrast, yoked, inescapable shock in the harness produced profound interference with subsequent escape responding in the shuttlebox. Initial experience with escape in the shuttlebox led to enhanced panel pressing during an escapable shock in the harness and prevented interference with later responding in the shuttlebox. Inescapable shock in the harness and failure to escape in the shuttlebox produced interference with escape responding after a 7-day rest. These results were interpreted as supporting a learned “helplessness” explanation of interference with escape responding: Ss failed to escape shock in the shuttlebox following inescapable shock in the harness because they learned that shock termination was independent of responding.
Dogs given inescapable shock in a Pavlovian harness later seem to “give up” and passively accept traumatic shock in shuttlebox escape/avoidance training. A theoretical analysis of the phenomenon was presented. As predicted by this analysis, the failure to escape was alleviated by repeatedly compelling the dog to make the response which terminated shock. This maladaptive passive behavior in the face of trauma may be related to maladaptive passive behavior in humans. The importance of instrumental control over aversive events in the cause, prevention, and treatment of such behaviors was discussed.
Rats which had learned to bar-press for food received CSs paired with electric shocks. For 1 group CSs and shocks were randomly interspersed, then new CSs predicted shocks; these Ss stopped bar pressing completely and formed extensive stomach ulcers. The group for which all shocks were predicted by CSs showed only transitory disruption of bar-pressing and formed no ulcers. Experience with randomly interspersed CS and shock retarded acquisition of a CER when a new CS predicted shock. Implications for (a) a safety-signal explanation of the disruptive effects of unpredictable shock, (b) the learned helplessness hypothesis, and (c) the appropriateness of the random control group in conditioning are discussed.
A common view of control group methodology holds that operational grounds alone can determine appropriate control procedures. It is suggested that not only operational considerations, but also the effects that a control manipulation has, as well as theoretical considerations, are relevant to determining appropriate control groups. A dispute about control procedures in Pavlovian conditioning is discussed in detail to illustrate the point. A control group in which the conditioned stimulus and the unconditioned stimulus occur independently of each other has been suggested on operational grounds as the only adequate control for conditioning. This suggestion is criticized on the basis of findings which show that this “control” procedure has powerful effects of its own which impair its usefulness as a control. An analogous example dealing with the risky-shift phenomenon also is examined. The implications of these examples for the logic of control group inferences are discussed.
That all events are equally associable and obey common laws is a central assumption of general process learning theory. A continuum of preparedness is defined which holds that organisms are prepared to associate certain events, unprepared for some, and contraprepared for others. A review of data from the traditional learning paradigms shows that the assumption of equivalent associability is false: in classical conditioning, rats are prepared to associate tastes with illness even over very long delays of reinforcement, but are contraprepared to associate tastes with footshock. In instrumental training, pigeons acquire key pecking in the absence of a contingency between pecking and grain (prepared), while cats, on the other hand, have trouble learning to lick themselves to escape, and dogs do not yawn for food (contraprepared). In discrimination, dogs are contraprepared to learn that different locations of discriminative stimuli control go-no go responding, and to learn that different qualities control directional responding. In avoidance, responses from the natural defensive repertoire are prepared for avoiding shock, while those from the appetitive repertoire are contraprepared. Language acquisition and the functional autonomy of motives are also viewed using the preparedness continuum. Finally, it is speculated that the laws of learning themselves may vary with the preparedness of the organism for the association and that different physiological and cognitive mechanisms may covary with the dimension.
Dogs who receive repeated, spaced exposure to inescapable electric shock in a Pavlovian hammock fail to escape shock in a shuttlebox one week later, while one session of inescapable shock produces only transient interference. Cage-raised beagles are more susceptible to interference produced by inescapable shock than are mongrels of unknown history. These results are compatible with learned helplessness and contradict the hypothesis that failure to escape shock is produced by transient stress.
The Pavlovian conditioning of drinking in rats was demonstrated and shown to be under stimulus control. Distinctive conditioned stimuli previously paired with injections of thirst-inducing hypertonic saline-procaine enhanced drinking over stimuli paired with no injections. Extinction, reconditioning, and reextinction also were demonstrated.
When a compound CS of a white box and a 1-hour water deprivation are paired with the US of thirst caused by NaCl-procaine injections, rats increase drinking in the presence of the CS. Such conditioned drinking is under stimulus control and shows no sign of extinction across approximately 55 sessions of unreinforced exposure to the CS. Preventing drinking in the presence of the CS produces a small but significant reduction of conditioned drinking. Drinking conditioned without the water deprivation element of the CS rapidly extinguishes. Rats seem “prepared” to associate mild water deprivation with strong thirst, and this nonarbitrary association may not obey the same “laws” of extinction as associations between arbitrary CSs and USs.
A variable number of unpredictable electric shocks presented to rats bar-pressing for food produced substantial nontransient suppression across 70 sessions. With a fixed number of otherwise predictable shocks in each session, the rats recovered by pressing after the last shock, using its occurrence as a safety signal. When signals predicted a variable number of shocks the rats bar-pressed in the absence of the CS and not in its presence. Pressing recovered with milder predictable shock and more slowly with mild unpredictable shock. Inhibition of delay was found in predictable shock groups. Fear, measured by suppression, correlated with gastrointestinal ulceration (r = .74). These findings confirm the safety-signal explanation of the effects of unpredictable shock.
Conditioned drinking in the rat was demonstrated using three techniques. Increasing concentrations of subcutaneous NaCl injections increased unconditioned drinking (UR) systematically but did not increase conditioned drinking (CR); increasing concentrations of procaine-HCl injections systematically increased both unconditioned and conditioned drinking, and dissociation of UR and CR in classical conditioning was therefore demonstrated. Conditioned drinking was also produced by procaine and NaCl-procaine delivered through a chronically implanted perforated tube under the back. In addition, conditioning of drinking was produced using injections of angiotensin to the hypothalamus. Unlike procaine-conditioned drinking, which does not extinguish, angiotensin-conditioned drinking extinguished rapidly. Procaine may act like a poison, and the conditioned drinking it produces may serve to avoid illness, while conditioning produced by angiotensin may be more like the conditioning of natural thirst.
Some inadequacies of the classical conditioning analysis of phobias are discussed: phobias are highly resistant to extinction, whereas laboratory fear conditioning, unlike avoidance conditioning, extinguishes rapidly; phobias comprise a nonarbitrary and limited set of objects, whereas fear conditioning is thought to occur to an unlimited range of conditioned stimuli. Furthermore, phobias, unlike laboratory fear conditioning, are often acquired in one trial and seem quite resistant to change by “cognitive” means. An analysis of phobias using a more contemporary model of fear conditioning is proposed. In this view, phobias are seen as instances of highly “prepared” learning (Seligman, 1970). Such prepared learning is selective, highly resistant to extinction, probably noncognitive, and can be acquired in one trial. A reconstruction of the notion of symbolism is suggested.
Experimental psychologists interested in learning have traditionally studied the behavior of animals and men faced with the rewards and punishments that the subject could control. So, in a typical instrumental learning experiment, the subject can either make some response or refrain from making it and thereby influence the events around him. Nature, however, is not always so benign in its arrangement of the contingencies. Not only do we face events that we can control by our actions, but we also face many events about which we can do nothing at all. Such uncontrollable events can significantly debilitate organisms: they produce passivity in the face of trauma, inability to learn that responding is effective, and emotional stress in animals, and possibly depression in man. This review is concerned with the behavioral and psychological impact of uncontrollable traumatic events.
Conditioned drinking was produced in the rat by conditioned stimuli paired with hypertonic procaine, isotonic procaine, and hypertonic saline, in that order of effectiveness. When these solutions were paired with the taste of saccharin, they produced taste aversions in the same order of effectiveness. Lithium chloride, a well-known poison, produced taste aversions but, in contrast, conditioned decreases in drinking. Like taste aversions which develop after one trial, conditioned drinking also developed in one trial. So procaine and hypertonic saline may be poisons. The malaise they produce is relieved by drinking, can be classically conditioned, and is indexed by conditioned drinking.
Learned helplessness, the interference with instrumental responding following an inescapable aversive event, has been found in animals and men. This study tested for the generality of the debilitation produced by uncontrollable events across tasks and motivational systems. Four experiments with college students were simultaneously conducted: (a) pretreatment with inescapable, escapable, or control aversive tone followed by shuttlebox escape testing; (b) pretreatment with insoluble, soluble, or control discrimination problems followed by anagram solution testing; (c) pretreatment with inescapable, escapable, or control aversive tone followed by anagram solution testing; (d) pretreatment with insoluble, soluble, or control discrimination problems followed by shuttlebox escape testing. Learned helplessness was found in all four experiments: Both insolubility and inescapability produced failure to escape and failure to solve anagrams. We suggest that inescapability and insolubility both engendered expectancies that responding is independent of reinforcement. The generality of this process suggests that learned helplessness may be an induced “trait.”
Similarity of impairment in naturally occurring depression and laboratory-induced learned helplessness was demonstrated in college students. Three groups each of depressed and nondepressed students were exposed to escapable, inescapable, or no noise. Then they were tested on a series of twenty patterned anagrams. Depressed-no noise subjects were much poorer at solving individual anagrams and seeing the pattern than nondepressed-no noise subjects. Inescapable noise produced parallel deficits in nondepressed subjects relative to escapable or no noise, but inescapable noise did not increase impairment in depressed subjects. These findings support the learned helplessness model of depression, which claims that a belief in independence between responding and reinforcement is central to the etiology, symptoms, and cure of reactive depression.
Blocking the robust conditioned drinking response following conditioning eliminates the isotonic procaine conditioned response (CR) but not the hypertonic procaine CR. Rats allowed to drink in the box during conditioning form a larger CR than the rats merely injected with hypertonic or isotonic procaine who are prevented from drinking in the box: response blocking during extinction eliminates the latter CR. The CR to hypertonic procaine can also be eliminated if no experience of drinking water in the box occurs either during baseline or during conditioning. Blocking the drinking response, like blocking shock avoidance, markedly reduces its high resistance to extinction. This suggests that rats overdrink to avoid anticipated illness, and fail to extinguish in part because they are never exposed to the fact that no illness will occur even if no drinking occurs.
This article reports the transfer of learned helplessness from one aversive motivator, shock, to another, frustration. In Experiment 1, animals were trained to approach food in a runway and concomitantly exposed to either escapable, inescapable, or no shock in a different situation. Extinction was conducted in the runway, and subsequently the animals were tested for hurdle-jump escape from the frustrating goal box. Inescapably shocked rats failed to learn to hurdle jump, whereas escapably or nonshocked animals learned the frustration escape response. Experiment 2 replicated the basic findings of Experiment 1 and showed transfer of learned helplessness from shock to frustration when no running response had been first acquired in the runway.
Four experiments attempted to produce behavior in the rat parallel to the behavior characteristic of learned helplessness in the dog. When rats received escapable, inescapable, or no shock and were later tested in jump-up escape, both inescapable and no-shock controls failed to escape. When bar pressing, rather than jumping up, was used as the tested escape response, fixed ratio (FR) 3 was interfered with by inescapable shock, but not lesser ratios. With FR-3, the no-shock control escaped well. Interference with escape was shown to be a function of the inescapability of shock and not shock per se: Rats that were “put through” and learned a prior jump-up escape did not become passive, but their yoked, inescapable partners did. Rats, as well as dogs, fail to escape shock as a function of prior inescapability, exhibiting learned helplessness.
Rats, like dogs, fail to escape following exposure to inescapable shock. This failure to escape does not dissipate in time; rats fail to escape 5 minutes, 1 hour, 4 hours, 24 hours, and 1 week after receiving inescapable shock. Rats that first learned to jump up to escape were not retarded later at bar-pressing to escape following inescapable shock. Failure to escape can be broken up by forcibly exposing the rat to an escape contingency. Therefore, the effects of inescapable shock in the rat parallel learned helplessness effects in the dog.
The learned helplessness model of depression predicts that depressives should tend to perceive reinforcement as response-independent in skill tasks. Depressed-anxious, nondepressed-anxious, and nondepressed-nonanxious college students estimated their chances for success in a skill or a chance task. (Virtually no depressed-nonanxious subjects could be obtained.) Depressed-anxious subjects showed less expectancy change in skill than nondepressed-anxious subjects, while these two groups exhibited similar expectancy change in chance. Nondepressed-anxious and nondepressed-nonanxious subjects did not differ in either skill or chance. The results for a discrimination learning problem were mixed. The groups did not differ in latency to shut off an aversive noise. So, depressed subjects perceptually distort the outcomes of skilled responding as being response-independent, and they may, under certain conditions, show deficits at learning the consequences of responses. These deficits may reflect learned helplessness and are specific to depression.
In 1967, Overmier and Seligman found that dogs exposed to inescapable and unavoidable electric shocks in one situation later failed to learn to escape shock in a different situation where escape was possible. Shortly thereafter, Seligman and Maier (1967) demonstrated that this effect was caused by the uncontrollability of the original shocks. In this article we review the effects of exposing organisms to aversive events which they cannot control, and we review the explanations which have been offered. There seem to be motivational, cognitive, and emotional effects of uncontrollability. (a) Motivation. Dogs that have been exposed to inescapable shocks do not subsequently initiate escape responses in the presence of shock. We review parallel phenomena in cats, fish, rats, and man. Of particular interest is the discussion of learned helplessness in rats and man. Rats are of interest because learned helplessness has been difficult to demonstrate in rats. However, we show that inescapably shocked rats do fail to learn to escape if the escape task is reasonably difficult. With regard to man, we review a variety of studies using inescapable noise and unsolvable problems as agents which produce learned helplessness effects on both instrumental and cognitive tasks. (b) Cognition. We argue that exposure to uncontrollable events interferes with the organism’s tendency to perceive contingent relationships between its behavior and outcomes. Here we review a variety of studies showing such a cognitive set. (c) Emotion. We review a variety of experiments which show that uncontrollable aversive events produce greater emotional disruption than do controllable aversive events. We have proposed an explanation for these effects, which we call the learned helplessness hypothesis.
It argues that when events are uncontrollable the organism learns that its behavior and outcomes are independent, and that this learning produces the motivational, cognitive, and emotional effects of uncontrollability. We describe the learned helplessness hypothesis and research which supports it. Finally, we describe and discuss in detail alternative hypotheses which have been offered as accounts of the learned helplessness effect. One set of hypotheses argues that organisms learn motor responses during exposure to uncontrollable shock that compete with the response required in the test task. Another explanation holds that uncontrollable shock is a severe stressor and depletes a neurochemical necessary for the mediation of movement. We examine the logical structure of these explanations and present a variety of evidence which bears on them directly.
Depressed and nondepressed subjects were given escapable, inescapable, or no noise. Then, their perceptions of reinforcement contingencies in skill and chance tasks were assessed. Depressed-no noise and nondepressed-inescapable noise subjects exhibited smaller decreases in expectancy following failure in skill, but not in chance, than nondepressed-no noise subjects. So, depression and inescapable noise both produced perception of failure in skill as response-independent. Contrary to predictions, neither depression nor inescapable noise had a significant effect on increases in expectancy after success. These results partially support the learned helplessness model of depression which claims that a belief in independence between responding and reinforcement is central to the etiology and symptoms of depression in man.
Vulnerability to sudden death was produced in laboratory rats by manipulating their developmental history. Rats who were reared in isolation died suddenly when placed in a stressful swimming situation. Handling of these singly-housed rats from 25 to 100 days of age potentiated the phenomenon. However, animals who were group housed did not die even when they had been previously handled.
Depressed and nondepressed college students received experience with solvable, unsolvable, or no discrimination problems. When later tested on a series of patterned anagrams, depressed groups performed worse than nondepressed groups, and unsolvable groups performed worse than solvable and control groups. As predicted by the learned helplessness model of depression, nondepressed subjects given unsolvable problems showed anagram deficits parallel to those found in naturally occurring depression. When depressed subjects attributed their failure to the difficulty of the problems rather than to their own incompetence, performance improved strikingly. So, failure in itself is apparently not sufficient to produce helplessness deficits in man, but failure that leads to a decreased belief in personal competence is sufficient.
Therapeutic implications of the learned helplessness model of depression were tested. Nondepressed subjects receiving inescapable noise and depressed/no-noise subjects later showed noise escape deficits in a shuttlebox and perceptions of response-reinforcement independence when compared with nondepressed/no-noise subjects. Experience with solvable discrimination problems reversed the escape deficits and perceptions of response-reinforcement independence associated with both inescapability and depression. The results support the learned helplessness model of depression, which claims (a) that uncontrollable events induce distorted perceptions of response-reinforcement independence in nondepressed people which cause performance deficits parallel to those found in naturally occurring depression, and (b) that experience with controllable events reverses the perceptions of response-reinforcement independence and the performance deficits associated with both helplessness and depression. The experimental design was proposed as a basic way of testing the effectiveness of therapies for depression in the laboratory, which is relatively free of the difficulties that plague outcome research for depression. Differences and similarities were pointed out between subjects in these experiments and patients clinically diagnosed as depressed.
Weiss, Glazer, and Pohorecky (1975) reported that repeated sessions of uncontrollable stressors fail to produce an escape learning deficit in the rat. This finding is unexpected in view of the often reported learned helplessness findings where a deficit in escape learning is found following a single session of uncontrollable shock. The discrepancy between these two sets of results may be due to procedural differences. The present experiment tests whether the usual parameters for producing learned helplessness in the rat produce escape deficits after repeated sessions. The results showed that an escape learning deficit is obtained, as expected, after repeated exposure to inescapable shock. The phenomenon reported by Weiss et al. (1975) appears to be different in kind from learned helplessness.
Inescapable shock given to weanling rats produced large deficits in adult escape behavior. Therefore, helplessness learned as a weanling is retained in later life and interferes with adaptive instrumental responding. Experience with escapable shock while a weanling immunizes the animal against the deficits produced by inescapable shock received as an adult. The implications of these findings for animal models of human depression are discussed.
The weaknesses of the equipotentiality premise are rehearsed and an alternative theory of preparedness, with particular reference to phobias, is outlined. Two clinical cases which appear to be contrary to prediction are described. Although their phobias were unprepared (lack of biological significance, rarity, and probably gradual acquisition), they showed features (high resistance and broad generalization) not predicted by the theory. It is suggested that these features may appear as a result of (1) overlearning, (2) symbolic transformation, or (3) considerable associated psychopathology.
Sixty-nine phobic and 82 obsessional patients, treated at the Maudsley Hospital, were rated for ‘preparedness,’ the evolutionary significance of the content and behavior of the disorder. Reliable ratings (r = 0.78 and 0.90) of the dangerousness of the objects or situations to pre-technological man indicated that the content of the large majority of the phobias and obsessions is judged as evolutionarily significant. Degree of preparedness, however, did not predict outcome of therapy, suddenness of onset of the disorder, severity of impairment, intensiveness of the treatment received, or age of onset. Nor was there any significant relationship between preparedness and certain other variables in the obsessional sample: stimulus generalization, effect on lifestyle, impaired reproductive capacity, and abnormal personality. The implications of these findings for the hypothesis that human phobias and obsessions are prepared, and for the clinical usefulness of the concept of preparedness, are discussed.
The learned helplessness hypothesis is criticized and reformulated. The old hypothesis, when applied to learned helplessness in humans, has two major problems: (a) It does not distinguish between cases in which outcomes are uncontrollable for all people and cases in which they are uncontrollable only for some people (universal vs. personal helplessness), and (b) it does not explain when helplessness is general and when specific, or when chronic and when acute. A reformulation based on a revision of attribution theory is proposed to resolve these inadequacies. According to the reformulation, once people perceive noncontingency, they attribute their helplessness to a cause. This cause can be stable or unstable, global or specific, and internal or external. The attribution chosen influences whether the expectation of future helplessness will be chronic or acute, broad or narrow, and whether helplessness will lower self-esteem or not. The implications of this reformulation of human helplessness for the learned helplessness model of depression are outlined.
Does the learned helplessness model of depression apply to clinically depressed patients and is it specific to depression? Changes in expectancy following success and failure in skill and chance tasks were assessed for depressed nonschizophrenics (unipolar depressives), depressed schizophrenics, nondepressed schizophrenics, and normal controls. The unipolar depressives showed smaller changes in expectancy of future success after failure in the skill task than did the normal controls and both schizophrenic groups. Depressed schizophrenics did not show smaller expectancy changes than nondepressed schizophrenics. The learned helplessness model has been tested primarily in populations with subclinical depression; the present results provide partial support for learned helplessness as a model of one type of severe clinical depression and suggest that learned helplessness is not a general feature of psychopathology.
Although there have been many studies of the interference effect produced by exposure to inescapable shock, little is known about the role of shock intensity. This experiment factorially manipulated four levels of shock intensity during exposure to inescapable shock and three levels of intensity during the test for interference. Interference occurred at each training shock intensity when training and test shocks were similar. Interference was not obtained when training intensity was low or medium and test intensity was high. These findings pose problems for learned helplessness, learned inactivity, competing motor response, and catecholamine depletion hypotheses of the interference effect in the rat.
Those criticisms of the learned helplessness model of depression not anticipated by Abramson, Seligman, and Teasdale are addressed. I suggest that learned helplessness models those depressions that are caused by cognition of response-outcome independence, show passivity and negative cognitive set, and are specifically responsive to anti-helplessness therapies. This subclass may cut across traditional ways of subdividing depressions, both mild and clinical. The necessity of grouping patients not only by depression inventory but by diagnostic category in testing the helplessness model in clinical populations is affirmed. I suggest that Costello’s claim that helplessness experiments do not support the model systematically ignores the supporting evidence. The relevance of skill expectancy shift data to the reformulated learned helplessness model of depression is questioned. Finally, I suggest that mild depression should not be considered merely an “analogue” to some other, more “real,” disorder but is itself a disorder of major importance.
Depressed college students, compared to nondepressed college students, attributed bad outcomes to internal, stable, and global causes, as measured by an attributional style scale. This attributional style was predicted by the reformulated helplessness model of depression. In addition, relative to nondepressed students, depressed students attributed good outcomes to external, unstable causes.
Inescapable shocks of short (.5 sec) and long (5 sec) duration interfered with subsequent shock escape in rats. In addition, there were no differences between groups that received the pretreatment shocks and testing in the same or different apparatuses. These results are consistent with the learned helplessness account but conflict with recent learned inactivity accounts for the interference effects produced by inescapable shocks.
Therapeutic implications of the learned helplessness model of depression were tested in a clinical population. In pretreatment, two groups of nondepressed medical patients waited, two groups of nondepressed medical patients received helplessness training, and two groups of psychiatric patients (diagnosed as Primary Affective Disorder) waited. In treatment, subjects received either Velten’s mood-elation procedure as “therapy” or Velten’s mood-neutral procedure as placebo. Performance on a cognitive task and on a mood task was assessed. Three separate administrations of the Depression Adjective Check List indicated that helplessness training induced depressive affect, and the mood elation procedure decreased depressive affect for both helpless and depressed patients. The mood neutral procedure and the waiting periods were associated with no affective changes. On the cognitive (anagrams) task, performance deficits were associated with helplessness and depression but were reversed by mood elation. Results are interpreted as consistent with the learned helplessness model of depression.
Are internal attributions for bad events always associated with depression? The depressive symptoms of 87 female undergraduates correlated with blame directed at their own characters. In contrast, blame directed at their own behaviors correlated with a lack of depressive symptoms. Behaviorally attributed bad events were seen as more controllable, and their causes as less stable and less global, than were characterologically attributed bad events and their causes. Characterological blame increased with more negative life events during the last year, implying that individuals who blame their character may arrive at this attributional style by a covariation analysis. Finally, characterological blame did not precede the onset of depressive symptoms 6 or 12 weeks later. Thus, characterological blame may be a strong concomitant of depression, but not a cause.
Rats experienced inescapable, escapable, or no electric shock one day after being implanted with a Walker 256 tumor preparation. Only 27 percent of the rats receiving inescapable shock rejected the tumor, whereas 63 percent of the rats receiving escapable shock and 54 percent of the rats receiving no shock rejected the tumor. These results imply that lack of control over stressors reduces tumor rejection and decreases survival.
Of current interest are the causal attributions offered by depressives for the good and bad events in their lives. One important attributional account of depression is the reformulated learned helplessness model, which proposes that depressive symptoms are associated with an attributional style in which uncontrollable bad events are attributed to internal (versus external), stable (versus unstable), and global (versus specific) causes. We describe the Attributional Style Questionnaire, which measures individual differences in the use of these attributional dimensions. We report means, reliabilities, intercorrelations, and test-retest stabilities for a sample of 130 undergraduates. Evidence for the questionnaire’s validity is discussed. The Attributional Style Questionnaire promises to be a reliable and valid instrument.
Depressed unipolar male patients (n=30) were more likely to attribute bad outcomes to internal, stable and global causes than were nondepressed schizophrenics (n=15) and nondepressed medical patients (n=61). Also, the depressed patients were more evenhanded in their attributions for good versus bad events than the other patients. These results support the existence, in clinical depression, of the depressive attributional style postulated by the reformulated learned helplessness model and indicate that it is not a general characteristic of psychopathology.
A core prediction of the reformulated model of learned helplessness and depression (Abramson, Seligman, & Teasdale, 1978) is that when confronted with the same negative life event, people who display a generalized tendency to attribute negative outcomes to internal, stable, or global factors should be more likely to experience a depressive mood reaction than people who typically attribute negative outcomes to external, unstable, or specific factors. We tested this prediction with a prospective design in a naturalistic setting by determining whether the content of college students’ attributional styles at one point in time predicted the severity of their depressive mood response to receiving a low grade on a midterm exam at a subsequent point in time. Consistent with the prediction, students with an internal or global attributional style for negative outcomes at Time 1 experienced a depressive mood response when confronted with a subsequent low midterm grade, whereas students with an external or specific attributional style for negative outcomes were invulnerable to this depressive mood response. In contrast to the results for the internality and globality dimensions, students’ scores along the stability attribution dimension were not correlated with the severity of their depressive mood response to the low midterm grade. In the absence of a negative life event (i.e., receipt of a high midterm grade), students’ generalized tendencies to make internal or global attributions for negative outcomes at Time 1 were not significantly correlated with their subsequent changes in depressive mood, although there was a nonsignificant positive correlation between severity of depressive mood response and the tendency to make global attributions for negative outcomes at Time 1.
The attributional reformulation of learned helplessness holds that depressive symptoms are preceded by internal, stable, and global attributions for bad events. We tested this prediction during the psychotherapy sessions of a patient who showed strong mood swings. His attributions, scored by a new method that analyzes freely occurring causal statements for internality, stability, and globality, predicted mood swings as measured by the symptom-context method. These data suggest that attributions can be assessed with predictive validity using verbatim transcripts of verbal material.
Group learned helplessness is demonstrated in Experiment 1. Groups of 2 tried to turn off noise by their joint actions. In the solvable group (s), noise offset was contingent on their sequence of button pushing. In the yoked, unsolvable group (u), noise offset was independent of all sequences of button pushes they produced. In a practice group (o), subjects practiced coordinated sequences of button pushes with their partners, but heard no noise. Later, all 3 groups were tested in pairs in a shuttlebox which required coordinated joint responding to turn off noise. The unsolvable group escaped more poorly than the other 2 groups, paralleling helplessness effects with individuals. Experiments 2 and 3 found no transfer from individual helplessness training to group testing and no transfer from group helplessness training to individual testing. We suggest that the same mechanism, the expectation of response ineffectiveness, may mediate both individual and group learned helplessness.
Depressive symptoms among 40 fourth- and fifth-grade students, as measured by the Children’s Depression Inventory, correlated highly with impaired problem solving at block designs (r = .64) and anagrams (r = .67). Similar impairments have been found among depressed adults, suggesting that depression among children may be continuous with depression among adults.
The learned helplessness phenomenon is proposed as a model for the emotional numbing and maladaptive passivity sometimes following victimization. Victims may learn during the victimization episode that responding is futile. This learning is represented as an expectation of future response-outcome independence (helplessness). Causal interpretations of the episode affect the chronicity and generality of deficits resulting from this expectation, as well as the involvement of self-esteem loss. We discuss several problems in applying the helplessness model to victimization, but we conclude that the theory may be useful in explaining why some victims become numb and passive.
In a series of five experiments, we investigated the bidirectional effects of prior experience with either control or lack of control over shock on subsequent shock-motivated activity and escape learning. Rats were tested with inescapable shock rather than the escapable shock used in typical helplessness experiments. Naïve rats initially shuttled frequently during shock but decreased activity as testing continued. Pretraining with inescapable shock reduced shuttle responding throughout testing. Unexpectedly, rats which first learned to lever press to escape shock continued unabated shuttling through 200 trials of 10 second duration inescapable shocks (Experiment 1). These bidirectional effects were replicated using a shuttle escape response for pretreatment and lever pressing as the test response. During two uninterrupted 1000 second duration inescapable shocks (Experiment 2), escape rats continued to lever press throughout the 2000 sec of shock. In the third experiment, escapable shock facilitated and inescapable shock hindered later learning when the escape contingency was degraded by a 3 second delay of shock termination. The fourth and fifth experiments demonstrated that (1) this associative facilitation effect is not simply due to an increase in active responding by escape animals (Experiment 4), and (2) no associative facilitation is observed if the contingency is not initially degraded by a 3 second delay (Experiment 5). Taken together, these results are the first demonstration of bidirectional effects of control on aversively motivated behavior in animals. In addition to typical helplessness effects, a “mastery” phenomenon is observed. This mastery induced by experience with escape learning is characterized by (1) a motivational effect: persistent general active behavior in the face of inescapable shock, and (2) an associative effect: facilitation in learning degraded response-shock contingencies. These are the opposite of helplessness effects, operationally and descriptively, and may be opposite in process as well.
The attributional reformulation of the learned helplessness model claims that an explanatory style in which bad events are explained by internal, stable, and global causes is associated with depressive symptoms. Furthermore, this style is claimed to be a risk factor for subsequent depression when bad events are encountered. We describe a variety of new investigations of the helplessness reformulation that employ five research strategies: (a) cross-sectional correlational studies, (b) longitudinal studies, (c) experiments of nature, (d) laboratory experiments, and (e) case studies. Taken together, these studies converge in their support for the learned helplessness reformulation.
According to the logic of the attribution reformulation of learned helplessness, the interaction of two factors influences whether helplessness experienced in one situation will transfer to a new situation. The model predicts that people who exhibit a style of attributing negative outcomes to global factors will show helplessness deficits in new situations that are either similar or dissimilar to the original situation in which they were helpless. In contrast, people who exhibit a style of attributing negative outcomes to only specific factors will show helplessness deficits in situations that are similar, but not dissimilar, to the original situation in which they were helpless. To test these predictions, we conducted two studies in which undergraduates with either a global or specific attributional style for negative outcomes were given one of three pretreatments in the typical helplessness triadic design: controllable bursts of noise, uncontrollable bursts of noise, or no noise. In Experiment 1, students were tested for helplessness deficits in a test situation similar to the pretreatment setting, whereas in Experiment 2, they were tested in a test situation dissimilar to the pretreatment setting. The findings were consistent with predictions of the reformulated helplessness theory.
The reformulation of helplessness theory proposes that an insidious attributional style accompanies and predisposes depressive symptoms. To date, all research investigating the reformulation has used adult subjects. In the present study, we investigated predictions of the reformulation among 8- to 13-year-old children. Children who attributed bad events to internal, stable, and global causes were more likely to report depressive symptoms than were children who attributed these events to external, unstable, and specific causes. This depressive attributional style predicted depressive symptoms 6 months later, suggesting that it may be a risk factor for depression. Finally, children’s attributional style for bad events and their depressive symptoms converged with those of their mothers, but not with those of their fathers.
Sixty-six adults wrote essays describing the two worst events that had occurred to them during the preceding year and then completed the short-form of the Beck Depression Inventory (BDI). Causal explanations for bad events were extracted from the essays and rated by judges for internality (vs. externality), stability (vs. instability), and globality (vs. specificity). Ratings were consistent across different attributions made by the same individual. Further, the internality, stability, and globality of these unprompted attributions correlated with depressive symptoms as measured by the BDI. Taken together, the results support the attributional reformulation of the learned-helplessness model of depression.
Weanling experience with shock changes adult rats’ vulnerability to tumors, but only if a shock challenge occurs in adulthood. Adult rats that as weanlings received inescapable shock (helplessness training) show lower rates of tumor rejection following adult shock that is either inescapable or escapable. Adult rats that as weanlings received escapable shock (mastery training) reject tumors well following adult shock that is either inescapable or escapable. We suggest that early experience with inescapable shock causes later shock (even if actually escapable) to be responded to passively, which in turn increases tumor take. In contrast, early experience with escapable shock causes later shock (even if actually inescapable) to be responded to actively, and this immunizes against increased tumor take.
The reformulated learned helplessness model claims that the tendency to explain bad events by internal, stable, and global causes potentiates quitting when bad events are encountered. We tested this prediction in the work setting with individuals who frequently experience bad events. Explanatory style, as measured by the Attributional Style Questionnaire (ASQ), correlated with and predicted the performance of life insurance sales agents. In a cross-sectional study of 94 experienced agents, individuals scoring in the top half of the ASQ sold 37% more insurance in their first 2 years of service than those scoring in the bottom half. In a prospective 1-year study of 103 newly hired agents, individuals who scored in the top half of the ASQ when hired remained in their job at twice the rate and sold more insurance than those scoring in the bottom half of the ASQ. These two studies support the claim that a pessimistic explanatory style leads to poor productivity and quitting when bad events are experienced, and extend the usefulness of the ASQ to the workplace.
In this longitudinal study, the depressive symptoms, life events, and explanatory styles of 168 school children were measured five times during the course of 1 year. Measures of school achievement were obtained once during the year. Depressive symptoms and explanatory styles were found to be quite stable over the year. As predicted by the reformulated learned helplessness theory, explanatory style both correlated with concurrent levels of depression and school achievement and predicted later changes in depression during the year. Depression also predicted later explanatory styles. The implications of these results for intervention with children with depressive symptoms or school achievement problems are discussed.
Dreaming is held to consist of three elements: periodic, unrelated visual bursts; emotional episodes; and the cognitive synthesis of the first two. The theory predicts that there are two kinds of visual imagery in dreams: one vivid, detailed, colorful, large, and in the center of the visual field; the other less vivid, less detailed, less colorful, smaller, and in the periphery. The vivid events should be unconstrained by the plot, and the less vivid ones constrained. Evidence is presented that this is so. The theory is consistent with the incongruence of events in dreaming, with the transmogrification of images and settings, and with the incorporation of external intrusions. The theory predicts consistent individual differences in how tightly or loosely images are integrated into a plot in dreaming, and that this should correspond to the ability to tie random images together into a plot while awake. Evidence is presented that this is so. The theory suggests why causality is linear in dreaming, specifies the psychological differences between dreaming and waking and between REM and non-REM sleep, and comments on the notion of ‘scene’ and the ability to ‘control’ dream content. Finally, the theory suggests that the cognitive synthesis is the most likely place to find the ‘meaning’ of dreams.
Explanatory style is an individual difference that influences people’s response to bad events. The present article discusses the possibility that a pessimistic explanatory style makes illness more likely. Several studies suggest that people who offer internal, stable, and global explanations for bad events are at increased risk for morbidity and mortality. We tentatively conclude that passivity, pessimism, and low morale foreshadow disease and death, although the process by which this occurs is unclear.
Is the Attributional Style Questionnaire (ASQ; Seligman, Abramson, Semmel and von Baeyer, 1979; Peterson, Semmel, von Baeyer, Abramson, Metalsky and Seligman, 1982) transparent? That is, given an incentive to score well on the questionnaire, can test-takers fake the optimal responses? In two studies we randomized college students into one of three groups. To one group we offered an incentive of $100 to the individual with the best overall score. A second group received both a $100 incentive and brief coaching on how to do well, in the form of an explanation of what the test measures. A third group, the control group, was simply asked to take the questionnaire. We found no significant differences in the scores among the three groups. This suggests that the ASQ has validity even when administered to test-takers who are highly motivated to ‘beat’ it.
Explanatory style, the habitual way an individual explains the causes of bad and good events, is reliably associated with future health. In this article, we review evidence from three studies which demonstrate a significant relationship between pessimism (the belief that bad events are caused by internal, stable, and global factors and good events are caused by external, unstable, and specific factors) and an increased risk for infectious disease, poor health, and early mortality. We suggest two possible mechanisms which might mediate the link between pessimism and poor health. Finally, we propose that interventions aimed at changing a pessimistic outlook might lower the probability of future illness.
We administered the Attributional Style Questionnaire to 39 unipolar depressed patients at the beginning and end of cognitive therapy and at one-year follow-up, and we administered it to 12 bipolar patients during a depressed episode. A pessimistic explanatory style for bad events correlated with severity of depression for unipolars at cognitive therapy intake (r=.56, p<.0002), termination (r=.57, p<.0008), and one-year follow-up (r=.64, p<.0005) and among the bipolars (r=.63, p<.03). Explanatory style and depressive symptoms significantly improved by the end of cognitive therapy and remained improved at one-year follow-up. For the unipolars in cognitive therapy, explanatory style change from intake to termination correlated with change in depressive symptoms from intake to termination (r=.65, p<.0001). These results suggest that explanatory style may be one of the mechanisms of change for unipolar depressive patients undergoing cognitive therapy.
Explanatory style, the habitual ways in which individuals explain bad events, was extracted from open-ended questionnaires filled out by 99 graduates of the Harvard University classes of 1942-1944 at age 25. Physical health from ages 30 to 60 as measured by physical examination was related to earlier explanatory style. Pessimistic explanatory style (the belief that bad events are caused by stable, global, and internal factors) predicted poor health at ages 45 through 60, even when physical and mental health at age 25 were controlled. Pessimism in early adulthood appears to be a risk factor for poor health in middle and late adulthood.
The habitual way people explain causes (explanatory style) as assessed by questionnaire has been used to predict depression, achievement, and health, with a pessimistic style predicting poor outcomes. Because some individuals whose behavior is of interest cannot take questionnaires, their explanatory style can be assessed by blind, reliable content analysis of verbatim explanations (CAVE) from the historical record. We discuss three examples of CAVEing archival material. First, shifts to a more optimistic style in Lyndon Johnson’s press conferences predicted bold, risky action during the Vietnam War, whereas shifts to pessimism predicted passivity. Second, analyses of presidential candidates’ nomination acceptance speeches from 1948 to 1984 showed that candidates who were more pessimistically ruminative lost 9 of the 10 elections. Third, explanatory style and its relation to depressive signs was considered at a societal level. There were more behavioral signs consistent with depression among workmen in East Berlin bars than in West Berlin bars. This finding corresponded to a comparatively more pessimistic explanatory style in East Berlin newspaper reports concerning the 1984 Winter Olympics. We suggest that pessimism and its consequences can be quantified and compared, not only in contemporary individuals but also across time and culture.
Analyzed explanatory style across the life span. 30 Ss whose average age was 72 responded to questions about their current life and provided diaries or letters written in their youth, an average of 52 years earlier. A blind content analysis of explanatory style derived from these 2 sources revealed that explanatory style for negative events was stable throughout adult life (r = .54, p < .002). In contrast, there appeared to be no stability of explanatory style for positive events between the same 2 time periods. These results suggest that explanatory style for negative events may persist across the life span and may constitute an enduring risk factor for depression, low achievement, and physical illness.
We compare two methods of assessing explanatory style -- the content analysis of verbatim explanations (CAVE) and the Attributional Style Questionnaire (ASQ). The CAVE technique is a method that allows researchers to analyze any naturally occurring verbatim materials for explanatory style. This technique permits measurement in populations that are unwilling or unable to take the ASQ. We administered the ASQ and Beck Depression Inventory (BDI) to 169 undergraduates and content analyzed the written causes on the ASQ for explanatory style by the CAVE technique. The CAVE technique correlated 0.71 with the ASQ (p<0.0001, n=159) and –0.39 with the BDI (p<0.0001, n=159). The ASQ correlated –0.51 with the BDI (p<0.0001, n=160). Both the CAVE technique and the ASQ seem to be valid devices for assessing explanatory style.
Two university varsity swimming teams took the Attributional Style Questionnaire (ASQ) at the start of the season. Swimmers with a pessimistic explanatory style went on to show more unexpected poor performances during competition than optimistic swimmers. We then tested the purported mechanism of this effect by experimentally simulating defeat, giving each swimmer falsely negative times. On their next swim, performance deteriorated for swimmers with a pessimistic explanatory style for bad events, whereas performance for swimmers with an optimistic style did not.
Two psychological variables, pessimistic explanatory style and rumination about bad events, combine to predict depression and susceptibility to helplessness. We hypothesized that these variables should also predict the appeal of a presidential candidate’s message, and analyzed pessimistic rumination in Democratic and Republican nomination acceptance speeches from 1948 to 1984. A blind, reliable content analysis showed that the candidate who was the more pessimistic ruminator lost 9 out of 10 times, and the victory margin was proportional to the difference between the candidates in pessimistic rumination. This was not due to a poor showing in the polls at nomination leading both to pessimistic rumination and defeat. Partialing out incumbency and standing in the polls around the time of nomination, the pessimistic rumination difference correlated with the victory margin (partial r=.89, p<.01). This basic finding was replicated for 1900 to 1944. The pessimistic ruminator lost 9 of 12 elections, and the difference in pessimistic rumination correlated with the size of the loss (r=.71). Three mechanisms are proposed by which pessimistic ruminators should lose: (a) voter aversion to depressive personalities, (b) the appeal to voters of hope, and (c) candidate passivity. As evidence for the third mechanism, pessimistic ruminators make fewer stops per day on the campaign trail. These results suggest that the American voter, across historical periods, places a high premium on the appearance of hope.
Quantified behavioral signs of depression in relation to pessimism across cultures. First, by observing workmen in 1985 East and West Berlin bars, we found more behavior consistent with depression in East Berliners than in West Berliners. We then measured pessimism in both cultures by assessing explanatory style in newspaper reports of the 1984 Winter Olympic Games. Despite having more Olympic victories to report, East Berlin newspaper accounts were more pessimistic than West Berlin reports. We suggest that, with proper controls, convergent measurements of explanatory style and behavioral signs consistent with depression allow us to quantify pessimism and depression across culture and time.
We correlated pessimistic explanatory style - the belief that negative events are caused by internal, stable, and global factors - with lowered immunocompetence in a sample of 26 older adults. Two measures of cell-mediated immunity - T-helper cell/T-suppressor cell ratio and T-lymphocyte response to mitogen challenge - were lower in individuals with a pessimistic style, controlling for the influence of current health, depression, medication, recent weight change, sleep, and alcohol use. A relative increase in the percentage of T-suppressor cells seemed to underlie this immunosuppression. Although the mechanism by which explanatory style might influence immune function remains unknown, we speculate that a pessimistic style might be an important psychological risk factor - at least among older people - in the early course of certain immune-mediated diseases.
We report data from the first two years of a longitudinal study of depression and explanatory style in children. Measures of these variables have been obtained from a group of elementary school children every six months since they were in the third grade. Results show that the boys consistently reported more depressive symptoms than the girls did. This was particularly true for symptoms of anhedonia and behavioral disturbance. The boys also showed much more maladaptive explanatory styles than the girls. These results are discussed in light of previous studies of sex differences in children’s attributions. Possible reasons for the expected switch in the sex differences in puberty are also discussed.
Recent studies suggest that cognitive therapy may reduce risk following successful treatment of depression. Although not conclusive, these studies suggest that patients treated with cognitive therapy may be at less than half the risk for subsequent symptom return following treatment termination than are patients treated pharmacologically. Change in explanatory style, the tendency to attribute negative events to internal, stable, and global factors, appears to mediate cognitive therapy’s preventive capacity. Whether this prophylactic capacity extends to the prevention of new episodes (including the initial onset of the disorder) remains to be determined.
A 5-year longitudinal study investigated the interrelationships among children’s experiences of depressive symptoms, negative life events, explanatory style, and helplessness behaviors in social and achievement situations. The results revealed that early in childhood, negative events, but not explanatory style, predicted depressive symptoms; later in childhood, a pessimistic explanatory style emerged as a significant predictor of depressive symptoms, alone and in conjunction with negative events. When children suffered periods of depression, their explanatory styles not only deteriorated but remained pessimistic even after their depression subsided, presumably putting them at risk for future episodes of depression. Some children seem repeatedly prone to depressive symptoms over periods of at least 2 years. Depressed children consistently showed helpless behaviors in social and achievement settings.
Explanatory style from nine religious groups, representing fundamentalist, moderate, and liberal viewpoints, was investigated by questionnaire and by blind content analysis of their sermons and liturgy. Fundamentalist individuals were significantly more optimistic by questionnaire than those from moderate religions, who were in turn more optimistic than liberals. The liturgy and sermons showed the parallel pattern of optimism. Regression analyses suggested that the greater optimism of fundamentalist individuals may be entirely accounted for by the greater hope and daily influence fundamentalism engenders, along with the greater optimism of the religious services they hear.
Is optimism heritable? We gave the Attributional Style Questionnaire (ASQ), a measure of optimism, to 115 monozygotic twin pairs (MZ) and 27 dizygotic twin pairs (DZ). The intraclass correlations of the ASQ scores were 0.48 for MZ twins (p<0.0001) and 0 for DZ twins. Though the sample size of DZ twins is small, these results suggest that there may be a substantial genetic effect on optimism. We speculate, however, that the mechanism for the transmission of this, and other complex personality traits, may be highly indirect.
Research based on Seligman's model indicates that a pessimistic explanatory style predicts increased frequency of depression, poorer physical health, and lower levels of achievement. The data show that persons who have a pessimistic outlook on life are more frequent users of the medical and mental health care delivery systems. This paper describes the development of a bipolar MMPI Optimism-Pessimism (PSM) scale that is based on the results of a technique -- Content Analysis of Verbatim Explanation (CAVE) -- applied to the MMPI. Reliability and validity indices show that the PSM scale is highly accurate and consistent with Seligman's theory. Identification of the patient's explanatory style may lead to improved management because intervention measures can be directed more accurately according to the patient's personality style. The new scale also will allow researchers to use existing MMPI data to explore relationships between explanatory style and various outcome variables and behavioral correlates.
The explanatory style scores of George Bush and Saddam Hussein were derived using the content analysis of verbatim explanations technique for periods preceding military actions or political conflict. These leaders' actions were rated on scales of aggression-passivity and risk-caution. Regression and correlational analyses show that increased levels of optimism before conflict predicted heightened aggression and risk taking, whereas increased levels of pessimism prior to an event predicted passivity and caution.
This paper describes the development and preliminary efficacy of a program designed to prevent depressive symptoms in at-risk 10- to 13-year-olds, and relates the findings to the current understanding of childhood depression. The treatment targets depressive symptoms and related difficulties such as conduct problems, low academic achievement, low social competence, and poor peer relations, by proactively teaching cognitive techniques. Children were identified as 'at-risk' based on depressive symptoms and their reports of parental conflict. Sixty-nine children participated in treatment groups and were compared to 73 children in control groups. Depressive symptoms were significantly reduced and classroom behavior was significantly improved in the treatment group as compared to controls at post-test. Six-month follow-up showed continued reduction in depressive symptoms, as well as significantly fewer externalizing conduct problems, as compared to controls. The reduction in symptoms was most pronounced in the children who were most at risk.
This research examined whether attributional style is more closely related to depressive symptoms for some people than for others. In Study 1, depressed patients voicing more explanations for negative events showed (nonsignificantly) higher correlations between attributional style and depressive symptoms. In Study 2, subjects reporting a tendency to ruminate about the causes of events showed stronger relations between attributional style and depressive symptoms. Conversely, subjects low in attributional complexity exhibited stronger relations of depressive symptoms with positive-event attributional style. We speculated that by asking for ratings of only the single most important cause of events, attributional style measures might provide a less adequate sample of the causal thinking of attributionally complex subjects. Study 3 partially supported this reasoning: attributional complexity was not significantly correlated with seeing events as having multiple causes, but it was associated with rating second-most-important causes as distinct from first causes on attributional dimensions. Thus, current attributional style measures and theories might be best suited to subjects who (a) tend to ponder causes of events, but (b) arrive at uniform conclusions about the nature of these causes.
After teaching cognitive and social-problem-solving techniques designed to prevent depressive symptoms, we followed 69 fifth and sixth grade children at risk for depression for 2 years. We compared these children with 49 children in a matched no-treatment control group. The prevention group reported fewer depressive symptoms through the 2-year follow-up, and moderate to severe symptoms were reduced by half. Surprisingly, the effects of the prevention program grew larger after the program was over. We suggest that psychological immunization against depression can occur by teaching cognitive and social skills to children as they enter puberty.
Consumer Reports (1995, November) published an article which concluded that patients benefited very substantially from psychotherapy, that long-term treatment did considerably better than short-term treatment, and that psychotherapy alone did not differ in effectiveness from medication plus psychotherapy. Furthermore, no specific modality of psychotherapy did better than any other for any disorder; psychologists, psychiatrists, and social workers did not differ in their effectiveness as treaters; and all did better than marriage counselors and long-term family doctoring. Patients whose length of therapy or choice of therapist was limited by insurance or managed care did worse. The methodological virtues and drawbacks of this large-scale survey are examined and contrasted with the more traditional efficacy study, in which patients are randomized into a manualized, fixed duration treatment or into control groups. I conclude that the Consumer Reports survey complements the efficacy method, and that the best features of these two methods can be combined into a more ideal method that will best provide empirical validation of psychotherapy.
Objective: The study addressed the question of whether unsolvable as opposed to solvable cognitive problems activate discrete neuronal systems in the human brain. Method: Twelve healthy humans tried to solve unsolvable anagrams. Solvable anagrams and a resting baseline after each anagram task served as control conditions in a within-subject design. Activation was measured with the equilibrium infusion method by using ¹⁵O-labeled water and positron emission tomography, with absolute quantitation of anatomically defined regional cerebral blood flow (CBF). Results: Compared to rest, both anagram tasks increased activity in frontal and temporal regions. The solvable task condition increased hippocampal activation and decreased mamillary bodies activity, while unsolvable anagrams were associated with increased CBF to the mamillary bodies and amygdala and decreased hippocampal activity. Conclusions: A limbic network integrating negative emotion and cognition seems reflected in reciprocal diencephalic and limbic activation with solvable and unsolvable anagrams. Since unsolvable anagrams have been used to induce learned helplessness in humans, this finding may provide an initial step toward clarifying its neural substrate.
Well-founded criticisms of the Consumer Reports (CR, 1995) study of psychotherapy include possible bias of the CR sample, limitations of self-report, and the limitations of cross-sectional, retrospective data. Poorly founded criticisms concern "consumer satisfaction" and the claim that the remarkably good effects of long-term therapy resulted from spontaneous remission, that psychotherapy effects were small, and that nondoctoral providers did as well as doctoral-level providers. Both the experimental method (efficacy) and the observational method with causal modeling (effectiveness) answer complementary questions, and they both do so by eliminating alternative possible causes. Efficacy studies, however, cannot test long-term psychotherapy because long-term manuals cannot be written and patients cannot be randomized into two-year-long placebo controls, so the "empirical validation" of long-term therapy will likely come from effectiveness studies. Such studies of long-term therapy, of qualification of providers, and of clinical judgement versus case management are urgently needed as practice confronts managed care.
A total of 613 subjects, including 257 White American students, 312 mainland Chinese students, and 44 Chinese American students, completed the Attributional Style Questionnaire. It was found that (a) mainland Chinese were more pessimistic than Chinese Americans, who were more pessimistic than White Americans, (b) mainland Chinese were less self-blaming (i.e., attributed their failure less internally than the traditional Chinese culture expects) and attributed their success to other people or circumstances, and (c) White Americans attributed their success to themselves and their failure to others or circumstances more often than did mainland Chinese. The authors also found that mainland Chinese optimism was associated more with academic and financial accomplishment, psychological confidence and persistence, and physical health.
The explanatory styles of 387 law students were assessed prior to law school using the Attributional Style Questionnaire (ASQ). Longitudinal performance measures were collected throughout law school and related to each student’s initial explanatory style. In contrast to studies with undergraduates, students who made stable, global, and internal attributions for negative events combined with the converse attributions for success (typically called pessimists) outperformed more optimistic students on measures of grade point averages and law journal success. We discuss the limitations of current attributional research methodologies and suggest that the prudent and cautious perspective necessary for law or other skill-based professions may account for our findings.
Participants in the Terman Life-Cycle Study completed open-ended questionnaires in 1936 and 1940, and these responses were blindly scored for explanatory style by content analysis. Catastrophizing (attributing bad events to global causes) predicted mortality as of 1991, especially among males, and predicted accidental or violent deaths especially well. These results are the first to show that a dimension of explanatory style is a risk factor for mortality in a large sample of initially healthy individuals, and they imply that one of the mechanisms linking explanatory style and death involves lifestyle.
In this study, we investigated the presence and effects of a positivity bias in older adults’ discussions of close social relationships. Two groups of subjects had survived traumatic events during World War II, while two groups had not gone through major traumas during that time. We hypothesized that trauma survivors would show less of a positivity bias, but would not necessarily be more psychologically distressed as a result. Results supported this hypothesis. Trauma survivors were more balanced in their discussions. Positivity bias was correlated with more positive affect but not with negative affect.
A brief and inexpensive cognitive-behavioral prevention program was given to university students at risk for depression. "At risk" was defined as being in the most pessimistic quarter of explanatory style. A total of 231 students were randomized into either an 8-week prevention workshop that met in groups of 10, once per week for 2 hours, or into an assessment-only control group. Subjects were followed for 3 years, and we report the preventive effects of the workshop on depression and anxiety. First, the workshop group had significantly fewer episodes of generalized anxiety disorder than the control group and showed a trend toward fewer major depressive episodes. The workshop group had significantly fewer moderate depressive episodes but no fewer severe depressive episodes. Second, the workshop group had significantly fewer depressive symptoms and anxiety symptoms than the control group, as measured by self-report but not by clinicians' ratings. Third, the workshop group had significantly greater improvements in explanatory style, hopelessness, and dysfunctional attitudes than the control group, and these were significant mediators of depressive symptom prevention in the workshop group.
One hundred and twenty entering college freshmen, at risk for depression on the basis of their pessimistic explanatory style scores, were randomly assigned to one of two conditions: an eight-week cognitive-behavioral intervention designed to prevent future depression (Seminar Group) or a no-intervention Control Group. We assessed the physical health of these subjects 6 to 30 months after entry into the project. Subjects in the Seminar Group had better physical health than did Control subjects: fewer self-reported symptoms of physical illness, fewer doctor visits overall, and fewer illness-related visits to the Student Health Center. They were more likely to visit a doctor for a check-up and had healthier habits of diet and exercise. We postulate that the learning of anti-depression skills produces better physical health.
This study examined two senses in which pessimism might be a risk factor for depressive mood among older adults. The first was that a pessimistic explanatory style would predict changes toward depressive mood when combined with stressful life events. The second was that predictive pessimism, or thinking that bad events will happen in the future, would predict changes in depressive symptoms. We found an interaction between explanatory style and life stressors, but it was the optimists who were at higher risk for depressive symptoms after negative life events. We also found support for predictive pessimism, however, as a predictor of depressive symptoms over time.