Blog post -

Full interview with Professor Keith Stanovich on how to think straight about psychology

Stefan:

Dear Keith, I am delighted to have this conversation with you. Our mission at the Center for Leadership is to bridge research and practice in the field of organizational psychology, something we take a strong stand for. Organizations worldwide spend literally billions of dollars each year on implementing, or at least trying to implement, practices based on ideas about human behavior and psychology in order to have a positive impact on performance and well-being. Many, not to say most, of these ideas are based on gut feelings, intuition, one's own or others' experience, popular books, advice from organizational and leadership consultants, habits, and trends. More seldom are these practices based on rigorous scientific findings on human thought and behavior, something that has caught the attention of numerous writers suggesting the need for evidence-based practices in creating and developing better and more efficient workplaces. In fact, many of the practices currently used with the aim of improving performance, motivation, well-being, and so on are more or less useless, and at times even dysfunctional for reaching these goals, if one looks at what the research has to say.

This is of course a huge problem, since it is not only a waste of money for organizations but can also be devastating for the well-being of billions of employees at these same organizations. The problem is by no means isolated to the workplace; it is just as severe (or maybe even more so) in other domains, such as the self-help industry.

The first time I came into contact with your work was toward the end of my Ph.D. studies, when I was looking through the bookshelf of a colleague of mine. There in the middle was a title that caught my attention: How to Think Straight About Psychology, written by you. I started to read with no particular expectations but found myself totally absorbed in it from the very first pages. It is still the book I wish I had written myself, and it did for me what numerous courses and books on research methods had failed to do: it made me understand, and for the first time become truly interested in and convinced of, the merits of the scientific method for gaining valid and reliable knowledge about the human mind. Since then I have read it a number of times, and it is the book I recommend that anyone with the slightest interest in psychology or the other behavioral sciences read before reading anything else.

Obviously the theme of How to Think Straight… is in line with my concern about the lack of evidence-based practices in the field of work psychology, even if the book does not deal with the workplace specifically. Yet my intention with this interview is not to talk about psychological practices at work but rather about the general scientific mindset presented in the book. My wish is that the readers of this conversation will gain some insight into why this mindset is a more valid and reliable source of knowledge and predictions than our own intuition and experiences, or anecdotes told by others. I also hope that we will be able to describe why people so often put more faith in these intuitive thoughts, experiences, and anecdotes than in scientific facts.

My first question to you then Keith is derived directly from the title of the book and goes like this: What does it really mean to think straight about psychology?

Keith:

The essence of thinking straight about psychology is to understand it as a data-based science.  Not as “intuition” about people.  Not as an “art” based on experience.  Instead, as a discipline much closer to biology than to a humanities discipline.

Simply to say that psychology is concerned with human behavior does not distinguish it from other disciplines. Many other professional groups and disciplines—including economists, novelists, the law, sociology, history, political science, anthropology, and literary studies—are, in part, concerned with human behavior. Psychology is not unique in this respect.

Practical applications do not establish any uniqueness for the discipline of psychology either. For example, many university students decide to major in psychology because they have the laudable goal of wanting to “help people.” But helping people is an applied part of an incredibly large number of fields, including social work, education, nursing, occupational therapy, physical therapy, police science, human resources, and speech therapy. Similarly, helping people by counseling them is an established part of the fields of education, social work, police work, nursing, pastoral work, occupational therapy, and many others. The goal of training applied specialists to help people by counseling them does not demand that we have a discipline called psychology.

It is easy to argue that there are really only two things that justify psychology as an independent discipline.  The first is that psychology studies the full range of human and nonhuman behavior with the techniques of science.  The second is that the applications that derive from this knowledge are scientifically based. Were this not true, there would be no reason for psychology to exist.

Psychology is different from other behavioral fields in that it attempts to give the public two guarantees. One is that the conclusions about behavior that it produces derive from scientific evidence. The second is that practical applications of psychology have been derived from and tested by scientific methods. In principle, these are the standards that justify psychology as an independent field. If psychology ever decides that these goals are not worth pursuing—that it does not wish to adhere to scientific standards—then it might as well fold its tent and let its various concerns devolve to other disciplines because it would be a totally redundant field of intellectual inquiry.

Clearly, then, the first and most important step that anyone must take in understanding psychology is to realize that its defining feature is that it is the data-based scientific study of behavior. The primary way that people get confused in their thinking about psychology is that they fail to realize that it is a scientific discipline.

Stefan:

Thank you, Keith, for this first, very informative and clear answer regarding what psychology is and is not. A brief summary (with some of my own extensions) of the key points, please correct me if I am wrong, is that psychology is a data-based scientific discipline that aims to understand how behavior (human as well as nonhuman) is related to internal processes (neurological, cognitive, emotional, etc.) and the environment (other people, culture, organizations, physical environment, etc.). This is something that should be guaranteed to the public, together with a guarantee that the application of methods based on psychological principles (for instance in therapy, leadership, team building, teaching, etc.) is based on scientific studies of the effects and outcomes of these methods. A huge problem, as I see it, is that there are many consultants, therapists, coaches, etc. who claim that their methods are research-based when they are in fact not. Most of the time it is probably because of a lack of knowledge on the part of these “helpers” about what research really is and how to evaluate the scientific evidence for the effectiveness of a method. Nevertheless, it is a big problem, as many people and organizations invest their time, money, and energy in the belief that the methods really are evidence-based.

We will get back to this final point later in the interview, but first I think it is essential to make sure that our readers are clear about the meaning of some of the central concepts we use. Can you explain in a bit more detail, with some concrete examples, what you mean by the concepts data-based and scientific?

Keith:

In the book, I define scientific as having three broad characteristics: the use of systematic empiricism; the production of public knowledge; and the examination of solvable problems.

Empiricism is the practice of relying on observation. Empiricism pure and simple is not enough, however. Observation is fine and necessary, but pure, unstructured observation of the natural world will not lead to scientific knowledge. Scientific observation is termed systematic because it is structured so that the results of the observation reveal something about the underlying nature of the world. Scientific observations are usually theory driven; they test different explanations of the nature of the world. They are structured so that, depending on the outcome of the observation, some theories are supported and others rejected.  The details of the scientific method (control, random assignment, manipulation of variables, etc.) concern how to structure empirical observations.

The second criterion is that scientific knowledge be public, in a special sense. That sense is that scientific knowledge does not exist solely in the mind of a particular individual. In an important sense, scientific knowledge does not exist at all until it has been submitted to the scientific community for criticism and empirical testing by others. Knowledge that is considered “special”—the province of the thought processes of a particular individual, immune from scrutiny and criticism by others—can never have the status of scientific knowledge. Science makes the idea of public verifiability concrete by procedures such as replication and peer review. One important way to distinguish charlatans and practitioners of pseudoscience from legitimate scientists in psychology is that the former often bypass the normal channels of scientific publication and instead go straight to the media with their “findings.” One ironclad criterion that will always work for the public when presented with scientific claims of uncertain validity is the question:  Have the findings been published in a recognized scientific journal that uses some type of peer review procedure? The answer to this question will almost always separate pseudoscientific claims from the real thing.

The public nature of science is also reflected in the necessity of operationalizing psychological concepts.  Operationism is simply the idea that concepts in scientific theories must in some way be grounded in, or linked to, observable events that can be measured. Linking the concept to an observable event makes the concept public. The operational definition removes the concept from the feelings and intuitions of a particular individual and allows it to be tested by anyone who can carry out the measurable operations.

For example, defining the concept hunger as “that gnawing feeling I get in my stomach” is not an operational definition because it is related to the personal experience of a “gnawing feeling” and, thus, is not accessible to other observers. In contrast, definitions that involve some measurable period of food deprivation or some physiological index such as blood sugar levels are operational because they involve observable measurements that anyone can carry out. Similarly, psychologists cannot be content with a definition of anxiety, for example, as “that uncomfortable, tense feeling I get at times” but must define the concept by a number of operations such as questionnaires and physiological measurements. The former definition is tied to a personal interpretation of bodily states and is not replicable by others. The latter puts the concept in the public realm of science.

Finally, real science deals with solvable, or specifiable, problems. This means that the types of questions that scientists address are potentially answerable by means of currently available empirical techniques. If a problem is not solvable or a theory is not testable by the empirical techniques that scientists have at hand, then scientists will not attack it. For example, the question “Will 3-year-old children given structured language stimulation during day care be ready for reading instruction at an earlier age than children not given such extra stimulation?” represents a scientific problem. It is answerable by currently available empirical methods. The question “Are human beings inherently good or inherently evil?” is not an empirical question and, thus, is simply not in the realm of science. Likewise, the question “What is the meaning of life?” is not an empirical question and so is outside the realm of science.

By saying that scientists tackle empirically solvable problems, I do not mean to imply that different classes of problems are inherently solvable or unsolvable and that this division is fixed forever. Quite the contrary: Some problems that are currently unsolvable may become solvable as theory and empirical techniques become more sophisticated. For example, decades ago historians would not have believed that the controversial issue of whether Thomas Jefferson fathered a child by his slave Sally Hemings was an empirically solvable question. Yet by 1998 this problem had become solvable through advances in genetic technology, and a paper was published in the journal Nature indicating that it was highly probable that Jefferson was the father of Eston Hemings Jefferson.

This is how science in general has developed and how new sciences have come into existence.  Psychology itself provides many good examples of the development from the unsolvable to the solvable. There are many questions (such as “How does a child acquire the language of his or her parents?” “Why do we forget things we once knew?” “How does being in a group change a person’s behavior and thinking?”) that had been the subjects of speculation for centuries before anyone recognized that they could be addressed by empirical means. As this recognition slowly developed, psychology coalesced as a collection of problems concerning behavior in a variety of domains. Psychological issues gradually became separated from philosophy, and a separate empirical discipline evolved.

Stefan:

Interesting indeed. I would like to get back to some of the points you made in your previous answer in order to make things clear for our readers. First, you mention some methods (control, random assignment, manipulation of variables, etc.) that scientists use when structuring their empirical observations. Can you explain these central concepts a little and describe why they are so important?

Second, you talk about science as public and say that it is published in peer-reviewed journals so that other researchers can replicate and criticize the findings of a particular study. What does peer review mean, and how does it differ from other forms of publication, such as publishing in a newspaper or publishing a book?

Keith:

I’ll turn to your second question first.  Peer review is a procedure in which each paper submitted to a research journal is critiqued by several scientists, who then submit their criticisms to an editor.  The editor is usually a scientist with an extensive history of work in the specialty area covered by the journal.  The editor decides whether the weight of opinion warrants publication of the paper, publication after further experimentation and statistical analysis, or rejection because the research is flawed or trivial. Most journals carry a statement of editorial policy in each issue, so it is easy to check whether a journal is peer reviewed.

Not all information in peer-reviewed scientific journals is necessarily correct, but at least it has met a criterion of peer criticism and scrutiny. Peer review is a minimal criterion, not a stringent one, because most scientific disciplines publish dozens of different journals of varying quality. However, the point is that the failure of an idea, a theory, a claim, or a therapy to have adequate documentation in the peer-reviewed literature of a scientific discipline is a sure sign that the idea, theory, or therapy is bogus.

The mechanisms of peer review vary somewhat from discipline to discipline, but the underlying rationale is the same. Peer review is one way (replication is another) that science institutionalizes the attitudes of objectivity and public criticism. Ideas and experimentation undergo a honing process in which they are submitted to other critical minds for evaluation. Ideas that survive this critical process have begun to meet the criterion of public verifiability. The peer review process is far from perfect, but it is really the only consumer protection that we have. 

Turning now to your question about the methods of control, random assignment, manipulation of variables, etc.  These are the classic methods of experimental science and they are fundamental to psychological research.  Although many large volumes have been written on the subject of scientific methodology, it is not necessary for the layperson, who may never actually carry out an experiment, to become familiar with all the details and intricacies of experimental design. The most important characteristics of scientific thinking are actually quite easy to grasp, because experimental method is little more than many, many variations on the concepts you mention. I like to begin with my students by stressing to them that scientific thinking is based on the ideas of comparison, control, and manipulation. To achieve a more fundamental understanding of a phenomenon, a scientist compares conditions in the world. Without this comparison, we are left with isolated instances of observations, and the interpretation of these isolated observations is highly ambiguous.  Testimonials and case studies do not constitute valid proof of a causal hypothesis for just this reason.

By comparing results obtained in different—but controlled—conditions, scientists rule out certain explanations and confirm others. The essential goal of experimental design is to isolate a variable. When a variable is successfully isolated, the outcome of the experiment will eliminate a number of alternative theories that may have been advanced as explanations. Scientists weed out the maximum number of incorrect explanations either by directly controlling the experimental situation or by observing the kinds of naturally occurring situations that allow them to test alternative explanations. But it would be absurd for scientists to sit around waiting for circumstances that make for good comparable observations. Instead, most scientists try to restructure the world in ways that will differentiate alternative hypotheses. To do this, they must manipulate the variable believed to be the cause and observe whether a differential effect occurs while they keep all other relevant variables constant. The variable manipulated is what textbooks call the independent variable and the variable upon which the independent variable is posited to have an effect is called the dependent variable.

Thus, the best experimental design is achieved when the scientist can manipulate the variable of interest and control all the other extraneous variables affecting the situation. That is the reason why scientists attempt to manipulate a variable and to hold all other variables constant: in order to eliminate alternative explanations. When manipulation is combined with a procedure known as random assignment (in which the subjects themselves do not determine which experimental condition they will be in but, instead, are randomly assigned to one of the experimental groups), scientists can rule out alternative explanations of data patterns that depend on the particular characteristics of the subjects. Random assignment ensures that the people in the conditions compared are roughly equal on all variables because, as the sample size increases, random assignment tends to balance out chance factors. This is because the assignment of the participants is left up to an unbiased randomization device rather than the explicit choices of a human. 

Random assignment is a method of assigning subjects to the experimental and control groups so that each subject in the experiment has the same chance of being assigned to either of the groups. Flipping a coin is one way to decide which group each subject will be assigned to. In actual experimentation, a computer-generated table of random numbers is most often used. By using random assignment, the investigator is attempting to equate the two groups on all behavioral and biological variables prior to the investigation—even ones that the investigator has not explicitly measured or thought about.

The use of random assignment ensures that there will be no systematic bias in how the subjects are assigned to the two groups. The groups will always be matched fairly closely on any variable, but to the extent that they are not matched, random assignment removes any bias toward either the experimental or the control group.     
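To make the idea concrete, here is a small illustrative sketch in Python (my own addition, not from the interview) that randomly assigns 1,000 simulated subjects to two groups and then checks how closely the groups match on an unmeasured covariate. The covariate (call it baseline motivation) and all the numbers are assumptions chosen just for the example.

```python
import random
import statistics

def random_assignment(subjects, seed=1):
    """Randomly split a list of subjects into two equal-sized groups."""
    rng = random.Random(seed)   # unbiased randomization device, not a human choice
    shuffled = subjects[:]      # copy so the original list stays untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Simulate 1,000 subjects, each represented by an unmeasured covariate
# (say, baseline motivation) drawn from a normal distribution.
rng = random.Random(42)
subjects = [rng.gauss(50, 10) for _ in range(1000)]

treatment, control = random_assignment(subjects)

# With a large sample, random assignment tends to balance the groups
# even on variables the investigator never measured or thought about.
diff = abs(statistics.mean(treatment) - statistics.mean(control))
print(f"difference in group means: {diff:.2f}")
```

With 1,000 subjects the difference in group means comes out as a small fraction of the within-group standard deviation of 10, which is exactly the sense in which random assignment "balances out chance factors" as the sample size increases.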

As I said before, there are many, many complications to these basic concepts, but the more advanced levels of experimental design are really just complicated variants on a basic set of themes.  These themes, of comparison, control, manipulation, and random assignment are foundational for psychological science.

Stefan:

As I see it, to think straight about psychology is the same as, or at least closely related to, having an evidence-based mindset, which basically means that some evidence is more valid than other evidence when it comes to determining the truth of various statements (for instance, whether a particular leadership behavior will increase follower motivation). Evidence obtained through carefully controlled studies that use randomization, control groups, placebo controls, double-blind procedures, pre- and post-measurements, and appropriate statistical analysis, and that are published in peer-reviewed journals, outweighs evidence based on our own or others' experience, intuition, or expert opinion. To apply an evidence-based approach means to always search for and evaluate the best available evidence before making important decisions.

There is now an ongoing trend in the field of organizational psychology toward promoting the use of such evidence-based practices in organizations, for instance concerning management and decision-making processes, which is very hopeful. Nevertheless, at the 2009 annual meeting of SIOP (the Society for Industrial and Organizational Psychology), Professor Anthony Kovner concluded in one of the opening speeches that as little as five percent of the operational and strategic decisions made by organizational managers are based on the best available evidence. As a comparison, Stanford professors Jeffrey Pfeffer and Robert Sutton concluded in a 2006 Harvard Business Review paper that medical decisions are evidence-based approximately 15% of the time (which is of course also frighteningly low, but nevertheless better than the situation for decisions made by organizational managers).

Of course there are occasions when the best available evidence is expert opinion or one's own experience, but more often decisions are made, for instance to implement a change project, invest in a new performance management or incentive system, or recruit a certain person, even when better evidence is available that says otherwise. As I see it, there can be only two main reasons for this. One is that there is not sufficient knowledge about how to search for scientific evidence and evaluate its quality (actually, most HR people I have met have never held a peer-reviewed journal in their hands). The other is that there is not enough interest in searching for scientific evidence, because people put more faith in other sources of knowledge. What do you think are the reasons why people don't think more straight (or more evidence-based) about psychology?

Keith: 

I agree with you completely when you say that “to think straight about psychology is the same as, or at least closely related to the concept of an evidence-based mindset which basically means that some evidence is more valid than others when it comes to determine the truth of various statements”.  We are on the same page completely when you note that evidence derived from true experimental methods as described in my book “outweighs evidence based on our own or others' experience, intuition, or expert opinion.”  You rightly cite the resistance to evidence-based practice in medicine and in your field of organizational psychology.  In psychology we also have the long-standing problem of dragging clinical psychology, kicking and screaming, into the scientific world. In fact, it provides almost a test case of resistance to scientific thinking and evidence.

Some readers of the first few editions of my How To Think Straight book commented that they thought I had “let psychologists get off too easily” by not emphasizing more strongly that unprofessional behavior and antiscientific attitudes among psychologists themselves contribute greatly to the discipline’s image problem. I responded to this criticism by adding much material from Robyn Dawes’s courageous book House of Cards: Psychology and Psychotherapy Built on Myth. Dawes does not hesitate to air psychology’s dirty linen and, at the same time, to argue that the scientific attitude toward human problems that is at the heart of the true discipline of psychology is of great utility to society (although its potential is still largely untapped). For example, Dawes argued that “there really is a science of psychology that has been developed with much work by many people over many years, but it is being increasingly ignored, derogated, and contradicted by the behavior of professionals—who, of course, give lip service to its existence” (p. vii).

What Dawes is objecting to is that the field of psychology justifies licensure requirements based on the scientific status of psychology and then uses licensure to protect the unscientific behavior of psychological practitioners. For example, one thing that a well-trained psychologist should know is that we can be reasonably confident only in aggregate predictions. In contrast, predicting the behavior of particular individuals is fraught with uncertainty and is something no competent psychologist should attempt without the strongest of caveats, if at all.

Dawes argues that the American Psychological Association has fostered an ethos surrounding clinical psychology that suggests that psychologists can be trained to acquire an “intuitive insight” into the behavior of individual people that the research evidence does not support. When pushed to defend licensure requirements as anything more than restraint of trade, however, the organization uses its scientific credentials as a weapon (one president of the APA, defending the organization from attack, said “Our scientific base is what sets us apart from the social workers, the counselors, and the Gypsies”; Dawes, 1994, p. 21). But the very methods that the field holds up to justify its scientific status have revealed that the implication that licensed psychologists have a unique “clinical insight” is false. It is such intellectual duplicity on the part of the APA that spawned Dawes’s book and that in part led to the formation of the Association for Psychological Science in the 1980s by psychologists tired of an APA that was more concerned about Blue Cross payments than with science.

Thus, some resistance to scientific psychology results from so-called “guild” issues.  But there are broader issues that also account for resistance to scientific psychology.  A scientific psychology is threatening to many people. A maturing science of behavior will change the kinds of individuals, groups, and organizations that serve as sources of psychological information. It is natural that individuals who have long served as commentators on human psychology and behavior will resist any threatened reduction in their authoritative role. The advance of science has continually usurped the authority of other groups to make claims about the nature of the world. The movement of the planets, the nature of matter, and the causes of disease were all once the provinces of theologians, philosophers, and generalist writers. Astronomy, physics, medicine, genetics, and other sciences have gradually wrested these topics away and placed them squarely within the domain of the scientific specialist.

Many religions, for example, have gradually evolved away from claiming special knowledge of the structure of the universe. The titanic battles between science and religion have passed into history, with the exception of some localized flare-ups such as the creationism issue in the United States. The right to adjudicate claims about the nature of the world has unquestionably passed to scientists.

Writer Natalie Angier has reminded us that many years ago, when lightning would hit the wooden towers of churches and burn them down, the clergy and the populace would engage in an intense debate about whether this was a sign of “the vengeance of God.” However, she reminds us that “in the eighteenth century, Benjamin Franklin determined that lightning was an electric rather than an ecclesiastic phenomenon. He recommended that conducting rods be installed on all spires and rooftops, and the debates over the lightning bolts vanished” (p. 26).

The issue, then, is the changing criteria of belief evaluation, and this is hard for many people, in the habit of pontificating about human behavior in the absence of evidence, to accept.

Stefan:

I totally agree with you that it is a serious problem when people act in the name of science, as is the case for many of the licensed psychologists you mentioned, when in a true sense they are not doing science at all. This is perhaps a greater problem than that of the many people who claim to have knowledge about human behavior based, as they say, on their own experience, although this group of “enlightened” people can also contribute a lot to the resistance to scientific findings. I liked that you mentioned Robyn Dawes's fabulous book; I have read it a couple of times myself and find it very intriguing.

As I see it, the problem has many potential causes. One, which we have already mentioned, is the high number of so-called experts (licensed psychologists, consultants, people with personal experience, wise men and women, etc.) who spread their knowledge to people who uncritically swallow it as indisputable, and often supposedly evidence-based, truths. Another is the people buying, believing in, and acting on this knowledge, who may themselves be totally convinced that they possess the skills to evaluate what is true knowledge and what is bogus. A third is the unwillingness and perhaps inability of many scientists to spread their findings in other forums than peer-reviewed journals (which are read mainly by other scientists who understand the language) with a more easy-to-grasp and practically oriented language.

What would you say are the reasons why people are so easily convinced by, for instance, consultants' suggestions and ideas without critically examining and evaluating the evidence base?

Keith:

Well I want to pick up on your third point, that “A third is the unwillingness and perhaps inability of many scientists to spread their findings in other forums than peer reviewed journals (that are read mainly by other scientists who understand the language) with a more easy-to-grasp and practically oriented language.”  I have written a little bit about the communication difficulties that scientists cause for themselves. 

Many psychologists are very concerned about disassociating their field from the self-help books and pseudosciences that partially define it for the public. This concern has made many psychologists extremely wary about drawing firm conclusions regarding solutions to pressing human problems. This reluctance to claim special knowledge is virtually built into the training of research psychologists in most graduate schools (although, as I said before, this is not necessarily true of clinical psychologists).

Thus, conservatism regarding the communication of psychological findings is deeply ingrained in most psychological researchers. Psychologists are therefore quite reluctant to claim that they have the answers to pressing social problems. This reluctance is, of course, often well advised. The problems surrounding human behavior are complex, and it is not easy to study them. However, some unfortunate side effects result when this attitude among psychologists interacts with the media.

The peculiar logic of the media dictates that if the public is interested in a particular psychological question, the media will deliver a story whether or not there is one to tell. A scientist who tells a reporter, "I'm sorry, but that is a complex question, and the data are not yet in, so I wouldn't want to comment on it," by no means terminates the reporter's search for an answer. The investigator will simply continue until he or she finds a scientist (or, often, in the case of psychology, anyone who can be quoted as an “authority”) who is less conservative about coming to conclusions.

In an earlier edition of my book (not the current edition, I’m afraid) I discussed how this unfortunate “media logic” has worked to publicize so-called lunar effects, that is, the belief that the phases of the moon can affect human behavior, especially abnormal behavior. Some researchers analyzed the results of 37 different studies and found that there was no evidence of lunar effects on human behavior. They found instead that it would make more sense to be studying “media effects,” that is, how the media can create a belief in nonexistent phenomena. They pointed to just the media logic I mentioned—that newspapers, the web, television programs, and radio shows favor individuals who claim that a full moon influences behavior.

Journalism professor Curtis MacDougall, in his book Superstition and the Press (1983), gave an example of media logic when he quoted a reporter who routinely wrote stories about “psychic powers.” When asked whether he actually thought that there was evidence indicating that such powers existed, the reporter replied, “I don’t have to believe in it. All I need is 2 Ph.D.’s who will tell me it’s so and I have a story” (p. 558).

The media selection process that presents science to the public has the following logic: Scientists who are cautious about stating opinions are rarely quoted. Only those more willing to go out on a limb become public figures. Again, this is not always a bad thing. For example, the late astronomer and television personality Carl Sagan sometimes went a little too far in his speculations in the opinion of his more conservative colleagues, but his vast contributions to the public understanding of astronomy undoubtedly more than compensated for any minor inaccuracies that he may have conveyed.

The situation in psychology, however, is entirely different. Most media psychologists, unlike Sagan, have absolutely no standing among researchers in their field. This difference in psychology is due to a combination of factors. People want the answers to questions concerning human beings much more than they want the answers to questions about other aspects of nature. People want to know how to lose weight, whether psychological therapies actually work, whether absence does make the heart grow fonder, or how to increase the academic achievement of their children much more than they want to know the composition of the rings of Saturn or whether a black hole in space is really possible. Combine the fact that answers are sought more urgently from psychology with the reality that the answers to these complex questions are harder to come by, and psychology's problem becomes clear.

In other disciplines, the media selection process weeds out more conservative scientists and replaces them with scientists who are a little looser with their conclusions. Unfortunately, in psychology, the scientists are often weeded out altogether! The scientifically justified conservatism of psychologists when faced by media representatives creates a void because a tentative statement does not make a story. But justified or not, the void does not remain a void for long. Into it rush all the self-help gurus and psychic charlatans who, bursting forth on TV and radio talk shows, become associated with psychology in the public mind. In short, the conservatism backfires. By exercising proper scientific caution in presenting the results of research to the public, psychology helps to create an image for itself that subsequently leads to its devaluation by both the public and other scientists.

Again, what I have described here is the logic of the typical, cautious, research psychologist.  Ironically, it does not often characterize clinical psychology—the least scientific subarea within psychology.

Stefan:

I have experienced this scientist-media incompatibility myself a number of times and really know what you mean. The tragedy is that this unwillingness of the media to listen to and publish tentative and nuanced answers, and the accompanying unwillingness of scientists to make somewhat more liberal statements, leaves the public, who hardly ever come in contact with peer-reviewed journals, with a totally pseudoscientifically biased picture of human behavior. Do you have any suggestions or tips on how people who don't have the time or possibility to learn about the scientific method at a university can think a little more straight about psychological claims in the media? Basically, can you think of some kind of simple checklist or bogus-detection kit that people can use to separate valid knowledge from nonsense?

Keith:

I like a list developed by psychologist Scott Lilienfeld, who framed his points as signals that a behavioral claim falls in the domain of pseudoscience.  In his view, which is very consistent with many points in my book, pseudoscientific claims tend to be characterized by:

  1. A tendency to invoke ad hoc hypotheses as a means of immunizing claims from falsification
  2. An emphasis on confirmation rather than refutation
  3. A tendency to place the burden of proof on skeptics, not proponents, of claims
  4. Excessive reliance on anecdotal and testimonial evidence to substantiate claims
  5. Evasion of the scrutiny afforded by peer review
  6. Failure to build on existing scientific knowledge (lack of connectivity)

None of these are difficult concepts.  They do not require a college course.  Any layperson could learn to apply them. 

Stefan:

Thank you. For interested readers, I could briefly mention that Scott Lilienfeld has written a number of books on this theme, for instance "50 Great Myths of Popular Psychology: Shattering Widespread Misconceptions About Human Behavior", which he co-authored with Steven Jay Lynn, John Ruscio, and Barry Beyerstein.

Another source of information on how to deal with various claims is a brilliant brief video on YouTube with Michael Shermer, in which he presents the so-called Baloney Detection Kit. It can be found at: http://www.youtube.com/watch?v=eUB4j0n2UDU&feature=player_embedded#!

Of course, I also strongly recommend that anyone interested in these things read "How to think straight…", the main reason for this interview in the first place.

Well, Keith, I think we have reached the end of this conversation. It has been really exciting for me, and hopefully for the readers of our blog as well, and I am truly grateful that you took the time for this. I would like to finish by asking whether there are some other sources of knowledge (books, websites, videos, etc.), besides the ones already mentioned, that you could recommend for readers who are interested in thinking straight, or evidence-based, about human behavior?

Keith:

Your recommendations are spot on.  All of Lilienfeld’s work and virtually all of Shermer’s books and other media are relevant here.  There are many great books on scientific/rational thinking that are worth looking at.  Even the older ones are still relevant and are still good guides for thinking straight about psychology and science in general.  Here are some that come to mind:

Angier, N. (2007). The canon: A whirligig tour of the beautiful basics of science. New York: Mariner Books.

Ayres, I. (2007). Super crunchers: Why thinking by numbers is the new way to be smart. New York: Bantam Books.

Baron, J. (2008). Thinking and deciding (4th ed.). New York: Cambridge University Press.

Bronowski, J. (1956). Science and human values. New York: Harper & Row.

Dawkins, R. (1998). Unweaving the rainbow. Boston: Houghton Mifflin.

Gilovich, T. (1991). How we know what isn’t so: The fallibility of human reason in everyday life. New York: Free Press.

Groopman, J. (2007). How doctors think. Boston: Houghton Mifflin.

Haack, S. (2007). Defending science—within reason: Between scientism and cynicism. Buffalo, NY: Prometheus Books.

Hastie, R., & Dawes, R. M. (2001). Rational choice in an uncertain world. Thousand Oaks, CA: Sage.

Kida, T. (2006). Don't believe everything you think: The 6 basic mistakes we make in thinking. Amherst, NY: Prometheus Books.

Manjoo, F. (2008). True enough: Learning to live in a post-fact society. Hoboken, NJ: John Wiley.

Medawar, P. B. (1984). The limits of science. New York: Harper & Row.

Mlodinow, L. (2008). The drunkard's walk: How randomness rules our lives. New York: Pantheon.

Mook, D. G. (2001). Psychological research: The ideas behind the methods. New York: Norton.

Nickerson, R. S. (2004). Cognition and chance: The psychology of probabilistic reasoning. Mahwah, NJ: Erlbaum.

Park, R. L. (2008). Superstition: Belief in the age of science. Princeton, NJ: Princeton University Press.

Popper, K. R. (1963). Conjectures and refutations. New York: Harper & Row.

Raymo, C. (1999). Skeptics and true believers. Toronto: Doubleday Canada.

Sternberg, R. J., Roediger, H. L., & Halpern, D. F. (Eds.). (2006). Critical thinking in psychology. New York: Cambridge University Press.

Taleb, N. (2007). The black swan: The impact of the highly improbable. New York: Random House.

There are two great magazines that are worth subscribing to:  The Skeptic and The Skeptical Inquirer.



Topics

  • Entrepreneurship

Categories

  • evidence
  • evidence-based approach
  • keith stanovich
  • stefan söderfjäll

Regions

  • Stockholm

Contacts

Stefan Söderfjäll

Press contact. Ph.D., consultant, and one of the founders of Ledarskapscentrum. 0730-801 488
