“That’s all very well and good but does it work outside the lab?”: Part one

by endlesspsych

Ecological validity is a phrase that I’d guess nigh on anyone who has ever been an undergraduate psychology student knows, and in all likelihood has employed any time they have had a social psychology essay to complete… (unless that was just me?). The phrase has become associated with criticising the endeavours of social psychologists, generally implying that real life is more complicated than we can control for with experimental methods.

The view is, I suppose, that

There are more things in heaven and earth, Horatio,
Than are dreamt of in your philosophy.

Hamlet, Act 1 Scene 5.

or, more correctly, it has become something of a rhetorical device. It’s a shorthand, normally intellectually acceptable, way to criticise and question the results of a study. At one extreme it’s a means of hedging your bets about the reported outcomes (“well, it might be true, but in case it’s not I’ll show folk I have my doubts”). At the other extreme it’s a catch-all term to justify whatever the prejudice of the week is (“well, science might show this doesn’t work in the lab, but that’s because the energy is different; science can’t prove everything, you know”).

The current definition of ecological validity is a far cry from the original meaning of the concept as coined by Egon Brunswik. Born in Vienna in 1903 (the same year my own dear Dandy Dons formed), Brunswik studied alongside such luminaries as Paul Lazarsfeld (one of the major figures in American sociology in the 20th century) and Konrad Lorenz (Nobel prize winner, regarded by many as one of the founders of ethology, and regarded by others with suspicion for his involvement with National Socialism…), and is credited with founding the first psychological laboratory in Turkey. Brunswik used his dissertation to challenge the Berlin school of Gestalt psychology on the basis that Gestalt psychologists tended to focus solely on structure, where Brunswik believed that structure (i.e. Gestalt) and sensation were as deeply intertwined as the concepts of field and matter in the domain of physics.

Brunswik’s early research focused for the most part on perceptual constancy, but it would be later in his career, after meeting Edward Tolman and moving to the University of California, Berkeley (where he would become a professor), that Brunswik went on to define the term ecological validity: a term intimately related to decision-making research, social judgement theory and Brunswik’s own lens model of decision making (a “high-complexity functional unit”). As originally defined by Brunswik, ecological validity is a property of a cue, not of an experiment.

Brunswik introduced the term ecological validity to indicate the degree of correlation between a proximal (e.g., retinal) cue and the distal (e.g., object) variable to which it is related (see Brunswik, 1956, pp. 48-52, on the “Ecological Validity of Potential Cues and Their Utilization in Perception”). Thus, in a perceptual task, ecological validity refers to the objectively measured correlation between, say, vertical position and size of an object (larger objects tend to be higher up in the visual field) over a series of situations.

Hammond, “Ecological Validity: Then and Now”, 1998

Hammond’s essay further explores the corruption of the term ecological validity and how it was beginning to be taken (in the 1970s – I’d argue it’s far worse now) to mean “representative design”. That is to say, instead of being specifically about a cue within a study, ecological validity has come to refer to how generalisable the results of the study are. Renewed interest in “Brunswikian” psychology may be turning the tide slightly and reclaiming the word for its intended meaning, which is something that, as a decision-making researcher, I heartily endorse – if only to improve the quality of undergraduate social psychology essays!

I intend to use the rest of this post to explore some of the reasons why people consider psychological studies to be non-ecologically valid (in the corrupted sense: that they don’t have representative designs or can’t be used to make generalisations) and why this might lead them to conclude that psychology isn’t a science. All this without having to read Karl Popper as well (in-joke for any of the Edinburgh Sceptics reading).

One of the main issues is the concept of operationalisation. Whenever you design a study that explores anything involving humans, you will undoubtedly at some point come across the issue of operationism versus essentialism. For example:

“Do physicists really know what gravity is? I mean really. What is the real meaning of the term gravity? What is the underlying essence of it? What does it ultimately mean even to speak of it? When you get down to rock bottom, what is it all about?”

This is a textbook example – almost literally, as I’ve taken it from Stanovich’s (in my opinion) excellent “How to Think Straight About Psychology” – of what Karl Popper described as essentialism. Essentialism is the quest to find ultimate explanations in the essence of a concept, whereas operationism states that the concept must be measurable – in psychology this often equates with being linked to behaviour. Thus essentialists wish to define everything first, whereas operationists proceed with research, and this research supplies the meaning of the concept. A key part of psychological, and indeed most scientific, research is therefore to construct operational definitions in order to proceed.

An operational definition tells you what is being researched and as such sidesteps the problem of essentialism. Operational definitions do cause problems, though, when the definition uses a word (say, love, or any other emotion you care to mention) that is in common usage and the operational definition differs from that common usage. This is probably where psychology comes in for most flak: people are often wont to claim that certain things just aren’t quantifiable, and in some cases I’d be inclined to agree, although I think a decent proxy could be found for just about anything you wish to measure.

The issue seems most commonly to arise when people misinterpret, or neglect to read, the psychologist’s operational definition and somehow find it lacking as regards their understanding of the concept. Perhaps believing that the psychologist is using an essentialist definition, they attack the research and dismiss it as nonsense (“how can you measure something you can’t understand?”). Many times I have seen people launch into criticising a work of psychology – its methods, results and findings – and rubbishing it unfairly when in reality all they disagree with are the definitions used.

Another common criticism of psychology is the contention that it is trying to control for far too many variables and as such can only ever give a very limited view of human experience. To those who believe this to be the case, I give you randomisation! Introduced to experimental design by the statistician R. A. Fisher in 1926, randomisation allows researchers to take into account any number of variables in their research, including those they are not aware of, since random assignment balances them across groups (at least on average, and more reliably the larger the sample). If you want to test this for yourself, please do try out the following:

Choose a page from a telephone directory and assign all the entries on it into two groups by means of a coin toss or similar. Now work out, separately for each group, the proportion of names containing the letter ‘r’, the proportion of telephone numbers containing the digit ‘7’ (you may wish to exclude area codes), the proportion of addresses in a specific part of the city, or any other objective criterion that comes to mind. What you will find is that the two groups are roughly similar in terms of proportions, and the larger the groups are, the closer the proportions will be.

(Adapted from “What is Psychology?” by Andrew Colman)
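The directory exercise is easy to simulate. A rough sketch, using randomly generated stand-in entries rather than a real directory page (the 40% name frequency and six-digit numbers are invented for illustration):

```python
import random

random.seed(1)

# A stand-in for one directory page: each "entry" records whether its
# name contains an 'r' and whether its number contains a '7'.
entries = [
    {"has_r": random.random() < 0.4,
     "has_7": "7" in str(random.randint(100000, 999999))}
    for _ in range(1000)
]

# Assign every entry to group A or B by a simulated coin toss.
groups = {"A": [], "B": []}
for entry in entries:
    groups[random.choice("AB")].append(entry)

def proportion(group, key):
    """Fraction of entries in the group for which `key` is true."""
    return sum(e[key] for e in group) / len(group)

# The randomly split groups end up with very similar proportions,
# even though no balancing on these criteria was attempted.
for key in ("has_r", "has_7"):
    print(f"{key}: A={proportion(groups['A'], key):.2f} "
          f"B={proportion(groups['B'], key):.2f}")
```

Run it a few times with different seeds, or shrink the page to 50 entries, and you can watch the group-to-group gap widen as the sample shrinks, just as the demonstration predicts.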

I think the above demonstration provides a good illustration of why those who cry “there are too many variables to measure” are really crying wolf.

There are other critiques of psychology as a science of course, but I feel they should be saved for another day.