IQ Scores Not An Accurate Indicator Of Intelligence, Study Shows

An intelligence quotient, or IQ, is a measure of relative intelligence determined by standardized tests; a score is derived from one of several such tests. IQ scores are used as predictors of educational success, special needs, and performance. The majority of people have an IQ between 85 and 115: the higher the number, the higher a person's assumed intelligence. On this scale, scores below 85 are below average (the lowest range is classified as deficient), scores above 115 are above average, and scores at or above roughly 140 are commonly considered genius.
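
For a sense of why the "majority" lands in that band: most modern IQ tests are standardized so that scores follow a bell curve with an average of 100 and a standard deviation of 15 (some tests use 16), which puts roughly 68 percent of people between 85 and 115. A minimal check of that figure, assuming the 15-point standardization:

from scipy.stats import norm  # assumes scores are normally distributed: mean 100, SD 15

MEAN, SD = 100, 15
share_85_115 = norm.cdf(115, MEAN, SD) - norm.cdf(85, MEAN, SD)
print(f"Share of people scoring between 85 and 115: {share_85_115:.1%}")  # ~68.3%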

To put it into perspective: Dolph Lundgren. Yes, the actor who played the 6’5″ Russian boxer in Rocky IV. In real life he holds a master’s degree in chemical engineering and was awarded a Fulbright Scholarship to MIT. He purportedly has a genius-level IQ of around 160, the same as high school dropout turned director Quentin Tarantino. Actress Sharon Stone began attending college at 15 and later graduated with a degree in creative writing and fine arts. They all share a similar score with renowned physicist Stephen Hawking. They are notably surpassed by James Woods, who attended MIT but dropped out to pursue acting; he claims an IQ of about 180, like chess master Bobby Fischer.

There are even organizations tailored to people with high IQs, such as Mensa. Membership is open to anyone who has attained a score within the upper two percent of the general population on an approved intelligence test that has been properly administered and supervised. Mensa provides a forum for intellectual exchange among its members, who live in more than 100 countries around the world.
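
Under the same standardization assumption (mean 100, standard deviation 15), Mensa's "upper two percent" criterion works out to a score of about 131; tests scaled with a standard deviation of 16 put the cutoff closer to 133. A quick sketch of that arithmetic:

from scipy.stats import norm  # assumes normally distributed scores centered at 100

for sd in (15, 16):  # the two standard deviations commonly used by IQ tests
    cutoff = norm.ppf(0.98, loc=100, scale=sd)  # score at the 98th percentile
    print(f"SD {sd}: top-two-percent cutoff is about {cutoff:.0f}")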

Modern mental testing originated in France, when psychologist Alfred Binet was commissioned to help determine which students were likely to struggle with the educational curriculum. The government had passed laws requiring all children to attend school, so it was important to identify which children would need additional assistance. Binet, with his colleague Theodore Simon, began developing a series of questions focusing on memory and problem-solving, in hopes of finding the best predictors of academic potential. Some children were more adept than others at answering more advanced questions. Based on this observation, Binet suggested the concept of a mental age: a measure of intelligence based on the average abilities of children of a certain age group. This first intelligence test, the Binet-Simon Scale, became the basis for the intelligence tests still in use today.
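
Binet's mental-age idea was later condensed into a single number, the familiar "ratio IQ" usually credited to William Stern and popularized by Lewis Terman: mental age divided by chronological age, multiplied by 100. A toy illustration of that later formula (not Binet's own scoring method):

def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """The later 'ratio IQ' built on Binet's mental-age concept."""
    return 100.0 * mental_age / chronological_age

print(ratio_iq(12, 10))  # a 10-year-old performing like a typical 12-year-old scores 120.0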

However, Binet stressed the limitations of the test. He suggested that intelligence is far too broad a concept to quantify with a single number. Instead, he insisted that intelligence is subjective, stimulated by a number of factors, and ever changing.

In a study published in Neuron, researchers determined that the intelligence quotient may not accurately reflect how smart someone is. More than 100,000 participants joined the study and completed 12 online cognitive tests. No single exam, or element of one, accurately gauged how well a person could perform on mental and cognitive tasks. Instead, the researchers concluded that there are at least three distinct components that make up intelligence, or a cognitive profile: short-term memory, reasoning, and verbal ability, as reported by CBS News. Functional magnetic resonance imaging (fMRI) was also used in the study, suggesting that different cognitive abilities are associated with different areas of the brain.
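
The paper's own analysis is far more involved, but the core idea, that scores on a battery of tests are better summarized by a few separate components than by one number, can be sketched with an off-the-shelf decomposition on simulated data. Nothing below comes from the study itself; the three "abilities" and all the numbers are made up for illustration:

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_people, n_tests = 1000, 12

# Simulate three latent abilities (say, memory, reasoning, verbal) and let
# each of the 12 tests draw on them to different degrees, plus noise.
abilities = rng.normal(size=(n_people, 3))
loadings = rng.uniform(0.2, 1.0, size=(3, n_tests))
scores = abilities @ loadings + rng.normal(scale=0.5, size=(n_people, n_tests))

# Three components capture most of the variation in the simulated scores,
# mimicking a cognitive profile rather than a single general factor.
pca = PCA(n_components=3).fit(scores)
print("Variance explained by three components:",
      pca.explained_variance_ratio_.sum().round(2))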

Interestingly, people who played video games did better on the reasoning and short-term memory portions of the tests. Aging was associated with a decline in memory and reasoning abilities. Those who smoked did worse on the short-term memory and verbal portions, while those with anxiety performed poorly on the short-term memory modules.

Dr. Adrian Owen, the study’s senior investigator and the Canada Excellence Research Chair in Cognitive Neuroscience and Imaging at Western University’s Brain and Mind Institute, told the Toronto Star:

“People who ‘brain-train’ are no better at any of these three aspects of intelligence than people who don’t. When we looked at the data, the bottom line is the whole concept of IQ, or of you having a higher IQ than me, is a myth. There is no such thing as a single measure of IQ or a measure of general intelligence. We have shown categorically that you cannot sum up the difference between people in terms of one number, and that is really what is important here. Now we need to go forward and work out how we can assess the differences between people, and that will be something for future studies.”

Comments

5 Responses to “IQ Scores Not An Accurate Indicator Of Intelligence, Study Shows”

  1. Sam Sewell

    Most scientific papers are probably wrong.
    Kurt Kleiner, NewScientist.com news service.
    Most published scientific research papers are wrong, according to a new analysis. Assuming that the new paper is itself correct, problems with experimental and statistical methods mean that there is less than a 50% chance that the results of any randomly chosen scientific paper are true.
    John Ioannidis, an epidemiologist at the University of Ioannina School of Medicine in Greece, says that small sample sizes, poor study design, researcher bias, and selective reporting and other problems combine to make most research findings false. But even large, well-designed studies are not always right, meaning that scientists and the public have to be wary of reported findings.
    "We should accept that most research findings will be refuted. Some will be replicated and validated. The replication process is more important than the first discovery," Ioannidis says.
    In the paper, Ioannidis does not show that any particular findings are false. Instead, he shows statistically how the many obstacles to getting research findings right combine to make most published research wrong.
    Massaged conclusions
    Traditionally a study is said to be "statistically significant" if the odds are only 1 in 20 that the result could be pure chance. But in a complicated field where there are many potential hypotheses to sift through – such as whether a particular gene influences a particular disease – it is easy to reach false conclusions using this standard. If you test 20 false hypotheses, one of them is likely to show up as true, on average.
    Odds get even worse for studies that are too small, studies that find small effects (for example, a drug that works for only 10% of patients), or studies where the protocol and endpoints are poorly defined, allowing researchers to massage their conclusions after the fact.
    Surprisingly, Ioannidis says another predictor of false findings is if a field is "hot", with many teams feeling pressure to beat the others to statistically significant findings.
    But Solomon Snyder, senior editor at the Proceedings of the National Academy of Sciences, and a neuroscientist at Johns Hopkins Medical School in Baltimore, US, says most working scientists understand the limitations of published research.
    "When I read the literature, I'm not reading it to find proof like a textbook. I'm reading to get ideas. So even if something is wrong with the paper, if they have the kernel of a novel idea, that's something to think about," he says.
    Journal reference: Public Library of Science Medicine (DOI: 10.1371/journal.pmed.0020124).
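
For reference, the "1 in 20" arithmetic in the quoted piece works out as follows: if 20 hypotheses that are all actually false are each tested at the conventional 5 percent significance level, about one spurious "significant" result is expected, and (assuming the tests are independent) the chance of at least one is close to two in three. A minimal check:

alpha, n_tests = 0.05, 20
expected_false_positives = alpha * n_tests     # about 1 false positive on average
p_at_least_one = 1 - (1 - alpha) ** n_tests    # roughly 0.64, assuming independent tests
print(expected_false_positives, round(p_at_least_one, 2))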