Historically, intelligence has been defined as the ability to adapt to the environment. Intelligent people can learn, reason, solve problems and make decisions that fit their specific circumstances.
Alfred Binet, co-creator of the first modern intelligence test (1905), clearly understood this. He believed intelligence is modifiable, and he wanted to identify children who did not respond to regular schooling so that special instruction could help them become smarter and gain opportunities regardless of social class.
Binet died in 1911 without developing the idea fully. The law of unintended consequences then took over. The early tests measured memory skills and a narrow range of analytical skills - vocabulary recall, information processing, numerical operations, spatial visualisation and so on. UK psychologist Charles Spearman noted that people who scored highly on one of these tended to do well on all of them. He interpreted this common factor as a measure of intelligence - the founding principle of IQ tests. The problem was that Binet had used academic kinds of problem to predict academic performance in typical schooling.
Because of this, there were few serious attempts to measure other, broader abilities such as thinking creatively or solving practical problems. New tests were validated against old ones, perpetuating this thinking. IQ tests, school assessments and examinations all drew on the same narrow range of recall and analysis, which shaped the opportunities and career paths open to people. Binet had seen tests as tools to help people realise their full potential, but they developed into ways of restricting opportunities.
Parents who were able to help children with schooling, socialisation and other experiences that allowed them to do well in the tests gained a self-perpetuating advantage. Their children did well, and passed on the same advantages to their own children. Meanwhile, largely white, well-off individuals with a certain academic background held narrow views on what constituted intelligence.
Sternberg and colleague Lynn Okagaki's research showed that different socially defined racial, ethnic and socio-economic groups in the US prioritise different skills when socialising young people to be intelligent. European-American and Asian-American parents typically focused on cognitive skills, while Latino-American parents emphasised social skills. Teachers - predominantly European-American and Asian-American themselves - then estimated the abilities of children socialised towards cognitive skills to be higher.
US university admission tests favour the skills of white and Asian students and downplay those of black and Hispanic students. The dominant tests don't even measure aspects of analytical reasoning (needed in science, technology, engineering and mathematics) particularly well.
Real-world problems differ from those used in standard tests, which don't work well for the complex, novel, high-stakes (and often emotionally charged) problems we now face - for example, how to balance individual liberty and public health during the Covid pandemic.
Adaptive intelligence uses four skills to adapt to, shape and select environments: creative skills to come up with new ideas; broad-based analytical skills to assess which ideas will work; practical skills to implement ideas and convince others that they work; and wisdom-based skills to ensure our ideas help to achieve a common good by balancing our own interests with those of others.
Adding creative, practical and wisdom-based skill tests to university admission tests increased the accuracy of predictions of both academic and extracurricular success. It also decreased differences between socially defined racial and ethnic groups. We need to change our views on what it means to be intelligent.
Source: Rethinking Intelligence by Robert J. Sternberg in New Scientist, 16 Jan 2021
[Adaptive Intelligence: Surviving and thriving in times of uncertainty by R.J. Sternberg, to be published in Feb. 2021]