AI VS HUMANS VS CHIMPANZEES – A FACT-FINDING MISSION
- Mikael Svanstrom
- Mar 3
- 2 min read

Gapminder (https://www.gapminder.org/) is an independent educational non-profit fighting global misconceptions. They’ve tested people on their knowledge of a variety of areas where misconceptions run rife, such as global warming, international conflicts, educational quality and extreme poverty.
Taking their test is a sobering experience and I recommend everyone try it out.
They decided to use the same test on the major generative AI models and the results were interesting, if maybe not surprising. They first ran it in April 2024 and found that the AI models were correct 69% of the time. That sounded pretty bloody bad to me, so I was happy to see that by February 2025 it was all the way up to 79.7%. That still meant they were wrong one out of five times, but at least it was moving in the right direction.
So how do we, humankind, fare in comparison? On the same question set we get on average 23% correct. This sounds bad. Especially since there are only three possible answers to each question. A chimpanzee, if we assume it would just pick answers at random, would score 33%!
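For the sceptical, here is a minimal sketch (my own, not from Gapminder) that simulates a "chimpanzee" guessing at random on three-option questions; the score lands at roughly one in three, just as the arithmetic says:

```python
import random

# Minimal sketch: random guessing on three-option questions.
# Expected score is 1/3, i.e. about 33%.
QUESTIONS = 10_000
OPTIONS = 3

correct = sum(random.randrange(OPTIONS) == 0 for _ in range(QUESTIONS))
print(f"Random guessing: {correct / QUESTIONS:.1%} correct")  # ~33.3%
```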
Check it out here: https://www.gapminder.org/ai/worldview_benchmark/
It isn’t strange that an AI model trained on pretty much all the data the AI companies can get their grubby little hands on outperforms us average people, who think we live in a post-fact world. But isn’t it frightening that we do worse than just picking answers at random?
I’m sure some will argue that the answers to the questions are wrong, that they can find something on the internet that disproves them, but I feel that just proves the point. We’ve reached a point where validating your own opinion matters more than checking it against facts. I bet we’ll score even worse next time.
I’m on team chimpanzee.