To be honest, I'd forgotten I'd read the previous books when I bought this one, but referring back to the earlier reviews, I'm getting the same feeling all over again. I noted that Born to be Good was 'strung together rather haphazardly' and that The Power Paradox felt like many business books - a good magazine article strung out to make a tissue-thin book. It's deja vu all over again.
Keltner divides the book into four sections. Only the first is directly about 'a science of awe' (though the scientific references continue throughout). From 69 pages in we get onto 'stories of transformative awe', because this is far more about the experience than the science. Then we move on to 'cultural archives of awe', and finally the life lessons bit: 'living a life of awe'. It's absolutely fine that Keltner personalises the process by writing a lot about his family, but it does feel much of the time that the content is observational without any significant depth beneath it.
The basic concept of the importance of awe, combined with some difficulty in describing just what it is, is interesting and arguably important for us as human beings. I do feel that most of us don't experience enough awe in our lives, potentially making our lives feel relatively pointless. We need awe. But the way that Keltner delivers this wisdom sometimes feels more like we're in a Bill and Ted movie, without the humour or the storyline. It's all 'Whoa!' and 'Feel this, man!' This is a sheep in wolf's clothing: a spiritual self-help book dressed up as popular psychology.
The other problem I have with this book is that I can't take seriously any post-replication crisis psychology book that does not mention the crisis at all and does not explore the quality of the studies it references. Keltner has 250 references at the back, but in the text all we ever get is apparent fact such as 'a study showed this' before moving snappily on to the next observation. It's not just that there is no depth - it's all surface - but we are never told anything about the quality of the studies. Was there p-hacking? Did they use small samples? Were the effects statistically significant but with a minimal effect size? Did they rely on the low standard of considering a result significant if there is a 1 in 20 chance of seeing the effect were the null hypothesis true? Have they been successfully replicated? Nothing. Nada. Whoa!
Review by Brian Clegg - See all Brian's online articles or subscribe to a weekly email free here