Ken, you must be a research scientist! Nice analysis and explanation.
It would be interesting to see how these knives would group with more testers, but I realize this was a ton of work to do. We always want more data, right?
Yes, I am a research scientist, and thanks.
It's less about what my intuition tells me and more about what my notes and the hours of video I took during the tests tell me. I go back to the video on certain knives where I see a big difference in results and can't figure it out. Like one of the testers gave the Koyote a low score on whittling, when I used that knife last and it cuts like a freaking champ. I think the tester was trying to whittle with the spine.
Well, that is something a stats test can't fix no matter what, Tony. It is also why the scores can't in themselves be taken too seriously. As mentioned above, we could always use more data from more reviewers. Also, even though most tests are scored out of 5, it should be understood that this is still qualitative data, because the score given for a particular test typically reflects a subjective call. When Brian Andrews did his review, he did have a few direct empirical measures, like when he averaged the bevel width measured at five spots on each knife with a micrometer, or when he determined the amount of pressure required to push-cut a standard-sized thread using a rigged-up postal scale. Now Brian, bless his heart, is an engineer, and those guys are total FREAKS about quantitative data.

On the other hand, your points raised above are completely valid. Marcelo, B. Mike and yourself not only provided a set of scores for us to work with, but you also provided a rich narrative of the testing experience in your individual descriptions. Those narratives are at least as valid as, and surely far more valid than, the scores themselves. I think of the scores a bit like the meta-reviews on Rotten Tomatoes, the movie review site that compiles all the reviews of a given movie. It takes each reviewer's score, usually something like the 1-popcorn-out-of-5 rating a reviewer would give, and compiles those scores across all the reviewers. In the end you get the Tomatometer, which tells you whether most reviewers liked the movie or didn't. That provides some useful information in itself, but if I'm really interested in seeing a movie, I will then pick a few favorite reviewers from their list and go read their reviews directly. Which gets to the next point of why we need the narrative as well.
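For anyone curious what that kind of meta-scoring looks like in practice, here is a minimal sketch in Python. The knife names, reviewer names, and scores are made up purely for illustration (they are not the contest's actual data); it just averages each reviewer's 1-5 scores per knife and then averages across reviewers, the way a Tomatometer-style aggregate works.

```python
# Minimal sketch of Tomatometer-style score aggregation.
# All names and numbers below are hypothetical, for illustration only.

from statistics import mean

# Hypothetical scores: {knife: {reviewer: [scores from each test, 1-5]}}
scores = {
    "Knife A": {"Reviewer 1": [4, 5, 4], "Reviewer 2": [3, 4, 4]},
    "Knife B": {"Reviewer 1": [2, 3, 3], "Reviewer 2": [4, 3, 2]},
}

def meta_score(per_reviewer):
    """Average each reviewer's scores first, then average across reviewers,
    so no single reviewer's number of tests outweighs the others."""
    return mean(mean(s) for s in per_reviewer.values())

for knife, per_reviewer in scores.items():
    print(f"{knife}: {meta_score(per_reviewer):.2f} / 5")
```

Like the Tomatometer, a number like that tells you roughly how well a knife did across reviewers, but it says nothing about why, which is exactly where the individual write-ups come in.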
If the contest results had simply been the numbers and ranks of the knives posted up without that narration and the awesome pictures, this thread would not have received the 9,000+ views and the massive amount of interest it did. Because in the end, we have traveled this journey with the three of you. Even the long time it took to get it completed forced us to share in your struggles to get this immense job done. It was a hell of a ride, more so for the three of you, but we all got to live vicariously through it via this thread. So the scores are one thing. The community of makers, the contest submissions, the volunteer time, the eye candy, and the pictures of Rick's head on Tony's body hugging his family were what made this exercise so successful. So again, kudos to you guys!