not2sharp
Platinum Member
- Joined
- Jun 29, 1999
- Messages
- 20,499
How do you feel?
There are many good knife reviews out there, and I usually like to read every review I can get a copy of, yet most of them tend to leave me somewhat unsatisfied.
Almost all of our reviews are weak in two respects:
1) The reviews are not scientific. We seldom, if ever, see someone actually test a knife, even in the most basic sense, and I have never seen a knife test involving a statistically valid sample size.
2) Most reviews fail to provide a sufficiently clear perspective. Yes, the knife performed well, or not so well, but how did it compare with other similar knives available in the marketplace? Should we buy this knife over another?
Understandably, cost and time constraints make it virtually impossible for even the best-intentioned and most knowledgeable of us to overcome these two limitations. However, we as a community of knife enthusiasts do have the ability and resources to collect, assemble, and correlate the required information. We can overcome these limitations, conduct scientifically meaningful testing, collect the data, and draw useful conclusions.
For starters:
We can start by testing for something simple. Perhaps we can look at edge retention (i.e. examine blade design, steel type performance, and quality of heat treatment).
We can start with a common and basic reference knife, perhaps a Buck 110, and use the results to establish a benchmark for performance criteria.
Once our data begins to build we can develop additional reference tests and continue to run new knives through our testing protocols.
The test:
1) 100 members volunteer to conduct independent testing of a specific knife make and model.
2) The testing is simple, cheap, and performed under controlled conditions:
a) Starting with an unsharpened factory edge on a Buck Model 110 plain-edge folder.
b) Using an agreed-upon type of rope (we settle on a specific Home Depot SKU) and a standard cutting surface, such as a pine board,
c) Count how many times you can cut through the rope, without sharpening the blade, and using only hand pressure.
d) Each participant sends their results via e-mail to a moderator who volunteers to consolidate them. To reduce bias, participants do not share test results with each other.
e) The highest and lowest 10% of the results are eliminated to reduce distortion.
f) The results are published and we again ask for volunteers to test another model.
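To make step e) concrete, here is a minimal sketch of how a moderator might consolidate the reports. The function, the 10% trim fraction, and the sample numbers are all illustrative assumptions, not actual test data:

```python
def trimmed_mean(cut_counts, trim_fraction=0.10):
    """Average the reported cut counts after dropping the highest
    and lowest trim_fraction of results, per step e) of the protocol."""
    ordered = sorted(cut_counts)
    k = int(len(ordered) * trim_fraction)  # how many to drop from each end
    kept = ordered[k:len(ordered) - k] if k else ordered
    return sum(kept) / len(kept)

# Hypothetical example: ten volunteers report how many rope cuts they
# completed before the edge gave out; 95 looks like an outlier.
reports = [38, 41, 45, 47, 48, 50, 52, 55, 60, 95]
consolidated = trimmed_mean(reports)  # drops 38 and 95, averages the rest
```

Trimming both ends this way is essentially a standard trimmed mean, which keeps one over-enthusiastic (or over-cautious) tester from skewing the published benchmark.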
If we do this for some of our popular knives we can at least develop criteria for an informed discussion.
Who wants in?