Camp Knife Challenge Results!!!!!

Just saw the review and I'm very pleased with the results. I think you did an outstanding job in the judging of the knife. Thanks for all your effort.

The knife really did perform as I expected. I place a lot of importance on chopping and batoning, and figured other tasks could be accomplished with the same knife, and that is the way you judged it.

thanks
 
Thanks for all the good work you did, Brian. :thumbup:



I botched my knife with some late-night last-minute sharpening, I guess. :foot:

I slanted mine toward toothy-sharp rather than push-cut sharp, and push cutting ended up being the majority of your testing, rather than game/rope/cloth/etc.

Other than that, my knife seems to have done well. :thumbup:





However, I am kinda disappointed that we overwhelmed you with so many knives that you were not able to give each one a more thorough "all-around" testing.



A few more things you might add to the list next time around (beyond what you already hinted at):

1 - scores for fit/finish
2 - balance (this would be great with pics; it helps give a more "real world" understanding of the knife)
3 - ease of carry "out in the sticks"
4 - ergonomics in the various grips (have several people hold each knife and then average the scores)



Since this was intended as more of a "feedback for the makers" contest...it would be good to see these things compared/rated.




Perhaps next time this is done you could limit the number of knives to around 6 or so?

I wouldn't mind "missing out" if it meant a more thorough testing on the next round.



Wish I could send you funds to make up for your lost time...I know what it's like to spend an entire weekend doing nothing but testing other people's stuff and not getting paid for it!



And I salute you for being brave enough to pick an 'overall winner'.

;)


Dan
 
I think, in all honesty, that none of the knives came out looking bad. I know I'd be proud to have any of the entered knives in my hand the next time I'm sitting beside the campfire!!!
 
Thank you for your outstanding effort, Brian :thumbup::) And thanks to all the great makers :thumbup:
 
I think Dan has some good points regarding additional categories for the future, but I sure do hate to add more work for the future tester(s). It sure seems like Brian had his hands full with the testing he did.

I do suggest this...and I'll be doing it for myself with the test results from this comparison...that the results be thrown into an Excel spreadsheet with the option to give a weight to each category. So, if sharpness is very important to you, it gets a weight of 1.2 (sharpness score x 1.2 = modified score), and if chopping is less important, it might get a weight of 0.7 (chopping score x 0.7 = modified score). It's a good way for each of us to take the numbers and give them some context in relation to our personal needs.
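For anyone who would rather script it than build the spreadsheet by hand, here is a rough Python sketch of the same weighting idea. The category names, scores, and weights below are hypothetical placeholders, not the actual test results:

```python
# Weighted scoring sketch: multiply each raw category score by a personal
# weight, then sum the results so the total reflects what matters to you.
# All names and numbers here are made up for illustration.

weights = {"sharpness": 1.2, "chopping": 0.7, "batoning": 1.0}

knives = {
    "Knife A": {"sharpness": 8, "chopping": 9, "batoning": 7},
    "Knife B": {"sharpness": 9, "chopping": 7, "batoning": 8},
}

for name, scores in knives.items():
    total = sum(scores[category] * weights[category] for category in weights)
    print(f"{name}: weighted total = {total:.1f}")
```

Swap in the real categories and scores from Brian's write-up and adjust the weights to match your own priorities; the spreadsheet version is exactly the same arithmetic.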
 
Thanks, Brian, for the totally awesome write-up and pictures. That was a huge undertaking in itself, let alone the actual testing. A big congratulations to all the featured knifemakers as well. I wish I could afford to buy one of each of the knives tested, but alas, reality sets in and I guess I'll have to settle for the calendar if it ever comes to fruition ... :(
 
I'm with you on the calendar...although I already have orders in with three of these guys. My wife is going to kill me.
 
That was a lot of work there. Thanks for spending all that time and energy for us. I know how much work it is to test out knives, and try to take useful pics at the same time.

Great review. There are a couple of makers in your group that are at the top of my list.
 
I think Dan has some good points regarding additional categories for the future, but I sure do hate to add more work for the future tester(s). It sure seems like Brian had his hands full with the testing he did.

I do suggest this...and I'll be doing it for myself with the test results from this comparison...that the results be thrown into an Excel spreadsheet with the option to give a weight to each category. So, if sharpness is very important to you, it gets a weight of 1.2 (sharpness score x 1.2 = modified score), and if chopping is less important, it might get a weight of 0.7 (chopping score x 0.7 = modified score). It's a good way for each of us to take the numbers and give them some context in relation to our personal needs.

I think you could easily get the testing to the point where it is so difficult, so time consuming, so exacting, and so inclusive that one would be lucky to fully test ONE knife to the standards of the group. And like it or not, so many things are subjective that you would never be able to get sane people to agree on all the parameters.

In support of this, look at the threads on "how sharp is sharp" and "what is sharp", etc. There are many different kinds of sharp for many different groups, and the guy that defines sharp as "as sharp as he can get" will not be happy with a 16" Battle Monster 500 XL that won't shave hair from the factory.

Some will demand a convex edge on any knife; some will want flat ground, and others will want a microbevel. So depending on definitions, how would you define that? If it is rope or canvas cutting, will you make a piece of testing equipment that will produce exactly the same test parameters for each blade? How long will it take to build your test setup?

Would you go as far as to say "this was from the factory" and leave it at that, or would you (as most of us do) reprofile the edge before testing? In a shootout situation like this, wouldn't it be a better test to ding the maker that didn't get the knife to you scary sharp, then resharpen all the knives to the exact same standard before beginning a new round of tests?

I have a couple of knives that were very poor performers until I reprofiled and sharpened them. Now they (especially my Kershaw Scallion S110V) are favorites. How could you factor that in on a quantitative test?

The same goes for blade finishes. Handle materials. Ease of deployment. Think of field use knives: camping only, hiking only, survival camping only, camping and hunting/game cleaning use, etc. What good is a survival knife if it can't clean game? The sheer volume of variables would ruin the test.

I think Brian hit the sweet spot. I don't want to go through 25 pages of data generated in lab-type conditions, covering every minute detail, just to determine the usefulness of one knife. Too many caveats, too many qualifiers, too many disclaimers, and the reports are useless to real users.

No knife can be adequately tested and approved until you have it in your own hands. I think running the knives through a handful of tests that could apply to anyone that uses them (although Brian did leave out the cardboard cutters and the letter openers) is perfect. We were able to get a great idea of what these knives can do, how they are built, and some idea of what their makers are about.

But remember, too: each of these was ONE knife from a maker who submitted ONE model for testing. How would that factor into the tests?

Sometimes less is more. An overview is a good thing.

Since I am tasked with writing extremely detailed reports on occasion, I cannot imagine how long it would take to try to translate a combination of subjective and empirical data into a spreadsheet, much less the accompanying narrative and the illustrated/notated pictures that would come with it. Worse still would be developing a rating system for each knife, along with the comparison tables to back up the findings.

I think the pool of volunteers would diminish or disappear once test parameters designed to satisfy everyone were put in place. There would be no tests!

Robert
 
Brian, wow, great write up. Congratulations to all involved!

Yeah, if there is ever another one, you should be in it, Landi!

I'd also like to see Scott Gossman, Matthew Lesniewski, G L Drew, Dave Farmer, and a few others that slip my mind right now! ;):thumbup:
 
I just wanted to note that I put these photos on one of my Photobucket accounts that currently has 0 bandwidth used!

If it happens to hit its limit soon, I will pay to upgrade so people can keep viewing this.

B

I can't view it anyway, because it's on Photobucket. :(

But the written part made up for not being able to see any of the knives or testing. :thumbup:
 
I think you could easily get the testing to the point where it is so difficult, so time consuming, so exacting, and so inclusive that one would be lucky to fully test ONE knife to the standards of the group. And like it or not, so many things are subjective that you would never be able to get sane people to agree on all the parameters.

In support of this, look at the threads on "how sharp is sharp" and "what is sharp", etc. There are many different kinds of sharp for many different groups, and the guy that defines sharp as "as sharp as he can get" will not be happy with a 16" Battle Monster 500 XL that won't shave hair from the factory.

Some will demand a convex edge on any knife; some will want flat ground, and others will want a microbevel. So depending on definitions, how would you define that? If it is rope or canvas cutting, will you make a piece of testing equipment that will produce exactly the same test parameters for each blade? How long will it take to build your test setup?

Would you go as far as to say "this was from the factory" and leave it at that, or would you (as most of us do) reprofile the edge before testing? In a shootout situation like this, wouldn't it be a better test to ding the maker that didn't get the knife to you scary sharp, then resharpen all the knives to the exact same standard before beginning a new round of tests?

I have a couple of knives that were very poor performers until I reprofiled and sharpened them. Now they (especially my Kershaw Scallion S110V) are favorites. How could you factor that in on a quantitative test?

The same goes for blade finishes. Handle materials. Ease of deployment. Think of field use knives: camping only, hiking only, survival camping only, camping and hunting/game cleaning use, etc. What good is a survival knife if it can't clean game? The sheer volume of variables would ruin the test.

I think Brian hit the sweet spot. I don't want to go through 25 pages of data generated in lab-type conditions, covering every minute detail, just to determine the usefulness of one knife. Too many caveats, too many qualifiers, too many disclaimers, and the reports are useless to real users.

No knife can be adequately tested and approved until you have it in your own hands. I think running the knives through a handful of tests that could apply to anyone that uses them (although Brian did leave out the cardboard cutters and the letter openers) is perfect. We were able to get a great idea of what these knives can do, how they are built, and some idea of what their makers are about.

But remember, too: each of these was ONE knife from a maker who submitted ONE model for testing. How would that factor into the tests?

Sometimes less is more. An overview is a good thing.

Since I am tasked with writing extremely detailed reports on occasion, I cannot imagine how long it would take to try to translate a combination of subjective and empirical data into a spreadsheet, much less the accompanying narrative and the illustrated/notated pictures that would come with it. Worse still would be developing a rating system for each knife, along with the comparison tables to back up the findings.

I think the pool of volunteers would diminish or disappear once test parameters designed to satisfy everyone were put in place. There would be no tests!

Robert

In general I agree with what you're saying. Although more categories would be "nice", it would be asking a lot for someone to dedicate that much time and effort. Brian did an excellent job.

The comparison table I was talking about would take about ten minutes max to set up (prior to data entry)...nothing complicated.
 
Don't wanna be a jerk here, but man... I appreciate all the comments of how this could have been better, but I really hope Brian doesn't end up feeling like he didn't do a good enough job. I can't even fathom how much work Brian put into this. I think this was just about perfect in my eyes.

I said it once, and will say it again. Well done Brian. Well done indeed.
 
Don't wanna be a jerk here, but man... I appreciate all the comments of how this could have been better, but I really hope Brian doesn't end up feeling like he didn't do a good enough job. I can't even fathom how much work Brian put into this. I think this was just about perfect in my eyes.

I said it once, and will say it again. Well done Brian. Well done indeed.

Well said, buddy; he far exceeded what I was expecting!!!! ;):thumbup:
 
Don't wanna be a jerk here, but man... I appreciate all the comments of how this could have been better, but I really hope Brian doesn't end up feeling like he didn't do a good enough job. I can't even fathom how much work Brian put into this. I think this was just about perfect in my eyes.

I said it once, and will say it again. Well done Brian. Well done indeed.

Haven't had time to read all the posts here, but from what I have read, it seems like everyone really appreciates the job that Brian did. If someone, particularly one of the makers, would have liked to see things done a little differently, I don't have an issue with that. It's good to have a conversation about it so any bugs can get sorted out for the next time. The makers now have a better understanding of what Brian went through, and Brian now has a better understanding of what the makers are focused on. Win/win in my book.
 
Personally, I think Brian should have had our knives displayed with supermodels. What a disappointment.




Rick
 